For this edition of AcademyDay #8, I decided to put the spotlight on a topic that I believe will be of great relevance to our industry over the next few years: Artificial Intelligence.
The reason is very simple: in several areas, the computing capacity of machines now exceeds that of the human brain. Machines can assimilate information and perform operations faster than we can. Consider, for example, modern cars that drive themselves without a driver: this is possible only through intensive use of AI.
In the medical field, thanks to the cross-referencing of enormous amounts of data, we are now able to predict and accurately diagnose whether patients will develop certain diseases, and to prepare treatments in time to save their lives.
It is no coincidence that companies like Google (which we had the honor of hosting on the AcademyDay stage with Alexander Mordvintsev), Facebook, Tesla, and NVIDIA are investing massively in this kind of technology, both to make their products more effective and to offer us a digital experience more tailored to our tastes. After all, by studying our behaviour, computers can offer us what we are actually most interested in. The market leader in this field is the company that can currently guarantee virtually unlimited computing power thanks to GPU computing: NVIDIA.
One application of AI is directly related to what we do every day: rendering.
I’m very impressed by what the new AI-based denoising algorithms are able to do. Last year at Siggraph, an amazing application was presented that could render a scene at roughly one sample per pixel and let the AI reconstruct the missing detail.
The denoising we know today is limited to blurring pixels, which means the final picture loses detail. Thanks to AI, instead, denoising can generate pixels, so the final result looks as if we had rendered the image with high-quality settings and at a higher resolution.
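To see why reconstructing from so few samples matters, here is a minimal sketch (my own illustration, not the actual Siggraph or NVIDIA method) of the core problem: each pixel in a path-traced image is a Monte Carlo estimate whose error shrinks only like 1/sqrt(samples-per-pixel). A learned denoiser aims to recover the clean image from a 1-spp estimate instead of paying for dozens or hundreds of samples. The "radiance" value and image size below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_RADIANCE = 0.5          # hypothetical ground-truth pixel value
N_PIXELS = 64 * 64           # a small 64x64 "image"

def render(spp):
    """Average `spp` noisy samples per pixel (samples are uniform in
    [0, 1], whose expected value is the true radiance, 0.5)."""
    samples = rng.uniform(0.0, 1.0, size=(spp, N_PIXELS))
    return samples.mean(axis=0)

err_1spp = np.abs(render(1) - TRUE_RADIANCE).mean()
err_64spp = np.abs(render(64) - TRUE_RADIANCE).mean()

print(f"mean error at  1 spp: {err_1spp:.3f}")
print(f"mean error at 64 spp: {err_64spp:.3f}")
# Spending 64x the samples only cuts the error by about 8x (1/sqrt(64)).
# That diminishing return is the render time an AI denoiser tries to skip.
```

This is the economics behind the one-sample-per-pixel demo: brute-force sampling buys quality very slowly, so a network that can infer the converged image from a noisy estimate replaces most of that compute.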
NVIDIA isn’t the only company moving in this direction: Disney is on the same path, and this technology has already reached the big screen with the film Finding Dory.
What does this mean for us as users? It’s easy to imagine.
In the future I imagine for everyone working in 3D CG, I see true real time. I see us moving towards a world where rendering as we know it will change: it will no longer be a process that takes minutes, hours, or days of calculation. On the contrary, it will become a matter of moments, because the AI will reconstruct for us what the GPU and/or CPU did not have time to render. It will be like using Photoshop: I do something in the viewport and immediately see the result on the canvas. If so, we won’t even need to worry about render settings.
I imagine rendering-engine interfaces with a single “render” button that produces the render elements for compositing. In fact, if we think about it carefully, all the parameters we find in today’s renderers derive from the approximations we must apply every time to limit the calculation time of our images.
Could this scenario be scary? Will it somehow level out the quality of our work?
My opinion is that it will certainly become easier to produce images and animations, depending on the computing power one can afford, and that we will finally be able to concentrate on the creative side, which we believe is the most important part of production.
The real question is: what if one day machines, given their impressive speed of learning, were able to create images on their own from just a few inputs provided by man?
Gianpiero Peo Monopoli