After the successful release of DLSS 3.5, NVIDIA’s VP of Applied Deep Learning Research, Bryan Catanzaro, has claimed that by the time DLSS 10 is out, generative AI will be responsible for most of the rendering in video games. A lofty claim, and while there is a chance it may come true, I’m skeptical. First, here’s a snippet of the interview from Digital Foundry’s stream:
Back in 2018 at the NeurIPS conference, we actually put together a really cool demo of a world that was being rendered by a neural network, like, completely, but it was being driven by a game engine. So, basically, what we were doing was using the game engine to generate information about where things are and then using that as an input to a neural network that would do all the rendering, so it was responsible basically for every part of the rendering process. Just getting that thing to run in real time in 2018 was kind of a visionary thing. The image quality that we got from it certainly wasn’t anything close to Cyberpunk 2077, but I think long term this is where the graphics industry is going to be headed. We’re going to be using generative AI more and more for the graphics process. Again, the reason for that is going to be the same as it is for every other application of AI: we’re able to learn much more complicated functions by looking at huge data sets than we can by manually constructing algorithms bottom up.
I think we’re going to have increased realism and also hopefully make it cheaper to make awesome AAA environments by moving to much, much more neural rendering. I think that’s going to be a gradual process. The thing about the traditional 3D pipeline and the game engines is that it’s controllable: you can have teams of artists build things and they have coherent stories, locations, everything. You can actually build a world with these tools.
We’re going to need those tools for sure. I do not believe that AI is gonna build games in a way where you just write a paragraph about making a cyberpunk game and then pop comes out something as good as Cyberpunk 2077. I do think that let’s say DLSS 10 in the far future is going to be a completely neural rendering system that interfaces with a game engine in different ways, and because of that, it’s going to be more immersive and more beautiful.
Catanzaro is referring to this ‘driving game’ first showcased at December 2018’s NeurIPS conference in Montreal, Canada. Needless to say, the quality wasn’t great, but AI is capable of major improvements in a relatively brief amount of time (via YouTube).
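To make the architecture Catanzaro describes concrete, here is a minimal illustrative sketch, not NVIDIA’s actual pipeline: the game engine emits a per-pixel semantic layout (what object is where), and a learned renderer turns that layout into an RGB frame. The label names, the stand-in renderer, and the trivial color lookup are all assumptions for illustration; in the 2018 demo the renderer was a trained video-to-video network, not a palette.

```python
# Hypothetical label IDs the engine might emit for each pixel.
ROAD, CAR, SKY = 0, 1, 2

def engine_layout(width, height):
    """Stand-in for the game engine: instead of a finished image, it
    returns a semantic map (rows of label IDs) describing the scene."""
    return [
        [SKY if y < height // 3 else ROAD for _ in range(width)]
        for y in range(height)
    ]

def neural_render(layout):
    """Stand-in for the learned renderer: maps each label to an RGB
    triple. A real system would run a trained generative network here,
    conditioned on the layout (and previous frames for stability)."""
    palette = {ROAD: (90, 90, 90), CAR: (200, 30, 30), SKY: (120, 180, 255)}
    return [[palette[label] for label in row] for row in layout]

# The engine decides *what* is in the frame; the network decides *how*
# it looks. That split is what keeps the world controllable by artists.
frame = neural_render(engine_layout(8, 6))
```

The key design point is the division of labor: the engine stays authoritative over content and layout (so artists keep control), while the network only handles appearance.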
Generative AI is all well and good, but it’s of limited use for rendering unique, hand-crafted environments: when the set of environments is predefined and has to match what the developers intend, a neural network trained on a fixed dataset offers few advantages over the traditional pipeline.
More importantly, a proprietary upscaler like DLSS will never become the norm on consoles, which limits its use in mainstream games. And let’s face it: most games are ported from consoles to PC, and console hardware is primarily AMD, which relies on the open-source FSR upscaler.
When it comes to originality, generative AI straight-up sucks. A number of well-known writers of popular RPGs like Dragon Age and Mass Effect have attested to this. It just results in repetitive content. Unless we see a revolutionary breakthrough in this field, generative AI won’t spit out anything worth using in games.
Finally, and more recently: even DLSS 3.5 produces a lot of ghosting and artifacts at times. While we can expect these issues to be ironed out soon enough, I strongly believe that in a field like computer graphics, no proprietary technology will last as long as Catanzaro predicts if it’s unavailable to the wider console industry.