NVIDIA DLSS 2.0: Improved Quality and Temporal Vectors

Overall, NVIDIA’s RTX Turing graphics cards have been a mixed bag: so far, fewer than a handful of titles support ray tracing, and even those run at barely playable frame rates. DLSS, the new AI-based upscaling technology, was supposed to make up for the performance hit incurred by RTX, but it didn’t pan out as planned. It significantly blurred textures, reducing visual fidelity, at times looking even worse than the original pre-upscaling resolution. There has been a lot of criticism and bad press surrounding DLSS, and NVIDIA has been uncharacteristically quiet about it. Now we know why: DLSS 2.0 is here, and it overhauls the entire upscaling process.

DLSS 1.0: Per-Game Training and Optimization

Both DLSS 1.0 and 2.0 have the same objective: to upscale lower-resolution images while maintaining the highest possible quality and performance. However, the way they achieve this is quite different. DLSS 1.0 implements a separate neural network for every game, and for every target resolution. For example, to upscale Metro Exodus (hypothetically) with DLSS 1.0, NVIDIA had to train one network for 1440p-to-4K, one for 1080p-to-1440p, and another for 720p-to-1080p upscaling. That’s three networks for just one game, and optimizing and training each of them to perfection takes time and resources.

DLSS 1.0 implements a separate neural network for every game, and for every target resolution
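The combinatorial cost of that approach is easy to see: one network per game per resolution pair. A toy sketch (the game list and resolution pairs are just illustrative examples, not NVIDIA’s actual roster):

```python
# Toy illustration of DLSS 1.0's per-game, per-resolution training cost.
# Each (game, source -> target) combination needs its own trained network.
games = ["Metro Exodus", "Battlefield V", "Shadow of the Tomb Raider"]
resolution_pairs = [("720p", "1080p"), ("1080p", "1440p"), ("1440p", "4K")]

dlss1_networks = [(g, src, dst) for g in games for (src, dst) in resolution_pairs]
print(len(dlss1_networks))  # 3 games x 3 pairs = 9 networks to train

# DLSS 2.0 replaces all of them with a single generic network.
dlss2_networks = 1
```

The network count grows multiplicatively with the game library under DLSS 1.0, which is exactly the scaling problem DLSS 2.0’s single network sidesteps.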

As a result, at the time of a game’s launch, when these networks were still being trained, the output quality was subpar. Beyond that, one of the main problems with DLSS 1.0 was that it was based on the idea that games are deterministic, that is, easy to predict. While this is true in certain cases, for example, the weather cycle, NPC behavior, and scripted scenes, there are a lot of other factors at play.


You need to consider weather effects, explosions, and other events that can occur while a player is free-roaming. This is where DLSS 1.0 fell short and produced subpar results. As games became more complex, with more random events, this technique was bound to become obsolete.

DLSS 2.0: One Network to Upscale Them All

DLSS 2.0, although still an upscaler trained by comparing ultra-high-resolution (16K) reference images against the base-resolution input, makes a couple of core changes to how the algorithm works. With DLSS 1.0, a separate neural network had to be trained for every game and resolution. DLSS 2.0 uses a single generic network for the entire library of DLSS-supported games and incorporates a temporal filter with it.

The lack of per-game data means there are ultimately fewer specific examples the network can learn from. To make up for this, NVIDIA has integrated motion vectors into DLSS 2.0. Motion vectors are used in temporal anti-aliasing (TAA), where the previous frame is reprojected onto the next one. By comparing the two frames and accumulating the resulting data, objects that are constantly in motion can be anti-aliased from frame to frame.
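The reprojection idea behind TAA-style motion vectors can be sketched in a few lines. This is a deliberately simplified model (a 1-D “frame”, integer motion vectors, a fixed blend weight), not NVIDIA’s actual algorithm:

```python
def reproject(prev_frame, motion_vectors):
    """For each pixel, fetch the previous-frame pixel its content moved from."""
    n = len(prev_frame)
    out = []
    for x, mv in enumerate(motion_vectors):
        src = min(max(x - mv, 0), n - 1)  # clamp lookups to frame bounds
        out.append(prev_frame[src])
    return out

def temporal_blend(current, history, alpha=0.1):
    """Exponentially blend the new sample with the reprojected history."""
    return [alpha * c + (1 - alpha) * h for c, h in zip(current, history)]

# A 1-D "frame": a bright pixel moved one position right since the last frame.
prev = [0.0, 1.0, 0.0, 0.0]
curr = [0.0, 0.0, 1.0, 0.0]
mvs  = [0, 0, 1, 0]  # pixel 2's content came from position 1 last frame

history = reproject(prev, mvs)
blended = temporal_blend(curr, history)
# Thanks to reprojection, the moving pixel keeps its accumulated history
# instead of starting from a single fresh sample each frame.
```

The key point is that motion vectors let samples accumulate across frames even for objects in motion, which is the extra information DLSS 2.0 feeds its generic network.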

DLSS 2.0 uses a single generic network for the entire library of DLSS supported games and incorporates a temporal filter with it.

The main advantage of a single, generic network is that NVIDIA can optimize it as much as it wants and carry the results over across all supported games. Of course, this means the network will be relatively ineffective in games with a markedly different visual design, but that will hopefully change in time.

Unlike DLSS 1.0, its successor also allows hybrid modes, and since there’s only one network handling every game and resolution, it’s easier to implement too. You can now upscale games from 1080p to 4K (4x) or from 1440p to 4K (2x). In comparison, DLSS 1.0 mostly just upscaled games by 2x, either from 1080p to 1440p or from 1440p to 4K.

Depending on the performance you need, you can use the quality preset, which upscales by 2x, or the performance preset, which upscales by 4x.
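As a rough sketch, the presets can be thought of as per-axis scale factors applied to the output resolution. The factors below are inferred from the 1440p-to-4K (quality) and 1080p-to-4K (performance) examples above; they are illustrative, not official NVIDIA figures:

```python
# Hedged sketch: map DLSS 2.0 presets to internal render resolutions.
# Per-axis factors inferred from the article's examples, not official numbers.
PRESET_AXIS_SCALE = {"quality": 1.5, "performance": 2.0}

def render_resolution(out_w, out_h, preset):
    """Return the internal resolution the GPU renders before upscaling."""
    s = PRESET_AXIS_SCALE[preset]
    return int(out_w / s), int(out_h / s)

print(render_resolution(3840, 2160, "quality"))      # (2560, 1440)
print(render_resolution(3840, 2160, "performance"))  # (1920, 1080)
```

Note that a 2x per-axis factor means the GPU shades only a quarter of the output pixels, which is where the performance preset’s large frame-rate gains come from.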

NVIDIA also claims that DLSS 2.0 runs faster than DLSS 1.0, reducing the overall overhead added by the upscaler. However, it’s likely that the quality preset will be slower than DLSS 1.0, albeit with notably better image quality.

The developer implementation is also different in DLSS 2.0. It requires motion vector data, similar to TAA. Considering that TAA is present in almost every modern game, it shouldn’t be hard for developers to implement: they just have to supply DLSS 2.0 with the same motion vector data TAA already uses.

Lastly, as far as games supporting DLSS 2.0 are concerned, Wolfenstein: Youngblood and Deliver Us The Moon already have it. Control and MechWarrior 5 are the next two titles set to integrate it in the coming weeks. NVIDIA claims that DLSS 2.0 offers better quality than the native output resolution, but let’s be honest: DLSS or not, an upscaler will almost always offer worse quality than a natively rendered image. That isn’t going to change anytime soon.

Areej Syed

Processors, PC gaming, and the past. I have been writing about computer hardware for over seven years with more than 5000 published articles. Started off during engineering college and haven't stopped since. Mass Effect, Dragon Age, Divinity, Torment, Baldur's Gate and so much more... Contact: