Okay, so this is exciting news. It’s Big Navi, based on RDNA 2, with ray-tracing support, VRS and AI-based upscaling. Before we continue: yes, this is an unverified rumor, but it’s a juicy one, so I decided to cover it nonetheless. It turns out AMD will be unveiling (not launching, mind you) three, not one, RDNA 2-based Radeon graphics cards at its Financial Analyst Day three days from now. These monster GPUs will compete in the enthusiast space with the GeForce RTX 2080 Super and the RTX 2080 Ti. The source claims that the top-end GPU will have an FP32 rating of 18 TFLOPs with an MSRP of $999. That’s just insane. If (and it’s a big if) this rumor turns out to be true, Big Navi will easily crush the Turing flagship; even a Super variant won’t save it. This will be one insane card. For reference, the Radeon RX 5700 XT is a roughly 9.75 TFLOP GPU. That’s nearly twice the raw compute!
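For the curious, the TFLOP math is simple enough to sanity-check yourself. Here’s a quick Python sketch; note that the 80-CU (5120-shader) count and ~1.76 GHz clock are my own illustrative guesses for what an 18 TFLOP part would look like, not figures from the rumor:

```python
# Back-of-the-envelope FP32 throughput: shaders x 2 ops/cycle (one FMA = 2 FLOPs) x clock.
def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * 2 * boost_ghz / 1000

# Radeon RX 5700 XT: 2560 shaders at ~1.905 GHz boost -> ~9.75 TFLOPs.
rx5700xt = fp32_tflops(2560, 1.905)

# A hypothetical 80-CU (5120-shader) Big Navi at ~1.76 GHz lands right around 18 TFLOPs.
big_navi = fp32_tflops(5120, 1.76)
```

In other words, hitting 18 TFLOPs at Navi 10-like clocks would take roughly double the shader count of the RX 5700 XT.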
This number does seem too high, doesn’t it? That’s one of the reasons I’m not convinced just yet. Regardless of its validity, there are two main aspects highlighted in the post. Yes, you guessed it right: ray-tracing and VRS. You can read more about these features in the posts below; I won’t dive deep into them here:
What is the Difference Between DirectX 11 vs DirectX 12: In-depth Analysis
What Exactly is Ray-Tracing: NVIDIA RTX, DirectX 12 and Everything you need to know about Ray-Tracing in Gaming
RDNA 2 is supposedly going to be 30 to 50% more efficient at ray-tracing. Like NVIDIA’s, AMD’s ray-tracing solution will leverage Microsoft’s DirectX Raytracing (DXR) API, but the implementation is said to be more effective. Apparently, RDNA 2 was created specifically for Microsoft’s Xbox Series X console and high-end Radeon GPUs (read: Big Navi). However, it seems that scenes won’t be fully ray-traced. Like Crytek, AMD might use a hybrid solution: a combination of rasterized screen-space reflections and hardware ray-tracing. Of course, you get the option to switch between rasterization and native ray-tracing, but the latter seems to be a tamer approach than NVIDIA’s RTX solution. Regardless, if the game runs better and looks nearly as good, I’d call that a win.
Opinion: This sounds like a marketing gimmick, to be honest. NVIDIA uses RT Cores to accelerate BVH traversal and ray/triangle intersection testing (ray casting), while the Tensor cores handle DLSS and denoising the image to remove the artifacts caused by low ray counts. AMD says that RDNA 2 will be more efficient than RTX, but on closer inspection, it looks as though they’re just reducing the ray count and substituting screen-space reflections to improve performance. I’m not complaining, but the marketing is a bit misleading.
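To make the hybrid idea concrete, here’s a toy Python sketch of that kind of fallback logic. The function names are purely illustrative; nothing here is AMD’s (or Crytek’s) actual API:

```python
# Toy sketch of a hybrid reflection pipeline: try a cheap screen-space trace
# first, and fall back to a full ray trace only when it misses.
def hybrid_reflection(pixel, ssr_trace, rt_trace):
    hit = ssr_trace(pixel)   # cheap: marches the existing depth buffer
    if hit is not None:
        return hit           # most reflections resolve in screen space
    return rt_trace(pixel)   # expensive BVH traversal only for the misses

# Example: pretend pixels 0-7 resolve in screen space; the rest need real rays.
ssr = lambda p: f"ssr:{p}" if p < 8 else None
rt = lambda p: f"rt:{p}"
results = [hybrid_reflection(p, ssr, rt) for p in range(10)]
```

The win is that the expensive path only fires for the pixels screen-space data can’t cover (off-screen geometry, occluded surfaces), which is exactly where the ray-count savings would come from.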
Deep learning for upscaling, animation interpolation and NPC AI. The first part sounds too much like DLSS, so we can be fairly confident that’s what it is. The second part is more interesting: adding 60 FPS support to 30 FPS games using animation interpolation is a genuinely innovative idea. There are many classics that don’t run natively at 60 FPS, and this should allow AMD to run them at a buttery-smooth 60 FPS without breaking them. NPC AI, on the other hand, depends on how well the developer codes the character behavior; it has nothing to do with the GPU or ray-tracing.
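As a rough illustration of the frame-doubling idea, here’s a naive Python sketch. A real (let alone AI-based) interpolator would use motion estimation rather than a plain average, which blurs anything that moves; this just shows how 30 FPS becomes 60:

```python
# Naive 30->60 FPS animation interpolation: insert one blended frame between
# each pair of source frames. Frames here are just lists of pixel values.
def interpolate_frames(frames):
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(x + y) / 2 for x, y in zip(a, b)])  # midpoint frame
    out.append(frames[-1])
    return out

clip = [[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]]  # three "frames"
smooth = interpolate_frames(clip)             # 3 frames -> 5 frames
```

The hard part, and presumably where the deep learning comes in, is generating those in-between frames without the ghosting a simple blend produces.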
VRS will allow 4K 120 FPS in certain titles. I’m highly skeptical about this one. While VRS is a neat little technology, the performance gains from it are nowhere near big enough to allow 120 FPS at 4K. I doubt whether even plain 4K will be possible without severe reductions in quality.
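A quick back-of-the-envelope in Python shows why. VRS saves work by shading low-detail tiles at a coarser rate (e.g. one shader invocation per 2x2 pixel block); the tile size and mix below are hypothetical, but the point stands for any plausible mix:

```python
# Count pixel-shader invocations for a set of 16x16 tiles, where each tile's
# shading rate is the number of pixels covered per invocation (1, 4, or 16).
def shading_invocations(tile_rates, tile_pixels=16 * 16):
    return sum(tile_pixels // rate for rate in tile_rates)

full = shading_invocations([1] * 100)            # every tile shaded per-pixel
vrs = shading_invocations([1] * 50 + [4] * 50)   # half the tiles at 2x2 rate
saving = 1 - vrs / full                          # ~37.5% less shading work
```

Even an aggressive mix like that cuts shading work well under half, and shading is only part of the frame time, so VRS alone can’t turn, say, 50 FPS into 120 FPS.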
xCloud ray-tracing and Azure. Lastly, there’s the matter of xCloud and Azure, Microsoft’s alternatives to Stadia and GeForce Now. The source claims that xCloud systems will utilize full-scene ray-tracing, thanks to the use of top-of-the-line RDNA GPUs. Considering how Stadia and GeForce Now are doing, I wouldn’t get too excited about xCloud just yet.
This rumor has a lot of ifs and buts. However, we only have to wait three days to confirm it. As per the OP, all of this will be unveiled at AMD’s Financial Analyst Day a few days from now. Cheers!