
AMD Navi vs NVIDIA Turing: Comparing the Radeon and GeForce Graphics Architectures

What is the difference between NVIDIA and AMD's latest graphics cards? The current-gen Radeon Navi cards are based on the RDNA architecture, while the GeForce RTX 20 series GPUs are built on the Turing microarchitecture. In this post, we take a look at the latest GeForce and Radeon graphics architectures and note the similarities and differences. At a high level, the two GPUs look much the same, but the finer details reveal vastly different approaches to performing the same tasks.

AMD Navi vs NVIDIA Turing: Introduction

In simple terms, both NVIDIA's and AMD's GPU architectures consist of the same building blocks performing more or less the same operations. You've got the execution units at the heart of the GPU, fed by the schedulers and dispatchers. Then there is the cache hierarchy connecting the GPU to the graphics memory, and finally the fixed-function hardware (Texture Units, Render Output Units, and Rasterizers) performing the last set of operations before the data is sent to the display.

If you magnify the above image and have a closer look at the execution units, the cache hierarchy, and the graphics pipelines, that’s where everything becomes complicated:

AMD Navi 10 vs NVIDIA Turing TU102

AMD Navi vs NVIDIA Turing GPU Architectures: SM vs CU

One of the main differences between NVIDIA's and AMD's GPU architectures lies in how the cores/shaders are grouped: AMD organizes them into Compute Units (CUs), while NVIDIA's equivalent block is the SM or Streaming Multiprocessor. NVIDIA's shaders (execution units) are called CUDA cores, while AMD uses the term stream processors.

CUDA Cores vs Stream Processors: Super-scalar & Vector

Furthermore, AMD's GPUs are vector-based processors, while NVIDIA's architecture is super-scalar in nature. In theory, the former leverages the SIMD execution model and the latter relies on SIMT, but the practical differences are few. In an AMD Dual Compute Unit (DCU), there is always room for 64 work items regardless of how many threads are actually executed per cycle. An application might only keep 32, 40, or 52 threads busy in a given cycle, but the model natively supports 64. Overall, work is issued per Compute Unit.

What is SIMD? How Does it Work and How is it Different from SIMT?

With an NVIDIA SM, unless there's no work left, all 128 execution lanes (64 FP32 plus 64 INT32) stay saturated regardless of the application being run. Here the threads are independent of one another and can yield or reconverge with the other threads of their warp as needed. This is the advantage of the super-scalar approach: the level of parallelism is retained and utilization is better.
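To make the SIMT model more concrete, here is a minimal CUDA sketch of our own (not from NVIDIA's documentation) showing threads of the same warp taking different branches. On Volta and Turing, independent thread scheduling gives every thread its own program counter, so the diverged lanes can make progress and later reconverge, which is exactly the "yield or reconverge" behaviour described above.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Threads in a warp take different paths through the branch; on Volta/Turing
// each thread has its own program counter, so diverged lanes can progress
// independently and reconverge at the explicit __syncwarp() point.
__global__ void simt_divergence(const float *in, float *out, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;          // the launch below covers n exactly, so no lane exits here

    float v = in[tid];
    if (v >= 0.0f)
        v = sqrtf(v);              // some lanes of the warp take this path
    else
        v = -v * 0.5f;             // the rest take this one

    __syncwarp();                  // warp-level reconvergence point
    out[tid] = v;
}

int main()
{
    const int n = 256;
    float h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = (i % 2 ? -1.0f : 1.0f) * i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    simt_divergence<<<(n + 127) / 128, 128>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("out[2] = %.2f, out[3] = %.2f\n", h_out[2], h_out[3]);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

On the AMD side, an equivalent kernel would execute as a wave, with both sides of the branch run under an execution mask, which is the SIMD way of handling the same divergence.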

One NVIDIA Turing SM contains 64 FP32 cores, 64 INT32 cores, and eight Tensor cores (two per processing block). There are also the load/store units, Special Function Units, and the per-block warp scheduler and dispatch unit. Like Volta, a warp instruction is executed over two cycles, since each 16-wide processing block handles a 32-thread warp in two passes. Because there are separate cores for INT and FP compute working in tandem, a Turing SM can execute both floating-point and integer instructions concurrently, every cycle. NVIDIA brackets this with its implementation of Asynchronous Compute; while it's not exactly the same thing, the purpose of both technologies is to improve GPU utilization.
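To illustrate why the split matters, consider what a typical compute or shader inner loop looks like. The following hypothetical CUDA kernel (our own illustration, not NVIDIA's) mixes integer address arithmetic with floating-point math; on Turing, the INT32 pipe can chew through the indexing while the FP32 pipe handles the blend, instead of both competing for the same units as on Pascal.

```cpp
// Illustrative only: integer index math and FP32 arithmetic interleave in most
// real kernels. Turing's separate INT32 and FP32 pipes let these issue
// concurrently rather than the integer work stealing FP32 slots.
// Launched as e.g. blend_images<<<dim3(width/16, height/16), dim3(16, 16)>>>(...)
__global__ void blend_images(const float *a, const float *b, float *out,
                             int width, int height, float t)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // INT32 pipe: index math
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;                         // more integer work
    out[idx] = (1.0f - t) * a[idx] + t * b[idx];     // FP32 pipe: the actual blend
}
```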

AMD's Dual CUs, on the other hand, consist of four SIMDs, each containing 32 shaders or execution lanes. There are no separate shaders for INT and FP, so a Navi stream processor can execute either an FP or an INT operation in a given cycle. However, unlike the older GCN design, which issued an instruction from a wave only once every four cycles, RDNA issues every cycle, greatly improving effective throughput and latency.

Turing vs Navi: Graphics and Compute Pipeline

In AMD's Navi architecture, the Graphics Command Processor takes care of the standard graphics pipeline (pixel, vertex, and hull shaders) and the ACE (Asynchronous Compute Engine) issues compute tasks via separate pipelines. These work along with the HWS (Hardware Schedulers) and the DMA (Direct Memory Access) engines to allow concurrent execution of compute and graphics workloads. Furthermore, there's the geometry processor, which handles complex geometry workloads, including tessellation.

In Turing, the warp scheduler on an SM level and the Gigathread engine on the GPU level manage both Compute and graphics workloads. While concurrent compute isn’t the same as Async Compute, it functions similarly, with support for concurrent floating-point (mainly graphics) and integer (mainly compute) workloads.
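As a rough analogy (not how a game engine actually submits graphics work), the following CUDA sketch launches two independent kernels into separate streams. The GigaThread engine is free to schedule blocks from both onto idle SMs at the same time, which is the same scheduling idea behind running compute alongside graphics.

```cpp
#include <cuda_runtime.h>

__global__ void fp_heavy(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * 1.0001f + 0.5f;      // stands in for "graphics-like" FP work
}

__global__ void int_heavy(unsigned *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = (y[i] * 2654435761u) >> 3;  // stands in for "compute-like" integer work
}

int main()
{
    const int n = 1 << 20;
    float *x; unsigned *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(unsigned));

    // Two streams act as two independent submission queues.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    fp_heavy<<<n / 256, 256, 0, s1>>>(x, n);
    int_heavy<<<n / 256, 256, 0, s2>>>(y, n);

    cudaDeviceSynchronize();                      // wait for both workloads
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```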

Of Warps and Waves

In the case of AMD's Navi, work items are issued in groups of threads called waves. Each wave includes 32 threads (one for each lane of a SIMD), either compute or graphics, and is sent to a Dual Compute Unit for execution. Since each CU has two SIMDs, it can handle two waves at a time, while a Dual Compute Unit can process four.

In NVIDIA's case, the GigaThread Engine, with the help of the warp schedulers, manages thread scheduling. Each collection of 32 threads is called a warp. As there are four warp schedulers in every SM, each with its own INT32 and FP32 core cluster, each Streaming Multiprocessor can handle four 32-thread warps at once. Furthermore, each thread is independent, and divergence and reconvergence are handled just as in Volta: the threads of a warp are grouped into SIMT units and can yield or reconverge as needed.
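A quick way to see the warp granularity from the software side: the little CUDA program below (purely illustrative) prints how a 128-thread block is carved into four 32-thread warps, one per warp scheduler on a Turing SM.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Prints how a thread block is split into 32-thread warps.
__global__ void whoami()
{
    int lane = threadIdx.x % warpSize;   // position within the 32-wide warp
    int warp = threadIdx.x / warpSize;   // warp index within the block
    if (lane == 0)
        printf("block %d -> warp %d (threads %d..%d)\n",
               blockIdx.x, warp, warp * warpSize, warp * warpSize + warpSize - 1);
}

int main()
{
    // A 128-thread block splits into exactly four 32-thread warps.
    whoami<<<1, 128>>>();
    cudaDeviceSynchronize();             // flush device-side printf output
    return 0;
}
```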

Green vs Red: Cache Hierarchy

With the new RDNA-based Navi design, AMD has been rather generous with the cache. By adding another level of L1 cache between the L0 and the L2, AMD has significantly improved latency over GCN. The L0 cache is exclusive to a Dual Compute Unit, while the L1 cache is shared by the DCUs of a shader array. A larger 4MB block of L2 cache is globally accessible to every CU.

NVIDIA's Turing L2 cache is 50% larger than Navi's, but there's no intermediate level in between complementing the per-SM cache. Each SM has one 96KB block that is reconfigurable between L1 cache and shared memory depending on the workload. The L2 cache is common across all SMs.

The main difference between shared memory and the L1 is that the contents of shared memory are managed explicitly by the developer, whereas the L1 cache is managed automatically by the hardware. Essentially, shared memory gives developers more direct control over GPU resources, which is a core part of low-level APIs like DX12.
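Here's a minimal CUDA sketch of what "developer-managed" means in practice (our own example, not tied to either vendor's drivers): the kernel explicitly stages a tile of data in __shared__ memory and synchronizes the block before reading it back, while everything not placed there simply flows through the hardware-managed L1/L2 caches.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

#define TILE 256

// Stages a tile of the input in developer-managed shared memory, then writes
// it back reversed. Data we don't place in __shared__ just passes through the
// automatically managed caches instead.
__global__ void reverse_tiles(const int *in, int *out, int n)
{
    __shared__ int tile[TILE];                 // carved out of the SM's L1/shared block

    int gid = blockIdx.x * TILE + threadIdx.x;
    if (gid < n)
        tile[threadIdx.x] = in[gid];           // explicit load into shared memory
    __syncthreads();                           // make the whole tile visible to the block

    int src = TILE - 1 - threadIdx.x;          // read the tile back in reverse order
    int dst = blockIdx.x * TILE + threadIdx.x;
    if (dst < n)
        out[dst] = tile[src];
}

int main()
{
    const int n = 1024;                        // multiple of TILE, so every tile is full
    int h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = i;

    int *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(int));
    cudaMalloc(&d_out, n * sizeof(int));
    cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);

    reverse_tiles<<<n / TILE, TILE>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);

    printf("first tile reversed: out[0] = %d (expected 255)\n", h_out[0]);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

The same explicit staging is what graphics APIs expose as groupshared memory in DX12/HLSL, and what AMD calls the LDS (Local Data Share) on its hardware.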

The larger caches help keep more assets on-chip, reducing the external memory bandwidth requirement and improving power efficiency. Tiled rendering is one common application.

Rasterizers, Tessellators and Texture Units

Other than the execution units, cache, and the graphics engines, there are a few other components such as the Rasterizers, Tessellators, Texture Units, and Render Backends. These components perform the final steps of the graphics pipeline such as depth effects, texture mapping, tessellation, and rasterization.

Texture Mapping is handled by Texture Units

Each Compute Unit in the Navi GPUs (and Turing SM for NVIDIA) contains four TMUs. There are two rasterizers per shader engine for AMD and one for every GPC (Graphics Processing Cluster) in the case of the Turing GPU block. In AMD’s Navi, there are also RBs (Render Backends) that handle pixel and color blending, among other post-processing effects.

With Turing, NVIDIA turned the responsibilities of the individual vertex, hull, and tessellation shaders over to the new mesh shader. This allows for fewer CPU draw calls per scene and a higher polygon count. AMD, on the other hand, has doubled down on that front by adding a geometry processor that culls unnecessary geometry and manages tessellation.

Process Nodes and Conclusion

There is another notable difference between the NVIDIA Turing and AMD Navi GPUs with respect to the process node. While NVIDIA's Turing TU102 die is much bigger than Navi 10, the transistor density (transistors per mm²) is higher for the latter.
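To put rough numbers on that (commonly quoted die figures, so treat them as approximate): Navi 10 packs about 10.3 billion transistors into roughly 251 mm², or around 41 million transistors per mm², while TU102 fits about 18.6 billion into roughly 754 mm², or around 25 million per mm².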

This is because AMD's Navi parts are fabbed on TSMC's newer 7nm node, while NVIDIA's Turing GPUs are built on the older 12nm (TSMC 12nm FFN) process. Despite that, the Turing cards are still more energy-efficient than the competing Radeon RX 5700 series graphics cards.

Thanks to the 7nm node, AMD has significantly reduced the gap but it’s still a testament to how efficient NVIDIA’s GPU architecture really is.

Video Encode and Decode

Both the Turing and Navi GPUs feature a specialized engine for video encoding and decoding.

In Navi 10 (RX 5600 & 5700), unlike Vega, the video engine supports VP9 decoding. H.264 streams can be decoded at 600 fps for 1080p and 150 fps for 4K. It can simultaneously encode at about half that speed: 1080p at 360 fps and 4K at 90 fps. 8K decode is available at 24 fps for both HEVC and VP9.

For streamers, Turing had a big surprise. The Turing video encoder allows 4K streaming while exceeding the quality of the x264 software encoder. 8K 30 FPS HDR encode support is another sweet addition, though it's an advantage over Navi only in theory: no one streams at 8K.

Two more features that come with Turing are VirtualLink and NVLink SLI. The former combines the different cables needed to connect your GPU to a VR headset into a single USB-C connection, while the latter improves SLI performance by leveraging the high bandwidth of the NVLink interface.

VirtualLink supports up to four lanes of High Bit Rate 3 (HBR3) DisplayPort along with a SuperSpeed USB 3 link to the headset for motion tracking. In comparison, a standard USB-C connection only supports either four lanes of HBR3 DisplayPort or two lanes of HBR3 DisplayPort plus two lanes of SuperSpeed USB 3.

Read also:

AMD Navi Deep Dive: How is RDNA Different from the GCN Architecture; Built From the Ground Up for Gaming

NVIDIA Turing RTX 20 Series Architectural Deep Dive: Ray-Tracing, AI and Improved DirectX 12 Support

Areej

Computer Engineering dropout (3 years), writer, journalist, and amateur poet. I started Techquila while in college to pursue my passion for hardware. Although largely successful, it suffered from many internal weaknesses. I left and am now working on Hardware Times, a site purely dedicated to processor architectures and in-depth benchmarks. That's what we do here at Hardware Times!