What is the difference between NVIDIA and AMD graphics cards? It's one of the most commonly asked questions. In this post, we take a look at the latest GeForce and Radeon architectures and compare the two: the similarities, the differences, and everything in between. AMD's latest Radeon microarchitecture is RDNA 1.0, used in the Navi 10-based RX 5600 and 5700 series GPUs. NVIDIA's newest design is the Turing microarchitecture. At a high level, both GPUs share the same basic design, but the finer details reveal vastly different approaches to performing the same task.
AMD Radeon vs NVIDIA GeForce: Introduction
If you look at NVIDIA's and AMD's GPU architectures from a high level, they consist of the same components and perform more or less the same operations: execution units fed by schedulers and dispatchers; cache memory connecting the GPU to the graphics memory; and post-processing hardware (Texture Units, Render Output Units, and Rasterizers) performing the last set of operations before the data is sent to the display.
Zoom in on the execution units, the cache hierarchy, and the graphics pipelines, however, and that's where everything becomes complicated:
AMD Navi vs NVIDIA Turing GPU Architectures: SM vs CU
One of the main differences between NVIDIA's and AMD's GPU architectures lies in their execution blocks: AMD groups its shaders into Compute Units (CUs), while NVIDIA's equivalent is the SM, or Streaming Multiprocessor. NVIDIA calls its shaders (execution units) CUDA cores, while AMD calls them stream processors.
Vector and Super-scalar
Furthermore, AMD's GPUs are vector in nature while NVIDIA's architecture is super-scalar. In theory, the former leverages the SIMD execution model and the latter relies on SIMT, but in practice the differences are few. In an AMD Dual CU, a wave always reserves its full complement of lanes regardless of how many threads are actually active: an application may only have 32, 40, or 52 threads doing useful work in a given cycle, but the hardware schedules them in fixed-size waves, and any unfilled lanes sit idle. Overall, work is issued per CU.
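To make the idea concrete, here is a toy Python sketch (not AMD's actual scheduling logic) of how a workload with an arbitrary thread count fills fixed-size wave slots, using the 64-slot case discussed above:

```python
def wave_utilization(threads: int, wave_size: int = 64) -> float:
    """Fraction of SIMD lanes doing useful work when `threads`
    work items are packed into fixed-size waves (toy model)."""
    if threads == 0:
        return 0.0
    waves = -(-threads // wave_size)  # ceiling division
    return threads / (waves * wave_size)

# 52 active threads still occupy a full 64-slot wave: 12 lanes sit idle.
print(wave_utilization(52))  # 0.8125
print(wave_utilization(64))  # 1.0
```

Any thread count that isn't a multiple of the wave size leaves lanes idle, which is the utilization cost of the fixed-width vector model.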
With an NVIDIA SM, unless the work has run out, the schedulers try to keep all 128 execution units (64 FP32 plus 64 INT32) saturated no matter which application is running. Since Volta, each thread has its own program counter, so threads within a warp can diverge down different paths and reconverge independently as needed. This is the advantage of the super-scalar approach: the level of parallelism is retained and utilization is better.
One NVIDIA Turing SM contains 64 FP32 cores, 64 INT32 cores, and eight Tensor cores, alongside the load/store units, Special Function Units, warp schedulers, and dispatch units. As in Volta, a warp instruction executes over two cycles. The separate INT and FP cores work in tandem, so a Turing SM can execute floating-point and integer instructions concurrently in the same cycle. This is NVIDIA's counterpart to Asynchronous Compute; while it's not exactly the same thing, both technologies aim to improve GPU utilization.
AMD's Dual CUs, on the other hand, consist of four SIMD32 units, each containing 32 shaders (execution lanes). There are no separate INT and FP shaders, so a Navi stream processor runs either an FP or an INT operation in a given cycle. However, unlike the older GCN design, which pushed a 64-thread wave through a 16-wide SIMD over four cycles, an RDNA SIMD issues a wave every cycle, greatly increasing throughput.
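The GCN-versus-RDNA issue-rate difference falls out of simple arithmetic. The sketch below is a toy model (it ignores real instruction latencies and dual-issue details) using the SIMD widths and wave sizes described above:

```python
def cycles_to_issue(threads: int, simd_width: int, wave_size: int) -> int:
    """Cycles for one SIMD to issue one instruction across `threads`
    work items (toy model: each wave takes wave_size // simd_width
    cycles, and waves go back to back)."""
    waves = -(-threads // wave_size)  # ceiling division
    return waves * (wave_size // simd_width)

# GCN:  16-wide SIMD, 64-item waves -> 4 cycles per wave instruction.
# RDNA: 32-wide SIMD, 32-item waves -> 1 cycle per wave instruction.
print(cycles_to_issue(64, simd_width=16, wave_size=64))  # 4
print(cycles_to_issue(64, simd_width=32, wave_size=32))  # 2
```

For the same 64 work items, the RDNA-style SIMD finishes issuing in half the cycles in this model, which is the throughput gain the text describes.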
Turing vs Navi: Graphics and Compute Pipeline
In AMD's Navi architecture, the Graphics Command Processor takes care of the standard graphics pipeline (rendering via pixel, vertex, and hull shaders), while the ACE (Asynchronous Compute Engine) issues compute tasks via separate pipelines. These work along with the HWS (Hardware Schedulers) and the DMA (Direct Memory Access) engines to allow concurrent execution of compute and graphics workloads.
In Turing, the warp schedulers along with the GigaThread Engine manage both compute and graphics workloads. While concurrent compute isn't the same as Async Compute, it functions similarly, with support for concurrently executing floating-point (mainly graphics) and integer (mainly compute) workloads.
In the case of AMD's Navi, work items are issued as groups of threads called waves. Each wave includes 32 threads (one per lane of a SIMD32), either compute or graphics, and is sent to a Dual Compute Unit for execution. Since each CU contains two SIMD32 units, it can issue two waves per cycle, while a Dual Compute Unit can process four.
In NVIDIA's case, the GigaThread Engine, with the help of the warp schedulers, manages thread scheduling. Each group of 32 threads is called a warp. With four warp schedulers per SM, each with its own cluster of INT32 and FP32 cores, a Streaming Multiprocessor can keep four 32-thread warps in flight. Furthermore, as in Volta, each thread is independent: threads within a warp can diverge down different branches, yield, and reconverge later.
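Put side by side, the two scheduling schemes issue the same number of threads per cycle per execution block. The arithmetic below uses only the figures quoted above:

```python
# Threads issued per cycle per execution block (figures from the text).
turing_sm_threads = 4 * 32  # four warp schedulers x one 32-thread warp each
navi_dcu_threads  = 4 * 32  # four SIMD32 units  x one 32-thread wave each

assert turing_sm_threads == navi_dcu_threads == 128
print(turing_sm_threads)  # 128
```

Despite the vector-versus-super-scalar split, a Turing SM and a Navi Dual CU are the same width in this respect; the differences lie in how idle lanes and divergent threads are handled.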
Green vs Red: Cache Hierarchy
With the new RDNA-based Navi design, AMD has been rather generous with the cache memory. Adding a new L1 level between the L2 and the L0 significantly improves latency over GCN. The L0 cache is exclusive to a Dual Compute Unit, the L1 cache is shared by the Dual Compute Units in a shader array, and a larger 4 MB block of L2 cache is globally accessible to every CU.
NVIDIA's Turing has a notably larger L2 cache than Navi, but there is no intermediate cache level between it and the per-SM cache. Each SM has one 96 KB block of L1 that can be reconfigured between cache and shared memory to suit the workload, and the L2 cache is common across all SMs.
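The two hierarchies can be contrasted with a toy lookup walk. This is purely illustrative (the level names come from the text; hit behavior and latencies are not modeled):

```python
# Cache levels probed in order on a memory access (toy model).
NAVI_LEVELS   = ["L0 (per Dual CU)", "L1 (per shader array)", "L2 (global, 4 MB)"]
TURING_LEVELS = ["L1 (96 KB per SM, reconfigurable)", "L2 (global, shared by all SMs)"]

def lookup_path(levels: list[str], hit_level: int) -> list[str]:
    """Return the levels probed until a hit at index `hit_level`."""
    return levels[: hit_level + 1]

# A miss in the closest cache costs Navi one extra intermediate probe
# before reaching the global L2; Turing goes straight to L2.
print(lookup_path(NAVI_LEVELS, 2))
print(lookup_path(TURING_LEVELS, 1))
```

The extra L1 level means Navi can often satisfy a local L0 miss without the longer trip to the global L2, which is where the latency improvement over GCN comes from.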
Rasterizers, Tessellators and Texture Units
Other than the execution units, caches, and graphics engines, there are a few other components, such as the rasterizers, tessellators, geometry processors, texture units, and render backends. These components perform the final steps of the graphics pipeline, such as depth testing, texture mapping, tessellation, and rasterization.
Each Compute Unit in Navi GPUs (and each Turing SM) contains four TMUs. AMD has two rasterizers per shader engine, while Turing has one per GPC (Graphics Processing Cluster). Navi also includes RBs (Render Backends) that handle pixel and color blending, among other post-processing effects.
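With four TMUs per CU or SM, chip-wide texture-unit totals fall out directly. The CU/SM counts below are the published figures for these parts, not something stated in this article:

```python
TMUS_PER_UNIT = 4  # four texture units per CU / SM (from the text)

def total_tmus(units: int) -> int:
    """Chip-wide TMU count given a CU or SM count."""
    return units * TMUS_PER_UNIT

# Published unit counts (assumption, not from the article):
print(total_tmus(40))  # Navi 10 (RX 5700 XT, 40 CUs) -> 160 TMUs
print(total_tmus(36))  # TU106 (RTX 2070, 36 SMs)     -> 144 TMUs
```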
With Turing, NVIDIA handed the responsibilities of the individual vertex, hull, and tessellation shaders over to the new mesh shader. This allows fewer CPU draw calls per scene and a higher polygon count. AMD, on the other hand, has doubled down on that front by adding a geometry processor that culls unnecessary tessellation and other geometry early in the pipeline.
Process Nodes and Conclusion
There is another difference between the NVIDIA Turing and AMD Navi GPU architectures, with respect to the process node. While NVIDIA's Turing TU102 die is much bigger than Navi 10, the number of transistors per mm² (the transistor density) is higher for the latter.
This is because AMD's Navi architecture leverages TSMC's newer 7nm node, while NVIDIA's Turing is built on the older 12nm process (TSMC 12nm FFN). Despite that, NVIDIA's GPUs are more energy-efficient than the competing Radeon RX 5700 series graphics cards.
Thanks to the 7nm node, AMD has significantly reduced the efficiency gap, but the remaining lead is a testament to how efficient NVIDIA's GPU architecture really is.
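The density gap can be worked out from the die figures. The transistor counts and die areas below are the commonly published numbers for these chips (an assumption on my part; the article itself doesn't quote them):

```python
def density_mtr_per_mm2(transistors_billion: float, area_mm2: float) -> float:
    """Transistor density in millions of transistors per mm^2."""
    return transistors_billion * 1000 / area_mm2

# Published die figures (assumption, not from the article):
#   TU102  (12nm): ~18.6B transistors on ~754 mm^2
#   Navi 10 (7nm): ~10.3B transistors on ~251 mm^2
print(round(density_mtr_per_mm2(18.6, 754), 1))  # ~24.7 MTr/mm^2
print(round(density_mtr_per_mm2(10.3, 251), 1))  # ~41.0 MTr/mm^2
```

On these figures, Navi 10 packs roughly 1.7x more transistors per mm² than TU102, which is the node advantage the text refers to.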
Video Encode and Decode
Both the Turing and Navi GPUs feature a specialized engine for video encoding and decoding.
In Navi 10 (RX 5600 and 5700 series), unlike Vega, the video engine supports VP9 decoding. H.264 streams can be decoded at 600 frames per second at 1080p and 150 fps at 4K. It can simultaneously encode at roughly half that rate: 1080p at 360 fps and 4K at 90 fps. 8K decode is available at 24 fps for both HEVC and VP9.
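Those throughput figures translate directly into how many real-time streams the fixed-function engine can sustain. This is an idealized calculation (it ignores bitrate and scheduling overhead) using only the numbers quoted above:

```python
def concurrent_streams(engine_fps: int, stream_fps: int = 60) -> int:
    """How many real-time streams a fixed-throughput video engine can
    sustain (idealized: ignores bitrate and scheduling overhead)."""
    return engine_fps // stream_fps

# H.264 decode figures from the text: 600 fps at 1080p, 150 fps at 4K.
print(concurrent_streams(600))  # ten 1080p60 streams
print(concurrent_streams(150))  # two 4K60 streams
```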
For streamers, Turing had a big surprise: its video encoder allows 4K streaming while exceeding the quality of the x264 software encoder. 8K 30 FPS HDR support is another sweet addition, though that's an advantage over Navi in theory only; no one streams at 8K.
Two more features that come with Turing are VirtualLink and NVLink SLI. The former combines the different cables needed to connect your GPU to a VR headset into a single USB-C connector, while the latter improves SLI performance by leveraging the high bandwidth of the NVLink interface.
VirtualLink supports four lanes of HBR3 (High Bit Rate 3) DisplayPort along with a SuperSpeed USB 3 link to the headset for motion tracking. In comparison, a standard USB-C connection supports either four lanes of HBR3 DisplayPort, or two lanes of HBR3 DisplayPort plus two lanes of SuperSpeed USB 3.