NVIDIA’s flagship GPUs have become considerably more expensive over the past 2-3 generations. The GeForce GTX 980 Ti (Maxwell) was priced at $649 and the GTX 1080 Ti (Pascal) at $699, but the Turing-based RTX 2080 Ti costs an insane $1,000+, and that while not offering much of a performance upgrade. Sure, the latter comes with fancy RT cores and Tensor cores for ray-tracing and DLSS, respectively, but these new technologies can hardly be called mainstream just yet.
So what about the sheer raster performance? Let’s find out. We’ll be putting the Turing and Pascal flagships side by side in seven of the latest titles (mostly DX12) and see whether the RTX 2080 Ti’s performance actually justifies its outrageous price tag.
- CPU: AMD Ryzen 7 3700X (read the review here)
- Motherboard: ASRock X570 Taichi
- RAM: 16GB Trident Z Royal @ 3600MHz
- PSU: Corsair HX1000i
- HDD: WD Black 4TB
NVIDIA RTX 2080 Ti vs GTX 1080 Ti: Gaming Performance and Benchmarks
We’ll be conducting all the benchmarks at 4K at the highest quality preset unless specified otherwise. Furthermore, we’ll be sticking to the newer DX12 API wherever possible.
As you can see, the GeForce RTX 2080 Ti is clearly faster than the GTX 1080 Ti: in most games by 20-30%, but in a few titles like Metro Exodus and Deus Ex: Mankind Divided, the gulf is as wide as 50%. I did a bit of checking, and these are the same titles that make good use of Asynchronous Compute.
With Maxwell, NVIDIA didn’t have proper Async Compute support and had to resort to context switching, which incurred a latency penalty in DX12 games that used the feature. As a result, Async Compute was disabled for NVIDIA cards in most games.
The reason: AMD cards implemented it at the hardware level, while competing GeForce cards relied on the driver to do the job, which wasn’t very efficient. The debate over Async Compute is a big one, and one I won’t go into at the moment.
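To see why the hardware-level approach matters, here’s a toy timing model. The numbers and the half-millisecond switch penalty are made up for illustration; this is a sketch of the scheduling difference, not a GPU simulator:

```python
# Toy model of why Async Compute helps (illustrative numbers only).
# A frame has a graphics workload and a compute workload, in milliseconds.
graphics_ms = 12.0
compute_ms = 4.0
context_switch_ms = 0.5  # hypothetical per-switch penalty

# Hardware-level async: compute runs alongside graphics, so the frame
# time is bounded by the longer of the two workloads.
async_frame = max(graphics_ms, compute_ms)

# Driver-level context switching: the GPU serializes the two workloads
# and pays a switch penalty going into and out of the compute work.
serialized_frame = graphics_ms + compute_ms + 2 * context_switch_ms

print(f"Async frame time:      {async_frame} ms")       # 12.0 ms
print(f"Serialized frame time: {serialized_frame} ms")  # 17.0 ms
```

The gap only grows as games push more work onto compute queues, which fits the pattern of async-heavy titles showing the biggest deltas.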
NVIDIA seems to have significantly improved Async Compute utilization with Turing. Regardless, in most games the RTX 2080 Ti is around 20-30% faster than the GTX 1080 Ti. The latter sold for $650 to $700 for the most part, while the Turing flagship sells for well over $1,000. So you’re essentially paying 50-70% more money for another 20-30% in performance.
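A quick back-of-the-envelope calculation makes the value problem obvious. The prices below are rough street prices and the frame rates assume a game where the 2080 Ti leads by about 25% at 4K, so treat every number as illustrative:

```python
# Rough cost-per-frame comparison (illustrative numbers only;
# prices are approximate street prices, fps assumes a ~25% lead at 4K).
cards = {
    "GTX 1080 Ti": {"price": 700, "fps": 60},
    "RTX 2080 Ti": {"price": 1200, "fps": 75},
}

for name, c in cards.items():
    print(f"{name}: ${c['price'] / c['fps']:.2f} per fps")

price_ratio = cards["RTX 2080 Ti"]["price"] / cards["GTX 1080 Ti"]["price"]
perf_ratio = cards["RTX 2080 Ti"]["fps"] / cards["GTX 1080 Ti"]["fps"]
print(f"Price ratio: {price_ratio:.2f}x, performance ratio: {perf_ratio:.2f}x")
```

With these figures you pay roughly 1.7x the money for 1.25x the frames, i.e. each frame per second costs noticeably more on the newer card.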
No wonder NVIDIA has been marketing ray-tracing as a major feature of its Turing lineup; without it, the newer GPUs seem like a rather lukewarm upgrade. We can only hope that AMD comes up with that promised Big Navi (Navi 20) as soon as possible, so that the flagship space once again becomes viable for people who actually work for their money.
NVIDIA’s Ampere lineup is expected to land in the first half of 2020, based on the 7nm EUV process from Samsung. We expect it to be a much beefier upgrade than Turing was over Pascal, and of course, you can expect improved ray-tracing capabilities for upcoming games like Cyberpunk 2077.