
Intel Confirms Xe Graphics Cards Launching in Late 2020; GPU Architecture to be unveiled at GDC 2020

There has been a lot of debate over Intel's upcoming Xe graphics cards: the specs, efficiency, target audience, and the launch window. Last we heard, the first wave of Xe graphics cards, namely Tiger Lake and the DG1 derivatives, was set to launch in 2020. It seems the roadmap is intact and we'll indeed see the first Xe graphics cards by late 2020. This was confirmed in a GDC announcement: Intel is going to brief game developers and engineers on the features and various architectural aspects of the Xe GPUs.

A while back, some documents leaked by Digital Trends indicated the use of the Gen11 architecture and tiles (an MCM design) in the case of the 1st Gen Xe graphics cards. While the latter may be true, I'm quite sure that the GPUs will leverage the Gen12 architecture, the same as Tiger Lake-U.

While Intel's base GPU architecture is fairly solid, there are some key concerns. First, scaling: how well will the HP GPUs scale with higher core counts, say 512 EUs (or 4,096 ALUs, at Gen11's eight ALUs per EU)? What will the performance per watt and TDP requirements look like compared to existing NVIDIA and AMD GPUs?
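To put the scaling question in perspective, here is a back-of-envelope calculation of theoretical FP32 throughput for a hypothetical 512-EU part. The eight FP32 ALUs per EU figure matches Intel's Gen11 design; the 1.5 GHz clock and the EU counts passed in are purely assumptions for the sake of the example, not Intel specs:

```python
# Toy FP32 throughput estimate for a hypothetical high-EU-count Xe GPU.
# Assumptions (not Intel figures): 8 FP32 ALUs per EU (as in Gen11),
# 2 FLOPs per ALU per clock (fused multiply-add), 1.5 GHz clock.

def peak_fp32_tflops(eus: int, alus_per_eu: int = 8,
                     flops_per_clock: int = 2,
                     clock_ghz: float = 1.5) -> float:
    """Theoretical peak FP32 throughput in TFLOPS."""
    return eus * alus_per_eu * flops_per_clock * clock_ghz / 1000

print(peak_fp32_tflops(512))  # hypothetical 512-EU discrete part
print(peak_fp32_tflops(96))   # Tiger Lake-class iGPU configuration
```

Peak FLOPS, of course, says nothing about how well the memory subsystem, drivers, and power delivery hold up at those core counts, which is exactly where the scaling question gets interesting.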

There's also the matter of production. It's no secret that Intel's foundries are already busy ramping up 10nm production while supplying the existing and upcoming 14nm-based chips. Will Intel outsource Xe GPU production or handle it in-house? Then there are the heatsinks: will there be a single blower-style design or a range of coolers, and who will manufacture them? Lastly, pricing. How will the 1st Gen Xe graphics cards be priced? It's highly probable that the GPUs, being the first of their kind, won't be able to keep up with contemporary NVIDIA and AMD GPUs in terms of either performance or efficiency. For Xe to start on a positive note, Intel needs to price them very competitively.

Some other interesting Intel programs at GDC are:

Towards Real-time Ray Tracing at Scale on CPU+GPU: This appears to be an approach that leverages both the CPU and the GPU for real-time ray-tracing acceleration via Intel's new oneAPI. A similar technology (Embree) was used in World of Tanks a while back, although that relied on the CPU cores alone, not the GPU.
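The core idea behind any CPU+GPU approach is dividing the ray workload between the two devices. As a toy illustration (the throughput figures and the static split are made up for this sketch; a real renderer would balance dynamically), a batch of rays can be partitioned in proportion to each device's throughput:

```python
# Toy static load-balancer for hybrid CPU+GPU ray tracing.
# The Mrays/s throughput numbers below are invented for illustration;
# real hybrid renderers rebalance the split at runtime.

def split_rays(total_rays: int, cpu_mrays: float, gpu_mrays: float):
    """Split a ray batch proportionally to device throughput (Mrays/s)."""
    cpu_share = cpu_mrays / (cpu_mrays + gpu_mrays)
    cpu_rays = round(total_rays * cpu_share)
    return cpu_rays, total_rays - cpu_rays

# Example: a GPU a bit over twice as fast as the CPU at traversal.
cpu_batch, gpu_batch = split_rays(1_000_000, cpu_mrays=150, gpu_mrays=350)
print(cpu_batch, gpu_batch)  # 300000 700000
```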

Variable Rate Shading Tier 1: Alongside NVIDIA's Turing GPUs, Intel's Gen11 Ice Lake GPUs are the only devices to support DX12-based Variable Rate Shading. This will be a step-by-step workshop demonstrating the integration of VRS in DX12-based game engines. For more on VRS, read these:

NVIDIA Turing RTX 20 Series Architectural Deep Dive: Ray-Tracing, AI and Improved DirectX 12 Support

What is the Difference Between DirectX 11 vs DirectX 12: In-depth Analysis
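To give a feel for what VRS buys, here is a toy calculation (not the DX12 API) of pixel-shader invocation counts at different coarse shading rates. At a 2x2 rate, one shader invocation covers a 2x2 block of pixels, so invocation count drops roughly fourfold:

```python
import math

# Toy illustration of Variable Rate Shading savings (not the DX12 API).
# A coarse rate of (rx, ry) means one pixel-shader invocation covers
# an rx-by-ry block of pixels.

def shader_invocations(width: int, height: int, rate) -> int:
    rx, ry = rate
    return math.ceil(width / rx) * math.ceil(height / ry)

full = shader_invocations(1920, 1080, (1, 1))    # every pixel shaded
coarse = shader_invocations(1920, 1080, (2, 2))  # 2x2 coarse shading
print(full, coarse, full / coarse)  # 2073600 518400 4.0
```

In practice an engine applies coarse rates selectively, e.g. on motion-blurred or peripheral regions, so the real-world saving is smaller than this upper bound but comes with little visible quality loss.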

GDC kicks off on 16th March. That's where Intel will detail the Xe architecture. I don't expect a launch date anytime soon; that should be announced at Computex in June. Cheers!

Source
Intel

Areej Syed

Processors, PC gaming, and the past. I have been writing about computer hardware for over seven years with more than 5000 published articles. Started off during engineering college and haven't stopped since. Mass Effect, Dragon Age, Divinity, Torment, Baldur's Gate and so much more... Contact: areejs12@hardwaretimes.com.