With the launch of Intel’s first discrete graphics card, the Iris Xe Max, the company is looking to maximize the target market for the DG1. Today, the server GPU, called the H3C XG310, was launched. Technically speaking, it’s not a single GPU: the XG310 features four DG1 (or Iris Xe Max) GPUs on a single PCB, leveraging the PCIe 3.0 standard.
That brings the total core count to 768 × 4 = 3,072 shaders, or 96 × 4 = 384 EUs, with 8GB of LPDDR4x per GPU, or 32GB for the entire card. In comparison, the Iris Xe Max features half as much memory per GPU. The card uses all 16 PCIe lanes, and each GPU has a 128-bit memory bus. It’s unclear how Intel plans to synchronize the four GPUs, as multi-GPU solutions such as SLI and CrossFire are all but obsolete at this point.
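As a sanity check on those totals, here is a quick back-of-the-envelope calculation. The per-GPU figures come from the article; the 8-shaders-per-EU ratio is the standard Xe-LP configuration:

```python
# Per-GPU specs of the DG1 / Iris Xe Max silicon, as stated in the article.
eus_per_gpu = 96          # execution units per DG1
shaders_per_eu = 8        # each Xe-LP EU contains 8 shader ALUs
memory_per_gpu_gb = 8     # LPDDR4x per GPU on the XG310
gpus_per_card = 4         # four DG1 dies on one PCB

shaders_per_gpu = eus_per_gpu * shaders_per_eu       # 768
total_shaders = shaders_per_gpu * gpus_per_card      # 3,072
total_eus = eus_per_gpu * gpus_per_card              # 384
total_memory_gb = memory_per_gpu_gb * gpus_per_card  # 32

print(total_shaders, total_eus, total_memory_gb)  # 3072 384 32
```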
In line with the capabilities of Xe-LP, the server GPU supports AVC, HEVC, MPEG2, and VP9 encode and decode, plus AV1 decode, allowing for use in media farms. The primary focus, however, is Android game streaming using a combination of Intel Xeon CPUs and H3C XG310 GPUs. The graphics workload is handled primarily by the DG1 GPUs via Android in Container (AIC).
The Intel Cloud Rendering API receives inputs from the player and combines the rendered frames from the AIC with the audio, creating the final stream. The interesting question is why cloud companies would use Intel’s fledgling GPU when there are already plenty of offerings from NVIDIA, AMD, and Arm.
It’s likely that Intel is offering it as part of Xeon CPU bundles at dirt-cheap prices. Four cards, or 16 DG1 GPUs, can be paired with a single Xeon processor. If the scaling is decent, that should amount to impressive graphics throughput.
Intel claims that a single card can run 60 instances of most Android games at a time at 30 FPS. When four cards are paired with a Xeon processor, that should push the number up to 240. Chinese giant Tencent has already decided to go along with Intel’s claims and will be the first major customer of the H3C XG310 GPUs.
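Taking Intel's claims at face value, the per-server streaming capacity works out as follows (a rough sketch using only the figures quoted above):

```python
# Figures as claimed in the article: 60 instances per XG310 card at 30 FPS.
instances_per_card = 60
gpus_per_card = 4
cards_per_xeon = 4  # four cards per Xeon processor

instances_per_gpu = instances_per_card // gpus_per_card     # 15 per DG1
instances_per_server = instances_per_card * cards_per_xeon  # 240

print(instances_per_gpu, instances_per_server)  # 15 240
```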