
Intel Launches Cooper Lake-SP Xeon Scalable CPUs for 4S & 8S Configs; 10nm Ice Lake for 1S & 2S in H2 2020

Intel today launched Cooper Lake-SP, forming the Cedar Island platform and the first wave of 3rd Gen Xeon Scalable processors. These chips are aimed at 4-socket to 8-socket configurations, while Ice Lake-SP, slated to launch in H2 2020, will cover 1 and 2-socket server platforms and form the Whitley platform.

There are three main upgrades that come with Cooper Lake: BFloat16 support for AI workloads, higher CPU-to-CPU bandwidth (likely a response to AMD's Epyc), and updated memory and Optane support.


BFloat16 is essentially a new data format that offers the dynamic range of FP32 at roughly the speed and storage cost of FP16. This is very much like NVIDIA's TF32 compute introduced with the A100, which NVIDIA claims is up to 20x faster than FP32 while producing a standard IEEE FP32 output:

NVIDIA’s TF32
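
To make the format concrete, here's a minimal Python sketch (my illustration, not Intel or NVIDIA code) of what BF16 does: it keeps FP32's sign bit and 8 exponent bits and chops the mantissa from 23 bits down to 7. Real hardware rounds rather than truncates, but the effect on range and precision is the same.

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    # Reinterpret the float as its 32-bit pattern, then keep the top 16 bits:
    # 1 sign bit + 8 exponent bits + 7 mantissa bits (simple truncation).
    fp32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return fp32_bits >> 16

def bf16_bits_to_fp32(bits: int) -> float:
    # Zero-pad the discarded mantissa bits to get back a regular FP32 value.
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

x = 3.14159265
print(bf16_bits_to_fp32(fp32_to_bf16_bits(x)))  # 3.140625 -- same range, less precision
```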

Intel has also doubled the socket-to-socket bandwidth with Cooper Lake. Until now, Xeon Scalable chips have had three UPI (Ultra Path Interconnect) links connecting the CPUs in a system. With Cooper Lake, that has been increased to six. On the downside, each CPU can only connect to three others.

Source: AnandTech

Each CPU in a Cooper Lake-SP system will be connected to its peers using two UPI links, each running at 10.4 GT/s, for a total of 20.8 GT/s, essentially doubling the socket-to-socket bandwidth compared to Cascade Lake.
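
As a quick sanity check on those numbers (my arithmetic, not Intel's documentation), the dual-link, three-peer topology works out like this for a 4-socket box:

```python
from itertools import combinations

sockets = 4
links_per_pair = 2      # two UPI links between every pair of CPUs
gts_per_link = 10.4     # GT/s per UPI link

print("CPU pairs:", list(combinations(range(sockets), 2)))       # 6 direct connections
print("Per-pair rate:", links_per_pair * gts_per_link, "GT/s")   # 20.8 GT/s
print("UPI links per CPU:", links_per_pair * (sockets - 1))      # 6
```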

The last major upgrade Cooper Lake has received concerns memory. Cooper Lake supports up to DDR4-3200 on the higher-end Platinum CPUs, which raises the per-channel bandwidth from 23.46 GB/s to 25.60 GB/s.
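
The per-channel figures fall straight out of the transfer rate: DDR4 moves 8 bytes per transfer over its 64-bit bus, so (a quick back-of-the-envelope check, not from Intel's spec sheet):

```python
def ddr4_channel_bandwidth(mt_per_s: int) -> float:
    # 64-bit bus = 8 bytes per transfer; result in GB/s (decimal).
    return mt_per_s * 8 / 1000

print(ddr4_channel_bandwidth(2933))  # ~23.46 GB/s (DDR4-2933, Cascade Lake)
print(ddr4_channel_bandwidth(3200))  # 25.6 GB/s  (DDR4-3200, Cooper Lake)
```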

As far as maximum memory is concerned, the entry-level Xeon chips will support up to 1.125 TiB, up from 1 TiB. This means you can populate six 64GB DIMMs alongside six 128GB modules; a full 12 x 128GB configuration, however, is still not supported.
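
That 1.125 TiB figure is just the mixed-DIMM configuration added up (treating DIMM capacities as binary gigabytes, as the industry does):

```python
total_gib = 6 * 64 + 6 * 128    # six 64 GB DIMMs + six 128 GB DIMMs
print(total_gib, "GiB =", total_gib / 1024, "TiB")   # 1152 GiB = 1.125 TiB
print(12 * 128 / 1024, "TiB")   # a full 12 x 128 GB config would be 1.5 TiB -- not supported
```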

There will also be higher memory capacity CPUs, designated the "HL" line, with support for up to 4.5TB of memory. Intel's second-gen Optane DC Persistent Memory (Barlow Pass) will be supported with Cooper Lake-SP as well. The platform supports modules ranging from 128GB to 512GB, all running at the same speed as the main memory (2666 MHz).

Cooper Lake-SP offers 48 PCIe 3.0 lanes per socket, unchanged from Cascade Lake and still a far cry from AMD Rome's 64 PCIe 4.0 lanes. That helps explain why NVIDIA ditched the Xeons for Epyc in the DGX A100. The prices aren't representative of the features you get either: the Rome parts are roughly half as expensive while offering as much as twice the performance.

AMD Epyc 2nd Gen (Rome) Pricing

CPU | C/T | Base (GHz) | Boost (GHz) | L3 | TDP | MSRP
EPYC 7742 | 64/128 | 2.25 | 3.40 | 256 MB | 225 W | $6950
EPYC 7702 | 64/128 | 2.00 | 3.35 | 256 MB | 200 W | $6450
EPYC 7642 | 48/96 | 2.30 | 3.20 | 256 MB | 225 W | $4775
EPYC 7552 | 48/96 | 2.20 | 3.30 | 192 MB | 200 W | $4025
EPYC 7542 | 32/64 | 2.90 | 3.40 | 128 MB | 225 W | $3400
EPYC 7502 | 32/64 | 2.50 | 3.35 | 128 MB | 200 W | $2600
EPYC 7452 | 32/64 | 2.35 | 3.35 | 128 MB | 155 W | $2025
EPYC 7402 | 24/48 | 2.80 | 3.35 | 128 MB | 155 W | $1783
EPYC 7352 | 24/48 | 2.30 | 3.20 | 128 MB | 180 W | $1350
EPYC 7302 | 16/32 | 3.00 | 3.30 | 128 MB | 155 W | $978
EPYC 7282 | 16/32 | 2.80 | 3.20 | 64 MB | 120 W | $650
EPYC 7272 | 12/24 | 2.90 | 3.20 | 64 MB | 155 W | $625
EPYC 7262 | 8/16 | 3.20 | 3.40 | 128 MB | 120 W | $575
EPYC 7252 | 8/16 | 3.10 | 3.20 | 64 MB | 120 W | $475

As already reported earlier, Ice Lake-SP (Whitley) will land sometime in H2 2020, while Sapphire Rapids, based on the Willow Cove core and the third iteration of Intel's 10nm process, is also on track for a late 2021 launch.

Areej

Computer hardware enthusiast, PC gamer, and almost an engineer. Former co-founder of Techquila (2017-2019), a fairly successful tech outlet. Been working on Hardware Times since 2019, an outlet dedicated to computer hardware and its applications.
