


The Intel Xeon Gold 6252 is a popular chip and one that we have wanted to review for some time. Sitting at the higher end of the 2nd Generation Intel Xeon Scalable range, the chip offers 24 cores and relatively high clock speeds, all at a 150W TDP. Looked at another way, the Gold 6252 offers a similar feature set to the Xeon Platinum 8260 at a discount of around 22%, in exchange for a 300MHz base and 200MHz turbo clock speed decrement. Since we did our Intel Xeon Platinum 8260 Benchmarks and Review piece in a quad-socket configuration, we are going to do the same for the Xeon Gold 6252.

Key stats for the Intel Xeon Gold 6252: 24 cores / 48 threads with a 2.1GHz base clock and 3.7GHz turbo boost. There is 35.75MB of onboard cache. The CPU features a 150W TDP. These are $3,655 list price parts.

Here is what the lscpu output looks like for an Intel Xeon Gold 6252:

4P Intel Xeon Gold 6252 lscpu output

There are a few items we wanted to highlight here. Newer versions of the lscpu tool show the mitigation status for several known CPU vulnerabilities. One will notice significant deltas between this output and what one sees on older hardware, and even on some of AMD's more current offerings, due to differences in architecture.
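For readers who want to check the same vulnerability status on their own systems, the mitigation lines lscpu reports are also exposed directly in sysfs (a quick sketch; these paths are standard on modern Linux kernels):

```shell
# List every CPU vulnerability the kernel knows about, with its status
# (e.g. "Not affected", "Mitigation: ...", or "Vulnerable").
# lscpu reads these same sysfs files for its Vulnerability lines.
grep -r . /sys/devices/system/cpu/vulnerabilities/
```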

The second major item is that this configuration is a quad-socket 2nd Generation Intel Xeon Scalable design. That means we have a total of four NUMA nodes:

4P Intel Xeon Gold 6252 Topology

The result here is that each NUMA node has memory and PCIe peripherals attached. While four sockets may seem like a straightforward extension of the two-socket model, the interconnect topology changes. With the Intel Xeon Gold 62xx series, like the Gold 6252, and the Platinum 82xx series, each CPU has one UPI link to each of the other three CPUs. With the lower-end Xeon Gold 52xx series, there is one fewer UPI link, meaning each CPU can connect directly to only two of the three other CPUs in the system. Quad-socket systems with only one UPI link between any two CPUs can face challenges, as inter-socket bandwidth is half to one-third of what we see in dual-socket systems. Still, if you want to scale up your node, this is a solid way to do it.
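To inspect the NUMA layout on a system like this, the standard Linux tools work well (a sketch; numactl may need to be installed separately, and the node count will of course reflect whatever machine you run it on):

```shell
# Show the NUMA node count and per-node CPU lists as lscpu reports them.
lscpu | grep -i numa

# If numactl is installed, this prints per-node memory sizes and the
# inter-node distance matrix, which reflects the interconnect hop count.
numactl --hardware 2>/dev/null || echo "numactl not installed"
```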

Also, if you read our Why the Intel Xeon Platinum 9200 Series Lacks Mainstream Support piece, we explain how a quad-socket Xeon Gold 6252 system can be preferable to the dual Xeon Platinum 9242 system since one can get more memory capacity, Optane DCPMM support, more PCIe I/O, and more options for systems.

Quad Intel Xeon Gold 6252 Test Configuration

For our 2nd Generation Intel Xeon Scalable CPU quad-socket reviews, we are using the following configuration:

  • System: Supermicro SYS-2049U-TR4
  • CPU: Intel Xeon Gold 6252
  • RAM: 48x 32GB DDR4-2933 ECC RDIMMs
  • Storage: 4x Seagate Exos 2TB 2.5″, 2x Samsung 960GB U.2 NVMe SSDs, 128GB Supermicro SATA DOM
  • PCIe Networking: Mellanox ConnectX-4 Lx 25GbE, Intel X710 4x 10GbE SFP+

A quick note here: we did not utilize Intel Optane DCPMM because we had standard CPUs. Using Intel Optane DCPMM, even with only two 128GB modules per CPU to stay well below the 1TB per-CPU memory limit, would have meant our memory running at only DDR4-2666 speeds.

Supermicro SYS 2049U TR4 Cover

You can learn more about the test server in our Supermicro SYS-2049U-TR4 review. Upgrading the server from our first generation to the second generation of Intel Xeon Scalable processors simply required a BIOS update. In newer systems, the platform will come standard with that support.

Overall, the platform supports an enormous range of I/O and storage customization options. With four Intel Xeon CPUs, one has a maximum of 192 PCIe 3.0 lanes (48 per CPU) connected to the system, twice what one has in a traditional dual-socket server. Scaling up is a key value proposition of the Intel Xeon Gold 6xxx and Platinum 8xxx families.
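As a back-of-the-envelope check, the lane math works out as follows (a quick sketch; 48 PCIe 3.0 lanes per CPU is the standard figure for these SKUs):

```shell
# Each 2nd Gen Intel Xeon Scalable CPU provides 48 PCIe 3.0 lanes.
lanes_per_cpu=48
quad_socket=$((lanes_per_cpu * 4))   # 192 lanes in a 4P system
dual_socket=$((lanes_per_cpu * 2))   # 96 lanes in a 2P system
echo "4P: ${quad_socket} lanes, 2P: ${dual_socket} lanes"
```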

Next, let us look at our performance benchmarks before getting to market positioning and our final words.



