Intel announces broad range of 2nd gen Xeon Scalable CPUs

by Mark Tyson on 2 April 2019, 21:11

Tags: Intel (NASDAQ:INTC)


At a special Data-Centric Innovation Day in Santa Clara, California, Intel has launched a wide range of products which are claimed to reflect its new data-centric strategy. These are products designed to help users process, move and store increasingly vast amounts of data, and to support high-growth workloads in the cloud, in AI, and in 5G, for example.

Intel's name was built on its computer processors, and despite all the talk of moving from PC-centric to data-centric strategies, the biggest announcement of the night was of a new clutch of CPUs, albeit aimed at corporate and enterprise servers. In total, over 50 new processors were announced, including the headlining 2nd-Generation Intel Xeon Scalable CPUs, previously codenamed Cascade Lake. Additionally, network-optimised Xeon processors, new Agilex FPGAs, and Xeon D SoCs were added to Intel's processor throng.

Of the 50 new 2nd-Generation Intel Xeon Scalable processors that have become available today, the flagship is undoubtedly the 56-core, 12-memory-channel Intel Xeon Platinum 9200. Intel says that this processor is "designed to deliver leadership socket-level performance and unprecedented DDR memory bandwidth in a wide variety of high-performance computing (HPC) workloads, AI applications and high-density infrastructure". Compared to the previous generation, these new Xeon Scalable chips are said to offer an average 1.33x performance gain.

An interesting new feature in the 2nd-Generation Intel Xeon Scalable CPUs is the integration of Intel DL Boost (Intel Deep Learning Boost) technology. Intel has designed this processor extension to accelerate AI inference workloads like image-recognition, object-detection and image-segmentation within data centre, enterprise and intelligent-edge computing environments.
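Under the hood, DL Boost centres on the AVX-512 VNNI extension, which speeds up int8 inference by multiplying pairs of 8-bit values and accumulating the products into 32-bit lanes in a single instruction. The following pure-Python sketch is illustrative only (the function name and lane layout are our own, not Intel code); it mimics the arithmetic one such instruction performs per 32-bit lane:

```python
def vnni_dot(acc, a, b):
    """Sketch of a VNNI-style fused multiply-accumulate: each 32-bit
    accumulator lane absorbs the sum of four adjacent uint8 x int8
    products, avoiding intermediate 8-bit overflow."""
    out = list(acc)
    for lane in range(len(out)):
        for k in range(4):
            out[lane] += a[4 * lane + k] * b[4 * lane + k]
    return out

activations = [1, 2, 3, 4, 5, 6, 7, 8]       # uint8 activations
weights     = [1, -1, 2, -2, 3, -3, 4, -4]   # int8 weights
acc = vnni_dot([0, 0], activations, weights)
print(acc)  # [-3, -7]
```

The win over older Xeons is that this multiply-and-accumulate previously took three instructions per step; fusing it (and keeping data in int8 rather than fp32) is where the claimed inference speed-ups come from.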

Backing up its inference-processing credentials, Intel says it has worked closely with partners and application makers to take full advantage of Intel DL Boost technology. Intel name-checked frameworks such as TensorFlow, PyTorch, Caffe, MXNet and PaddlePaddle.

Examples of the practical value of Intel DL Boost include the following: Microsoft has seen a 3.4x boost in image recognition performance, Target has witnessed a 4.4x boost in machine learning inference, and JD.com has seen a 2.4x boost in text recognition.

Another feather in the cap of the 2nd-Generation Intel Xeon Scalable processors is that they support Intel's Optane DC persistent memory, "which brings affordable high-capacity and persistence to Intel’s data-centric computing portfolio." With Intel Optane DC, 36TB system-level memory capacity is possible in an 8-socket system (3x more memory than the previous gen of Xeon Scalable processors).*
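For context, the headline capacity figure is straightforward arithmetic: 36TB spread across an 8-socket system works out to 4.5TB of combined memory per socket. A quick back-of-the-envelope check:

```python
# Sanity check on Intel's stated figure: 36TB of system-level memory
# in an 8-socket server implies this much (DRAM + Optane DC) per socket.
sockets = 8
system_tb = 36
per_socket_tb = system_tb / sockets
print(per_socket_tb)  # 4.5
```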

Other key attractions of the 2nd gen Intel Xeon Scalable processors are as follows:

  • Intel Turbo Boost Technology 2.0 ramps up to 4.4GHz, alongside memory-subsystem enhancements with support for DDR4-2933MT/s and 16Gb DIMM densities.
  • Intel Speed Select Technology provides enterprise and infrastructure-as-a-service providers more flexibility to address evolving workload needs.
  • Enhanced Intel Infrastructure Management Technologies to enable increased utilization and workload optimization across data centre resources.
  • New side-channel protections are directly incorporated into hardware.
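Taken together with the flagship's 12 memory channels, the DDR4-2933 figure implies a peak theoretical bandwidth that is easy to work out. This is back-of-the-envelope only, and assumes the standard 64-bit (8-byte) DDR4 channel width:

```python
# Peak theoretical memory bandwidth implied by the article's figures:
# DDR4-2933 performs 2933 million transfers/s over an 8-byte channel.
mt_per_s = 2933e6
bytes_per_transfer = 8           # standard 64-bit DDR4 channel
channels = 12                    # Xeon Platinum 9200 channel count
per_channel_gbs = mt_per_s * bytes_per_transfer / 1e9
total_gbs = per_channel_gbs * channels
print(round(per_channel_gbs, 2), round(total_gbs, 1))  # 23.46 281.6
```

Real-world throughput will land below this ceiling, but it shows why Intel leads with the "unprecedented DDR memory bandwidth" claim for the 12-channel part.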

Among the other chips announced tonight were the Agilex FPGAs, new 10nm Intel FPGAs built to deliver flexible hardware acceleration and application-specific optimisation for edge computing, networking and data centres. Agilex FPGAs will become available from H2 2019. Last but not least, Intel unveiled the Xeon D-1600 processor, described as a highly integrated SoC designed for dense environments where power and space are limited, but per-core performance is essential.

*Above is an overview of Intel's processors that have been announced, just one part of the triple-pronged series of announcements today, covering the moving, storing and processing of data. Coverage of the moving and storing products and innovations will be published in the follow-up article.



HEXUS Forums :: 4 Comments

Yawn, super glue 2.0 and the hypocrisy of how they got high core counts. 4.5TB of memory is a lie: it's Optane, it's 16GB actual per stick at 12 maximum lanes each. The "up to 8 or more sockets" is likely to be mezzanine cards, and probably for low-power parts if they don't. The 56-core part draws 400W at full tilt; good luck shoehorning that into anything larger than a dual socket.

1st-gen Cascade Lake-AP was heralded as a crock of sh… and there were barely any boards or integrators working on them. This new batch is a Zen 2 attention-grabber to try and get some sales and mindshare.

Only interesting nibbles are the side-channel fixes for speculative execution. Not full fixes, but I'm interested in the perf gains. Because it's still Skylake, you should be able to compare easily.

Otherwise this was all annoying marketing trash and a shareholder pleaser.
I'm kind of curious how far we're going to go in the ‘moar cores’ battle that appears to have started. Are there workloads that benefit from 100+ threads but can't be more efficiently carried out on a GPU? Are there enough of those workloads to warrant this kind of development?

Thinking back to the Ghz wars it wouldn't be completely unheard of for the marketing men to try and override basic common sense or the laws of physics in product development…
Lanky123
I'm kind of curious how far we're going to go in the ‘moar cores’ battle that appears to have started. Are there workloads that benefit from 100+ threads but can't be more efficiently carried out on a GPU? Are there enough of those workloads to warrant this kind of development?

Thinking back to the Ghz wars it wouldn't be completely unheard of for the marketing men to try and override basic common sense or the laws of physics in product development…

Unfortunately it's the catch-22: if the hardware isn't there to develop for, no one will develop for it, but if no one is developing for it then the hardware will never come.

A lot of these extreme core counts aren't for individual pieces of software; they are for numerous pieces of software running simultaneously on the same hardware.

i.e. a data-crunching system that spins up a Docker container on the fly for each task, each of which can be assigned its own core(s) and RAM outright and not share resources with anything else (more secure). Or a massive virtualisation environment, and all that good stuff.
“ A Platinum 8280 will set you back between $10,000 and $18,000, depending on the configuration, for instance, minus your negotiated discount.”

urm… I'll go buy several Epyc servers for that cost, thanks.