Analyzing Intel's Cascade Lake in the New Era of AI

Wrapping things up, let’s take stock of the second-generation Xeon Scalable’s performance and what it brings to the table in terms of features. With Cascade Lake, Intel has improved performance by 3 to 6%, improved security, fixed some incredibly important bugs/exploits, added some SIMD instructions, and improved the overall server platform. Nothing here is earth-shattering, but you get more for the same price and power envelope, so what’s not to like?

That would have been fine five years ago, when AMD did not have anything like the Zen(2) architecture, ARM vendors were still struggling with cores that offered painfully slow single-threaded performance, and deep learning was in its early stages. But this is not 2014, when Intel outperformed the nearest competition by a factor of 3! Ultimately, Cascade Lake delivers in areas where CPUs – and only CPUs – do well. But even with Intel’s DL Boost efforts, it’s not enough if the new chips have to go head-to-head with a GPU in a task the latter doesn’t completely suck at.

The reality is that Intel's datacenter group is under tremendous pressure from all sides, and the numbers are showing it. For the first time in years, the datacenter group saw a revenue drop, despite the fact that the overall server market is growing.

It has been going on for a while, but as we’ve experienced firsthand, machine learning-based AI applications are being rolled out successfully, and they are a game changer for both software and hardware. As a result, future server CPU reviews will never quite be the same: it is not Intel versus AMD or even ARM anymore, but NVIDIA too. NVIDIA is extremely successful in the deep learning market, and they are confident enough to take on Intel in areas where Intel has dominated for years: HPC, machine learning, and even data processing. NVIDIA is ready to accelerate a much larger part of the data pipeline and a wider range of AI applications.

Features found in Intel Cascade Lake like DL Boost (VNNI) are the first attempts by Intel to push back – to cut away at the massive advantage that NVIDIA has in inference performance. Meanwhile, the next Xeon – Cooper Lake – will try to get closer to NVIDIA in training performance. 
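
To put some flesh on what DL Boost actually is: VNNI adds the AVX-512 VPDPBUSD instruction, which fuses the 8-bit multiply-accumulate sequence that int8 inference kernels are built on. Below is a minimal sketch, assuming a VNNI-capable CPU and a compiler invoked with something like -mavx512f -mavx512vnni; the function name and loop are ours, purely for illustration, not Intel library code.

    // Int8 dot product using the VNNI instruction VPDPBUSD.
    // Assumes n is a multiple of 64 and the CPU supports AVX-512 VNNI.
    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    int32_t dot_int8_vnni(const uint8_t *act, const int8_t *wgt, size_t n)
    {
        __m512i acc = _mm512_setzero_si512();
        for (size_t i = 0; i < n; i += 64) {
            __m512i a = _mm512_loadu_si512((const void *)(act + i));
            __m512i w = _mm512_loadu_si512((const void *)(wgt + i));
            // Multiply 64 u8*s8 pairs and accumulate into sixteen 32-bit
            // sums in one instruction, instead of the three-instruction
            // VPMADDUBSW/VPMADDWD/VPADDD sequence Skylake-SP needs.
            acc = _mm512_dpbusd_epi32(acc, a, w);
        }
        return _mm512_reduce_add_epi32(acc);  // horizontal sum of the 16 lanes
    }

Nobody writes inference code at this level by hand, of course; libraries such as Intel's MKL-DNN emit these sequences for quantized models, which is where the claimed inference speedup over Skylake-SP comes from.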

Moving on, when we saw the following slide, we were left gasping for air.

This slide, boasting "leadership performance", also conveniently describes the markets where Intel is in a very vulnerable position, despite Intel's current dominance in the datacenter. Although the slide focuses on the Intel Xeon 9200, it could easily be a slide for the high-end Platinum 8200 Xeons too.

Intel points towards HPC, AI, and high-density infrastructure to sell their massively expensive Xeons. But as the market shifts towards less traditional business intelligence, more machine learning, and more GPU-accelerated HPC, the market for high-end Xeons is shrinking. Intel has a very broad AI portfolio, from Movidius (edge inference) to the Nervana NNP (an ASIC for DL training), and they’re going to need it to replace the Xeon in the segments where it falls short.

A midrange Xeon combined with a Nervana NNP coprocessor might work out well – and it would definitely be a better solution for most AI applications than a Xeon 9200. And the same is true for HPC: we are willing to bet that you are much better off with midrange Xeons and a fast NVIDIA GPU. And depending on where AMD's EPYC 2 pricing goes, even that might end up being debatable...

Comments

  • Gondalf - Tuesday, July 30, 2019 - link

    Kudos to the article from a technical point of view :), a little less for the weak analysis of the server market. Johan says that Intel is slowing down in servers while the server market is growing fast.
    Unfortunately it is not: Q1 this year was the worst quarter for the server market in 8 quarters, with growth of only 1%. Q2 will likely continue the negative trend; moreover, there is a general consensus that 2019 will be a negative year with a drop in global revenue.
    So the recent Intel drop is consistent with a drop in demand in China in Q2.

    It should be underlined that a GPU has to be driven by a host: wherever a GPU like a Tesla is up and running, there are one or two Xeons on the motherboard.
    A GPU is only an accelerator, and without a CPU it is useless. Intel's slides about upcoming threats from competitors relate to the existence of AMD in HPC, IBM, and some sparse ARM-based SKUs for custom applications.
    A GPU is welcome; it helps to sell more Xeons.
  • eastcoast_pete - Tuesday, July 30, 2019 - link

    More a question than anything else: What is the state of AI-related computing on AMD (graphics) hardware? I know NVIDIA is very dominant, but is it mainly due to an existing software ecosystem?
  • BenSkywalker - Wednesday, July 31, 2019 - link

    AMD has two major hurdles to overcome when looking specifically at AI/ML on GPUs: essentially non-existent software support and essentially non-existent hardware support. AMD has chosen the route of focusing on general-purpose cores that can perform solidly on a variety of traditional tasks, both in hardware and software. AI/ML benefits enormously from specialized hardware, which in turn takes specialized software to utilize.

    This entire article is stacking up $40k worth of Intel CPUs against a consumer nVidia part, and Intel gets crushed whenever nVidia can use its specialized hardware. Throw a few Tesla V100s in to give us something resembling price parity and Intel would be eviscerated.

    AMD needs tensor cores, a decade's worth of tools development, and a decade's worth of pipeline development (university training, integration into new systems and build-out onto those systems, not a hardware pipeline) in order to get to where nVidia is now, even if nVidia were standing still.

    The software ecosystem is the biggest problem long term: everyone working in the field uses CUDA whenever they can. Even if AMD mopped the floor with nVidia on the hardware side, for their GPUs to get traction they would need all the development tools nVidia has spent a decade building; and right now their GPUs are held back relative to nVidia because of that specialized hardware.
  • abufrejoval - Tuesday, July 30, 2019 - link

    Some telepathy must be involved: Just a day or two before this appeared online, I was looking for Johan de Gelas' last appearance on AT in 2018 and thinking that it was high time for one of my favorite authors to publish something. Ever so glad you came out with the typical depth, quality and relevance!

    While GAFA and BATX seem to lead in AI and the frameworks, their problems and solutions mostly fit their own needs, and as it turns out the vast majority of use cases cannot afford the depth and quality they require, nor do they benefit from it either: if the responsibility of your AI is to monitor for broken drill bits via vibration, sound, and normal and thermal visuals, the ability to identify cats in every shape and color has no benefit.

    The big guys typically need to solve a sharply defined problem in a single domain at very high quality: they don't combine visuals with audio, and the inherent context in time-series video is actually ignored, as their AIs stare at each frame independently, hunting for known faces or things to tag in order to correlate social graphs and products.

    Iterating over ML approaches, NN designs, and adequate hyperparameters for training requires months, even with clusters of DGX workstations and highly experienced ML experts. What makes all that effort worthwhile is that the inference part can then run at relatively low power on your mobile phone inside WeChat, Facebook, Instagram, Google keyboard/translate (or some other "innocent" background app) at billions of instances: trial and train until you have arrived at the single sufficiently good network design in days, weeks or even months, and then you can deploy inference to billions of devices on battery power.

    Few of us smaller IT companies can replicate that, but again, few of us need to, because we have a vastly higher number of small problems to solve, with a few orders of magnitude less of a difference in training:inference effort: 1 Watt of difference makes or breaks the usability of an inference model on mobile target devices, while 100 Watts of difference in a couple of servers running a dozen instances of a less optimized but well trained model won't justify an ML-expert team working through another five pizzas.

    As the complexity of your approach (e.g. XGBoost or RF) is perhaps much smaller, or your networks are much simpler than those of GAFA/BATX, you actually worry about how to scale in, not out, and how to batch dozens of training runs for model iteration and mix that with some QA or even production inference streams on GPUs, which Linux understands or treats little better than a printer with DMA.

    Intel quite simply understands that while you get famous with the results you get from training AIs, e.g. on GPUs, the money is made from inference at the lowest power and lowest operational overhead: Linux (or Unix for that matter) knows how to manage virtual memory (preferably uniform) and CPUs (preferably few); a memory hierarchy deeper than the manual for your VCR and more types and numbers of cores than Unics' first hard disk had in blocks confuse it.

    But I'd dare say that AMD understood this much earlier and much better. When they came up with HSA on their first APUs, that GPGPU blend, which allowed switching the compute model with a function call, made CUDA look very brutish indeed.

    Writing code able to take full advantage of these GPGPU capabilities is still a nightmare, because high-level languages have abstraction levels far too low for what these APUs or VNNI CPUs can execute in a single clock cycle, but from the way I read it, the Infinity Fabric is about making those barriers as low as they can possibly be in terms of hardware and memory space.

    And RISC-V moves beyond what all x86 advocates still suffer from: an instruction set that was not designed for modular expandability.
  • FunBunny2 - Wednesday, July 31, 2019 - link

    "Trial and train until you have trained the single sufficiently good network design in days, weeks or even months and then you can deploy inference to billions of devices on battery power."

    When and if this capability is used for something useful, e.g. a cure for cancer, rather than yet another scheme to extract moolah from rubes, then I'll be interested.
  • keg504 - Tuesday, July 30, 2019 - link

    Why do you say on the testing page that AMD is colour coded in orange, and then put them in grey?
  • 808Hilo - Wednesday, July 31, 2019 - link

    Client/server renamed again...
    There is no AI. That stuff is very, very dumb. Look at the diagram above. Nothing new: data, a script does something, parsing and readout of vastly unimportant info. I have not seen a single meaningful AI app. It's now year 25 of the Internet and I am terribly bored. Next please.
  • J7SC_Orion - Wednesday, July 31, 2019 - link

    This explains very nicely why Intel has been raiding GPU staff and pouring resources into Xe Discrete Graphics... if you can't beat them, join them?
  • tibamusic.com - Saturday, August 3, 2019 - link

    Thank you very much.
  • Threska - Saturday, August 3, 2019 - link

    What a coincidence. The latest Humble Bundle is "Data Analysis & Machine Learning by O'Reilly":

    https://www.humblebundle.com/books/data-analysis-m...
