Analyzing Intel's Cascade Lake in the New Era of AI

Wrapping things up, let's take stock of the second-generation Xeon Scalable's performance and what it brings to the table in terms of features. With Cascade Lake, Intel has improved performance by 3 to 6%, improved security, fixed some incredibly important bugs/exploits, added some SIMD instructions, and improved the overall server platform. Nothing earth-shattering, but you get more for the same price and power envelope, so what's not to like?

That would have been fine five years ago, when AMD did not have anything like the Zen(2) architecture, ARM vendors were still struggling with cores that offered painfully slow single-threaded performance, and deep learning was in its early stages. But this is not 2014, when Intel outperformed the nearest competition by a factor of 3! Ultimately, Cascade Lake delivers in areas where CPUs – and only CPUs – do well. But even with Intel's DL Boost efforts, it's not enough if the new chips have to go head-to-head with a GPU in a task the latter doesn't completely suck at.

The reality is that Intel's datacenter group is under tremendous pressure from all sides, and the numbers are showing it. For the first time in years, Intel's datacenter business saw its revenue drop, despite the fact that the overall server market is growing.

It has been going on for a while, but as we've experienced firsthand, machine learning-based AI applications are being rolled out successfully, and they are a game changer for both software and hardware. As a result, future server CPU reviews will never quite be the same: it is not just Intel versus AMD or even ARM anymore, but NVIDIA too. NVIDIA is extremely successful in the deep learning market, and the company is confident enough to take on Intel in areas where Intel has dominated for years: HPC, machine learning, and even data processing. NVIDIA is ready to accelerate a much larger part of the data pipeline and a wider range of AI applications.

Features found in Cascade Lake, like DL Boost (VNNI), are Intel's first attempts to push back – to cut away at the massive advantage that NVIDIA has in inference performance. Meanwhile, the next Xeon – Cooper Lake – will try to get closer to NVIDIA in training performance.
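To make the inference angle concrete, below is a minimal sketch, written in C with AVX-512 intrinsics, of the kind of int8 dot product that VNNI is meant to accelerate. This is our own illustration rather than code from Intel or from the benchmarks in this review; the function name and data layout are hypothetical. The key point is that a single vpdpbusd instruction (the _mm512_dpbusd_epi32 intrinsic) does the multiply-and-accumulate work that previously took a three-instruction AVX-512 sequence (vpmaddubsw, vpmaddwd, vpaddd).

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: a 64-element dot product of unsigned 8-bit activations
 * with signed 8-bit weights, the core loop of int8 inference. With AVX-512
 * VNNI (DL Boost) the multiply-and-accumulate is one vpdpbusd per 64 bytes;
 * pre-Cascade Lake Xeons need vpmaddubsw + vpmaddwd + vpaddd for the same work.
 * Build with something like: gcc -O2 -mavx512f -mavx512vnni vnni_dot.c */
static int32_t dot_u8_s8_64(const uint8_t *act, const int8_t *wgt)
{
    __m512i a   = _mm512_loadu_si512(act);   /* 64 unsigned 8-bit activations */
    __m512i w   = _mm512_loadu_si512(wgt);   /* 64 signed 8-bit weights       */
    __m512i acc = _mm512_setzero_si512();

    /* u8*s8 products, summed in groups of four into 16 lanes of 32-bit sums */
    acc = _mm512_dpbusd_epi32(acc, a, w);

    /* reduce the 16 partial sums to a single 32-bit result */
    return _mm512_reduce_add_epi32(acc);
}

int main(void)
{
    uint8_t act[64];
    int8_t  wgt[64];
    for (int i = 0; i < 64; i++) { act[i] = 1; wgt[i] = (int8_t)i; }

    /* 1*0 + 1*1 + ... + 1*63 = 2016 */
    printf("dot = %d\n", dot_u8_s8_64(act, wgt));
    return 0;
}

Fusing what used to be three dependent operations into one is where Intel's claimed int8 inference gains come from; whether that is enough to close the gap with a GPU is the larger question this conclusion is wrestling with.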

Moving on, when we saw this slide, we were left gasping for air.

This slide, boasting "leadership performance," also conveniently describes the markets where Intel is in a very vulnerable position, despite its current dominance in the datacenter. Although the slide focuses on the Intel Xeon 9200, this could easily be a slide for the high-end Platinum 8200 Xeons too.

Intel points towards HPC, AI, and high-density infrastructure to sell its massively expensive Xeons. But as the market shifts towards less traditional business intelligence and more machine learning, and more GPU-accelerated HPC, the market for high-end Xeons is shrinking. Intel has a very broad AI portfolio, from Movidius (edge inference) to the Nervana NNP (an ASIC for DL training), and they're going to need it to replace the Xeon in the segments where it is losing ground.

A midrange Xeon combined with a Nervana NNP coprocessor might work out well – and it would definitely be a better solution for most AI applications than a Xeon 9200. The same is true for HPC: we are willing to bet that you are much better off with midrange Xeons and a fast NVIDIA GPU. And depending on where AMD's EPYC 2 pricing goes, even that might end up being debatable...

Comments

  • C-4 - Monday, July 29, 2019

    It's interesting that optimizations did so much for the Intel processors (but relatively less for the AMD ones). Who made these optimizations? How much time was devoted to doing this? How close are the algorithms to being "fully optimized" for the AMD and nVidia chips?
  • quorm - Monday, July 29, 2019

    I believe these optimizations largely take advantage of AVX-512, and are therefore Intel-specific, as AMD processors do not incorporate this feature.
  • RSAUser - Monday, July 29, 2019

    As quorm said, I'd assume it's due to AVX-512 optimizations. The next generation of AMD Epyc CPUs should support it, and I am hoping for clock speeds closer to 3GHz on the 64-core chips, since it seems the new all-core ceiling is around the 4GHz mark for 16 cores.

    It will be an interesting Q3/Q4 for Intel in the server market this year.
  • SarahKerrigan - Monday, July 29, 2019

    Next generation? You mean Rome? Zen2 doesn't have any AVX512.
  • HStewart - Tuesday, July 30, 2019

    I believe AMD's AVX2 is dual 128-bit instead of 256-bit - so AVX-512 would probably be quad 128-bit.
  • jospoortvliet - Tuesday, July 30, 2019

    That's not really how it works, in the sense that you explicitly need to support the new instructions... and AMD doesn't (plan to, as far as we know).
  • Qasar - Tuesday, July 30, 2019

    From Wikipedia:
    " AVX2 is now fully supported, with an increase in execution unit width from 128-bit to 256-bit. "

    " AMD has increased the execution unit width from 128-bit to 256-bit, allowing for single-cycle AVX2 calculations, rather than cracking the calculation into two instructions and two cycles."
    which is from here: https://www.anandtech.com/show/14525/amd-zen-2-mic...

    Looks like AVX2 is single 256-bit :-)
  • name99 - Monday, July 29, 2019

    Regarding the limits of large batches: while this is true in principle, the maximum size of those batches can be very large, is hard to predict (at least right now), and there is ongoing work to increase the sizes. This link describes some of the issues and what's known:

    http://ai.googleblog.com/2019/03/measuring-limits-...

    I think Intel would be foolish to pin many hopes on the assumption that batch scaling will soon end the superior performance of GPUs and even more specialized hardware...
  • brunohassuna - Monday, July 29, 2019

    Some information about energy consumption would be very useful in comparisons like this.
  • ozzuneoj86 - Monday, July 29, 2019

    My first thought when clicking this article was how much more visibly complex CPUs have gotten in the past ~35 years.

    Compare the bottom of that Xeon to the bottom of a CLCC package 286:
    https://en.wikipedia.org/wiki/Intel_80286#/media/F...

    And that doesn't even touch the difference internally... 134,000 transistors to 8 billion, and from 16MHz to 4,000MHz. The mind boggles.
