CPU Performance: Intel's Own Claims

Before we get into the new AI benchmarks, let's first take a quick look at the usual CPU benchmarks and the performance claims Intel itself has published.

For this comparison we’ll focus on the second row – the first row compares the exorbitantly priced, 400W dual-die Intel Xeon Platinum 9282 to the much more reasonable and widely available Intel Xeon Platinum 8180, so it tells us little. The second row tells the real story: a slightly higher clock and slightly faster RAM result in a 3% (integer) to 5% (FP) performance increase over the first-generation Xeon Scalable parts. The larger boost in floating point performance is most likely because Intel's second-generation parts can use faster DDR4-2933 DIMMs and hence offer more bandwidth to the cores.
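To put that bandwidth argument in rough numbers, below is a minimal sketch of theoretical peak memory bandwidth per socket, assuming the usual six memory channels per socket, 8 bytes per transfer, and DIMMs running at their rated speed; sustained bandwidth in practice will be lower.

```python
# Theoretical peak DRAM bandwidth per socket: channels * transfer rate * 8 bytes.
# Assumes six memory channels per socket and DIMMs at their rated speed.
def peak_bw_gbs(mt_per_s: int, channels: int = 6, bytes_per_transfer: int = 8) -> float:
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

skylake_sp   = peak_bw_gbs(2666)  # first-generation Xeon Scalable: DDR4-2666
cascade_lake = peak_bw_gbs(2933)  # second-generation Xeon Scalable: DDR4-2933

print(f"DDR4-2666 x 6 channels: {skylake_sp:.0f} GB/s")    # ~128 GB/s
print(f"DDR4-2933 x 6 channels: {cascade_lake:.0f} GB/s")  # ~141 GB/s
print(f"Peak bandwidth uplift:  {cascade_lake / skylake_sp - 1:.1%}")  # ~10%
```

That roughly 10% extra peak bandwidth lines up with floating point gaining a bit more than integer, since FP workloads tend to be more bandwidth-hungry.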

The midrange SKUs get a bigger boost, as some of the x2xx Xeon Scalable parts get more cores and more L3 cache than their x1xx predecessors. For example, the Xeon Gold 6252 has 24 cores and 35.75 MB of L3, while the 6152 had 22 cores and 30.25 MB.
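Interestingly, it is not just the core count that goes up; the L3 per core also ticks up slightly. A quick check using the figures above:

```python
# L3 cache per core for the two mid-range SKUs quoted above.
for name, cores, l3_mb in [("Xeon Gold 6252", 24, 35.75), ("Xeon Gold 6152", 22, 30.25)]:
    print(f"{name}: {cores} cores, {l3_mb} MB L3 -> {l3_mb / cores:.2f} MB per core")
# Xeon Gold 6252: ~1.49 MB per core; Xeon Gold 6152: ~1.38 MB per core
```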

 

The comparison with AMD's EPYC 7601, however, deserves our attention, as there’s some interesting data here. Again, comparing a 400W, $50k chiplet CPU with a 180W, $4k one does not make any sense whatsoever, so we ignore the first line.

The Linpack numbers are not surprising: the more expensive Skylake SKUs add a 512-bit FMAC to the already existing dual 256-bit FMACs, offering up to 4 times more AVX throughput than AMD's EPYC. AMD's next generation will be a lot more competitive in this area, as each FP unit is now capable of 256-bit AVX instead of 128-bit.
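To see roughly where that "up to 4 times" figure comes from, here is a minimal sketch counting peak double-precision FLOPs per core per cycle, assuming pure FMA code and ignoring AVX-512 clock offsets and memory bottlenecks.

```python
# Peak double-precision FLOPs per core per cycle = FMA units * (vector width / 64) * 2,
# since each fused multiply-add counts as two FLOPs.
def dp_flops_per_cycle(fma_units: int, vector_bits: int) -> int:
    return fma_units * (vector_bits // 64) * 2

skylake_sp = dp_flops_per_cycle(2, 512)  # high-end Skylake/Cascade Lake: two 512-bit FMA units
epyc_7601  = dp_flops_per_cycle(2, 128)  # Zen 1: 128-bit FP datapaths
zen_2      = dp_flops_per_cycle(2, 256)  # Zen 2: FP units widened to 256-bit

print(skylake_sp, epyc_7601, zen_2)                            # 32, 8, 16
print(f"Skylake-SP vs EPYC 7601: {skylake_sp // epyc_7601}x")  # 4x
```

On paper, the wider Zen 2 FP units halve that per-clock gap, which is why we expect Linpack-style workloads to be far more competitive on the next-generation EPYC.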

The image classification results clearly show that Intel is trying to convince people that some AI applications should simply run on a CPU, no GPU needed. Well, at least for now… 

The fact that Intel claims that database performance is a lot better than on the EPYC is quite interesting, as we’ve previously pointed out that AMD's four NUMA dies on a chip do have drawbacks. Quoting our Xeon Skylake vs EPYC review:

Out of the box, the EPYC CPU is a rather mediocre transactional database CPU ... transactional databases will remain Intel territory for now.

In databases, cache (coherency) latency plays an important role. It will be interesting to see how well AMD has addressed this weakness in the second generation EPYC server chips. 
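As a quick side note, you can see how many NUMA nodes a CPU exposes to the operating system (one per socket on these Xeons, four per socket on first-generation EPYC) with a small Linux-only sketch that reads the standard sysfs entries; the paths below are the stock kernel interface, nothing vendor-specific.

```python
# Minimal sketch (Linux only): list NUMA nodes and the CPUs that belong to each,
# straight from sysfs. First-generation EPYC exposes four nodes per socket,
# while a Xeon normally shows one (unless sub-NUMA clustering is enabled).
from pathlib import Path

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"),
               key=lambda p: int(p.name[len("node"):]))
for node in nodes:
    cpus = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpus}")
```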

56 Comments

  • Drumsticks - Monday, July 29, 2019 - link

    It's an interesting, valuable take on the challenges of responding to many of the ML workloads of today with a general purpose CPU, thanks! A third party review of Intel's latest against Nvidia, and even throwing AMD into the mix, is pretty helpful as the two companies have been going at it for a while now.

    Intel has a lot of stuff going on that should make the next few years quite interesting. If they manage to follow through on the Nervana Coprocessor/NNP-I that Toms talked about, or on their discrete GPUs, they'll have a potent lineup. The execution definitely isn't guaranteed, especially given the software reliance these products will have, but if Intel really can manage to transform their product stack, and do it in the next few years, they'll be well on their way to competing in a much larger market, and defending their current one.

    OTOH, if they fail with all of them, it'll definitely be bad news for their future. They obviously won't go bankrupt (they'll continue to be larger than AMD for the foreseeable future), but it'll be exponentially harder if not impossible to get back into those markets they missed.
  • JohanAnandtech - Monday, July 29, 2019 - link

    Thanks! Indeed, the Nervana coprocessors are Intel's most promising technology in this area.
  • p1esk - Monday, July 29, 2019 - link

    No one in their right mind would think "gee, should I get CPU or GPU for my DL app?" More concerning for Intel should be the fact that I bought a Threadripper for my latest DL build.
  • Smell This - Monday, July 29, 2019 - link

    You got a Radeon VII?

    I'm thinking Intel, and to a lesser extent, nVidia, is waiting for the next shoe(s) to drop in **Big Compute** --- Cascade Lake has been left at the starting gate.

    An AMD Radeon Instinct 'cluster' on a dense specialized 'chiplet' server with hundreds of CPU cores/threads is where this train is headed ...
  • JohanAnandtech - Monday, July 29, 2019 - link

    Spinning up a GPU-based instance on Amazon is much more expensive than a CPU-based one, so for development purposes this question does get asked.
  • p1esk - Tuesday, July 30, 2019 - link

    Then you should be answering precisely that question: which instance should I spin up? Your article does not help with that because the CPU you test is more expensive than the GPU.
  • JohnnyClueless - Monday, July 29, 2019 - link

    Really surprised Intel, and to a lesser extent AMD, are even trying to fight this battle with nVidia on these terms. It’s a lot like going to a gun fight and developing an extra sharp samurai sword rather than bringing the usual switchblade knife. The sword may be awesome, but it’s always going to be the wrong tool for the gun fight.

    IMO, a better approach to capture market share in DL/AI/HPC might be to develop a low core count (by 2019 standards) CPU that excelled at sequential single threaded performance. Something like 6-10 GHz. That would provide a huge and tangible boost to any workload that is at least partially single core frequency limited, and that is most DL/AI/HPC workloads. Leave the parallel computing to chips and devices designed to excel at such workloads!
  • Eris_Floralia - Monday, July 29, 2019 - link

    Still living in early 2000s?
  • FunBunny2 - Monday, July 29, 2019 - link

    "Something like 6-10 GHz. "

    IIRC, all the chip makers tried to get near that, but couldn't. It's not nice to fool Mother Nature.
  • Santoval - Monday, July 29, 2019 - link

    "Something like 6-10 GHz."
    Google "Dennard scaling" (which ended in ~2005) to find out why this is impossible, at least with silicon based MOSFET transistors (including the GAA-FET based ones of the next decade). Wikipedia has a very informative page with multiple links to various sources for even more. The gist of the end of Dennard scaling is that single core clocks higher than ~5 GHz (at a reasonable TDP of up to ~100W) are explicitly forbidden at *any* node.

    When Dennard scaling ended -in combination with the slowing down of Moore's Law- there was another, related consequence: Koomey's law started to slow down. Koomey's law is all about power efficiency, i.e. how many computations you can extract from each Wh or kWh.

    Before the early 2000s the number of computations per unit of energy doubled on average every 1.57 years. In 2011 Koomey himself re-evaluated his law and got an average doubling of computations every 2.6 years for the previous decade, a substantial slowdown in the rate of efficiency gains (a quick back-of-the-envelope on these doubling times follows below). Since 2011 Koomey's law has obviously slowed down further.

    To make a long story short Moore's law puts a limit to the number of transistors we can fit in each mm^2, and that limit is not too far away. Dennard scaling once allowed us to raise clocks with each new node at the same TDP, and this is ancient history in computing terms. Koomey's law, finally, puts a limit to the power efficiency of our CPUs/GPUs, and this continues to slow down due to the slowing down of Moore's Law (when Moore's Law ends Koomey's law will also end, thus all three fundamental computing laws will be "dead").

    Unless we ditch silicon (and even CMOS transistors, if required) and adopt a new computing paradigm, we will neither have 6 - 10 GHz clocked CPUs in a couple of decades nor will we be able to speed up CPUs, GPUs and computers at all.
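To put the doubling times quoted above in perspective, here is a minimal back-of-the-envelope sketch, using only the 1.57-year and 2.6-year figures from the comment, of how much efficiency improvement each rate compounds to over a decade.

```python
# Compound efficiency improvement over a 10-year span for a given doubling period.
def improvement(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

print(f"Doubling every 1.57 years: ~{improvement(10, 1.57):.0f}x per decade")  # ~83x
print(f"Doubling every 2.6 years:  ~{improvement(10, 2.6):.0f}x per decade")   # ~14x
```

In other words, the pre-2000s rate would have delivered roughly 80x more computations per watt-hour every decade, versus roughly 14x at the post-2000s rate.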
