SPEC - Per-Core Performance under Load

A metric that is more interesting than isolated single-thread performance is per-thread performance in a fully loaded system. This is a measurement and benchmark figure of great interest to enterprises and customers running software or workloads that are licensed on a per-core basis, or workloads that simply need to meet a certain per-thread service-level agreement in terms of performance.

This has been a strong point of Intel's SKUs for some time now, even when the chips weren't competitive in terms of total throughput. With the new Ice Lake SP SKUs now more notably increasing total throughput, it'll be interesting to see the per-thread breakdown and resulting performance:

SPEC2017 Rate-N Estimated Per-Thread Performance (1S) 

Because the generational increase in total throughput is larger than the increase in core count, per-thread and per-core performance is higher with this generation. The Xeon 8380 posts +16.3% and +10.4% higher per-thread performance than the Xeon 8280 when using only one thread per core.

Interestingly, these figures drop to +8.2% and +7.4% when using both SMT threads per core. Intel explains this through better use of shared microarchitectural structures in the new Sunny Cove cores: improving 1T per-core performance essentially diminishes the additional yield from SMT.
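To make the arithmetic explicit, here is a minimal sketch of how such per-thread figures fall out of the aggregate rate-N scores and thread counts. The aggregate scores below are hypothetical placeholders rather than the measured results from the chart above; only the core counts of the two SKUs (40 vs. 28) are real.

# Minimal sketch of the per-thread arithmetic described above.
# The aggregate rate-N scores are HYPOTHETICAL placeholders, not the
# measured SPEC2017 results; only the core counts (40 for the Xeon 8380,
# 28 for the Xeon 8280) are real.

def per_thread(aggregate_score: float, cores: int, threads_per_core: int) -> float:
    """Estimated per-thread score = aggregate rate-N score / total threads."""
    return aggregate_score / (cores * threads_per_core)

# Hypothetical aggregate scores, for illustration only.
xeon_8380 = {"cores": 40, "score_1t": 320.0, "score_2t": 390.0}
xeon_8280 = {"cores": 28, "score_1t": 195.0, "score_2t": 245.0}

for label, tpc in (("1T/core", 1), ("2T/core (SMT)", 2)):
    key = "score_1t" if tpc == 1 else "score_2t"
    new = per_thread(xeon_8380[key], xeon_8380["cores"], tpc)
    old = per_thread(xeon_8280[key], xeon_8280["cores"], tpc)
    print(f"{label}: per-thread uplift = {100 * (new / old - 1):+.1f}%")

# SMT yield: how much extra throughput the second thread per core adds.
for name, chip in (("8380", xeon_8380), ("8280", xeon_8280)):
    print(f"SMT yield ({name}): {100 * (chip['score_2t'] / chip['score_1t'] - 1):+.1f}%")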

Generally, Intel is extremely competitive in this benchmark metric, and while AMD still easily beats it with its frequency-optimised parts, it's an advantage that should help Intel in SLA-centric workloads.

Comments

  • mode_13h - Thursday, April 8, 2021 - link

    Please tell me you did this test with an ICC released only a couple years ago, or else I feel embarrassed for you polluting this discussion with such irrelevant facts.
  • Oxford Guy - Sunday, April 11, 2021 - link

    It wasn't that long ago.

    If you want to increase the signal to noise ratio you should post something substantive.

    For instance, if you think ICC no longer produces faster Blender builds, why not post some evidence to that effect?
  • eastcoast_pete - Tuesday, April 6, 2021 - link

    This Xeon generation exists primarily because Intel had to come through and deliver something on 10 nm, after announcing the heck out of it for years. As actual processors they are not bad as far as Xeons are concerned, but they are clearly inferior to AMD's current EPYC line, especially on price/performance. Plus, we and the world know that the real update is around the corner within a year: Sapphire Rapids. That one promises a lot of performance uplift, not least by having PCIe 5.0 and at least the option of directly linked HBM as RAM. Lastly, had Intel managed to make this line compatible with the older socket (it's not), one could at least have used these Ice Lake Xeons to upgrade Cooper Lake systems via a CPU swap. As it stands, I don't quite see the value proposition, unless you're in an Intel shop and need capacity very badly right now.
  • Limadanilo2022 - Tuesday, April 6, 2021 - link

    Agreed. Both Ice Lake and Rocket Lake are just placeholders, attempts to ship something before the real improvements come with Sapphire Rapids and Alder Lake respectively... I'm one of those saying that AMD really needs competition right now so it doesn't get sloppy and become "2017-2020 Intel". I want to see both competing hard in the years ahead.
  • drothgery - Wednesday, April 7, 2021 - link

    Rocket Lake is a stopgap. Ice Lake (and Ice Lake SP) were just late; they would have been unquestioned market leaders if launched on time, and even now they mostly only run into problems when the competition is throwing way more cores at the problem.
  • AdrianBc - Wednesday, April 7, 2021 - link

    No, the Ice Lake Server cores have a much lower clock frequency and a much smaller L3 cache than Epyc 7xx3, so they are much slower core-for-core than AMD Milan for any general-purpose application, e.g. software compilation.

    The Ice Lake Server cores do have double the number of floating-point multipliers usable by AVX-512 programs, so they are faster (despite their clock-frequency deficit) for applications that are limited by FP multiplication throughput or that can use other special AVX-512 features, e.g. the instructions useful for machine learning.
  • Oxford Guy - Wednesday, April 7, 2021 - link

    'limited by FP multiplication throughput or that can use other special AVX-512 features, e.g. the instructions useful for machine learning.'

    How do they compare with Power?

    How do they compare with GPUs? (I realize that a GPU is very good at a much more limited palette of work types versus a general-purpose CPU. However... how much overlap there is between a GPU and AVX-512 is something at least non-experts will wonder about.)
  • AdrianBc - Thursday, April 8, 2021 - link

    The best GPUs from NVIDIA and AMD can provide between 3 and 4 times more performance per watt than the best Intel Xeons with AVX-512.

    However most GPUs are usable only in applications where low precision is appropriate, i.e. graphics and machine learning.

    The few GPUs that can be used for applications needing higher precision (e.g. NVIDIA A100 or Radeon Instinct) are extremely expensive, much more so than Xeons or Epycs, and individuals or small businesses have very little chance of being able to buy them.
  • mode_13h - Friday, April 9, 2021 - link

    Please re-check the price list. The top-end A100 does sell for a bit more than the $8K list price of the top Xeon and EPYC; the MI100, however, seems to be pretty close. Perf/$ is still wildly in favor of the GPUs.

    Unfortunately, if you're only looking at the GPUs' ordinary compute specs, you're missing their real point of differentiation, which is their low-precision tensor performance. That's far beyond what the CPUs can dream of!

    Trust there are good reasons why Intel scrapped Xeon Phi, after flogging it for 2 generations (plus a few prior unreleased iterations), and adopted a pure GPU approach to compute!
  • mode_13h - Thursday, April 8, 2021 - link

    "woulda, coulda, shoulda"

    Ice Lake SP is not even competitive with Rome. So, they missed their market window by quite a lot!
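As a rough illustration of the AVX-512 peak-FLOPS arithmetic raised in the thread above, here is a minimal sketch. All clock frequencies and the specific core counts are assumed round numbers for illustration, not the measured specs of any particular SKU.

# Rough sketch of peak FP64 throughput arithmetic behind the AVX-512 point
# discussed above. Clocks and core counts are ASSUMED values for
# illustration, not measured specs of any particular SKU.

def peak_fp64_gflops(cores: int, ghz: float, fma_units: int, fp64_lanes: int) -> float:
    """Peak GFLOPS = cores * clock * FMA units * FP64 lanes per unit * 2 ops (mul + add)."""
    return cores * ghz * fma_units * fp64_lanes * 2

# Hypothetical 40-core part with two 512-bit FMA units (8 FP64 lanes each)
# at a lower all-core clock...
wide_vector_part = peak_fp64_gflops(cores=40, ghz=2.3, fma_units=2, fp64_lanes=8)
# ...versus a hypothetical 64-core part with two 256-bit FMA units (4 FP64
# lanes each) at a higher clock.
narrow_vector_part = peak_fp64_gflops(cores=64, ghz=2.45, fma_units=2, fp64_lanes=4)

print(f"512-bit FMA part: {wide_vector_part:.0f} GFLOPS peak FP64")
print(f"256-bit FMA part: {narrow_vector_part:.0f} GFLOPS peak FP64")
# Under these assumptions the wider vector units more than offset the clock
# and core-count deficit for FP-multiply-bound code.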
