SPEC - Single-Threaded Performance

Starting off with SPECint2017, we’re using the single-instance runs of the rate variants of the benchmarks.

As usual, because these are not officially submitted scores to SPEC, we’re labelling the results as “estimates” as per the SPEC rules and license.

We compile the binaries with GCC 10.2 on their respective platforms, with simple -Ofast optimisation flags and the relevant architecture and machine tuning flags (-march/-mtune=neoverse-n1 ; -march/-mtune=skylake-avx512 ; -march/-mtune=znver2).
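As a point of reference, here’s a minimal sketch of what those per-platform invocations might look like; the source and output names are placeholders rather than the actual SPEC build setup, and for the Altra we use GCC’s AArch64 -mcpu spelling, which combines the -march/-mtune targeting listed above:

```python
# Hypothetical sketch of the GCC 10.2 command lines implied by the flags above.
# File names are placeholders, not the real SPEC harness configuration.
FLAGS = {
    "Altra Q80-33": "-Ofast -mcpu=neoverse-n1",  # -mcpu subsumes -march/-mtune on AArch64
    "Xeon 8280":    "-Ofast -march=skylake-avx512 -mtune=skylake-avx512",
    "EPYC 7742":    "-Ofast -march=znver2 -mtune=znver2",
}

for platform, flags in FLAGS.items():
    print(f"{platform}: gcc {flags} benchmark.c -o benchmark")
```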

While single-threaded performance isn’t a very meaningful or relevant measure for such large enterprise systems, given that these sockets will rarely ever be loaded with just a single thread, it’s still an interesting figure academically, and for the few use-cases which would have such performance bottlenecks. It’s to be remembered that the EPYC and Xeon systems will clock up to 3.4GHz and 4GHz respectively in such situations, while the Ampere Altra still maintains its 3.3GHz maximum speed.

SPECint2017 Rate-1 Estimated Scores

In SPECint2017, the Altra system performs admirably and is generally able to match the performance of its counterparts, winning some workloads while losing others.

SPECfp2017 Rate-1 Estimated Scores

In SPECfp2017 the Neoverse-N1 cores more generally fall behind their x86 counterparts. What’s particularly odd to see is the vast discrepancy in 507.cactuBSSN_r, where the Altra posts less than half the performance of the x86 cores, which is all the more puzzling given that the Graviton2 had scored 3.81 in the same test. The workload has the highest L1D miss rate in the SPEC suite, so it’s possible that the neutered prefetchers on the Altra system play a more substantial role in this workload.

SPEC2017 Rate-1 Estimated Total

The Altra Q80-33 ends up performing extremely well and competitively against the AMD EPYC 7742 and Intel Xeon 8280, actually beating the EPYC in SPECint, although it loses by a larger margin in SPECfp. The Xeon 8280 still holds the crown in this test thanks to its ability to boost up to 4GHz with two cores active, clocking down to 3.8, 3.7, 3.5 and 3.3GHz beyond 2, 4, 8 and 20 active cores respectively.
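As a rough illustration of that behaviour, here’s a small sketch of the boost ladder implied by the figures above; the thresholds are taken from the text, not from an official turbo table:

```python
def xeon_8280_boost_ghz(active_cores: int) -> float:
    """Approximate Xeon 8280 boost clock (GHz) for a given active-core count,
    based on the ladder described in the text above."""
    ladder = [(2, 4.0), (4, 3.8), (8, 3.7), (20, 3.5)]
    for max_cores, freq in ladder:
        if active_cores <= max_cores:
            return freq
    return 3.3  # assumed clock beyond 20 active cores

print(xeon_8280_boost_ghz(1))   # 4.0 - the single-threaded case measured here
print(xeon_8280_boost_ghz(28))  # 3.3 - all 28 cores active
```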

The Altra showcases a massive 52% performance lead over the Graviton2 in SPECint, which is actually beyond the 32% difference one would expect from the clock frequencies alone (3.3GHz versus 2.5GHz). On the other hand, the Altra’s SPECfp figures are only 15% ahead. The prefetchers are really the only thing that comes to mind in regard to these differences, with the only other difference being that the Graviton2 figures were collected earlier in the year with GCC 9.3. The Altra figures are definitely more reliable, as we actually have our hands on the system here.
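To make the clock-scaling argument explicit, here’s the quick back-of-the-envelope math, using the SPECint/SPECfp uplift figures quoted above:

```python
# Expected uplift from clock frequency alone versus the measured uplifts.
altra_ghz, graviton2_ghz = 3.3, 2.5
expected_uplift = altra_ghz / graviton2_ghz - 1   # ~0.32, i.e. +32% from clocks alone
measured_int, measured_fp = 0.52, 0.15            # +52% SPECint, +15% SPECfp

print(f"expected from clocks:        {expected_uplift:.0%}")                    # 32%
print(f"SPECint surplus over clocks: {measured_int - expected_uplift:+.0%}")    # +20%
print(f"SPECfp gap versus clocks:    {measured_fp - expected_uplift:+.0%}")     # -17%
```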

While on the AMD system the move from NPS1 to NPS4 hardly changes performance, limiting the Altra Q80-33 from a monolithic setup to a quadrant setup does incur a small performance penalty, which is unsurprising as we’re cutting the L3 down to a quarter of its size for single-threaded workloads. That in itself is a very interesting experiment, as we haven’t been able to make such a change on any prior system.

Comments

  • mode_13h - Monday, December 21, 2020 - link

    I agree that people should do a sanity-check on their numbers.
  • Spunjji - Monday, December 21, 2020 - link

    "This thing is quite big."
    Package size is not die size.

    If Nvidia can pump out dies more than twice the area on an inferior process and still get some perfect dies, I suspect they'll have no issues whatsoever with yield on TSMC 7nm at this stage - especially with the ability to sell lower-core-count variants.
  • Samus - Sunday, December 20, 2020 - link

    Die harvested models with fewer cores sell for only 5-10% less. So I'm not sure if that means yields are really good, or really bad. They seem to be pushing the 80 core models pretty hard since so many are being offered.

    Then again, it depends on what we define as yield quality. Defects seem to be low, but binning could be another issue as only two models seem to hit 3.3GHz and at incredibly high power budgets.
  • Spunjji - Monday, December 21, 2020 - link

    3.3GHz is about where that architecture tops out - I'm not sure that tells us much about yield. To me, the pricing seems to indicate that they aren't expecting to have to shift a ton of the lower-core-count die-harvested models.
  • damianrobertjones - Friday, December 18, 2020 - link

    Assuming that Intel just wants to milk customers forever, just like nVidia/phone oems do, they should quickly bridge the performance gap. They'll just have to stop being lazy and actually provide us with more than a drip fed speed increase.
  • fishingbait - Friday, December 18, 2020 - link

    An Apple guy I see. Remember that up until a couple of months ago Apple was charging you $1000 for a laptop with a dual core 1.1GHz chip.

    The "phone OEMs" finally have a core that can somewhat compete with Apple's Firestorm. It will take them a couple iterations to perfect it but they are on the right path. As for Intel, another story entirely. The latest word is that their 10nm process isn't going to well and they have hit yet another delay for 7nm. They may hit up Samsung's foundries just to get a product out (due to TSMC not having any capacity until December 21). So while their issues are far more significant than those for Android phones, it isn't due to their lazily milking customers. They have real tech issues to deal with, issues of the sort that Apple and AMD don't have to worry about because they lack the capability and expertise required to make their own chips.
  • mode_13h - Sunday, December 20, 2020 - link

    > because they lack the hubris to think they should try to make their own chips.

    Fixed that for you.
  • Spunjji - Monday, December 21, 2020 - link

    "So while their issues are far more significant than those for Android phones, it isn't due to their lazily milking customers."

    Correct, their technical issues are entirely separate from their strategy of lazily milking customers.
  • Ridlo - Friday, December 18, 2020 - link

    While no Blender test is indeed a bummer, did you guys try testing with other ray tracing applications (LuxMark, C-Ray, Povray, etc.)?
  • Andrei Frumusanu - Friday, December 18, 2020 - link

    I didn't have a standalone test, but Povray is part of SPEC.
