CPU Tests: SPEC

SPEC2017 and SPEC2006 are standardized test suites used to probe overall performance across different systems, architectures, microarchitectures, and setups. The code has to be compiled, and then the results can be submitted to an online database for comparison. The suites cover a range of integer and floating point workloads, and can be heavily optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.

We run the tests in a harness built through Windows Subsystem for Linux, developed by our own Andrei Frumusanu. WSL has some odd quirks, with one test not running due to WSL's fixed stack size, but it is good enough for like-for-like testing. SPEC2006 is deprecated in favor of 2017, but remains an interesting comparison point in our data. Because our scores aren't official submissions, as per SPEC guidelines we have to declare them as internal estimates on our part.
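For the curious, the WSL quirk comes down to the stack resource limit, which WSL pins at a fixed size. A quick sketch of how to query that limit on a Linux system, illustrative only and not part of our harness:

/* stacklimit.c - query the stack soft limit that trips up one test
   under WSL. Illustrative sketch only, not part of our SPEC harness. */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return 1;
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("stack soft limit: unlimited\n");
    else
        printf("stack soft limit: %llu KiB\n",
               (unsigned long long)(rl.rlim_cur / 1024));
    return 0;
}

On a regular Linux install this limit can usually be raised; because WSL fixes it, the affected test simply does not run.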

For compilers, we use LLVM for both the C/C++ and Fortran tests, with the Flang compiler handling Fortran. The rationale for using LLVM over GCC is better cross-platform comparisons to platforms that only have LLVM support, plus future articles where we'll investigate this aspect more. We're not considering closed-source compilers such as MSVC or ICC.

clang version 10.0.0-svn350067-1~exp1+0~20181226174230.701~1.gbp6019f2 (trunk)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2

Our compiler flags are straightforward, with a basic -Ofast and the relevant ISA switches to allow for AVX2 instructions. We decided to build our SPEC binaries with AVX2, which sets Haswell as the oldest generation we can test before the binaries fall over. This also means we don't have AVX-512 binaries, primarily because getting the best performance requires the AVX-512 intrinsics to be hand-packed by a proper expert, as with our AVX-512 benchmark. All of the major vendors, AMD, Intel, and Arm, support the way in which we are testing SPEC.
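To make those ISA switches concrete, here is a hypothetical kernel (not a SPEC workload) that clang will auto-vectorize into 256-bit FMA code under the flags above; with only -march=x86-64 and no AVX2/FMA switches, the same loop falls back to 128-bit SSE code:

/* saxpy.c - hypothetical example, not from SPEC.
   Build: clang -Ofast -fomit-frame-pointer -march=x86-64
                -mtune=core-avx2 -mfma -mavx -mavx2 -c saxpy.c
   Under these flags the loop compiles to 256-bit vfmadd
   instructions; without -mavx2/-mfma it stays on SSE. */
void saxpy(float a, const float *x, float *restrict y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];  /* multiply-add, a candidate for FMA contraction */
}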

To note, the requirements of the SPEC licence state that any benchmark results from SPEC have to be labelled 'estimated' until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers; however, it is quite over the top for what we do as reviewers.

For each of the SPEC targets we are running (SPEC2006 rate-1, SPEC2017 rate-1, and SPEC2017 rate-N), rather than publish all the separate test data in our reviews, we are going to condense it down into a few interesting data points. The full per-test values are in our benchmark database.
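As a reminder, the condensed number SPEC reports is the geometric mean of the per-test ratios, not an arithmetic average. A minimal sketch of the calculation; the four ratios below are made up for illustration, not our measured results:

/* geomean.c - sketch of the geometric mean used to condense SPEC scores.
   The ratios below are hypothetical, not review data. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double ratios[] = { 41.2, 55.7, 38.9, 60.3 };  /* hypothetical per-test ratios */
    int n = sizeof ratios / sizeof ratios[0];
    double log_sum = 0.0;
    for (int i = 0; i < n; i++)
        log_sum += log(ratios[i]);                 /* summing logs avoids overflow */
    printf("geomean: %.2f\n", exp(log_sum / n));   /* exp of the mean log */
    return 0;
}

Compile with the math library linked in (clang geomean.c -lm).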

(9-0a) SPEC2006 1T Geomean Total
(9-0b) SPEC2017 1T Geomean Total

Single-threaded performance is very much what we expected, with the consumer processors out in the lead and no real major differences between TR and TR Pro.

(9-0c) SPEC2017 nT Geomean Total

That changes when we move into full multi-threaded mode. The extra bandwidth of TR Pro is clear to see, even on the 32C/64T model. In this test we're using 128 GB of memory for all TR and TR Pro processors, and we're seeing a small bump in 64C/64T mode, perhaps due to the increased memory capacity per thread and memory bandwidth per thread. The 3990X 64C/128T run kept failing for an odd reason, so we do not have a score for that test.

Comments

  • Thanny - Thursday, July 15, 2021 - link

    Your Blender results for the 3960X are off by a lot. I rendered the same scene with mine in 173 seconds. That's with PBO enabled, so it'll be a bit faster than stock, but not 20% faster.

    My guess is that you didn't warm Blender up properly first. When starting a render for the first time, it has to do some setup work, which is timed with the rest of the render, but only needs to be done once.

    I'd expect a stock 3960X to be in the neighborhood of 180 seconds.
  • 29a - Thursday, July 15, 2021 - link

    "Firstly, because we need an AI benchmark, and a bad one is still better than not having one at all."

    I 100% disagree with this statement. Bad data is worse than no data at all.
  • arashi - Saturday, July 17, 2021 - link

    But but but what about the few (<10) clicks they'd lose for not having lousy CPU based AI benchmarks!
  • willis936 - Thursday, July 15, 2021 - link

    Availability of entry level ECC CPUs (AMD pro and Intel Xeon E-2200/W) is really low. It's unfortunate. People don't have the cash for $10k systems right now but the need for ECC has only gone up. I hope for more editorials calling for mainstream ECC.
  • Threska - Thursday, July 15, 2021 - link

    Linus is mainstream enough.

    https://arstechnica.com/gadgets/2021/01/linus-torv...
  • Mikewind Dale - Thursday, July 15, 2021 - link

    At least mainstream desktop Ryzens tend to support ECC, even if not officially validated.

    What frustrates me is that laptop Ryzens don't support ECC at all - not even the Ryzen Pros.

    Every Ryzen Pro laptop I've seen lacks ECC support, and some of them even have non-ECC memory soldered to the motherboard.

    If you want an ECC laptop, it appears you have literally no choice at all but a Xeon laptop for $5,000.
  • mode_13h - Friday, July 16, 2021 - link

    > laptop Ryzens don't support ECC at all - not even the Ryzen Pros.

    It probably depends on the laptop. If its motherboard doesn't have the extra traces for the ECC bits, then of course it won't.
  • Mikewind Dale - Saturday, July 17, 2021 - link

    It depends on the laptop, yes. But I haven't found a single Ryzen Pro laptop from a single company that supports ECC.

    AMD's website ("Where to Buy AMD Ryzen™ PRO Powered Laptops") lists HP ProBook, HP EliteBook, and Lenovo Thinkpad. But none of them support ECC.
  • mode_13h - Saturday, July 17, 2021 - link

    > I haven't found a single Ryzen Pro laptop from a single company that supports ECC.

    Thanks for the datapoint. Maybe someone will buck the trend, but it's also possible they judged the laptop users who really care about ECC would also prefer a dGPU and therefore won't be using APUs.
  • mode_13h - Friday, July 16, 2021 - link

    > I hope for more editorials calling for mainstream ECC.

    You'll probably just get inferior in-band ECC.
