CPU Tests: SPEC2006 1T, SPEC2017 1T, SPEC2017 nT

SPEC2017 and SPEC2006 are series of standardized tests used to probe the overall performance between different systems, different architectures, different microarchitectures, and setups. The code has to be compiled, and then the results can be submitted to an online database for comparison. The suites cover a range of integer and floating-point workloads, and can be heavily optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.

We run the tests in a harness built through Windows Subsystem for Linux, developed by our own Andrei Frumusanu. WSL has some odd quirks, with one test not running due to a WSL fixed stack size, but for like-for-like testing it is good enough. SPEC2006 is deprecated in favor of 2017, but remains an interesting comparison point in our data. Because our scores aren't official submissions, as per SPEC guidelines we have to declare them as internal estimates on our part.

For compilers, we use LLVM both for C/C++ and Fortran tests, with the Flang compiler covering Fortran. The rationale for using LLVM over GCC is better cross-platform comparisons to platforms that only have LLVM support, as well as future articles where we'll investigate this aspect more. We're not considering closed-source compilers such as MSVC or ICC.

clang version 10.0.0
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git 24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2
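
As a concrete illustration of how these are applied (the file names below are our own placeholders, not part of the suite), a C test would be compiled along the lines of:

clang -Ofast -fomit-frame-pointer -march=x86-64 -mtune=core-avx2 -mfma -mavx -mavx2 -o test_binary test_source.c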

Our compiler flags are straightforward, with a basic -Ofast and the relevant ISA switches to allow for AVX2 instructions. We decided to build our SPEC binaries with AVX2 as the baseline, which makes Haswell the oldest CPU we can test before the binaries fail to run. This also means we don't have AVX-512 binaries, primarily because getting the best performance out of AVX-512 requires the intrinsics to be hand-packed by a proper expert, as with our AVX-512 benchmark.

To note, the requirements of the SPEC license state that any benchmark results from SPEC have to be labelled 'estimated' until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers; however, it is quite over the top for what we do as reviewers.

For each of the SPEC targets we are running (SPEC2006 rate-1, SPEC2017 speed-1, and SPEC2017 speed-N), rather than publish all of the separate test data in our reviews, we are going to condense it down into individual data points. The main three will be the geometric means from each of the three suites.

(9-0a) SPEC2006 1T Geomean Total

(9-0b) SPEC2017 1T Geomean Total

(9-0c) SPEC2017 nT Geomean Total

A fourth metric will be a scaling metric, indicating how well the SPEC2017 nT result scales relative to the 1T result, divided by the number of cores on the chip (a short sketch of the arithmetic follows below).

(9-0d) SPEC2017 MP Scaling
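
To make the arithmetic behind these summary numbers concrete, below is a minimal Python sketch; it is our own illustration rather than the actual test harness, and every figure in it is hypothetical.

# A minimal sketch of the summary arithmetic; variable names and
# scores are illustrative only.
from math import prod

def geomean(scores):
    # Geometric mean of the per-test scores; a test without a score
    # (e.g. wrf_r under WSL) is excluded rather than counted as zero.
    valid = [s for s in scores if s is not None and s > 0]
    return prod(valid) ** (1.0 / len(valid))

def mp_scaling(nt_geomean, one_t_geomean, cores):
    # (9-0d): nT-to-1T scaling normalized by core count;
    # 1.0 would be perfect linear scaling.
    return (nt_geomean / one_t_geomean) / cores

# Hypothetical 8-core chip:
spec2017_1t = geomean([8.1, 9.3, None, 7.6])     # None = test did not run
spec2017_nt = geomean([52.0, 61.5, None, 48.8])
print(mp_scaling(spec2017_nt, spec2017_1t, cores=8))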

The per-test data will be a part of Bench.

Experienced users should be aware that 521.wrf_r, part of the SPEC2017 suite, does not work in WSL due to its fixed stack size. It is expected to work under WSL2; however, we will cross that bridge when we get to it. For now, we're not giving wrf_r a score, which, because we are taking the geometric mean rather than the average, should not affect the overall results too much.
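
For reference, on a regular Linux installation a stack-hungry test like wrf_r is normally accommodated by lifting the limit before the run:

ulimit -s unlimited

As far as we can tell, WSL1 does not honor this call, which is why the test falls over there.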
