CPU Tests: SPEC

SPEC2017 and SPEC2006 are series of standardized tests used to probe the overall performance of different systems, architectures, microarchitectures, and setups. The code has to be compiled, and then the results can be submitted to an online database for comparison. The suites cover a range of integer and floating point workloads, and the benchmarks can be heavily optimized for each CPU, so it is important to check how they are compiled and run.

We run the tests in a harness built through Windows Subsystem for Linux, developed by our own Andrei Frumusanu. WSL has some odd quirks, with one test not running due to WSL's fixed stack size, but for like-for-like testing it is good enough. SPEC2006 is deprecated in favor of SPEC2017, but it remains an interesting comparison point in our data. Because our scores are not official submissions, per SPEC guidelines we have to declare them as internal estimates on our part.

For compilers, we use LLVM for both the C/C++ and Fortran tests; for Fortran specifically we use the Flang compiler. The rationale for using LLVM over GCC is better cross-platform comparisons to platforms that only have LLVM support, as well as future articles where we will investigate this aspect further. We are not considering closed-source compilers such as MSVC or ICC.

clang version 8.0.0-svn350067-1~exp1+0~20181226174230.701~1.gbp6019f2 (trunk)
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git
 24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2

Our compiler flags are straightforward, with a basic -Ofast plus the relevant ISA switches to allow for AVX2 instructions. We decided to build our SPEC binaries with AVX2, which sets Haswell as the oldest CPU we can go back to before the testing falls over. It also means we do not have AVX-512 binaries, primarily because getting the best performance out of AVX-512 requires the intrinsics to be hand-packed by a proper expert, as with our dedicated AVX-512 benchmark. All of the major vendors (AMD, Intel, and Arm) support the way in which we are testing SPEC.

To note, the requirements of the SPEC license state that any benchmark results from SPEC have to be labelled 'estimated' until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers, however it is quite over the top for what we do as reviewers.

For each of the SPEC targets we run, SPEC2006 rate-1, SPEC2017 rate-1, and SPEC2017 rate-N, rather than publish all the separate test data in our reviews, we condense it down into a few interesting data points. The full per-test values are in our benchmark database.

[Charts: SPEC2006 1T Geomean Total; SPEC2017 1T Geomean Total; SPEC2017 nT Geomean Total]
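The condensation from per-test results down to a single data point is a plain geometric mean over the sub-test scores. A minimal sketch in Python, using entirely hypothetical per-test ratios for illustration:

```python
from math import prod

def geomean(scores):
    """Geometric mean, as used to condense per-test SPEC scores
    into a single summary figure."""
    return prod(scores) ** (1.0 / len(scores))

# Hypothetical per-test ratios, not real measurements
ratios = [40.0, 25.0, 64.0]
print(round(geomean(ratios), 2))  # prints 40.0
```

The geometric mean is preferred over an arithmetic mean here because the sub-test scores are ratios against a reference machine, so no single outlier test can dominate the summary figure.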

There are some specific tests that the eDRAM gets a sizeable boost in performance for, such as 471.omnetpp in SPEC2006 (+23% over 6700K). The main gains are in SPEC2017 nT, in 510.parest_r (+49%), 519.lbm_r (+63%), and 554.roms_r (+46%). However, the lower power and lower frequency still hamper the processors in a lot of scenarios. 

Comments

  • dotjaz - Saturday, November 7, 2020 - link

    *serves
  • Samus - Monday, November 9, 2020 - link

    That's not true. There were numerous requests from OEM's for Intel to make iGPU-enabled XEONs for the specific purpose of QuickSync, so there are indeed various applications other than ML where an iGPU in a server environment is desirable.
  • erikvanvelzen - Saturday, November 7, 2020 - link

    Ever since the Pentium 4 Extreme Edition I've wondered why intel does not permanently offer a top product with a large L3 or L4 cache.
  • plonk420 - Monday, November 2, 2020 - link

    been waiting for this to happen ...since the Fury/Fury X. would gladly pay the $230ish they want for a 6 core Zen 2 APU but even with "just" 4c8t + Vega 8 (but preferably 11) + HBM(2)
  • ichaya - Monday, November 2, 2020 - link

    With the RDNA2 infinitycache announcement and the increase (~2x) in effective BW from it, and we know Zen has always done better with more memory BW, so it's just dead obvious now that an L4 cache on the I/O die would increase performance (especially in workloads like gaming) more than it's power cost.

    I really should have said waiting since Zen 2, since that was the I/O die was introduced, but I'll settle for eDRAM or SRAM L4 on the I/O die as that would be easier than a CCX with HBM2 as cache. Some HBM2 APUS would be nice though.
  • throAU - Monday, November 2, 2020 - link

    I think very soon for consumer focused parts, on package HBM won't necessarily be cache, but they'll be main memory. End users don't need massive amounts of RAM in end user devices, especially as more workload moves to cloud.

    8 GB of HBM would be enough for the majority of end user devices for some time to come and using only HBM instead of some multi-level caching architecture would be simpler - and much smaller.
  • Spunjji - Monday, November 2, 2020 - link

    Really liking the level of detail from this new format! Fascinated to see how the Broadwell secret sauce has stood up to the test of time, too.

    Hopefully the new gaming CPU benchmarks will finally put most of the benchmark bitching to bed - for sure it goes to show (at quite some length) that the ranking under artificially CPU-limited scenarios doesn't really correspond to the ranking in a realistic scenario, where the CPU is one constraint amongst many.

    Good work all-round 👍👍
  • lemurbutton - Monday, November 2, 2020 - link

    Anandtech: We're going to review a product from 2015 but we're not going to review the RTX 3080, RTX 3090, nor the RTX 3070.

    If I were management, I'd fire every one of the editors.
  • e36Jeff - Monday, November 2, 2020 - link

    The guy that tests GPUs was affected by the Cali wildfires. Ian wouldn't be writing a GPU review regardless, he does CPUs.
