SPEC2017 Single-Threaded Results

SPEC2017 is a series of standardized tests used to probe overall performance across different systems, architectures, microarchitectures, and setups. The code has to be compiled locally, and the results can then be submitted to an online database for comparison. It covers a range of integer and floating-point workloads, and can be heavily optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.
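For context on how the overall numbers later in this piece are derived: a SPEC score is the geometric mean of per-benchmark ratios of a reference machine's runtime to the measured runtime. A minimal sketch in Python, using made-up runtimes purely for illustration:

```python
from math import prod

def spec_rate1_score(results):
    """Estimate a SPEC-style overall score: the geometric mean of
    per-benchmark ratios (reference runtime / measured runtime)."""
    ratios = [ref / measured for ref, measured in results.values()]
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical runtimes in seconds: {benchmark: (reference, measured)}
example = {
    "520.omnetpp_r": (1700.0, 340.0),   # ratio 5.0
    "525.x264_r":    (1750.0, 175.0),   # ratio 10.0
}
print(round(spec_rate1_score(example), 2))  # geomean of 5.0 and 10.0 -> 7.07
```

A higher score therefore means the machine was proportionally faster than the reference across the suite, with no single benchmark dominating the result.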

We run the tests in a harness built through Windows Subsystem for Linux (WSL), developed by Andrei Frumusanu. WSL has some odd quirks, with one test not running due to WSL's fixed stack size, but for like-for-like testing it is good enough. Because our scores aren't official submissions, per SPEC guidelines we have to declare them as internal estimates on our part.

For compilers, we use LLVM for both the C/C++ and Fortran tests; for Fortran specifically, we're using the Flang compiler. The rationale for using LLVM over GCC is better cross-platform comparisons against platforms that only have LLVM support, as well as future articles where we'll investigate this aspect more. We're not considering closed-source compilers such as MSVC or ICC.

clang version 10.0.0
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git
 24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2

Our compiler flags are straightforward, with a basic -Ofast and the relevant ISA switches to allow for AVX2 instructions.

To note, the requirements of the SPEC license state that any benchmark results from SPEC have to be labeled 'estimated' until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers; however, it is quite over the top for what we do as reviewers.

SPECint2017 Rate-1 Estimated Scores

As we typically do when Intel or AMD releases a new generation, we compare both single and multi-threaded improvements using the SPEC2017 benchmark. Starting with SPECint2017 single-threaded performance, we can see very little benefit from opting for Intel's Core i9-14900K in most of the tests when compared against the previous generation's Core i9-13900K. The only test where we saw a noticeable bump in performance was 520.omnetpp_r, which simulates discrete events on a large 10 Gigabit Ethernet network. There was a bump of around 23% in ST performance in this test, likely helped by the increased ST clock speed of 6.0 GHz, up 200 MHz from the 5.8 GHz ST turbo on the Core i9-13900K.
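As a sanity check on the frequency component of that result, the clock uplift alone works out to only about 3.4%:

```python
clock_14900k = 6.0  # GHz, peak single-thread turbo
clock_13900k = 5.8  # GHz, peak single-thread turbo
uplift = clock_14900k / clock_13900k - 1.0
print(f"clock uplift: {uplift:.1%}")  # clock uplift: 3.4%

# The ~23% omnetpp_r gain is much larger than pure frequency scaling
# would predict, so other platform factors (omnetpp_r tends to be
# sensitive to the memory subsystem) likely contribute as well.
```

In other words, frequency alone does not fully explain the omnetpp_r outlier, which is consistent with the rest of the suite showing near-zero gains at the same clock uplift.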

SPECfp2017 Rate-1 Estimated Scores

Onto the second half of the SPEC2017 1T tests, the SPECfp2017 suite, and again we're seeing very marginal differences in performance; certainly nothing that represents a paradigm shift in ST performance. Comparing the 14th Gen and 13th Gen Core series directly, there isn't anything new architecturally other than an increase in clock speeds, and in a single-threaded scenario with the Core i9 flagships there is little to no difference in workload and application performance. Even the extra 200 MHz of maximum turbo clock speed wasn't enough to produce a significant jump in performance.

Comments

  • DabuXian - Tuesday, October 17, 2023 - link

    so basically a mere 6% better Cinebench MT score at the cost of almost 100 extra watts. I dunno in what universe would anyone want this instead of a 7950x.
  • yankeeDDL - Tuesday, October 17, 2023 - link

    At platform level it is over 200W difference. Impressive.
    And I agree, nobody in their right mind should get Intel over AMD, unless they have a very specific workload in which that 6% makes a difference worth hundreds/thousands of dollars in electricity per year.
  • schujj07 - Tuesday, October 17, 2023 - link

    If you have a workload like that then you run Epyc or Threadripper as the task is probably VERY threaded.
  • shaolin95 - Thursday, December 21, 2023 - link

    😆😆😆😆😆😆 AMDrip fanboys are hilarious and delusional
    And what bullshit connect about the electricity bill per year... thousands.. really???? Dang kid, you are hilariously sad
  • lemurbutton - Tuesday, October 17, 2023 - link

    Who cares about Cinebench MT? It's a benchmark for niche software in a niche.
  • powerarmour - Wednesday, October 18, 2023 - link

    Wouldn't buy the 7950X either, not interested in any CPU that draws >200W unless I'm building a HEDT workstation.
  • shabby - Tuesday, October 17, 2023 - link

    Lol @ the power usage, this will make a nice heater this winter.
  • yankeeDDL - Tuesday, October 17, 2023 - link

    I find it amazing. It takes more than 200W MORE to beat the 7950.
    The difference in efficiency is unbelievable.
    Buying Intel today still makes no sense unless that extra 5-10% in some specific benchmark really make a huge difference. Otherwise it'll cost you dearly in electricity.
  • bug77 - Thursday, October 19, 2023 - link

    While Anand has a policy of testing things out-of-the-box, which is fine, it is well known ADL and RPL can be power constrained to something like 125W max, while losing performance in the single digits range.
    It would be really useful if we had a follow up article looking into that.
  • yankeeDDL - Tuesday, October 17, 2023 - link

    So, 6% faster than previous gen, a bit (10%?) faster than AMD's 7950.
    Consuming over 200W *more* than the Ryzen 7950.
    I'd say Intel's power efficiency is still almost half that of the ryzen. It's amazing how far behind they are.
