SPEC CPU - Single-Threaded Performance

SPEC2017 and SPEC2006 are series of standardized tests used to probe the overall performance between different systems, different architectures, different microarchitectures, and setups. The code has to be compiled, and then the results can be submitted to an online database for comparison. It covers a range of integer and floating point workloads, and results can be heavily optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.

We run the tests in a harness built through Windows Subsystem for Linux, developed by our own Andrei Frumusanu. WSL has some odd quirks, with one test failing to run due to WSL's fixed stack size, but it is good enough for like-for-like testing. SPEC2006 is deprecated in favor of SPEC2017, but remains an interesting comparison point in our data. Because our scores aren't official submissions, per SPEC guidelines we have to declare them as internal estimates on our part.

For compilers, we use LLVM for both the C/C++ and Fortran tests; for Fortran specifically we're using the LLVM-based Flang compiler. The rationale for using LLVM over GCC is better cross-platform comparisons to platforms that only have LLVM support, as well as future articles where we'll investigate this aspect more. We're not considering closed-source compilers such as MSVC or ICC.

clang version 10.0.0
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git 24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2

Our compiler flags are straightforward, with a basic -Ofast and the relevant ISA switches to allow for AVX2 instructions. We decided to build our SPEC binaries with AVX2, which makes Haswell the oldest generation we can go back to before the testing falls over. This also means we don't have AVX-512 binaries, primarily because getting the best performance out of AVX-512 requires the intrinsics to be hand-tuned by a proper expert, as with our AVX-512 benchmark.
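For reference, the flags above slot into a SPEC config file along these lines. This is a sketch only: the section labels and layout are illustrative, not our exact harness configuration.

```
default:
   CC       = clang
   CXX      = clang++
   FC       = flang
   OPTIMIZE = -Ofast -fomit-frame-pointer -march=x86-64 -mtune=core-avx2 -mfma -mavx -mavx2
```

The same OPTIMIZE line is applied to every test in base runs, which is what makes the results comparable across suites.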

To note, the requirements of the SPEC licence state that any benchmark results from SPEC have to be labelled 'estimated' until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers; however, it is quite over the top for what we do as reviewers.

Single-threaded performance of TGL-H shouldn't be drastically different from that of TGL-U; however, there are a few factors that can come into play and affect the results: the i9-11980HK TGL-H system has a 200MHz higher boost frequency compared to the i7-1185G7, and a single core now has access to up to 24MB of L3 instead of just 12MB.

SPECint2017 Rate-1 Estimated Scores

In SPECint2017, the one result that stands out the most is 502.gcc_r, where the TGL-H processor lands in at +16% ahead of TGL-U, undoubtedly due to the increased L3 size of the new chip.

Generally speaking, the new TGL-H chip outperforms its brethren and AMD competitors in almost all tests.

SPECfp2017 Rate-1 Estimated Scores

In the SPECfp2017 suite, we also see generally small improvements across the board. The 549.fotonik3d_r test sees an odd regression, which I think is related to the LPDDR4 vs DDR4 discrepancy between the systems; I'll get back to this on the next page, where we'll see more multi-threaded results related to it.

SPEC2017 Rate-1 Estimated Total

From an overall single-threaded performance standpoint, the TGL-H i9-11980HK adds around +3.5-7% on top of what we saw from the i7-1185G7, which lands it amongst the best performing systems – not only amongst laptop CPUs, but all CPUs. The performance lead against AMD's strongest mobile CPU, the 5980HS, is even a little higher than against the i7-1185G7, but the chip loses out against AMD's best desktop CPU, and of course against the Apple M1 SoC used in the latest MacBooks. The latter comparison is apples-to-apples in terms of compiler settings, and is impressive given that the M1 manages it at around one third of the package power under single-threaded scenarios.
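As an aside on how these totals are derived: each SPEC suite score is the geometric mean of the per-test ratios, not an arithmetic average, so one outlier test moves the total less than it would in a simple mean. A minimal sketch (the ratios below are illustrative placeholders, not our measured data):

```python
from math import prod

def spec_geomean(ratios):
    """SPEC suite scores are the geometric mean of the per-test ratios
    (each ratio compares a test's runtime against the reference machine)."""
    return prod(ratios) ** (1.0 / len(ratios))

# Illustrative per-test ratios only -- not measured results:
example_ratios = [7.1, 6.4, 8.0, 6.9]
overall = spec_geomean(example_ratios)
```

This is why a single large gain such as 502.gcc_r's +16% only nudges the overall estimated total by a few percent.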

Comments

  • ozzuneoj86 - Monday, May 17, 2021 - link

    While it is nice that it supports gen 4, realistically you're just getting SSDs that put out more heat, with more power draw, while gaining performance benefits that are only measurable in benchmarks or very specific situations.

    I'm sure file copy performance is much higher, but how fast do you need that to be? Assuming you're copying to the drive itself or maybe to a Thunderbolt 4 external drive, it is the difference between copying 1TB of data in 2 minutes versus 6 minutes. You can (theoretically) completely fill a $400 2TB SSD in 4 minutes with gen4 vs maybe 12 minutes with Gen 3. If someone needs to do that all the time, then sure there's a difference... but that has to be pretty uncommon.

    For smaller amounts of data, any decent nvme drive is fast enough to make the difference between models almost unnoticeable. For the vast majority of users, even a SATA drive is plenty fast enough to provide a smooth and nearly wait-free experience.
  • mode_13h - Monday, May 17, 2021 - link

    > realistically you're just getting SSDs that put out more heat, with more power draw,
    > while gaining performance benefits that are only measurable in benchmarks
    > or very specific situations.

    Exactly. Thank you.
  • mode_13h - Monday, May 17, 2021 - link

    > Assuming you're copying to the drive itself or maybe to a Thunderbolt 4 external drive

    Oops! TB 4 is limited to PCIe 3.0 x4 speeds! So, it'd be little-to-no help there!
  • Calin - Tuesday, May 18, 2021 - link

    Well, you could copy full blast to an external drive and have plenty of remaining performance to do other storage intensive things - that's assuming your external drives is fast enough to suffocate PCIe 3.0 x4, and your internal drive is faster still.
  • mode_13h - Thursday, May 20, 2021 - link

    > Well, you could copy full blast to an external drive and have plenty of remaining performance

    I'm not one to turn down "free" performance, but PCIe 4 uses significantly more power. In a laptop, that's not a minor point.
  • inighthawki - Monday, May 17, 2021 - link

    Sequential read and write speeds are basically just flexing. Very few people actually ever make significant use of such speeds in a way that saves more than a second or two here or there. Most laptop users are not sitting there copying a terabyte of sequential data over and over again.
  • The_Assimilator - Monday, May 17, 2021 - link

    There is no laptop chassis on the market that can adequately handle the excess of 8W of heat that a PCIe 4.0 NVMe SSD can dissipate.
  • Cooe - Monday, May 17, 2021 - link

    You're not getting those kind of speeds sustained in a laptop without RIDICULOUS thermal throttling. PCIe 4.0 in mobile atm is just a marketing checkmark & nothing more.
  • Calin - Tuesday, May 18, 2021 - link

    It allows faster "races to sleep" for the processor. And, since the Core2 architecture, the winning move was "fast and power hungry processor that does what it must and then goes to a very low power state". This gives you very good burst speed and low average power - as soon as you finish, you can throttle everything down (CPU, caches, SSDs, ...)
  • mode_13h - Thursday, May 20, 2021 - link

    > It allows faster "races to sleep" for the processor.

    Are we still talking about PCIe 4? I don't think it works like that.

    > since the Core2 architecture, the winning move was "fast and power hungry processor that does what it must and then goes to a very low power state".

    No, it's more energy-efficient to run at a slower clock speed. There's a huge difference between the amount of energy used in turbo and non-turbo modes. As it's far bigger than the performance difference, there's no way that going to idle a little sooner is going to make up for it.
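The disagreement in this last exchange can be made concrete with back-of-envelope numbers. Every figure below is assumed purely for illustration; none are measurements of these chips:

```python
# "Race to sleep" vs. running at a lower clock, compared over the same
# time window. All power and throughput figures are assumed, not measured.
work = 100.0            # arbitrary units of work to complete

turbo_power = 28.0      # W while boosting (assumed)
turbo_speed = 1.5       # work units per second while boosting (assumed)
idle_power = 0.5        # W once finished and idling (assumed)

slow_power = 10.0       # W at a lower clock (assumed)
slow_speed = 1.0        # work units per second at the lower clock (assumed)

window = work / slow_speed          # duration of the slower run
turbo_time = work / turbo_speed     # the boosted run finishes earlier

# Energy over the same window: boosted run burns turbo power, then idles.
turbo_energy = turbo_power * turbo_time + idle_power * (window - turbo_time)
slow_energy = slow_power * window
```

With these assumed numbers the slower clock uses roughly half the energy despite finishing later, matching mode_13h's point that a disproportionate turbo power cost can outweigh the earlier trip to idle; with a smaller power gap, racing to sleep can win instead.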
