CPU Tests: SPEC ST Performance on P-Cores & E-Cores

SPEC2017 is a series of standardized tests used to probe overall performance across different systems, architectures, microarchitectures, and setups. The code has to be compiled, and the results can then be submitted to an online database for comparison. It covers a range of integer and floating-point workloads, and can be heavily optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.

For compilers, we use LLVM for the C/C++ tests, and the Flang compiler for the Fortran tests. The rationale for using LLVM over GCC is better cross-platform comparisons against platforms that only have LLVM support, as well as future articles where we’ll investigate this aspect more. We’re not considering closed-source compilers such as MSVC or ICC.

clang version 10.0.0
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git
 24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2

Our compiler flags are straightforward, with a basic -Ofast and the relevant ISA switches to allow for AVX2 instructions. We decided to build our SPEC binaries with AVX2, which sets Haswell as the limit on how far back we can go before the testing falls over. This also means we don’t have AVX-512 binaries, primarily because getting the best performance out of AVX-512 requires the intrinsics to be hand-tuned by a proper expert, as with our AVX-512 benchmark. All of the major vendors (AMD, Intel, and Arm) support the way in which we are testing SPEC.

To note, the SPEC license requires that any benchmark results be labeled ‘estimated’ until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers, but it is quite over the top for what we do as reviewers.

For Alder Lake, we start off with a comparison of the Golden Cove cores, both in DDR5 as well as DDR4 variants. We’re pitting them as direct comparison against Rocket Lake’s Cypress Cove cores, as well as AMD’s Zen3.

SPECint2017 Rate-1 Estimated Scores

Starting off in SPECint2017, the first thing I’d say is that for single-threaded workloads, DDR5 doesn’t showcase any major improvements over DDR4. The biggest increase for the Golden Cove cores is in 520.omnetpp_r at +9.2%; the workload is defined by sparse memory accesses in a parallel fashion, so DDR5’s doubled-up channel count is likely what affects this test the most.
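As a back-of-the-envelope check on the memory side, peak theoretical bandwidth follows directly from transfer rate, bus width, and channel count. The sketch below assumes the platform’s officially supported DDR4-3200 and DDR5-4800 speeds in a two-DIMM configuration, with DDR5 splitting each DIMM into two independent 32-bit channels:

```python
# Peak theoretical DRAM bandwidth: channels * (bus width in bytes) * transfer rate.
# Assumes Alder Lake's officially supported DDR4-3200 and DDR5-4800 speeds.
def peak_bandwidth_gbs(mt_per_s, channels, bus_width_bits):
    """Return peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return channels * (bus_width_bits / 8) * mt_per_s * 1e6 / 1e9

# Two-DIMM setups: DDR4 exposes one 64-bit channel per DIMM,
# while DDR5 splits each DIMM into two independent 32-bit channels.
ddr4 = peak_bandwidth_gbs(3200, channels=2, bus_width_bits=64)  # 51.2 GB/s
ddr5 = peak_bandwidth_gbs(4800, channels=4, bus_width_bits=32)  # 76.8 GB/s
print(f"DDR4-3200: {ddr4:.1f} GB/s, DDR5-4800: {ddr5:.1f} GB/s")
```

Note that for a sparse, parallel access pattern like omnetpp’s, the four independent channels matter as much as the 50% higher peak rate, since more channels can service more outstanding requests concurrently.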

Comparing the DDR5 results against RKL’s Cypress Cove cores, ADL’s GLC showcases some large advantages in several workloads: +24% in perlbench, +29% in omnetpp, +21% in xalancbmk, and +26% in exchange2. All of these workloads are likely boosted by the new core’s larger out-of-order window, which has grown to up to 512 instructions. Perlbench is more heavily instruction-pressure biased, at least compared to other workloads in the suite, so the new 6-wide decoder is also likely a big reason we see such a large increase.

The smallest increases are in mcf, which is more purely memory-latency bound, and in deepsjeng and leela, the latter of which is particularly branch-mispredict heavy. While Golden Cove improves its branch predictors, the core also had to add an additional cycle of misprediction penalty, so the relatively smaller increases here make sense in that context.

SPECfp2017 Rate-1 Estimated Scores

In the FP suite, the DDR5 results have a few larger outliers compared to the DDR4 set: bwaves and fotonik3d showcase +15% and +17% just from the memory change, which is no surprise given both workloads’ extremely heavy memory bandwidth characteristics.

Compared to RKL, ADL also showcases some very large gains in some of the workloads: +33% in cactuBSSN and +24% in povray. The latter is a surprise to me, as it should be a more execution-bound workload, so maybe the newly added FADD units of the cores are coming into play here.

We haven’t had much time to test the Gracemont cores in isolation, but we are able to showcase some results. This set was run on native Linux rather than WSL due to affinity issues on Windows; the results are within margin of error between the platforms, though there are a few percentage-point outliers in the FP suite. Still, the P-core to E-core comparisons in this set of graphs are under apples-to-apples conditions:

SPECint2017 Rate-1 Estimated Scores (P vs E-cores) SPECfp2017 Rate-1 Estimated Scores (P vs E-cores)

When Intel mentioned that the Gracemont E-cores of Alder Lake were matching the ST performance of the original Skylake, Intel was very much correct in that description. Unlike what we consider “little” cores in a normal big.LITTLE setup, the E-cores of Alder Lake are still quite performant.

In the aggregate scores, an E-core is roughly 54-64% of a P-core, though in individual workloads this can go as high as 65-73%. Given the die size differences between the two microarchitectures, and the fact that in multi-threaded scenarios the P-cores would normally have to clock down anyway because of power limits, it’s pretty evident how Intel’s setup with efficiency and density cores allows for much higher performance within a given die size and power envelope.
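SPEC aggregates per-benchmark scores with a geometric mean, so a core-to-core comparison like the one above is a ratio of geomeans. A minimal sketch of that calculation, using hypothetical sub-scores rather than our measured data:

```python
import math

def geomean(xs):
    """Geometric mean, as SPEC uses to aggregate per-benchmark scores."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical per-benchmark rate-1 scores (illustrative, not measured data).
p_core = [8.1, 7.4, 6.9, 9.2]
e_core = [4.8, 4.1, 4.5, 5.3]

ratio = geomean(e_core) / geomean(p_core)
print(f"E-core at {ratio:.0%} of P-core")
```

A useful property of the geomean is that this ratio equals the geomean of the per-benchmark ratios, so no single outlier benchmark dominates the aggregate comparison.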

In SPEC, in terms of package power, the P-cores averaged 25.3W in the integer suite and 29.2W in the FP suite, in contrast to respectively 10.7W and 11.5W for the E-cores, both under single-threaded scenarios. Idle package power ran in at 1.9W.
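Combined with the aggregate performance ratios, those power figures give a rough perf-per-watt picture. In the sketch below the wattages are the measured package-power averages from above, while the 0.60 relative-performance figure is an assumed mid-point of the 54-64% aggregate range, not a measured number:

```python
# Rough single-threaded efficiency comparison in the integer suite.
# Wattages are the measured package-power averages; E_REL_PERF is an
# assumed mid-point of the 54-64% aggregate range, not a measurement.
P_CORE_INT_W = 25.3
E_CORE_INT_W = 10.7
E_REL_PERF = 0.60

p_perf_per_watt = 1.0 / P_CORE_INT_W         # P-core performance normalized to 1.0
e_perf_per_watt = E_REL_PERF / E_CORE_INT_W
advantage = e_perf_per_watt / p_perf_per_watt
print(f"E-core perf/W advantage (int suite): {advantage:.2f}x")
```

Keep in mind these are package-power figures including uncore and the ~1.9W idle floor, so the per-core efficiency gap would look somewhat different if core power alone could be isolated.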

SPEC2017 Rate-1 Estimated Total

Alder Lake and the Golden Cove cores are able to reclaim the single-threaded performance crown from AMD and Apple. The increases over Rocket Lake come in at +18-20%, and Intel’s advantage over AMD is now at 6.4% or 16.1% depending on the suite, maybe closer than Intel would have liked given that V-cache variants of Zen3 are just a few months away.

Again, the E-core performance of ADL is impressive; while not extraordinarily far ahead in the FP suite, the E-cores can match the performance of some middle-of-the-stack Zen2 CPUs from only a couple of years ago in the integer suite.

474 Comments

  • Spunjji - Friday, November 5, 2021 - link

    N7 is a little more dense than Intel's 10nm-class process - 15-20% in comparable product lines (e.g. Renoir vs. Ice Lake, Lakefield vs. Zen 3 compute chiplet). There is no indication that Intel 7 is more dense than previous iterations of 10nm. N7 also appears to have better power characteristics.

    It's difficult to tell, though, because Intel are pushing much harder on clock speeds than AMD and have a wider core design, both of which would increase power draw even on an identical process.
  • Blastdoor - Thursday, November 4, 2021 - link

    I’m a little surprised by the low level of attention to performance/watt in this review. ArsTechnica gave a bit more info in that regard, and Alder Lake looks terrible on performance/watt.

    If Intel had achieved this performance with similar efficiency to AMD I would have bought Intel stock today.

    But the efficiency numbers here are truly awful. I can see why this is being released as an enthusiast desktop processor -- that's the market where performance/watt matters least. In the mobile and data center markets (ie, the Big markets), these efficiency numbers are deal breakers. AMD appears to have nothing to fear from Intel in the markets that matter most.
  • meacupla - Thursday, November 4, 2021 - link

    Yeah, the power consumption of 12900K is quite bad.
    From other reviews, it's pretty clear that highest end air cooling is not enough for 12900K, and you will need a thick 280mm or 360mm water cooler to keep 12900K cool.
  • Ian Cutress - Thursday, November 4, 2021 - link

    I think there are some issues with temperature readings on ADL. A lot of software showcases 100C with only 3 P-cores loaded, but even with all cores loaded, the CPU doesn't de-clock at that temp. My MSI AIO has a temperature display, and it only showed 75C at load. I've got questions out in a few places - I think Intel switched some of the thermal monitoring stuff inside and people are polling the wrong things. Other press are showing 100C quite easily too. I'm asking MSI how their AIO had 75C at load, but I'm still waiting on an answer. An ASUS rep said that 75-80C should be normal under load. So why everything is saying 100C I have no idea.
  • Blastdoor - Thursday, November 4, 2021 - link

    Note that the ArsTechnica review looks at power draw from the wall, so unaffected by sensor issues.
  • jamesjones44 - Thursday, November 4, 2021 - link

    They also show the 5900x somehow drawing more power than a 5950x at full load. While I'm sure Intel is drawing more power, I question their testing methods given we know there is very little chance of a 5950x fully loaded drawing less than a 5900x unless they won or lost the CPU lottery.
  • TheinsanegamerN - Thursday, November 4, 2021 - link

    techspot and TPU also show that, and it has been explained before that the 5950x gets the premium dies and runs at a lower core voltage than the 5900x, thus it pulls less power despite having more cores.
  • haukionkannel - Thursday, November 4, 2021 - link

    5950x use better chips than 5900x... that is the reason for power usage!
  • vegemeister - Saturday, November 6, 2021 - link

    5950X can hit the current limit when all cores are loaded, so the power consumption folds back.
  • meacupla - Thursday, November 4, 2021 - link

    75C reading from the AIO, presumably a reading from the base plate, is quite hot, I must say.
