CPU Tests: SPEC ST Performance on P-Cores & E-Cores

SPEC2017 is a series of standardized tests used to probe the overall performance of different systems, architectures, and microarchitectures, along with their setups. The code has to be compiled, and then the results can be submitted to an online database for comparison. It covers a range of integer and floating point workloads, and can be heavily optimized for each CPU, so it is important to check how the benchmarks are being compiled and run.
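For context, this is roughly how a single-copy ("rate-1") run is kicked off with the SPEC CPU2017 harness; a minimal sketch, where the config file name is a placeholder rather than our actual configuration:

runcpu --config=llvm-avx2.cfg --tune=base --copies=1 intrate fprate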

For compilers, we use LLVM for both the C/C++ and Fortran tests, with the Flang compiler handling Fortran. The rationale for using LLVM over GCC is better cross-platform comparisons against platforms that only have LLVM support, as well as future articles where we’ll investigate this aspect more. We’re not considering closed-source compilers such as MSVC or ICC.

clang version 10.0.0
clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git
 24bd54da5c41af04838bbe7b68f830840d47fc03)

-Ofast -fomit-frame-pointer
-march=x86-64
-mtune=core-avx2
-mfma -mavx -mavx2

Our compiler flags are straightforward, with a basic -Ofast and the relevant ISA switches to allow for AVX2 instructions. We decided to build our SPEC binaries with AVX2, which means Haswell is as far back as we can go before the testing falls over. This also means we don’t have AVX-512 binaries, primarily because getting the best performance out of AVX-512 requires intrinsics hand-tuned by a proper expert, as with our dedicated AVX-512 benchmark. All of the major vendors (AMD, Intel, and Arm) support the way in which we are testing SPEC.
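For illustration, here is a minimal sketch of how these compiler and flag choices might be expressed in a SPEC CPU2017 config file; CC, CXX, FC, and OPTIMIZE are standard config fields, though the layout of our actual config file differs:

default:
   CC       = clang
   CXX      = clang++
   FC       = flang
   OPTIMIZE = -Ofast -fomit-frame-pointer -march=x86-64 -mtune=core-avx2 -mfma -mavx -mavx2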

To note, the SPEC license requirements state that any benchmark results from SPEC have to be labeled ‘estimated’ until they are verified on the SPEC website as a meaningful representation of the expected performance. This is most often done by the big companies and OEMs to showcase performance to customers, however it is quite over the top for what we do as reviewers.

For Alder Lake, we start off with a comparison of the Golden Cove cores, in both DDR5 and DDR4 variants. We’re pitting them in a direct comparison against Rocket Lake’s Cypress Cove cores, as well as AMD’s Zen 3.

SPECint2017 Rate-1 Estimated Scores

Starting off in SPECint2017, the first thing I’d say is that for single-threaded workloads, it seems that DDR5 doesn’t showcase any major improvements over DDR4. The biggest increase for the Golden Cove cores is in 520.omnetpp_r at +9.2%; the workload is characterized by sparse memory accesses done in parallel, so DDR5’s doubled-up channel count is likely what’s affecting the test the most.

Comparing the DDR5 results against RKL’s Cypress Cove cores, ADL’s Golden Cove showcases some large advantages in several workloads: +24% in perlbench, +29% in omnetpp, +21% in xalancbmk, and +26% in exchange2. All of the workloads here are likely boosted by the new core’s larger out-of-order window, which has grown to up to 512 instructions. Perlbench is more heavily instruction-pressure biased, at least compared to other workloads in the suite, so the new 6-wide decoder is also likely a big reason we see such a large increase.

The smallest increases are in mcf, which is more purely memory-latency bound, and in deepsjeng and leela, the latter of which is particularly branch-mispredict heavy. While Golden Cove improves its branch predictors, the core also had to add an additional cycle of misprediction penalty, so the relatively smaller increases here make sense in that context.

SPECfp2017 Rate-1 Estimated Scores

In the FP suite, the DDR5 results have a few larger outliers compared to the DDR4 set: bwaves and fotonik3d showcase +15% and +17% just due to the memory change, which is no surprise given both workloads’ extremely heavy memory bandwidth characteristics.

Compared to RKL, ADL also showcases some very large gains in some of the workloads: +33% in cactuBSSN and +24% in povray. The latter is a surprise to me, as it should be a more execution-bound workload, so maybe the cores’ newly added FADD units are coming into play here.

We haven’t had much time to test the Gracemont cores in isolation, but we are able to showcase some results. This set here was run on native Linux rather than WSL due to thread affinity issues on Windows; the results are within margin of error between the platforms, though there are a few outliers of a few percentage points in the FP suite. Still, the P-core to E-core comparison is apples-to-apples in this set of graphs:

SPECint2017 Rate-1 Estimated Scores (P vs E-cores)

SPECfp2017 Rate-1 Estimated Scores (P vs E-cores)
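As a side note on how single cores get isolated on Linux: pinning a run to a specific core is done with standard affinity tools. A minimal sketch, assuming the common Alder Lake enumeration where the low logical CPU numbers map to the P-cores and the E-cores follow after them (the exact numbering is system-dependent, and the binary name is a placeholder):

taskset -c 0 ./spec_benchmark_binary    # pin to a P-core thread
taskset -c 16 ./spec_benchmark_binary   # pin to an E-core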

When Intel said that the Gracemont E-cores of Alder Lake match the single-threaded performance of the original Skylake, the company was very much correct in that description. Unlike what we’d consider “little” cores in a typical big.LITTLE setup, the E-cores of Alder Lake are still quite performant.

In the aggregate scores, an E-core comes in at roughly 54-64% of a P-core, though in individual workloads this percentage can go as high as 65-73%. Given the die size differences between the two microarchitectures, and the fact that in multi-threaded scenarios the P-cores would normally have to clock down anyway because of power limits, it’s pretty evident how Intel’s setup with efficiency and density cores allows for much higher performance within a given die size and power envelope.

In SPEC, in terms of package power, the P-cores averaged 25.3W in the integer suite and 29.2W in the FP suite, compared to 10.7W and 11.5W respectively for the E-cores, all under single-threaded scenarios. Idle package power came in at 1.9W.
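As a rough back-of-the-envelope using these package-level figures (treating the 1.9W idle number as a constant offset is an approximation, since package power isn’t a true per-core measurement):

P-core (int): 25.3 W - 1.9 W ≈ 23.4 W above idle
E-core (int): 10.7 W - 1.9 W ≈ 8.8 W above idle

At 54-64% of the P-core’s performance for under 40% of the above-idle power, an E-core lands at very roughly 1.4-1.7x the P-core’s single-threaded performance per watt.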

SPEC2017 Rate-1 Estimated Total

Alder Lake and the Golden Cove cores are able to reclaim the single-threaded performance crown from AMD and Apple. The increases over Rocket Lake come in at +18-20%, and Intel’s advantage over AMD now sits at +6.4% and +16.1% depending on the suite, perhaps closer than Intel would have liked given that V-cache variants of Zen 3 are just a few months away.

Again, the E-core performance of ADL is impressive; while not extraordinarily far ahead in the FP suite, in the integer suite the E-cores can match the performance of some middle-of-the-stack Zen 2 CPUs from only a couple of years ago.

Comments

  • Wrs - Saturday, November 6, 2021

    @Netmsm I'll leave that to the market as I don't foresee using any of the 3 that soon lol. It would stand to reason that if one product is both cheaper and better, it would keep gaining share at the expense of the other. If that doesn't happen I would question the premise of cheaper + better. And seeing as it's a major market for Intel, I have little doubt they'll adjust prices if they do find themselves selling an inferior product.
  • Netmsm - Sunday, November 7, 2021

    That's right. We always check performance per watt and per dollar. A product should be reasonable with respect to its price and power consumption; this is a must.

    The 12900K can consume up to 241W, which is much closer to a Threadripper's TDP than to the Ryzen 5900's, and yet it's competing against chips with a 125W TDP! What a parody this is!

    I can't disregard and throw away the efficiency factor, that's all.
  • Spunjji - Friday, November 5, 2021

    Seeing this has made me very interested to see the value proposition Alder Lake will be offering in gaming notebooks. I was vaguely planning to switch up to a Zen 3+ offering for my next system, but this might be enough to make me reconsider.
  • EnglishMike - Thursday, November 4, 2021

    "re: Enterprise: Considering power consumption, it's like a Pyrrhic victory for Intel."
    Why? This is not an enterprise solution -- that's the upcoming Sapphire Rapids Xeon processors, a completely different CPU platform.

    Sure, if all you're doing is pegging desktop CPUs at 100% for video processing or a similar workload, then Alder Lake isn't for you, but the gaming benchmarks clearly show that when it comes to more typical desktop workloads, the i9-12900K is in line with the top-of-the-line AMD processors in terms of power consumption.
  • Netmsm - Thursday, November 4, 2021

    And who in their right mind would believe that the upcoming Xeon processors can bring a revolutionary breakthrough in power consumption?!
  • EnglishMike - Friday, November 5, 2021

    And that, my friend, is a great example of moving the goalposts.

    We'll have to see what Intel offers re: Xeons, but one thing is for sure: they're going to offer a completely different power profile to their flagship desktop CPUs, because that's the nature of the datacenter business.
  • Netmsm - Saturday, November 6, 2021

    Of course the nature of the enterprise market won't accept this power consumption. In the PC world customers may not care how inefficient a processor is. Intel will reduce the power consumption, but the question is how its processors will accomplish the job! We see unacceptable performance per watt in Intel's new architecture; it would take something like a miracle for the Xeons to become competitive with Epyc.
  • Wrs - Saturday, November 6, 2021

    No miracle is needed... just go down the frequency-voltage curve. Existing Ice Lake Xeons already do that. What's new about Sapphire Rapids is not so much the process tech (it's still 10nm) but the much larger silicon area enabled per package due to the EMIB packaging. That's their plan to be competitive with Epyc and its multichip modules.
  • Netmsm - Sunday, November 7, 2021

    And what will happen to performance as it goes down the frequency-voltage curve?
    Just look at the facts! With about 100W more power consumption, Intel's new architecture gets ahead of Zen 3 by only a slight margin in some cases, which plainly tells us it can't reduce power consumption and still beat Epyc in performance.
  • Wrs - Sunday, November 7, 2021

    @Netmsm I'm looking at facts. The process nodes are very similar. One side has both a bigger/wider core (Golden Cove) and a really small core (Gracemont). The other side just has the intermediate size core (Zen 3). As a result, on some benchmarks one side wins by a fair bit, and on other benchmarks, the other side takes the cake. Many benches are a tossup.

    In this case the side that theoretically wins on efficiency at iso-throughput (MC performance) is the side that devotes more total silicon to the cores & cache. When comparing a 12900k to a 5950x, the latter has slightly more area across the CCDs, about 140 mm2 versus around 120 mm2. The side that's more efficient at iso-latency (ST/lightly threaded) is the one that devotes more silicon to their largest/preferred cores, which obviously here is ADL. In practice companies don't release their designs at iso-performance, and for throughput benchmarks one may encounter memory and other platform bottlenecks. But Intel seems to have aggressively clocked Golden Cove such that it's impossible for AMD to reach iso-latency with Zen 3 no matter the power input (i.e., you'd have to downclock the ADL). That has significant end-user implications as not everything can be split into more threads.

    The Epyc Milan SKUs are already downclocked relative to Vermeer, like most server/workstation CPUs. Epyc Milan tops out at 64 Zen 3 cores across 8 chiplets. Sapphire Rapids, which isn't out yet, has engineering samples topping out at 80 Golden Cove cores across 4 ~400mm2 chiplets. Given what we know about relative core sizes, which side is devoting more silicon to cores? There's your answer to performance at iso-efficiency. That's not to say it's fair to compare a product a year out vs. one you can obtain now, but I also don't see a Zen 4 or N5 AMD server CPU within the next year.
