CPU Tests: SPEC MT Performance - P and E-Core Scaling

Update Nov 6th:

We’ve finished our MT breakdown for the platform, investigating the various combinations of cores and memory configurations for Alder Lake and the i9-12900K. We’re posting the detailed scores for the DDR5 results, and following up with the aggregate results for DDR4 as well.

The results here solely cover the i9-12900K and various combinations of MT performance, such as 8 E-cores, 8 P-cores with 1T as well as 2T, and the full 24T 8P2T+8E scenario. These runs were done on Linux, as it makes it easier to set affinities to the various cores; they’re not directly comparable to the WSL results on the previous page, but should be within small margins of error for most tests.
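
As a minimal sketch of how this kind of per-cluster affinity can be set up on Linux (an illustration rather than the exact tooling used for the article, and assuming the common enumeration for the i9-12900K where logical CPUs 0-15 are the sixteen P-core threads and CPUs 16-23 are the eight E-cores; verify with lscpu --extended):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Pin the calling process to the E-core cluster before launching a benchmark.
     * CPU numbers 16-23 are an assumption based on the usual Linux enumeration
     * of the i9-12900K (P-core threads 0-15, E-cores 16-23). */
    int main(int argc, char **argv) {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int cpu = 16; cpu <= 23; cpu++)
            CPU_SET(cpu, &set);

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* The affinity mask is inherited across exec, so the benchmark started
         * here stays confined to the selected cores. */
        if (argc > 1)
            execvp(argv[1], &argv[1]);
        return 0;
    }

The same effect can be achieved from the shell with taskset -c 16-23 ./benchmark for the E-cores, or taskset -c 0-15 for the P-core runs.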

SPECint2017 Rate-N Estimated Scores (i9-12900K Scaling)

In the integer suite, the E-cores are quite powerful, reaching scores of around 50% of the 8P2T results, or more.

Many of the more core-bound workloads very much enjoy simply having more cores to work with, and these are also the workloads that see the largest performance gains when we add the 8 E-cores on top of the 8P2T results.

Workloads that are more cache-heavy or rely on memory bandwidth, both shared resources on the chip, don’t scale as well at the top end when the 8 E-cores are added. Most surprising to me was the 502.gcc_r result, which barely saw any improvement with the added 8 E-cores.

It’s not surprising to see the more memory-bound workloads such as 520.omnetpp or 505.mcf fail to scale with the added E-cores; mcf even sees a performance regression, as the added cores mean more memory contention on the L3 and memory controllers.

SPECfp2017 Rate-N Estimated Scores (i9-12900K Scaling)

In the FP suite, the E-cores more clearly showcase a lower percentage of performance relative to the P-cores, which makes sense given their design. Only a few of the more compute-bound tests, such as 508.namd, 511.povray, or 538.imagick, see larger contributions from the E-cores when they’re added in on top of the P-cores.

The FP suite also has a lot more memory-hungry workloads. When a workload is limited by DRAM bandwidth, it doesn’t matter much whether E-cores or P-cores are doing the work, as it’s the memory that is the bottleneck; here, the E-cores achieve performance figures very close to those of the P-cores. 503.bwaves and 519.lbm, for example, are purely DRAM bandwidth limited, and using the E-cores in MT scenarios allows for similar performance to the P-cores, but at only 35-40W package power versus 110-125W for the P-core result set.
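
Putting rough numbers on that gap, using only the package-power ranges quoted above and assuming roughly equal scores for the two runs (a back-of-the-envelope sketch, not a measured perf-per-watt figure):

    #include <stdio.h>

    /* Back-of-the-envelope efficiency comparison for the DRAM-bandwidth-bound
     * tests (503.bwaves, 519.lbm): with similar scores, the energy-efficiency
     * advantage of the E-core run is simply the ratio of package powers. */
    int main(void) {
        const double e_power_low = 35.0, e_power_high = 40.0;   /* watts, E-core run */
        const double p_power_low = 110.0, p_power_high = 125.0; /* watts, P-core run */

        printf("E-core efficiency advantage: %.1fx to %.1fx\n",
               p_power_low / e_power_high,   /* ~2.8x, conservative end */
               p_power_high / e_power_low);  /* ~3.6x, optimistic end */
        return 0;
    }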

Some of these workloads also see performance regressions when adding more cores or threads, as that just means more memory traffic contention on the chip, as seen in the 8P2T+8E and 8P2T regressions over the 8P1T results.

SPEC2017 Rate-N Estimated Total (i9-12900K Scaling)

What’s most interesting here is the scaling of performance and how it is split between the P-cores and the E-cores. Focusing on the DDR5 set, the 8 E-cores are able to provide around 52-55% of the performance of the 8 P-cores without SMT, and 47-51% of the P-cores with SMT. At first glance it could be argued that the 8P+8E setup is somewhat similar to a 12P setup in MT performance, however the combined performance of both clusters only raises the MT scores by 25% in the integer suite and 5% in the FP suite, as we are already near the package power limit with just 8P2T, and there are diminishing returns on performance given the shared L3. What the E-cores do seem to allow is a reduction in every-day average power usage and an increase in the efficiency of the socket, as fewer P-cores need to be active at any one time.

Comments

  • mode_13h - Saturday, November 6, 2021 - link

    > Consumers deserve non-broken products that aren’t sold via smoke and mirrors tactics.

    What's broken, exactly? They said you wouldn't have AVX-512. That someone figured out how to enable it is just a bonus.
  • mode_13h - Saturday, November 6, 2021 - link

    Why are you convinced it's so consequential?
  • mode_13h - Saturday, November 6, 2021 - link

    Oops, that was a response to:

    OG> The current situation is inexcusable.
  • Oxford Guy - Sunday, November 7, 2021 - link

    That question is meritless.
  • mode_13h - Sunday, November 7, 2021 - link

    If the issue isn't terribly consequential, then why is it inexcusable? The gravity of alleged misconduct usually derives from its impacts.
  • Oxford Guy - Monday, November 8, 2021 - link

    I have been suspicious that you’re some sort of IBM AI. Posts like that go a long way toward supporting that suspicion.

    You were the poster who claimed it’s of little consequence. I was the poster who said it’s inexcusable. Either you’re AI that needs work or your mind is rife with confusion in your quest to impress the community via attempts at domination.

    Not a good look, again. Posting your own claims as if they’re mine and using my claims to create a false incompetence situation is a bit better than your pathetic schoolyard taunts. So, perhaps I should praise you for improving the quality of your posts via being merely incompetent — like Intel’s handling of this situation you’re trying to downplay. I shouldn’t make that equivalence, though, as lying to the community in terms of a retail product is worse than any of your parlor tricks.
  • mode_13h - Tuesday, November 9, 2021 - link

    > I have been suspicious that you’re some sort of IBM AI.

    No way. Their artificial intelligence is no match for my natural stupidity.
    :D

    > You were the poster who claimed it’s of little consequence.

    No, I asked *you* why it's so consequential.

    > I was the poster who said it’s inexcusable.

    Which sort of implies that it's very consequential. If it's not, then why would it be inexcusable?

    > Either you’re AI that needs work or your mind is rife with confusion in your quest to
    > impress the community via attempts at domination.

    If you wouldn't waste so much energy posturing and just answer the question, maybe we could actually get somewhere.

    I don't honestly care what the community thinks of me. That's the beauty of pseudonymity! I don't even need people to believe I'm somehow affiliated with a prestigious university. Either my points make sense and are well-founded or they aren't. Similarly, I don't care if you're "just" the Oxford garbage collector. If you contribute useful information, then we all win. If you're just trolling, flaming, or pulling the thread into irrelevant tangents, then we all lose.

    The main reason I post on here is to share information and to learn. I asked what should be a simple question, which you dismissed as meritless without explaining why. As usual, only drama ensues when I try to press the issue. I always want to give people the opportunity to justify their stance, but so often you just look for some way to throw it back in my face.

    This kind of crap is extremely low value. I hope you agree.
  • mode_13h - Saturday, November 6, 2021 - link

    > and the sentence about how it could be eliminated in the future.

    It's true. Intel can disable instructions in microcode updates and in future steppings of the CPU. So, even having the BIOS option is no guarantee.
  • mode_13h - Saturday, November 6, 2021 - link

    > Since the silicon is there, if they can get the scheduler to manage
    > heterogeneous (P/E) cores there is now no down side with enabling AVX-512.

    This will not happen. The OS scheduler cannot compensate for lack of app awareness of the heterogeneous support for AVX-512 (a sketch of the kind of runtime check this implies follows after the comments). I'm sure that was fiercely debated at Intel, but the performance downsides for naive code (i.e. 99%+ of the AVX-512 code in the wild) would generate too many complaints and negative publicity from the apps where enabling it results in performance & power regressions.
  • Oxford Guy - Saturday, November 6, 2021 - link

    So, Alder Lake is a turkey as a high-end CPU, one that should have never been released? This is because each program has to include Alder Lake AVX-512 support and those that don’t will cause performance regressions?

    So, Intel designed and released a CPU that it knew wouldn’t be properly supported by Windows 11 — yet the public was sold Windows 11 primarily on the basis of how its nifty new scheduler will support this CPU?

    ‘The OS scheduler cannot compensate for lack of app awareness of the heterogeneous support for AVX-512’

    Is Windows 11 able to support a software utility to disable the low-power cores once booted into Windows or are we restricted to disabling them via BIOS? If the latter is the case then Intel had the responsibility for mandating such a switch for all Alder Lake boards, as part of the basic specification.
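
To make the "app awareness" point in the thread above concrete, here is a minimal sketch of the one-shot runtime feature check that AVX-512 code paths typically rely on (shown with GCC/Clang's __builtin_cpu_supports; an illustration of the general pattern, not code from any of the posts). On a hypothetical part where only the P-cores implemented AVX-512, the result of such a check would only be valid for the core the thread happened to be running on at that moment, which is why application-level thread pinning, rather than OS scheduling alone, would be needed:

    #include <stdio.h>

    /* Typical one-shot detection used to select an AVX-512 code path at startup.
     * On a homogeneous CPU the result can safely be cached for the process
     * lifetime. If AVX-512 were exposed only on the P-cores, a thread could pass
     * this check on a P-core, cache the result, and later take a SIGILL after
     * being migrated to an E-core -- unless the application pins its AVX-512
     * threads itself, i.e. the "app awareness" discussed above. */
    int main(void) {
        __builtin_cpu_init(); /* initialise the feature data used by the check below */

        if (__builtin_cpu_supports("avx512f"))
            puts("AVX-512F reported: take the AVX-512 code path");
        else
            puts("No AVX-512F: fall back to the AVX2/SSE path");
        return 0;
    }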
