Power: P-Core vs E-Core, Win10 vs Win11

For Alder Lake, Intel brings two new things into the mix when we start talking about power.

First is what we’ve already talked about, the new P-core and E-core, each with different levels of performance per watt and targeted at different sorts of workloads. While the P-cores are expected to mimic previous generations of Intel processors, the E-cores should offer an interesting look into how low power operation might work on these systems and in future mobile systems.

The second element is how Intel is describing power. Rather than simply quote a ‘TDP’, or Thermal Design Power, Intel has decided (with much rejoicing) to start putting two numbers next to each processor: one for the base processor power and one for the maximum turbo processor power, which we’ll call Base and Turbo. The idea is that the Base power mimics the TDP value we had before – it’s the power at which the all-core base frequency is guaranteed. The Turbo power indicates the highest power level that should be observed in a normal power virus situation (usually defined as a workload causing 90-95% of the CPU to continually switch). There is usually a weighted time factor that limits how long a processor can remain at its Turbo power before slowly reeling back to Base, but for the K processors Intel has made that time factor effectively infinite – with the right cooling, these processors should be able to use their Turbo power all day, all week, and all year.
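To put that weighted time factor into context, here is a minimal sketch (in C++, with illustrative numbers rather than Intel’s actual firmware algorithm) of how such a budget is commonly modelled: package power is tracked as an exponentially weighted moving average over the time window, and a time-limited processor reels back to Base once that average reaches the Base limit. With the window effectively infinite, as on the K processors, that condition simply never triggers.

    // Illustrative power-budget model, not Intel firmware. All values are examples.
    #include <cstdio>

    int main() {
        const double pl1_w  = 125.0;  // Base power limit
        const double pl2_w  = 241.0;  // Turbo power limit
        const double tau_s  = 56.0;   // averaging window; effectively infinite on K parts
        const double step_s = 1.0;    // sampling interval in seconds
        const double alpha  = step_s / tau_s;

        double avg_w = 0.0;           // exponentially weighted moving average of package power
        for (int t = 1; t <= 180; ++t) {
            const double draw_w = pl2_w;                  // sustained all-core load at Turbo
            avg_w = alpha * draw_w + (1.0 - alpha) * avg_w;
            if (avg_w > pl1_w) {                          // a time-limited part would reel back here
                std::printf("t=%ds: average %.1f W exceeds Base, drop to Base power\n", t, avg_w);
                break;
            }
        }
        return 0;
    }

With these example numbers, a time-limited processor would hold Turbo for roughly 40 seconds of sustained load before dropping back; removing that limit is what lets the K parts sit at Turbo power indefinitely.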

So with that in mind, let’s start simply looking at the individual P-cores and E-cores.

Listed in red, in this test with all 8P+8E cores fully loaded (on DDR5), we get a CPU package power of 259 W. The progression from idle to load is steady, although there is a big jump from idle to single core. When one core is loaded, we go from 7 W to 78 W, which is a big 71 W jump. Because this is package power (the output for core power had some issues), this does include firing up the ring, the L3 cache, and the DRAM controller, but even if that accounts for 20% of the difference, we’re still looking at ~55-60 W for a single core. By comparison, for our single thread SPEC power testing on Linux, we see a more modest 25-30 W per core, which we put down to POV-Ray’s instruction density.

By contrast, in green, the E-cores only jump from 5 W to 15 W when a single core is active, and that is the same number as we see on SPEC power testing. Using all the E-cores, at 3.9 GHz, brings the package power up to 48 W total.

It is worth noting that there are differences between the blue bars (P-cores only) and the red bars (all cores, with E-cores loaded all the time), and that sometimes the blue bar consumes more power than the red bar. Our blue bar tests were done with the E-cores disabled in the BIOS, which means that there might be more leeway in balancing a workload across a smaller number of cores, allowing for higher power. However, as everything ramps up, the advantage seems to swing the other way. It’s a bit odd to see this behavior.
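For anyone wanting to reproduce this sort of package-level power measurement, a minimal sketch using the Linux kernel’s RAPL powercap interface is below. It assumes the intel-rapl:0 domain is the package (domain numbering can vary between systems) and that the energy counter is readable, which may require elevated privileges; the counter reports cumulative energy in microjoules.

    // Sample the RAPL package energy counter twice and report average power.
    #include <chrono>
    #include <cstdio>
    #include <fstream>
    #include <thread>

    static long long read_energy_uj(const char* path) {
        std::ifstream f(path);
        long long uj = 0;
        f >> uj;                      // cumulative energy in microjoules
        return uj;
    }

    int main() {
        const char* path = "/sys/class/powercap/intel-rapl:0/energy_uj";  // typically the package domain
        const long long e0 = read_energy_uj(path);
        std::this_thread::sleep_for(std::chrono::seconds(1));
        const long long e1 = read_energy_uj(path);
        std::printf("Package power: %.1f W\n", (e1 - e0) / 1e6);          // microjoules over one second
        return 0;
    }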

Moving on to individual testing, and here’s a look at a power trace of POV-Ray in Windows 11:

Here we’re seeing a higher spike in power, up to 272 W now, with the system at 4.9 GHz all-core. Interestingly enough, we then see power decrease down through the 241 W Turbo Power limit, settling around 225 W, with the reported frequency actually dropping to between 4.7-4.8 GHz instead. Technically this all-core frequency is meant to take into account some of the E-cores, so this might be a case of the workload distributing itself and finding the best performance/power point when it comes to instruction mix, cache mix, and IO requirements. However, it takes a good 3-5 minutes to get there, if that’s the case.

Intrigued by this, I looked at how some of our other tests did between different operating systems. Enter Agisoft:

Between Windows 10 and Windows 11, the traces look near identical. The actual run time was 5 seconds faster on Windows 11 out of 20 minutes, so 0.4% faster, which we would consider run-to-run variation. The peaks and spikes look barely higher in Windows 11, and the frequency trace in Windows 11 looks a little more consistent, but overall they’re practically the same.

For our usual power graphs, we get something like this, and we’ll also add in the AVX-512 numbers from that page:

(0-0) Peak Power

Compared to Intel’s previous 11th Generation processor, the Alder Lake Core i9 uses more power during AVX2, but is actually lower in AVX-512. The difficulty of presenting this graph in the future comes down to those E-cores; they're more efficient, as you’ll see in the results later. Even on AVX-512, Alder Lake pulls out a performance lead while using 50 W less than 11th Gen.

When we compare it to AMD however, with that 142 W PPT limit that AMD has, Intel is often trailing at a 20-70 W deficit when we’re looking at full load efficiency. That being said, Intel is likely going to argue that in mixed workloads, such as two software programs running where something is on the E-cores, it wants to be the more efficient design.

Comments

  • mode_13h - Saturday, November 6, 2021 - link

    > So, Alder Lake is a turkey as a high-end CPU, one that should have never been released?

    How do you reach that conclusion, after it blew away its predecessor and (arguably) its main competitor, even without AVX-512?

    > This is because each program has to include Alder Lake AVX-512 support and
    > those that don’t will cause performance regressions?

    No, my point was that relying on the OS to trap AVX-512 instructions executed on E-cores and then context-switch the thread to a P-core is likely to be problematic, from a power & performance perspective. Another issue is that code which autodetects AVX-512 won't see it while running on an E-core. This can result in more than performance issues - it could result in software malfunctions if some threads are using AVX-512 data structures while other threads in the same process aren't. Those are only a couple of the issues with enabling heterogeneous support of AVX-512, like what some people seem to be advocating for.

    > Is Windows 11 able to support a software utility to disable the low-power cores
    > once booted into Windows or are we restricted to disabling them via BIOS?

    That's not the proposal to which I was responding, which you can see by the quote at the top of my post.
  • Oxford Guy - Sunday, November 7, 2021 - link

    So, you’ve stated the same thing again — that Intel knew Alder Lake couldn’t be fully supported by Windows 11 even before it (AL) was designed?

    The question about the software utility is one you’re unable to answer, it seems.
  • mode_13h - Sunday, November 7, 2021 - link

    > The question about the software utility is one you’re unable to answer, it seems.

    That's not something I was trying to address. I was only responding to @SystemsBuilder's idea that Windows should be able to manage having some cores with AVX-512 and some cores without.

    If you'd like to know what I think about "the software utility", that's a fair thing to ask, but it's outside the scope of what I was discussing and therefore not a relevant counterpoint.
  • Oxford Guy - Monday, November 8, 2021 - link

    More hilarious evasion.
  • mode_13h - Tuesday, November 9, 2021 - link

    > More hilarious evasion.

    Yes, evasion of your whataboutism. Glad you enjoyed it.
  • GeoffreyA - Sunday, November 7, 2021 - link

    "So, Intel designed and released a CPU that it knew wouldn’t be properly supported by Windows 11"

    Oxford Guy, there's a difference between the concerns of the scheduler and that of AVX512. Alder Lake runs even on Windows 10. Only, there's a bit of suboptimal scheduling there, where the P and E cores are concerned.

    If AVX512 weren't disabled, it would've been something of a nightmare keeping track of which cores support it and which don't. Usually, code checks at runtime whether a certain set of instructions---SSE3, AVX, etc.---is available, using the CPUID instruction or intrinsic (a minimal sketch of that kind of check follows at the end of this comment). Stir this complex yeast into the soup of performance and efficiency cores, and there will be trouble in the kitchen.

    Under this new, messy state of affairs, the only feasible option mum had, or should I say Intel, was bringing the cores onto an equal footing by locking AVX512 in the attic, and saying, no, that fellow doesn't live here.
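    A minimal sketch of that kind of runtime check and dispatch, using GCC/Clang's __builtin_cpu_supports as the CPUID-style check (kernel_avx512 and kernel_avx2 are hypothetical stand-ins for the two code paths):

        // Minimal runtime ISA dispatch sketch (GCC/Clang builtins on x86).
        #include <cstdio>

        static void kernel_avx512() { std::puts("AVX-512 path"); }
        static void kernel_avx2()   { std::puts("AVX2 path"); }

        int main() {
            __builtin_cpu_init();
            if (__builtin_cpu_supports("avx512f"))
                kernel_avx512();   // only safe if every core this thread may run on has AVX-512
            else
                kernel_avx2();
            return 0;
        }

    On a hybrid part, the catch is exactly the one described above: the answer depends on which core happens to run the check.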
  • GeoffreyA - Sunday, November 7, 2021 - link

    Also, Intel seems pretty clear that it's disabled and so forth. Doesn't seem shady or controversial to me:

    https://www.intel.com/content/www/us/en/developer/...
  • SystemsBuilder - Saturday, November 6, 2021 - link

    Thinking a bit about what you wrote: "This will not happen". And it is not easy but possible… it’s a bit technical but here we go… sorry for the wall of text.

    When you optimize code today (for pre-Alder Lake CPUs) to take advantage of AVX-512, you need to write two paths (at least). The application program (custom code) would first check if the CPU is capable of AVX-512 and at what level. There are many levels of AVX-512 support, and effectively you need to write customized code for each specific CPUID (class of CPUs, e.g. Ice Lake, Skylake-X, etc.), since for whatever CPU you end up running this particular program on, you would want to utilize the most favorable/relevant AVX-512 instructions. So with the custom code today (pre-Alder Lake), the scheduler would just assign a thread to an underutilized core (loosely speaking) and the custom code would check what the core is capable of and then choose the best path in real time (AVX2 and various levels of AVX-512).

    The problem is that with Alder Lake not all cores are equal! BUT the custom code should have various paths already, so it is capable!… The issue that I see is that the custom code CPU check needs to be adjusted to check core-specific capability, not just CPUID-specific capability (one more level of granularity), AND the scheduler should schedule code with AVX-512 paths on AVX-512 capable cores by preference... What's needed is a code change in the AVX-512 path selection logic (on the application developer side - not a big deal) and compiler support that embeds scheduler-specific information about whether a specific piece of code prefers AVX-512 or not. The scheduler would then use this information to schedule in real time, and the custom code would be able to choose the right path at execution time.
    It is absolutely possible and it will come with time.
    I think this is not just applicable to AVX-512. In the future, P and E cores might have more than just AVX-512 that is different (they might diverge much more than that), so the scheduler needs to be made aware of what a thread prefers and what each core is capable of before it schedules each thread. It is the responsibility of the custom code to have multiple paths (whether they want to utilize AVX-512 or not). See the rough sketch below for one way the core-specific part could look.
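    As a very rough sketch of the core-specific part of that idea (Windows-only, in C++; the core mask is a made-up placeholder, since this is not how a real scheduler or application would discover which cores are AVX-512 capable):

        // Hypothetical sketch: pin the current thread to cores *assumed* to be
        // AVX-512 capable P-cores, so a later CPUID-style check reflects those cores.
        #include <windows.h>
        #include <cstdio>

        int main() {
            const DWORD_PTR p_core_mask = 0xFF;   // placeholder: assume logical CPUs 0-7 are AVX-512 capable
            if (SetThreadAffinityMask(GetCurrentThread(), p_core_mask) == 0) {
                std::puts("Could not pin the thread; fall back to the AVX2 path.");
                return 1;
            }
            // From here the thread only runs on the masked cores, so a runtime
            // feature check would reflect their capabilities.
            std::puts("Pinned to the assumed P-cores; the AVX-512 path could be chosen.");
            return 0;
        }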
  • SystemsBuilder - Saturday, November 6, 2021 - link

    old .exe which are not adjusted and are not recompiled for Alder Lake (code does not recognize Alder Lake) would simply automatically regress to AVX2 and the scheduler would not care which CPU to schedule it on. Basically that is what's happening today if you do not enable AVX-512 in the ASUS bios.

    Net net: you could make it work.
  • mode_13h - Saturday, November 6, 2021 - link

    > old .exe which are not adjusted and are not recompiled for Alder Lake (code does
    > not recognize Alder Lake) would simply automatically regress to AVX2

    So, like 98% of shipping AVX-512 code, by the time Raptor Lake is introduced?

    What you're proposing is a lot of work for Microsoft, only to benefit a very small number of applications. I think Intel would rather that people who need those apps simply buy CPUs which officially support AVX-512 (or maybe switch off their E-cores and enable AVX-512 in the BIOS).
