Power: P-Core vs E-Core, Win10 vs Win11

For Alder Lake, Intel brings two new things into the mix when we start talking about power.

First is what we’ve already talked about, the new P-core and E-core, each with different levels of performance per watt and targeted at different sorts of workloads. While the P-cores are expected to mimic previous generations of Intel processors, the E-cores should offer an interesting look into how low power operation might work on these systems and in future mobile systems.

The second element is how Intel is describing power. Rather than simply quote a ‘TDP’, or Thermal Design Power, Intel has decided (with much rejoicing) to start putting two numbers next to each processor, one for the base processor power and one for maximum turbo processor power, which we’ll call Base and Turbo. The idea is that the Base power mimics the TDP value we had before – it’s the power at which the all-core base frequency is guaranteed to be sustained. The Turbo power indicates the highest power level that should be observed in a normal power virus situation (usually defined as something causing 90-95% of the CPU to continually switch). There is usually a weighted time factor that limits how long a processor can remain in its Turbo state before slowly reeling back, but for the K processors Intel has made that time factor effectively infinite – with the right cooling, these processors should be able to use their Turbo power all day, all week, and all year.
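For those curious how that weighted time factor behaves when it isn’t set to infinity, below is a minimal sketch of the idea: the chip is allowed to draw Turbo power while an exponentially weighted average of recent power draw stays under the Base limit, and gets reeled back once the average catches up. The numbers and the exact weighting are illustrative assumptions, not Intel’s actual controller.

```cpp
// Illustrative sketch of a Base/Turbo power budget with a weighted time window ("tau").
// All values are assumptions for demonstration; this is not Intel's real algorithm.
#include <cstdio>

int main() {
    const double base_w  = 125.0;  // sustained "Base" power limit, watts
    const double turbo_w = 241.0;  // short-term "Turbo" power limit, watts
    const double tau_s   = 56.0;   // weighted time window; K-series parts set this ~infinite
    const double dt      = 1.0;    // simulation step, seconds

    double avg = 10.0;             // weighted average of recent package power, start near idle
    for (int t = 0; t <= 120; ++t) {
        // Turbo power is allowed while the weighted average is still under the Base limit;
        // once the average catches up, the chip is reeled back to Base power.
        const double draw  = (avg < base_w) ? turbo_w : base_w;
        const double alpha = dt / tau_s;                 // exponential weighting factor
        avg = (1.0 - alpha) * avg + alpha * draw;
        if (t % 10 == 0)
            std::printf("t=%3d s  draw=%3.0f W  weighted avg=%5.1f W\n", t, draw, avg);
    }
    return 0;
}
```

With tau set effectively to infinity, as on the K-series parts, the weighted average never catches up and the reeling-back condition simply never triggers – which is why, cooling permitting, these chips can sit at their Turbo power indefinitely.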

So with that in mind, let’s start by simply looking at the individual P-cores and E-cores.

Listed in red, in this test with all 8P+8E cores fully loaded (on DDR5), we get a CPU package power of 259 W. The progression from idle to load is steady, although there is a big jump from idle to single core: when one core is loaded, we go from 7 W to 78 W, a sizeable 71 W jump. Because this is package power (the output for core power had some issues), it does include firing up the ring, the L3 cache, and the DRAM controller, but even if those account for 20% of the difference, we’re still looking at ~55-60 W for a single active core. By comparison, for our single-threaded SPEC power testing on Linux, we see a more modest 25-30 W per core, which we put down to POV-Ray’s instruction density.

By contrast, in green, the E-cores only jump from 5 W to 15 W when a single core is active, and that is the same number as we see on SPEC power testing. Using all the E-cores, at 3.9 GHz, brings the package power up to 48 W total.

It is worth noting that there are differences between the blue bars (P-cores only) and the red bars (all cores, with E-cores loaded all the time), and that sometimes the blue bar consumes more power than the red bar. Our blue bar tests were done with the E-cores disabled in the BIOS, which means there might be more leeway in balancing a workload across a smaller number of cores, allowing for higher power. However, as everything ramps up, the advantage seems to swing the other way. It’s a bit odd to see this behavior.
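As a side note for anyone wanting to reproduce these package power figures on their own hardware: our numbers come from the processor’s internal telemetry, and on Linux a comparable package power reading can be pulled from the RAPL powercap interface in a few lines. A minimal sketch (Linux-only; the sysfs path is the standard one on recent kernels, and may need elevated privileges to read):

```cpp
// Minimal sketch: estimate CPU package power from the Linux RAPL powercap counter.
// Reads the cumulative energy counter twice, one second apart, and divides by time.
#include <chrono>
#include <fstream>
#include <iostream>
#include <thread>

static long long read_package_energy_uj() {
    std::ifstream f("/sys/class/powercap/intel-rapl:0/energy_uj");
    long long uj = 0;
    f >> uj;                               // cumulative package energy in microjoules
    return uj;
}

int main() {
    const auto t0 = std::chrono::steady_clock::now();
    const long long e0 = read_package_energy_uj();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    const long long e1 = read_package_energy_uj();
    const auto t1 = std::chrono::steady_clock::now();

    // Note: the counter wraps around; a robust tool would also check max_energy_range_uj.
    const double seconds = std::chrono::duration<double>(t1 - t0).count();
    const double watts = (e1 - e0) / 1e6 / seconds;
    std::cout << "Package power: " << watts << " W\n";
    return 0;
}
```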

Moving on to individual testing, and here’s a look at a power trace of POV-Ray in Windows 11:

Here we’re seeing a higher spike in power, up to 272 W now, with the system at 4.9 GHz all-core. Interestingly enough, we see power decrease down through the 241 W Turbo Power limit, settling around 225 W, with the reported frequency actually dropping to between 4.7-4.8 GHz instead. Technically this all-core load is meant to take into account some of the E-cores, so this might be a case of the workload distributing itself and finding the best performance/power point when it comes to instruction mix, cache mix, and IO requirements. However, it takes a good 3-5 minutes to get there, if that’s the case.

Intrigued by this, I looked at how some of our other tests did between different operating systems. Enter Agisoft:

Between Windows 10 and Windows 11, the traces look near identical. The actual run time was 5 seconds faster on Windows 11 out of 20 minutes, so 0.4% faster, which we would consider run-to-run variation. The peaks and spikes look marginally higher in Windows 11, and the frequency trace in Windows 11 looks a little more consistent, but overall they’re practically the same.

For our usual power graphs, we get something like this, and we’ll also add in the AVX-512 numbers from that page:

[Graph: (0-0) Peak Power]

Compared to Intel’s previous 11th Generation processor, the Alder Lake Core i9 uses more power during AVX2, but is actually lower in AVX-512. The difficulty of presenting this graph in the future comes down to those E-cores; they’re more efficient, as you’ll see in the results later. Even on AVX-512, Alder Lake pulls out a performance lead while using 50 W less than 11th Gen.

When we compare it to AMD however, with that 142 W PPT limit that AMD has, Intel is often trailing at a 20-70 W deficit when we’re looking at full-load efficiency. That being said, Intel is likely going to argue that in mixed workloads, such as two programs running side by side with one of them on the E-cores, it wants to be the more efficient design.

Comments

  • Oxford Guy - Sunday, November 7, 2021 - link

    ‘or maybe switch off their E-cores and enable AVX-512 in BIOS’

    This from exactly the same person who posted, just a few hours ago, that it’s correct to note that that option can disappear and/or be rendered non-functional.

    I am reminded of your contradictory posts about ECC where you mocked advocacy for it (‘advocacy’ being merely its mention) and proceeded to claim you ‘wish’ for more ECC support.

    Once again, it’s helpful to have a grasp of what one actually believes prior to posting. Allocating less effort to posting puerile insults and more toward substance is advised.
  • mode_13h - Sunday, November 7, 2021 - link

    > This from exactly the same person who posted, just a few hours ago, that it’s
    > correct to note that that option can disappear and/or be rendered non-functional.

    You need to learn to distinguish between what Intel has actually stated vs. the facts as we wish them to be. In the previous post you reference, I affirmed your acknowledgement that the capability disappearing would be consistent with what Intel has actually said, to date.

    In the post above, I was leaving open the possibility that *maybe* Intel is actually "cool" with there being a BIOS option to trade AVX-512 for E-cores. We simply don't know how Intel feels about that, because (to my knowledge) they haven't said.

    When I clarify the facts as they stand, don't confuse that with my position on the facts as I wish them to be. I can simultaneously acknowledge one reality, while maintaining my own personal preference for a different reality.

    This is exactly what happened with the ECC situation: I was clarifying Intel's practice, because your post indicated uncertainty about that fact. It was not meant to convey my personal preference, which I later added with a follow-on post.

    Having to clarify this to an "Oxford Guy" seems a bit surprising, unless you meant like Oxford Mississippi.

    > you mocked advocacy

    It wasn't mocking. It was clarification. And your post seemed more to express befuddlement than to express advocacy. It's now clear that your post was a poorly-executed attempt at sarcasm.

    Once again, it's helpful not to have your ego so wrapped up in your posts that you overreact when someone tries to offer a factual clarification.
  • Oxford Guy - Monday, November 8, 2021 - link

    I now skip to the bottom of your posts. If I see more of the same preening and posing, I spare myself the rest of the nonsense.
  • mode_13h - Tuesday, November 9, 2021 - link

    > If I see more of the same preening and posing, I spare myself the rest of the nonsense.

    Then I suggest you don't read your own posts.

    I can see that you're highly resistant to reason and logic. Whenever I make a reasoned reply, you always hit back with some kind of vague meta-critique. If that's all you've got, it can be seen as nothing less than a concession.
  • O-o-o-O - Saturday, November 6, 2021 - link

    Anyone talking about dumping x64 ISA?

    I don't see AVX-512 as a good solution. Current x64 chips are putting so much complexity into the CPU, with irrational clock speeds, that migrating the process node further to Intel 4 and beyond would be a nightmare once again.

    I believe most of the companies with in-house developers expect the end of the Xeon era is quite near, as most of the heavy computational tasks are fully optimized for GPUs and you don't want coal-burning CPUs.

    Even if it doesn't come in a 5-year timeframe, there's a real threat, and they have to be ahead of it. After all, x86 already extended its life 10+ years when it could have been discontinued. Now it's really a dinosaur. If so, non-server applications would follow the same route as well.

    We want a more simple / solid / robust base with scalability. Not an unreliable boost button that sometimes does the trick.
  • SystemsBuilder - Saturday, November 6, 2021 - link

    I don't see AVX-512 that negatively; it is just the same as AVX2 but with double the vector size and a richer instruction set. I find it pretty cool to work with, especially when you've written some libraries that can take advantage of it. As I wrote before, it looks like Golden Cove got AVX-512 right based on what Ian and Andrei uncovered: zero negative offset (e.g. running at full speed), power consumption not much more than AVX2, and it supports both FP16 and BF16 vectors! I think that's pretty darn good! I can work with that! Now I want my Sapphire Rapids with 32 or 48 Golden Cove P-cores! No, not fall 2022, I want it now! lol
  • mode_13h - Saturday, November 6, 2021 - link

    > When you optimize code today (for pre Alder lake CPUs) to take advantage
    > of AVX-512 you need to write two paths (at least).

    Ah, so your solution depends on application software changes, specifically requiring them to do more work. That's not viable for the timeframe of concern. And especially not if its successor is just going to add AVX-512 to the E-cores, within a year or so.

    > There are many levels of AVX-512 support and effectively you need write customized
    > code for each specific CPUID

    But you don't expect the capabilities to change as a function of which core a thread is running on, or within a program's lifetime! What you're proposing is very different. You're proposing to change the ABI. That's a big deal!

    > It is absolutely possible and it will come with time.

    Or not. ARM's SVE is a much better solution.

    > I think in the future P and E cores might have more than just AVX-512 that is different

    On Linux, using AMX will require a thread to "enable" it. This is a little like what you're talking about. AMX is a big feature, though, and unlike anything else. I don't expect to start having to enable every new ISA extension I want to use, or query how many hyperthreads actually support it - this becomes a mess when you start dealing with different libraries that have these requirements and limitations. [A sketch of that AMX enable appears after this comment.]

    Intel's solution isn't great, but it's understandable and it works. And, in spite of it, they still delivered a really nice-performing CPU. I think it's great if technically astute users have/retain the option to trade E-cores for AVX-512 (via BIOS), but I think it's kicking a hornet's nest to go down the path of having a CPU with asymmetrical capabilities among its cores.

    Hopefully, Raptor Lake just adds AVX-512 to the E-cores and we can just let this issue fade into the mists of time, like other missteps Intel & others have made.
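For context on the AMX "enable" mentioned in the comment above: on Linux, a process has to ask the kernel for permission to use the AMX tile-data state before executing AMX instructions. A minimal sketch of that request, using the arch_prctl() interface described in the kernel's x86 xstate documentation (the constants follow that documentation; treat this as illustrative rather than a complete AMX setup):

```cpp
// Minimal sketch: request AMX tile-data permission from the Linux kernel.
// Without this request, executing AMX instructions results in SIGILL.
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdio>

#ifndef ARCH_REQ_XCOMP_PERM
#define ARCH_REQ_XCOMP_PERM 0x1023   // ask for permission to use an XSAVE component
#endif
#define XFEATURE_XTILEDATA 18        // the AMX tile data state component

int main() {
    // One-time, per-process request; threads created afterwards inherit the permission.
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA) != 0) {
        std::perror("arch_prctl(ARCH_REQ_XCOMP_PERM)");
        return 1;   // kernel too old, or no AMX on this CPU
    }
    std::puts("AMX tile data enabled for this process; tile config/TMUL instructions may now be used.");
    return 0;
}
```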
  • SystemsBuilder - Saturday, November 6, 2021 - link

    I too believe the AVX-512 exclusion in the E-cores is transitory. Next-gen E-cores may include it and the issue goes away, for AVX-512 at least (Raptor Lake?). Still, there will be other features that P-cores have but E-cores won't, so the scheduler needs to be adjusted for that. This will continue to evolve with every generation of E and P cores - because they are here to stay.

    I read somewhere a few months ago, though right now I do not remember where (maybe on AnandTech, not sure), that the AVX-512 transistor budget is quite small (someone measured it on the die), so it is not really a big issue in terms of area.

    AMX is interesting because where AVX-512 works on 512-bit vectors, AMX works on matrices, or tiles as Intel calls them. Reading the spec on AMX, you have BF16 tiles, which is awesome if you're into neural nets. Of course GPUs will still perform better with matrix calculations (multiplications), but the benefit with AMX is that you can keep both the general CPU code and the matrix-specific code inside the CPU and mix the code seamlessly, and that's gonna be very cool - you cut out the latency between GPU and CPU (and no special GPU APIs are needed). But of course you can still use the GPU when needed (sometimes it may be faster to just do a matrix-matrix add, for instance, inside the CPU with the AMX tiles) - more flexibility.

    Anyway, I do think we will run into a similar issue with AMX as we have with AVX-512 on Alder Lake, and therefore the scheduler again needs to become aware of each core's capabilities, and each piece of code needs to state what type of core it prefers to run on: an AVX2-, AVX-512-, or AMX-capable core, etc. (the compiler's job). This way the scheduler can do the best job possible with every thread. [A sketch of a per-core capability query appears after this comment.]
    There will be some teething for a while, but I think this is the direction it is going.
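On that point about per-core capability awareness: today's hybrid parts already let software ask what kind of core the current thread is executing on. A minimal sketch using CPUID's hybrid information (GCC/Clang on x86-64; any dispatch policy built on top of this is left open, and a robust version would pin the thread before querying):

```cpp
// Minimal sketch: detect a hybrid CPU and report which core type is executing this code.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned eax, ebx, ecx, edx;

    if (__get_cpuid_max(0, nullptr) < 0x1A) {
        std::puts("CPUID leaf 0x1A not supported (not a hybrid-aware CPU).");
        return 0;
    }

    // CPUID.(EAX=07H,ECX=0):EDX[15] is the "hybrid" flag (P-cores + E-cores present).
    __cpuid_count(0x07, 0, eax, ebx, ecx, edx);
    const bool hybrid = (edx >> 15) & 1;

    // CPUID.(EAX=1AH):EAX[31:24] reports the core type of the core running this thread.
    // Note: without pinning, the thread may migrate between the two queries.
    __cpuid_count(0x1A, 0, eax, ebx, ecx, edx);
    const unsigned core_type = eax >> 24;

    std::printf("hybrid=%d core_type=0x%02x (%s)\n", hybrid ? 1 : 0, core_type,
                core_type == 0x40 ? "P-core" :
                core_type == 0x20 ? "E-core" : "unknown/non-hybrid");
    return 0;
}
```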
  • mode_13h - Sunday, November 7, 2021 - link

    The difference is that AMX is new. It's also much more specialized, as you point out. But that means that they can place new hoops for code to jump through, in order to use it.

    It's very hard to put a cat like AVX-512 back in the bag.
  • SystemsBuilder - Saturday, November 6, 2021 - link

    To be clear, I also want to add a note on the way code is written today (in my organization) for a pre-Alder Lake code base: every time we write a code path for AVX-512, we need to write a fallback code path in case the CPU is not AVX-512 capable. This is standard (unless you can control the execution H/W 100% - i.e. the servers).
    That does not mean all code has to be duplicated, but in the inner loops where the 80/20 rule comes into play (i.e. the 20% of the code that consumes 80% of the time, which in my experience often ends up more like a 99/1 rule), that's where you write two code paths:
    1. for AVX-512, in case the CPU is capable, and
    2. with just AVX2, in case the CPU is not.
    Mostly this ends up being, as I said, just the innermost loops, and there are excellent, broadly available templates to use for this. [A sketch of this two-path pattern appears after this comment.]
    Just from a pure comp sci perspective it is quite interesting to vectorize code and see the benefits - pretty cool actually.
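To illustrate the two-path pattern described in the comment above, here is a minimal sketch: an AVX-512 inner loop plus an AVX2 fallback, with the path chosen at runtime from a CPU capability check. The function names and the toy reduction are ours for illustration, not any particular library's API; build with GCC or Clang on x86-64.

```cpp
// Minimal sketch of runtime dispatch between an AVX-512 code path and an AVX2 fallback.
#include <immintrin.h>
#include <cstddef>
#include <cstdio>

__attribute__((target("avx512f")))
static float sum_avx512(const float* x, std::size_t n) {
    __m512 acc = _mm512_setzero_ps();
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16)                         // 16 floats per 512-bit register
        acc = _mm512_add_ps(acc, _mm512_loadu_ps(x + i));
    float lanes[16];
    _mm512_storeu_ps(lanes, acc);
    float s = 0.0f;
    for (float v : lanes) s += v;                        // horizontal sum of the lanes
    for (; i < n; ++i) s += x[i];                        // scalar tail
    return s;
}

__attribute__((target("avx2")))
static float sum_avx2(const float* x, std::size_t n) {
    __m256 acc = _mm256_setzero_ps();
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8)                           // 8 floats per 256-bit register
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    float s = 0.0f;
    for (float v : lanes) s += v;
    for (; i < n; ++i) s += x[i];
    return s;
}

int main() {
    float data[1000];
    for (int i = 0; i < 1000; ++i) data[i] = 0.5f;

    // Runtime check: only take the AVX-512 path if this CPU actually reports AVX-512F.
    const float s = __builtin_cpu_supports("avx512f") ? sum_avx512(data, 1000)
                                                      : sum_avx2(data, 1000);
    std::printf("sum = %.1f\n", s);
    return 0;
}
```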
