Conclusion: Variables Maketh the CPU

Suffice it to say, Intel’s new 12th Generation Core family adds more variables to the mix than a traditional processor launch: a combination of two different types of core, arriving just as a new operating system has launched and at the inflection point of a new memory standard. Let’s break down some of the differences we’ve observed in our testing.

When we compare Windows 10 to Windows 11 purely on throughput benchmarks, we don’t find much difference. There are a few spots where Windows 11 has a slight advantage in multi-threaded workloads, which comes down to how new threads are assigned to the performance cores and the efficiency cores. Intel stated that performance between the two operating systems, at least for CPU workloads, should be within a few percentage points, with Windows 11 showing lower run-to-run variance, and our testing corroborates this. Windows 10 also had some scheduling issues with low-priority threads, which we expect to be ironed out with updates.

Comparing the new DDR5 to the old DDR4 is a different story, as the new memory standard offers a substantial uplift in memory bandwidth. As we saw recently with the M1 Max review, memory bandwidth can sometimes be a limiting factor. In our testing, DDR5 had a minor lead in single-threaded tests, but in a lot of multi-threaded tests the lead was significant. In real-world workloads we saw gains of +14% in Agisoft, +10% in NAMD, +8% in V-Ray, +10% in Handbrake, and +20% in WinRAR. In SPEC2017, a couple of single-threaded workloads gained +15% over DDR4, while multi-threaded results averaged up to +40%, with specific tests going higher still. This also comes down to the doubled memory channel count (4x32-bit vs 2x64-bit), which can be better utilized on top of the bandwidth increases.

Comparing the P-core to the E-core, it’s a story of how an individual E-core can perform on par with a Skylake core. Having eight extra Skylake-class cores is nothing to be sniffed at. In a lot of tests an E-core delivers around half the performance of a P-core, but the P-core is itself now the market leader in performance. The Golden Cove core inside Alder Lake has reclaimed the single-threaded performance crown with an uplift in SPEC of 18-20%, in line with Intel’s 19% claim. This puts it ahead of Apple’s M1 Max, and 6% (integer) and 16% (floating point) ahead of AMD’s Zen 3 core.

The Core i9-12900K

Combining fast P-cores, Skylake-class E-cores, and DDR5 into one package means that Intel has certainly jumped from behind the competition to in front of it, or at least in the mix. When you have your operating system set up just right, and no issues with schedulers, it outperforms AMD’s offering when single core performance matters, and in multi-threaded workloads, it does tend to sit somewhere between a 5900X and a 5950X.

[Benchmark chart: (5-1c) Handbrake 1.3.2, 1080p30 H264 to 4K60 HEVC]

It’s important to note that in some tests, the Core i9-12900K does win outright. At this point we should consider how much of that is core-related versus standards-related: DDR5 has produced somewhat of an uplift, and the competition is expected to claw some of that back when it adopts the new memory, but those products are not expected until the latter half of 2022. For users with those specific workloads today, and willing to pay the DDR5 early adopter tax, Alder Lake can provide performance uplifts right now.

[Benchmark chart: (0-0) Peak Power]

Power is an interesting topic, and although our peak power numbers with all cores loaded were above the 241W Turbo power on the box, in real-world workloads consumption didn’t tend to go that high. The P-cores alone matched the power consumption of Intel’s 11th Generation in AVX2 workloads, but adding in the E-cores does put the chip over the previous generation. I’m not entirely sure what that says about the Intel 7 manufacturing process compared to the 10SF used before. A lot of the performance gains here appear to come from IPC and DDR5, and they don’t seem to have come with performance-per-watt gains on the P-cores. It means that Intel is still losing on power efficiency at load compared to the competition.

[Benchmark chart: (2-2) 3D Particle Movement v2.1 (Peak AVX)]

I have to say a side word about AVX-512 support, because we found it. If you’re prepared to disable the E-cores and use specific motherboards, it works. After Intel spent time saying it was fused off, we dug into the story and found it still works for those that need it. It’s going to be interesting to hear how Intel discusses this feature in the future.

Overall though, there’s no denying that Intel is now in the thick of it, or, if I were to argue, the market leader. The nuances of the hybrid architecture are still nascent, so it will take time to discover where the benefits will come from, especially when we get to the laptop variants of Alder Lake. At a retail price of around $650, the Core i9-12900K ends up competitive between the two Ryzen 9 processors, each with their own strong points. The only serious downsides for Intel are the cost of switching to DDR5 and users having to learn Windows 11. That’s not necessarily on Intel, but it’s a few more hoops than we regularly jump through.

 

Comments

  • mode_13h - Saturday, November 6, 2021 - link

    > So, Alder Lake is a turkey as a high-end CPU, one that should have never been released?

    How do you reach that conclusion, after it blew away its predecessor and (arguably) its main competitor, even without AVX-512?

    > This is because each program has to include Alder Lake AVX-512 support and
    > those that don’t will cause performance regressions?

    No, my point was that relying on the OS to trap AVX-512 instructions executed on E-cores and then context-switch the thread to a P-core is likely to be problematic from a power & performance perspective. Another issue is that code which autodetects AVX-512 won't see it while running on an E-core. This can result in more than performance issues - it could result in software malfunctions if some threads are using AVX-512 data structures while other threads in the same process aren't. Those are only a couple of the issues with enabling heterogeneous AVX-512 support, which is what some people seem to be advocating for.
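
    A rough sketch of that detection hazard, for illustration only: it uses the real GCC/Clang builtin __builtin_cpu_supports(), but the per-core divergence itself is hypothetical, since shipping Alder Lake parts report no AVX-512 on any core.

    ```cpp
    // Each thread queries the CPU it happens to be running on. On today's
    // homogeneous CPUs both answers always match; on a hybrid part that exposed
    // AVX-512 only on P-cores, they could differ (hypothetical illustration).
    #include <cstddef>
    #include <cstdio>
    #include <thread>

    static std::size_t pick_vector_width() {
        if (__builtin_cpu_supports("avx512f")) return 64;  // 512-bit vectors
        if (__builtin_cpu_supports("avx2"))    return 32;  // 256-bit vectors
        return 16;                                         // SSE baseline
    }

    int main() {
        std::size_t width_a = 0, width_b = 0;
        std::thread t1([&] { width_a = pick_vector_width(); });  // might land on a P-core
        std::thread t2([&] { width_b = pick_vector_width(); });  // might land on an E-core
        t1.join();
        t2.join();
        // If these ever disagreed, data laid out for 64-byte vectors by one thread
        // could be processed by 32-byte code in another - the malfunction described above.
        std::printf("thread A: %zu-byte vectors, thread B: %zu-byte vectors\n",
                    width_a, width_b);
    }
    ```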

    > Is Windows 11 able to support a software utility to disable the low-power cores
    > once booted into Windows or are we restricted to disabling them via BIOS?

    That's not the proposal to which I was responding, which you can see by the quote at the top of my post.
  • Oxford Guy - Sunday, November 7, 2021 - link

    So, you’ve stated the same thing again — that Intel knew Alder Lake couldn’t be fully supported by Windows 11 even before it (AL) was designed?

    The question about the software utility is one you’re unable to answer, it seems.
  • mode_13h - Sunday, November 7, 2021 - link

    > The question about the software utility is one you’re unable to answer, it seems.

    That's not something I was trying to address. I was only responding to @SystemsBuilder's idea that Windows should be able to manage having some cores with AVX-512 and some cores without.

    If you'd like to know what I think about "the software utility", that's a fair thing to ask, but it's outside the scope of what I was discussing and therefore not a relevant counterpoint.
  • Oxford Guy - Monday, November 8, 2021 - link

    More hilarious evasion.
  • mode_13h - Tuesday, November 9, 2021 - link

    > More hilarious evasion.

    Yes, evasion of your whataboutism. Glad you enjoyed it.
  • GeoffreyA - Sunday, November 7, 2021 - link

    "So, Intel designed and released a CPU that it knew wouldn’t be properly supported by Windows 11"

    Oxford Guy, there's a difference between the concerns of the scheduler and those of AVX-512. Alder Lake runs even on Windows 10. Only, there's a bit of suboptimal scheduling there where the P and E cores are concerned.

    If AVX512 weren't disabled, it would've been something of a nightmare keeping track of which cores support it and which don't. Usually, code checks at runtime whether a certain set of instructions---SSE3, AVX, etc---are available, using the CPUID instruction or intrinsic. Stir this complex yeast into the soup of performance and efficiency cores, and there will be trouble in the kitchen.

    Under this new, messy state of affairs, the only feasible option mum had, or should I say Intel, was bringing the cores onto an equal footing by locking AVX-512 in the attic and saying, no, that fellow doesn't live here.
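
    For illustration, a minimal sketch of the kind of runtime check described here, using the cpuid.h helper that GCC and Clang ship. A complete check would also confirm OS support for the ZMM register state via OSXSAVE/XGETBV before trusting the AVX-512 bit; that part is omitted to keep the sketch short.

    ```cpp
    // Minimal runtime ISA check via CPUID leaf 7, subleaf 0 (GCC/Clang <cpuid.h>).
    #include <cpuid.h>
    #include <cstdio>

    enum class VecLevel { SSE2, AVX2, AVX512 };

    static VecLevel detect_vector_level() {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
        // Structured extended feature flags; returns 0 if the leaf is unsupported.
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return VecLevel::SSE2;
        const bool has_avx2    = ebx & (1u << 5);   // EBX bit 5:  AVX2
        const bool has_avx512f = ebx & (1u << 16);  // EBX bit 16: AVX-512 Foundation
        if (has_avx512f) return VecLevel::AVX512;
        if (has_avx2)    return VecLevel::AVX2;
        return VecLevel::SSE2;
    }

    int main() {
        switch (detect_vector_level()) {
            case VecLevel::AVX512: std::puts("dispatching AVX-512 path"); break;
            case VecLevel::AVX2:   std::puts("dispatching AVX2 path");    break;
            default:               std::puts("dispatching SSE2 baseline");
        }
    }
    ```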
  • GeoffreyA - Sunday, November 7, 2021 - link

    Also, Intel seems pretty clear that it's disabled and so forth. Doesn't seem shady or controversial to me:

    https://www.intel.com/content/www/us/en/developer/...
  • SystemsBuilder - Saturday, November 6, 2021 - link

    Thinking a bit about what you wrote: "This will not happen". It is not easy, but it is possible… it’s a bit technical, but here we go… sorry for the wall of text.

    When you optimize code today (for pre-Alder Lake CPUs) to take advantage of AVX-512, you need to write two paths (at least). The application program (custom code) would first check whether the CPU is capable of AVX-512 and at what level. There are many levels of AVX-512 support, and effectively you need to write customized code for each specific CPUID (class of CPUs, e.g. Ice Lake, Skylake-X, etc.), since for whatever CPU you end up running this particular program on, you would want to utilize the most favorable/relevant AVX-512 instructions.

    So with the custom code today (pre-Alder Lake), the scheduler would just assign a thread to an underutilized core (loosely speaking) and the custom code would check what the core is capable of and then choose the best path in real time (AVX2 and various levels of AVX-512). The problem is that with Alder Lake not all cores are equal! BUT the custom code should have various paths already, so it is capable!

    The issue that I see is that the custom code's CPU check needs to be adjusted to check core-specific capability, not just CPUID-specific capability (one more level of granularity), AND the scheduler should schedule code with AVX-512 paths on AVX-512-capable cores by preference. What's needed is a code change in the AVX-512 path-selection logic (on the application developer's side - not a big deal) and compiler support that embeds scheduler-specific information about whether a specific piece of code prefers AVX-512 or not. The scheduler would then use this information to schedule in real time, and the custom code would be able to choose the right path at execution time.
    It is absolutely possible and it will come with time.
    I think this is not just applicable to AVX-512. In the future, P and E cores might differ by more than just AVX-512 (they might diverge much more than that), so the scheduler needs to be made aware of what a thread prefers and what each core is capable of before it schedules each thread. It is the responsibility of the custom code to have multiple paths (whether it wants to utilize AVX-512 or not).
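
    For reference, a sketch of how multi-path SIMD code is commonly shipped today, using GCC's function multi-versioning on a GCC/glibc toolchain; the ISA check happens once per process rather than per core, which is the gap the scheduler and compiler changes described above would need to close.

    ```cpp
    // The compiler emits an AVX-512, an AVX2, and a baseline clone of this loop.
    // The GNU IFUNC resolver picks one clone per process, based on a CPUID check
    // on whichever core happens to run it - not per thread and not per core.
    #include <cstddef>

    __attribute__((target_clones("avx512f", "avx2", "default")))
    void scale(float* data, std::size_t n, float k) {
        for (std::size_t i = 0; i < n; ++i)
            data[i] *= k;
    }

    int main() {
        float buf[256] = {};
        scale(buf, 256, 2.0f);  // runs whichever clone was selected at load time
    }
    ```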
  • SystemsBuilder - Saturday, November 6, 2021 - link

    Old .exe files which are not adjusted and are not recompiled for Alder Lake (code does not recognize Alder Lake) would simply automatically regress to AVX2, and the scheduler would not care which core to schedule them on. Basically that is what's happening today if you do not enable AVX-512 in the ASUS BIOS.

    Net net: you could make it work.
  • mode_13h - Saturday, November 6, 2021 - link

    > old .exe which are not adjusted and are not recompiled for Alder Lake (code does
    > not recognize Alder Lake) would simply automatically regress to AVX2

    So, like 98% of shipping AVX-512 code, by the time Raptor Lake is introduced?

    What you're proposing is a lot of work for Microsoft, only to benefit a very small number of applications. I think Intel would rather that people who need those apps simply buy CPUs which officially support AVX-512 (or maybe switch off their E-cores and enable AVX-512 in the BIOS).
