Instruction Sets: Alder Lake Dumps AVX-512 in a BIG Way

One of the big questions we should address here is how the P-cores and E-cores have been adapted to work inside a hybrid design. A critical aspect of any hybrid design is whether both types of core support the same instruction set. It is possible to build a processor with unbalanced instruction support, but that requires the system to trap unsupported instructions and migrate the thread to a capable core mid-execution. The simpler approach is to ensure that both types of core support the same instructions, and this is what Intel has done in Alder Lake.
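With a uniform instruction set, software can select one code path for all cores at startup rather than relying on traps and migration. A minimal sketch of that pattern in C (the function names are hypothetical; `__builtin_cpu_supports` is a GCC/Clang extension wrapping CPUID):

```c
#include <stddef.h>

/* Scalar reference implementation of a kernel. */
static long sum_scalar(const int *v, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++) s += v[i];
    return s;
}

/* Stand-in for a hand-vectorized AVX2 fast path; here it simply
 * reuses the scalar body so the sketch stays self-contained. */
static long sum_vector(const int *v, size_t n) {
    return sum_scalar(v, n);
}

typedef long (*sum_fn)(const int *, size_t);

/* Pick an implementation once, based on what the CPU reports.
 * Because every Alder Lake core reports the same features, the
 * chosen path is safe no matter which core the thread lands on. */
static sum_fn select_sum(void) {
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
    if (__builtin_cpu_supports("avx2"))
        return sum_vector;
#endif
    return sum_scalar;
}
```

If the P-cores and E-cores advertised different features, a dispatcher like this could pick a path that faults when the scheduler moves the thread, which is exactly the problem a uniform ISA avoids.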

In order to get to this point, Intel had to cut some features from its P-core and improve some features on the E-core. The biggest cut is AVX-512 support: Alder Lake loses it entirely. When we say losing support, we mean that AVX-512 is physically fused off, so even if you run the processor with the E-cores disabled at boot time, AVX-512 remains disabled.

Intel’s journey with AVX-512 has been long and fragmented. Some workloads can be vectorised – multiple pieces of consecutive data all require the same operation, so they can be packed into a single wide register and processed at once with a single instruction. AVX-512 is the third generation of Intel’s AVX vector extensions (SSE registers are 128-bit, AVX/AVX2 are 256-bit, AVX-512 is 512-bit). It was initially found on server processors, then mobile, and we found it in the previous generation of desktop processors. At the time, Intel stated that by enabling AVX-512 on its processor line from top to bottom, it would encourage greater adoption, and it was leaning hard into that message.
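To make the vectorisation idea concrete, here is a scalar model (in plain C, no intrinsics) of what a single 256-bit AVX2 packed add does: eight 32-bit lanes, all updated by one instruction. AVX-512 doubles the register width to sixteen lanes.

```c
#include <stdint.h>

/* Conceptual model of a 256-bit AVX2 register holding
 * eight consecutive 32-bit integers. */
typedef struct { int32_t lane[8]; } v256i;

/* One packed add (vpaddd in hardware): a single instruction
 * performs all eight lane additions at once. The loop below
 * only models that behaviour in scalar code. */
static v256i padd_epi32(v256i a, v256i b) {
    v256i r;
    for (int i = 0; i < 8; i++)
        r.lane[i] = a.lane[i] + b.lane[i];
    return r;
}
```

This is why vector width matters: the same loop over 512-bit registers retires twice as much work per instruction, which is the appeal AVX-512 held for throughput-bound code.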

But that all changes with Alder Lake: both the desktop and mobile processors will have AVX-512 disabled in all scenarios. The silicon will still be physically present in the core, because Intel uses the same core design in its next-generation server processor, Sapphire Rapids. One could argue that removing the AVX-512 unit would make the desktop cores a lot smaller, however Intel has disputed this point at previous launches. What it means is that the consumer parts carry some extra dark silicon in the design, which might ultimately help thermals, or absorb defects.

But it does mean that AVX-512 is probably dead for consumers.

Intel isn’t even supporting AVX-512 via a dual-issue AVX2 mode over multiple operations – it simply won’t work on Alder Lake. If AMD’s Zen 4 processors support some form of AVX-512 as has been theorized, even as dual-issued AVX2 operations, we might end up in the dystopian situation where AMD makes the only consumer processors on the market to support AVX-512.
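Because the unit is fused off rather than merely undocumented, well-behaved software will simply see the AVX-512 feature bits cleared in CPUID and fall back to AVX2. A minimal check of the AVX-512 Foundation bit (CPUID leaf 7, sub-leaf 0, EBX bit 16), using GCC/Clang's `<cpuid.h>` helper; note a production check would also verify OS state-save support via XGETBV, which is omitted here for brevity:

```c
/* Report whether the CPU advertises AVX-512F. On Alder Lake this
 * returns 0 regardless of whether the E-cores are disabled, because
 * the unit is fused off. Simplified: skips the OSXSAVE/XGETBV check
 * that full feature detection also requires. */
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
#include <cpuid.h>
static int has_avx512f(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 0;                       /* leaf 7 not supported */
    return (int)((ebx >> 16) & 1u);     /* EBX bit 16 = AVX512F */
}
#else
static int has_avx512f(void) { return 0; }  /* non-x86 build */
#endif
```

Code that instead probed for AVX-512 by executing an instruction and catching #UD would still fail cleanly here, since the fused-off unit faults like any unsupported extension.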

On the E-core side, Gracemont will be Intel’s first Atom core to support AVX2. In our testing, the previous-generation Tremont Atom core at 2.9 GHz performed similarly to a 2.9 GHz Haswell-based Celeron, i.e. essentially identical in non-AVX2 workloads. By adding AVX2, plus fundamental performance increases, we’re told to expect ‘Skylake-like performance’ from the new E-cores. Intel also stated that both the P-core and E-core will be at ‘Haswell-level’ AVX2 support.

By enabling AVX2 on the E-cores, Intel is also integrating support for VNNI instructions for neural-network calculations. In the past, VNNI (and VNNI2) were built for AVX-512; this time around Intel has implemented an AVX2 version of VNNI for both the P-core and E-core designs in Alder Lake. So while AVX-512 might be dead here, at least some of those AI acceleration features carry over, albeit in AVX2 form.
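The core VNNI operation is VPDPBUSD: each 32-bit lane accumulates the sum of four unsigned-8-bit × signed-8-bit products, the inner loop of quantized neural-network inference. A scalar reference for one lane (the AVX2-VNNI form processes 8 such lanes per instruction, the AVX-512 form 16):

```c
#include <stdint.h>

/* Scalar reference for one 32-bit lane of VPDPBUSD:
 * acc += sum of four u8 * s8 products. In hardware this fuses
 * what would otherwise take several multiply/widen/add
 * instructions into a single operation. */
static int32_t vpdpbusd_lane(int32_t acc,
                             const uint8_t a[4],
                             const int8_t b[4]) {
    for (int i = 0; i < 4; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}
```

For an int8-quantized dot product, a kernel would stream through the activation and weight arrays four bytes at a time with this operation, which is why carrying VNNI over to AVX2 preserves most of the AI-inference benefit despite the narrower registers.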

For the data center versions of these big cores, Intel does have AVX-512 support and new features for matrix extensions, which we will cover in that section.

223 Comments

  • mode_13h - Friday, August 20, 2021 - link

    > - treat all-zero lines as special cases that are tagged in L2/SLC but don't require
    > transferring data on the NoC. Intel had something like this in IceLake that, after
    > some time, they switched off with microcode update.

    I heard about that. Sad to see it go, but certainly one of those micro-optimizations that's barely measurable.
  • name99 - Thursday, August 19, 2021 - link

    " This is over double that of AMD’s Zen3 µarch, and really only second to Apple’s core microarchitecture which we’ve measured in at around 630 instructions. "

    Apple's ROB is in fact around 2300 entries in size. But because it is put together differently than the traditional ROB, you will get very different numbers depending on exactly what you test.

    The essential points are
    (a)
    - the ROB proper consists of about 330 "rows" where each row holds 7 instructions.
    - one of these instructions can be a "failable", ie something that can force a flush. In other words branches or load/stores
    - so if you simply count NOPs, you'll get a count of ~2300 entries. Anything else will hit earlier limits.

    (b) The most important of these limits, for most purposes, is the History File which tracks changes in the logical to physical register mapping. THIS entity has ~630 entries and is what you will bump into first if you test somewhat varied code.
    Earlier limits are ~380 int physical registers, ~420 or so FP registers, ~128 flag registers. But if you balance code across fp and int you will hit the 630 History File limit first.

    (c) If you carefully balance that against code that does not touch the History File (mainly stores and branches) then you can get to almost but not quite 1000 ROB entries.

    The primary reason Apple looks so different from x86 is that (this is a common pattern throughout Apple's design)
    - what has traditionally been one object (eg a ROB that tracks instruction retirement AND tracks register mappings) is split into two objects each handling a single task.
    The ROB handles in-order retiring, including flush. The History File handles register mapping (in case of flush and revert to an earlier state) and marking registers as free after retire.

    This design style is everywhere. Another, very different, example, is the traditional Load part of the Load/Store queue is split into two parts, one tracking overlap with pending/unresolved stores, the second part tracking whether Replay might be required (eg because of missing in TLB or in the L1).

    - even a single object is split into multiple what Apple calls "slices", but essentially a way to share rare cases with common cases, so the ROB needs to track some extra state for "failable" instructions that may cause a flush, but not every instruction needs that state. So you get this structure where you have up to six "lightweight" instructions with small ROB slots, and a "heavyweight" instruction with a larger ROB slot. Again we see this sort of thing everywhere, eg in the structures that hold branches waiting to retire which are carefully laid out to cover lots of branches, but with less storage for various special cases (taken branches need to preserve the history/path vectors, non-taken branches don't; indirect branches need to store a target, etc etc)
  • GeoffreyA - Friday, August 20, 2021 - link

    Thanks for all the brilliant comments on CPU design!
  • mode_13h - Friday, August 20, 2021 - link

    Go go go!
  • GeoffreyA - Thursday, August 19, 2021 - link

    I think Intel did a great job at last. Golden Cove, impressive. But the real star's going to be Gracemont. Atom's come of age at last. Better than Skylake, while using less power, means it's somewhere in the region of Zen 2. Got a feeling it'll become Intel's chief design in the future, the competitor to Zen.

    As for Intel Thread Director, interesting and impressive; but the closer tying of hardware and scheduler, not too sure about that. Name reminded me of the G-Man, strangely enough. AVX512, good riddance. And Intel Marketing, good job on the slides. They look quite nice. All in all, glad to see Intel's on the right track. Keep it up. And thanks for the coverage, Ian and Andrei.
  • Silver5urfer - Friday, August 20, 2021 - link

    Lol. That is no star. The small puny SKL cores are not going to render your high FPS nor the Zip compression. They are admitting themselves these are efficiency. Why ? Because 10SF is busted in power consumption and Intel cannot really make any more big cores on their Desktop platform without getting power throttled. On top their Ring bus cannot scale like SKL anymore.
  • GeoffreyA - Friday, August 20, 2021 - link

    Not as it stands, but mark my words, the Atom design is going to end up the main branch, on the heels of Zen in p/w. Interesting ideas are getting poured into this thing, whereas the bigger cores, they're just making it wider for the most part.
  • ifThenError - Friday, August 20, 2021 - link

    Totally understand your point and I'd personally welcome such a development!

    Anyway, the past years have shown a rather opposite way. Just take ARM as an example. There once was an efficiency line of cores that got the last update years ago with the A35. Now it's labelled as "super efficient" and hardly has any implementations aside from devices sitting idle most of the time. You can practically consider it abandoned.
    The former mid tier with the A55 is now marketed as efficient cores, while the former top tier A7x more and more turns into the new midrange. Meanwhile people go all crazy about the new X1 top tier processors even though the growth of power consumption and heat is disproportionate to the performance. Does this sound reasonable in a power- and heat-constrained environment? Yeah, I don't think so either! ;-)

    For that reason I perfectly understand Ian's demand for a 64 core Gracemont CPU. Heck, even a 16 core would still be #1 on my wishlist.
  • GeoffreyA - Saturday, August 21, 2021 - link

    Yes, performance/watt is the way to go, and I reckon a couple more rounds of iteration will get Atom running at the competition's level. The designs are similar enough. It's ironic, because Atom had a reputation for being so slow.
  • mode_13h - Saturday, August 21, 2021 - link

    > Atom had a reputation for being so slow.

    With Tremont, Intel really upped their Atom game. It added a lot of complexity and grew significantly wider.

    However, it's not until Gracemont's addition of AVX/AVX2 that Intel is clearly indicating it wants these cores to be taken seriously.

    I wonder if Intel will promote their Atom line of SoCs as the new replacement for Xeon D. Currently, I think they're just being marketed for embedded servers and 5G basestations, but they seem to have the nous to take on the markets Xeon D was targeting.
