Instruction Sets: Alder Lake Dumps AVX-512 in a BIG Way

One of the big questions we should address here is how the P-cores and E-cores have been adapted to work inside a hybrid design. A critical aspect of any hybrid design is whether the two types of core support the same instruction set. It is possible to build a processor with unbalanced instruction support, but that requires hardware to trap unsupported instructions and migrate the thread to a capable core mid-execution. The simple way around this is to ensure that both types of core support exactly the same instructions, and that is what Intel has done in Alder Lake.

In order to get to this point, Intel had to cut some features from its P-core and improve some features on the E-core. The biggest casualty is AVX-512 support in Alder Lake. When we say losing support, we mean that AVX-512 is physically fused off: even if you run the processor with the E-cores disabled at boot time, AVX-512 remains disabled.
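Because the feature is fused off at the silicon level, the CPUID feature flag simply reads zero, and well-behaved software falls back to AVX2 at runtime. As a minimal sketch of that detection, assuming GCC or Clang's cpuid.h helper (the function name is ours):

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

/* AVX-512 Foundation is reported in CPUID leaf 7, sub-leaf 0, EBX bit 16.
 * On Alder Lake this bit reads 0 regardless of whether the E-cores are
 * enabled, because the unit is fused off. (Production code would also
 * check OSXSAVE/XGETBV to confirm OS support for the wide registers.) */
static int has_avx512f(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 0;
    return (ebx >> 16) & 1;
}

int main(void) {
    printf("%s\n", has_avx512f() ? "AVX-512 path" : "AVX2 fallback path");
    return 0;
}
```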

Intel’s journey with AVX-512 has been long and fragmented. Some workloads can be vectorised: multiple pieces of consecutive data all require the same operation, so they can be packed into a single wide register and processed all at once with a single instruction. Designed as the third generation of Intel's AVX vector extensions (AVX introduced 256-bit floating-point operations, AVX2 extended 256-bit operations to integers, and AVX-512 doubles the width to 512 bits), AVX-512 was initially found on server processors, then mobile, and then the previous generation of desktop processors. At the time, Intel stated that by enabling AVX-512 on its processor line from top to bottom, it would encourage greater adoption, and it was leaning hard into this message.
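To make the idea concrete, here is a minimal AVX2 sketch of that packing (function and array names are our own): a 256-bit register holds eight consecutive floats, so a single instruction performs eight additions at once, and AVX-512 doubles that to sixteen.

```c
#include <immintrin.h>

/* Eight float additions per instruction using 256-bit AVX2 registers. */
void add_arrays_avx2(const float *a, const float *b, float *out, int n) {
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);               /* pack 8 floats   */
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb)); /* 8 adds at once  */
    }
    for (; i < n; i++)                                     /* scalar remainder */
        out[i] = a[i] + b[i];
}
```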

But that all changes with Alder Lake: both desktop and mobile processors will now have AVX-512 disabled in all scenarios. The silicon will still be physically present in the core, because Intel uses the same core design in its next-generation server processor, Sapphire Rapids. One could argue that if the AVX-512 unit were removed from the desktop cores, they would be a lot smaller, but Intel has disagreed on this point at previous launches. What it means for the consumer parts is some extra dark silicon in the design, which might ultimately help thermals, or absorb defects.

But it does mean that AVX-512 is probably dead for consumers.

Intel isn’t even supporting AVX-512 by cracking each instruction into dual-issue AVX2 operations - it simply won’t work on Alder Lake. If AMD’s Zen 4 processors support some form of AVX-512 as has been theorized, even implemented as dual-issue AVX2 operations, we might be in some dystopian processor environment where AMD makes the only consumer processors on the market to support AVX-512.
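As an aside on what 'dual-issue AVX2' would mean, here is a software sketch (the function name is hypothetical): one notional 512-bit operation expressed as two independent 256-bit halves, roughly what a compiler targeting AVX2 already emits when AVX-512 is unavailable.

```c
#include <immintrin.h>

/* One notional 512-bit add (16 floats) done as two 256-bit AVX2 halves.
 * A dual-issue scheme would crack the op this way in hardware; Alder
 * Lake does not, so software has to issue the halves itself. */
void add16_as_two_avx2_ops(const float *a, const float *b, float *out) {
    __m256 lo = _mm256_add_ps(_mm256_loadu_ps(a),     _mm256_loadu_ps(b));
    __m256 hi = _mm256_add_ps(_mm256_loadu_ps(a + 8), _mm256_loadu_ps(b + 8));
    _mm256_storeu_ps(out,     lo);
    _mm256_storeu_ps(out + 8, hi);
}
```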

On the E-core side, Gracemont will be Intel’s first Atom core to support AVX2. In our testing, the previous-generation Tremont Atom core at 2.9 GHz performed similarly to a 2.9 GHz Haswell Celeron processor, i.e. near identical in non-AVX2 situations. By adding AVX2, plus fundamental performance increases, we’re told to expect ‘Skylake-like performance’ from the new E-cores. Intel also stated that both the P-core and E-core will offer ‘Haswell-level’ AVX2 support.

By enabling AVX2 on the E-cores, Intel is also integrating support for VNNI instructions for neural network calculations. In the past, VNNI (and VNNI2) were built on top of AVX-512; this time around, Intel has created an AVX2 version of VNNI for both the P-core and E-core designs in Alder Lake. So while AVX-512 might be dead here, at least some of those AI acceleration features carry over, albeit in AVX2 form.
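To show what carries over, here is a sketch of the AVX2-VNNI form. The vpdpbusd instruction is a fused multiply-accumulate on 8-bit integers, the inner loop of quantized neural-network inference. The intrinsic name below is as exposed by recent GCC and Clang under -mavxvnni; treat the exact spelling as an assumption of this sketch.

```c
#include <immintrin.h>

/* vpdpbusd: for each of the eight 32-bit lanes, multiply four unsigned
 * bytes by four signed bytes, sum the four products, and add the result
 * to the accumulator: 32 multiply-adds in a single instruction. */
__m256i vnni_dot_step(__m256i acc, __m256i u8_activations, __m256i s8_weights) {
    return _mm256_dpbusd_avx_epi32(acc, u8_activations, s8_weights);
}
```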

For the data center versions of these big cores, Intel does have AVX-512 support, plus new matrix extensions (AMX), which we will cover in that section.

Comments

  • GreenReaper - Friday, August 20, 2021 - link

    Intel Threat Detected!
  • ifThenError - Friday, August 20, 2021 - link

    LOL!
    Underrated comment
  • mode_13h - Saturday, August 21, 2021 - link

    :D
  • diediealldie - Thursday, August 19, 2021 - link

    I'm quite curious how their E-cores are designed. They somehow use a 6-wide decoder, the same width as the P-cores, and twice the I-cache, yet take up about a quarter of the area.

    Maybe it's down to design philosophy? Or is the Atom team the true trump card of Intel's design groups?
  • name99 - Thursday, August 19, 2021 - link

    That "6-way decoder" is typical Intel double-talk. What is done is that you have two decoders that can each decode three instructions. This works IF there is a branch between the two sets of instructions, because the branch landing point provides a resync point for the second decoder, so the two can run in parallel.

    You could obviously extend this, in a decent Next Fetch Predictor system, to have the NFP store the lengths of each instruction in the run of instructions to be decoded, and get trivial parallel decode. And Andy Glew (I think it was him, either him or Jim K) submitted a patent for this around 2000. But in true Intel fashion, nothing seems to have been done with the idea...
  • GeoffreyA - Saturday, August 21, 2021 - link

    If I'm not mistaken, Tremont or Goldmont, can't remember which, began marking the instruction boundaries in cache.
  • name99 - Saturday, August 21, 2021 - link

    Doing it in the cache is more difficult. Of course it makes the most sense! But it hits the problem that, *in theory*, you can have stupid code that jumps into the middle of an instruction and starts decoding the alternative version of the byte stream that results.
    This is, of course, absolutely insane, but it's part of the joy that is supporting x86.
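    For concreteness, a hypothetical byte sequence where this bites: the same five bytes form different instruction streams depending on the entry offset, so boundary marks recorded for one stream are wrong for the other.

    ```c
    #include <stdio.h>

    /* B8 01 C3 90 90 decoded from offset 0:
     *     mov eax, 0x9090C301        ; one 5-byte instruction
     * The same bytes decoded from offset 1 (after a jump to code+1):
     *     add ebx, eax               ; 01 C3
     *     nop                        ; 90
     *     nop                        ; 90                              */
    static const unsigned char code[] = { 0xB8, 0x01, 0xC3, 0x90, 0x90 };

    int main(void) {
        for (unsigned i = 0; i < sizeof code; i++)
            printf("%02X ", code[i]);
        printf("\n");
        return 0;
    }
    ```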

    Now one way to handle this is to tag the boundaries not in the I-cache (where you can jump to any byte) but in structures that are already set up to deal with instruction streams (as opposed to byte streams). Hence the Next Fetch Predictor, as I described, or in a trace cache if you are using that.

    Another solution would be yet another predictor! Assume most code will be sane, but have a separate pool of decoders that validate the byte stream, as opposed to the high-speed instruction stream going through the main path of the CPU. In the event of a mismatch
    - treat it like a branch misprediction, flush and restart AND
    - train the predictor that for this particular cache line, we have to decode slowly and surely

    Now why doesn't Intel do any of these things? You and I can think of them, just as people like Andy Glew were thinking of variants of them 20 years ago.
    My primary hypothesis is that Intel has become locked into the idea that GHz is everything. Sure they occasionally say they don't believe this, or even claim to have reformed after a disaster (*cough* Pentium4 *cough*) but then they head right back to the crack house.
    I suspect it's something like the same mentality as the US Air Force -- when pilots form the highest levels of command, they see pilots as the essence of what the Air Force IS; drones and UAVs are a cute distraction but will never be the real thing.
    Similarly, if you see GHz as the essence of what Intel is, that smarts are cute but real men work on GHz, then you will always be sacrificing smarts if they might cut into GHz. But GHz costs the problems we see in the big cores: the crazy power draws, and the ridiculously low density...

    Well, this is getting into opinion, not technology, so interpret it as you wish!
  • GeoffreyA - Sunday, August 22, 2021 - link

    Looking at the article again, I see their on-demand instruction length decoder is where this is happening; it seems to cache the lengths once they're worked out. I also wonder if this is why Atom hasn't had a uop cache as yet: a uop cache would indirectly serve the same purpose (decoded instructions don't need their lengths worked out again), so perhaps it came down to die area, and Intel chose length caching instead of a uop cache.
  • GeoffreyA - Sunday, August 22, 2021 - link

    It's been said that AMD's K7 through Bulldozer did a similar thing, marking instruction boundaries in the cache. And the Pentium MMX too, but I need to double-check that one.
  • mode_13h - Sunday, August 22, 2021 - link

    > In the event of a mismatch - treat it like a branch misprediction, flush and restart

    Yes, because even assembly language doesn't make it easy to jump into the middle of another instruction. IMO, any code which does that *deserves* to run slowly, so that it will get replaced with newer software that's written in an actual programming language.

    > My primary hypothesis is that Intel has become locked into the idea that GHz is everything.

    I think they just got lulled into thinking it was enough to deliver modest generational gains. Anything more ambitious probably jeopardized the schedule or risked their profit margins due to the cores getting too big and expensive. And when the time comes for more performance, they reach into their old playbook and go with a "sure" win, like wider vectors. I wonder if the example of TSX reveals anything about their execution on the more innovative stuff, because that doesn't build a lot of confidence for taking on bold, new ideas.

    > when pilots form the highest levels of command,
    > they see pilots as the essence of what the Air Force IS

    Not just pilots, but specifically fighter pilots. So they also don't care much about bombers or Space Command (now Space Force). The only way to change that would be to make them care, by holding them more accountable for the other programs until they realize those need to be run by someone who knows about that stuff. Either that or just reorg the whole military. That would probably also help rein in defense spending.
