Instruction Changes

Both of the processor cores inside Alder Lake are brand new – they build on the previous generation Core and Atom designs in multiple ways. As always, Intel gives us a high-level overview of the microarchitecture changes, which we covered in our article from Architecture Day.

At the highest level, the P-core supports a 6-wide decode (up from 4), and has split the execution ports to allow more operations to execute at once, enabling higher IPC and ILP from workloads that can take advantage of it. A wider decode usually consumes a lot more power, but Intel says that its micro-op cache (now 4K) and front-end are improved enough that the decode engine spends 80% of its time power gated.

The E-core similarly has a 6-wide decode, although split into 2x3-wide clusters. It has 17 execution ports, backed by double the load/store support of the previous generation Atom core. Beyond this, Gracemont is the first Atom core to support AVX2 instructions.
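
To put the E-core's new AVX2 support in context, the short sketch below (our own illustration, not Intel's code; the function and array names are placeholders) uses 256-bit integer intrinsics that earlier Atom cores such as Tremont could not execute and would have needed a 128-bit SSE fallback path for:

    /* Minimal AVX2 sketch: adds two arrays of 32-bit ints, eight lanes at a time.
       Compile with e.g. gcc -O2 -mavx2. */
    #include <immintrin.h>

    void add_i32(const int *a, const int *b, int *out, int n) {
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
            __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
            _mm256_storeu_si256((__m256i *)(out + i), _mm256_add_epi32(va, vb));
        }
        for (; i < n; i++)   /* scalar tail for the last few elements */
            out[i] = a[i] + b[i];
    }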

As part of our analysis of new microarchitectures, we also do an instruction sweep to see what other benefits have been added. The following is, quite literally, a raw list of changes that we are still in the process of going through, so please forgive the unpolished presentation. Big thanks to our industry friends who help with this analysis.

Any entry below listed as A|B means a latency of A clocks and a reciprocal throughput of B clocks per instruction.
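
For readers who want to reproduce these figures, latency is normally measured with a dependent chain of the instruction, while reciprocal throughput is measured with several independent chains. The sketch below is our own minimal illustration of that methodology (using VADDPS and RDTSC, which counts reference cycles, so treat the output as approximate):

    /* Compile with e.g. gcc -O2 -mavx. A dependent chain exposes latency;
       four independent chains expose reciprocal throughput. */
    #include <immintrin.h>
    #include <x86intrin.h>
    #include <stdio.h>

    int main(void) {
        __m256 a = _mm256_set1_ps(1.0f), b = _mm256_set1_ps(1e-7f);
        __m256 c0 = a, c1 = a, c2 = a, c3 = a;
        const int N = 100000000;

        unsigned long long t0 = __rdtsc();
        for (int i = 0; i < N; i++)          /* each VADDPS waits on the previous result */
            a = _mm256_add_ps(a, b);
        unsigned long long t1 = __rdtsc();
        for (int i = 0; i < N; i++) {        /* four independent VADDPS chains */
            c0 = _mm256_add_ps(c0, b);
            c1 = _mm256_add_ps(c1, b);
            c2 = _mm256_add_ps(c2, b);
            c3 = _mm256_add_ps(c3, b);
        }
        unsigned long long t2 = __rdtsc();

        printf("latency      ~ %.2f ref clks\n", (double)(t1 - t0) / N);
        printf("1/throughput ~ %.2f ref clks\n", (double)(t2 - t1) / (4.0 * N));

        /* keep results live so the loops are not optimized away */
        __m256 s = _mm256_add_ps(_mm256_add_ps(c0, c1), _mm256_add_ps(c2, c3));
        s = _mm256_add_ps(s, a);
        printf("%f\n", _mm_cvtss_f32(_mm256_castps256_ps128(s)));
        return 0;
    }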

 

P-core: Golden Cove vs Cypress Cove

Microarchitecture Changes:

  • 6-wide decoder with a 32B window: code size matters much less, e.g. 3x MOV imm64 per clock (the last similar 50% jump was Pentium -> Pentium Pro in 1995; Conroe in 2006 was only a 3->4 jump)
  • Triple load: (almost) universal
    • every GPR, SSE, VEX, and EVEX load gains (only MMX loads are unsupported)
    • BROADCAST*, GATHER*, and PREFETCH* also gain
  • Decoupled double FADD units
    • every single and double SIMD VADD/VSUB (and AVX VADDSUB* and VHADD*/VHSUB*) has latency gains
    • fed by another ADD/SUB: 4->2 clks
    • fed by a MUL: 4->3 clks
    • AVX512 support: 512b ADD/SUB rec. throughput 0.5, as in server!
    • exception: half precision ADD/SUB handled by FMAs
    • exception: x87 FADD remained 3 clks
  • Some forms of GPR (general purpose register) immediate addition are treated as NOPs (removed at the "allocate/rename/move elimination/zeroing idioms" step); see the sketch after this list
    • LEA r64, [r64+imm8]
    • ADD r64, imm8
    • ADD r64, imm32
    • INC r64
    • Is this just for 64-bit GPR additions?
  • eliminated instructions:
    • MOV r32/r64
    • (V)MOV(A/U)(PS/PD/DQ) xmm, ymm
    • 0-5 0x66 NOP
    • LNOP3-7
    • CLC/STC
  • zeroing idioms:
    • (V)XORPS/PD, (V)PXOR xmm, ymm
    • (V)PSUB(U)B/W/D/Q xmm
    • (V)PCMPGTB/W/D/Q xmm
    • (V)PXOR xmm
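
A minimal sketch of what the eliminations and zeroing idioms above look like at the instruction level (GCC/Clang extended asm on an AVX-capable x86-64 CPU; our own illustration, and the comments simply restate the list above rather than new measurements):

    #include <immintrin.h>

    static inline void rename_stage_examples(void) {
        unsigned long long a = 1, b;
        __m128i x = _mm_set1_epi32(1), y;

        __asm__ ("mov %1, %0"       : "=r"(b) : "r"(a));  /* MOV r64: eliminated at rename */
        __asm__ ("xor %0, %0"       : "=r"(b));           /* GPR zeroing idiom */
        __asm__ ("add $8, %0"       : "+r"(b));           /* ADD r64, imm8: handled per the list above */
        __asm__ ("vmovdqa %1, %0"   : "=x"(y) : "x"(x));  /* VMOVDQA xmm: eliminated */
        __asm__ ("vpxor %0, %0, %0" : "=x"(y));           /* vector zeroing idiom */
        (void)b; (void)y;
    }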

Faster GPR instructions (vs Cypress Cove):

  • LOCK latency 20->18 clks
  • LEA with scale throughput 2->3/clk
  • (I)MUL r8 latency 4->3 clks
  • LAHF latency 3->1 clks
  • CMPS* latency 5->4 clks
  • REP CMPSB 1->3.7 bytes/clock
  • REP SCASB 0.5->1.85 bytes/clock
  • REP MOVS* 115->122 bytes/clock (see the copy sketch after this list)
  • CMPXCHG16B 20|20 -> 16|14
  • PREFETCH* throughput 1->3/clk
  • ANDN/BLSI/BLSMSK/BLSR throughput 2->3/clock
  • SHA1RNDS4 latency 6->4
  • SHA1MSG2 throughput 0.2->0.25/clock
  • SHA256MSG2 11|5->6|2
  • ADC/SBB (r/e)ax 2|2 -> 1|1
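
The REP string-op figures above are for large blocks and describe copies of the shape below (a sketch in GCC extended asm; in practice memcpy() already selects a rep movsb path on recent CPUs, so this is illustrative rather than a recommendation):

    #include <stddef.h>

    static void *repmovsb_copy(void *dst, const void *src, size_t n) {
        void *d = dst;
        __asm__ volatile ("rep movsb"                  /* copies n bytes from src to dst */
                          : "+D"(d), "+S"(src), "+c"(n)
                          :
                          : "memory");
        return dst;
    }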

Faster SIMD instructions (vs Cypress Cove):

  • *FADD xmm/ymm latency 4->3 clks (after MUL)
  • *FADD xmm/ymm latency 4->2 clks (after ADD)
  • * means (V)(ADD/SUB/ADDSUB/HADD/HSUB)(PS/PD) affected
  • V(ADD/SUB)(PS/PD) zmm 4|1 -> 3.3|0.5
  • CLMUL xmm  6|1->3|1
  • CLMUL ymm, zmm 8|2->3|1
  • VPGATHERDQ xmm, [xm32], xmm 22|1.67->20|1.5 clks
  • VPGATHERDD ymm, [ym32], ymm throughput 0.2 -> 0.33/clock
  • VPGATHERQQ ymm, [ym64], ymm throughput 0.33 -> 0.50/clock
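
For reference, the gathers in the last three entries come from AVX2 code such as this (our illustration; the table pointer and index vector are placeholders):

    #include <immintrin.h>

    /* VPGATHERDD ymm: eight 32-bit loads from table, addressed by idx, scale = 4 bytes */
    static __m256i gather8(const int *table, __m256i idx) {
        return _mm256_i32gather_epi32(table, idx, 4);
    }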

Regressions, Slower instructions (vs Cypress Cove):

  • Store-to-load forwarding 128b 5->7, 256b 6->7 clocks (see the sketch after this list)
  • PAUSE latency 140->160 clocks
  • LEA with scale latency 2->3 clocks
  • (I)DIV r8 latency 15->17 clocks
  • FXCH throughput 2->1/clock
  • LFENCE latency 6->12 clocks
  • VBLENDV(B/PS/PD) xmm, ymm 2->3 clocks
  • (V)AESKEYGEN latency 12->13 clocks
  • VCVTPS2PH/PH2PS latency 5->6 clocks
  • BZHI throughput 2->1/clock
  • VPGATHERDD ymm, [ym32], ymm latency 22->24 clocks
  • VPGATHERQQ ymm, [ym64], ymm latency 21->23 clocks
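
The store-to-load forwarding entry above covers the common pattern of a store that is reloaded from the same address almost immediately, as in the sketch below (illustrative only; an optimizing compiler would normally keep the value in a register instead):

    #include <immintrin.h>

    static __m128i stlf_roundtrip(__m128i v) {
        __m128i buf;
        _mm_store_si128(&buf, v);     /* 128b store */
        return _mm_load_si128(&buf);  /* dependent reload, forwarded from the store buffer */
    }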

 

E-core: Gracemont vs Tremont

Microarchitecture Changes:

  • Dual 128b store port (works with every GPR, PUSH, MMX, SSE, AVX, non-temporal m32, m64, m128)
  • Zen2-like memory renaming with GPRs
  • New zeroing idioms
    • SUB r32, r32
    • SUB r64, r64
    • CDQ, CQO
    • (V)PSUBB/W/D/Q/SB/SW/USB/USW
    • (V)PCMPGTB/W/D/Q
  • New ones idiom: (V)PCMPEQB/W/D/Q
  • MOV elimination: MOV; MOVZX; MOVSX r32, r64
  • NOP elimination: NOP, 1-4 0x66 NOP throughput 3->5/clock, LNOP 3, LNOP 4, LNOP 5
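
The zeroing and ones idioms above are exactly the patterns compilers already emit for "set register to 0" and "set register to -1"; a minimal illustration with SSE2 intrinsics (our own example):

    #include <emmintrin.h>

    static inline __m128i all_zeros(void) {
        return _mm_setzero_si128();        /* compiles to PXOR x, x: a zeroing idiom */
    }

    static inline __m128i all_ones(__m128i any) {
        return _mm_cmpeq_epi32(any, any);  /* PCMPEQD x, x: the new ones idiom, breaking the dependency on 'any' */
    }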

Faster GPR instructions (vs Tremont)

  • PAUSE latency 158->62 clocks
  • MOVSX; SHL/R r, 1; SHL/R r, imm8 throughput 1->0.25
  • ADD; SUB; CMP; AND; OR; XOR; NEG; NOT; TEST; MOVZX; BSWAP; LEA [r+r]; LEA [r+disp8/32] throughput 3->4 per clock
  • CMOV* throughput 1->2 per clock
  • RCR r, 1 10|10 -> 2|2
  • RCR/RCL r, imm/cl 13|13->11|11
  • SHLD/SHRD r1_32, r1_32, imm8 2|2 -> 2|0.5
  • MOVBE latency 1->0.5 clocks
  • (I)MUL r32 3|1 -> 3|0.5
  • (I)MUL r64 5|2 -> 5|0.5
  • REP STOSB/STOSW/STOSD/STOSQ 15/8/12/11 bytes/clock -> 15/15/15/15 bytes/clock
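
Analogous to the copy sketch in the P-core section, the REP STOS figures above describe fills of the following shape (GCC extended asm; memset() is the portable way to reach the same code path):

    #include <stddef.h>

    static void *repstosb_fill(void *dst, unsigned char val, size_t n) {
        void *d = dst;
        __asm__ volatile ("rep stosb"          /* writes n copies of val (in AL) to dst */
                          : "+D"(d), "+c"(n)
                          : "a"(val)
                          : "memory");
        return dst;
    }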

Faster SIMD instructions (vs Tremont)

  • A lot of xmm SIMD throughput is 4/clock instead of the theoretical maximum(?) of 3/clock; we are not sure how this is possible
  • MASKMOVQ throughput 1 per 104 clocks -> 1 per clock
  • PADDB/W/D; PSUBB/W/D; PAVGB/PAVGW 1|0.5 -> 1|0.33
  • PADDQ/PSUBQ/PCMPEQQ mm, xmm 2|1 -> 1|0.33
  • PShift (x)mm, (x)mm 2|1 -> 1|0.33
  • PMUL*, PSADBW mm, xmm 4|1 -> 3|1
  • (ADD/SUB/CMP/MAX/MIN)(PS/PD) 3|1 -> 3|0.5
  • MULPS/PD 4|1 -> 4|0.5
  • CVT*, ROUND xmm, xmm 4|1 -> 3|1
  • BLENDV* xmm, xmm 3|2 -> 3|0.88
  • AES, GF2P8AFFINEQB, GF2P8AFFINEINVQB xmm 4|1 -> 3|1
  • SHA256RNDS2 5|2 -> 4|1
  • PHADD/PHSUB* 6|6 -> 5|5
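
The BLENDV* entry above refers to the SSE4.1 variable blends, i.e. per-element selects of this form (our illustration):

    #include <smmintrin.h>

    /* PBLENDVB: takes each byte from b where the mask byte's sign bit is set, else from a */
    static __m128i select_bytes(__m128i a, __m128i b, __m128i mask) {
        return _mm_blendv_epi8(a, b, mask);
    }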

Regressions, Slower (vs Tremont):

  • m8, m16 load latency 4->5 clocks
  • ADD/MOVBE load latency 4->5 clocks
  • LOCK ADD 16|16->18|18
  • XCHG mem 17|17->18|18
  • (I)DIV +1 clock
  • DPPS 10|1.5 -> 18|6
  • DPPD 6|1 -> 10|3.5
  • FSIN/FCOS +12% slower
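
The DPPS/DPPD regression hits SSE4.1 dot products of the form below; on Gracemont a separate multiply plus horizontal adds may now be the faster choice, although that is our assumption rather than something we have measured here:

    #include <smmintrin.h>

    /* _mm_dp_ps with mask 0xF1: multiply all four lanes, sum them into lane 0 */
    static float dot4(__m128 a, __m128 b) {
        return _mm_cvtss_f32(_mm_dp_ps(a, b, 0xF1));
    }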

 

474 Comments

  • mode_13h - Saturday, November 6, 2021 - link

    > Consumers deserve non-broken products that aren’t sold via smoke and mirrors tactics.

    What's broken, exactly? They said you wouldn't have AVX-512. That someone figured out how to enable it is just bonus.
  • mode_13h - Saturday, November 6, 2021 - link

    Why are you convinced it's so consequential?
  • mode_13h - Saturday, November 6, 2021 - link

    Oops, that was a response to:

    OG> The current situation is inexcusable.
  • Oxford Guy - Sunday, November 7, 2021 - link

    That question is meritless.
  • mode_13h - Sunday, November 7, 2021 - link

    If the issue isn't terribly consequential, then why is it inexcusable? The gravity of alleged misconduct usually derives from its impacts.
  • Oxford Guy - Monday, November 8, 2021 - link

    I have been suspicious that you’re some sort of IBM AI. Posts like that go a long way toward supporting that suspicion.

    You were the poster who claimed it’s of little consequence. I was the poster who said it’s inexcusable. Either you’re AI that needs work or your mind is rife with confusion in your quest to impress the community via attempts at domination.

    Not a good look, again. Posting your own claims as if they’re mine and using my claims to create a false incompetence situation is a bit better than your pathetic schoolyard taunts. So, perhaps I should praise you for improving the quality of your posts via being merely incompetent — like Intel’s handling of this situation you’re trying to downplay. I shouldn’t make that equivalence, though, as lying to the community in terms of a retail product is worse than any of your parlor tricks.
  • mode_13h - Tuesday, November 9, 2021 - link

    > I have been suspicious that you’re some sort of IBM AI.

    No way. Their artificial intelligence is no match for my natural stupidity.
    :D

    > You were the poster who claimed it’s of little consequence.

    No, I asked *you* why it's so consequential.

    > I was the poster who said it’s inexcusable.

    Which sort of implies that it's very consequential. If it's not, then why would it be inexcusable?

    > Either you’re AI that needs work or your mind is rife with confusion in your quest to
    > impress the community via attempts at domination.

    If you wouldn't waste so much energy posturing and just answer the question, maybe we could actually get somewhere.

    I don't honestly care what the community thinks of me. That's the beauty of pseudonymity! I don't even need people to believe I'm somehow affiliated with a prestigious university. Either my points make sense and are well-founded or they aren't. Similarly, I don't care if you're "just" the Oxford garbage collector. If you contribute useful information, then we all win. If you're just trolling, flaming, or pulling the thread into irrelevant tangents, then we all lose.

    The main reason I post on here is to share information and to learn. I asked what should be a simple question which you dismissed as meritless, and without explaining why. As usual, only drama ensues, when I try to press the issue. I always want to give people the opportunity to justify their stance, but so often you just look for some way to throw it back in my face.

    This kind of crap is extremely low value. I hope you agree.
  • mode_13h - Saturday, November 6, 2021 - link

    > and the sentence about how it could be eliminated in the future.

    It's true. Intel can disable instructions in microcode updates and in future steppings of the CPU. So, even having the BIOS option is no guarantee.
  • mode_13h - Saturday, November 6, 2021 - link

    > Since the silicon is there, if they can get the scheduler to manage
    > heterogeneous (P/E) cores there is now no down side with enabling AVX-512.

    This will not happen. The OS scheduler cannot compensate for lack of app awareness of the heterogeneous support for AVX-512. I'm sure that was fiercely debated, at Intel, but the performance downsides for naive code (i.e. 99%+ of the AVX-512 code in the wild) would generate too many complaints and negative publicity from the apps where enabling it results in performance & power regressions.
  • Oxford Guy - Saturday, November 6, 2021 - link

    So, Alder Lake is a turkey as a high-end CPU, one that should have never been released? This is because each program has to include Alder Lake AVX-512 support and those that don’t will cause performance regressions?

    So, Intel designed and released a CPU that it knew wouldn’t be properly supported by Windows 11 — yet the public was sold Windows 11 primarily on the basis of how its nifty new scheduler will support this CPU?

    ‘The OS scheduler cannot compensate for lack of app awareness of the heterogeneous support for AVX-512’

    Is Windows 11 able to support a software utility to disable the low-power cores once booted into Windows or are we restricted to disabling them via BIOS? If the latter is the case then Intel had the responsibility for mandating such a switch for all Alder Lake boards, as part of the basic specification.
