The Intel 12th Gen Core i9-12900K Review: Hybrid Performance Brings Hybrid Complexity
by Dr. Ian Cutress & Andrei Frumusanu on November 4, 2021 9:00 AM EST

Instruction Changes
Both of the processor cores inside Alder Lake are brand new – they build on the previous generation Core and Atom designs in multiple ways. As always, Intel gives us a high-level overview of the microarchitecture changes, which we covered in our article from Architecture Day:
At the highest level, the P-core supports a 6-wide decode (up from 4), and has split the execution ports to allow more operations to execute at once, enabling higher IPC and ILP from workloads that can take advantage. Usually a wider decode consumes a lot more power, but Intel says that its micro-op cache (now 4K entries) and front-end are improved enough that the decode engine spends 80% of its time power gated.
The E-core similarly has a 6-wide decode, although split as 2x3-wide. It has 17 execution ports, buffered by double the load/store support of the previous-generation Atom core. Beyond this, Gracemont is the first Atom core to support AVX2 instructions.
As part of our analysis into new microarchitectures, we also do an instruction sweep to see what other benefits have been added. The following is literally a raw list of changes, which we are still in the process of going through. Please forgive the raw data. Big thanks to our industry friends who help with this analysis.
Any of the following that is listed as A|B means A in latency (in clocks) and B in reciprocal throughput (clocks per instruction).
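To make that notation concrete, here is a minimal sketch (our own illustration, not part of the measured data) of how the two numbers bound execution time:

```python
def cycles(latency, recip_tp, n, dependent):
    """Estimate clocks for n repetitions of one instruction.

    latency:   clocks before a dependent instruction can start (the A in A|B).
    recip_tp:  clocks per instruction when independent (the B in A|B).
    dependent: True if each instruction consumes the previous result.
    """
    if dependent:
        # A dependency chain serializes on latency.
        return latency * n
    # Independent instructions are limited only by issue rate.
    return recip_tp * n

# Example: 512b ADD listed as 3.3|0.5 -- a dependent chain of 100 adds
# costs ~330 clocks, while 100 independent adds cost only ~50 clocks.
chain = cycles(3.3, 0.5, 100, dependent=True)
stream = cycles(3.3, 0.5, 100, dependent=False)
```

This is why a latency regression can matter far less than a throughput gain (or vice versa) depending on whether a workload forms long dependency chains.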
P-core: Golden Cove vs Cypress Cove
Microarchitecture Changes:
- 6-wide decoder with a 32-byte window: code size is much less important, e.g. 3x MOV imm64 per clock (the last similar 50% jump was Pentium -> Pentium Pro in 1995; Conroe in 2006 was just a 3->4 jump)
- Triple load: (almost) universal
- every GPR, SSE, VEX, EVEX load gains (only MMX load unsupported)
- BROADCAST*, GATHER*, PREFETCH* also gains
- Decoupled double FADD units
- every single and double SIMD VADD/VSUB (and AVX VADDSUB* and VHADD*/VHSUB*) has latency gains
- Another ADD/SUB means 4->2 clks
- Another MUL means 4->3 clks
- AVX512 support: 512b ADD/SUB rec. throughput 0.5, as in server!
- exception: half precision ADD/SUB handled by FMAs
- exception: x87 FADD remained 3 clks
- Some forms of GPR (general purpose register) immediate additions are treated as NOPs (removed at the "allocate/rename/move elimination/zeroing idioms" step)
- LEA r64, [r64+imm8]
- ADD r64, imm8
- ADD r64, imm32
- INC r64
- Is this just for 64b addition GPRs?
- eliminated instructions:
- MOV r32/r64
- (V)MOV(A/U)(PS/PD/DQ) xmm, ymm
- 0-5 0x66 NOP
- LNOP3-7
- CLC/STC
- zeroing idioms:
- (V)XORPS/PD, (V)PXOR xmm, ymm
- (V)PSUB(U)B/W/D/Q xmm
- (V)PCMPGTB/W/D/Q xmm
- (V)PXOR xmm
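The 32-byte window note above can be sanity-checked with some quick arithmetic (encoding lengths are standard x86-64; the worked example is ours):

```python
# MOV r64, imm64 is one of the longest common instructions:
# REX.W prefix (1 byte) + opcode (1 byte) + 64-bit immediate (8 bytes).
MOV_IMM64_LEN = 1 + 1 + 8  # 10 bytes

DECODE_WINDOW = 32  # bytes presented to the decoders per cycle

# Decoding three MOV imm64 per clock consumes 30 bytes of code per cycle,
# which still fits inside the 32-byte window -- hence the claim that
# code density matters much less on this front end.
bytes_per_cycle = 3 * MOV_IMM64_LEN
assert bytes_per_cycle <= DECODE_WINDOW
```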
Faster GPR instructions (vs Cypress Cove):
- LOCK latency 20->18 clks
- LEA with scale throughput 2->3/clk
- (I)MUL r8 latency 4->3 clks
- LAHF latency 3->1 clks
- CMPS* latency 5->4 clks
- REP CMPSB 1->3.7 Bytes/clock
- REP SCASB 0.5->1.85 Bytes/clock
- REP MOVS* 115->122 Bytes/clock
- CMPXCHG16B 20|20 -> 16|14
- PREFETCH* throughput 1->3/clk
- ANDN/BLSI/BLSMSK/BLSR throughput 2->3/clock
- SHA1RNDS4 latency 6->4
- SHA1MSG2 throughput 0.2->0.25/clock
- SHA256MSG2 11|5->6|2
- ADC/SBB (r/e)ax 2|2 -> 1|1
Faster SIMD instructions (vs Cypress Cove):
- *FADD xmm/ymm latency 4->3 clks (after MUL)
- *FADD xmm/ymm latency 4->2 clks (after ADD)
- * means (V)(ADD/SUB/ADDSUB/HADD/HSUB)(PS/PD) affected
- VADD/SUB/PS/PD zmm 4|1->3.3|0.5
- CLMUL xmm 6|1->3|1
- CLMUL ymm, zmm 8|2->3|1
- VPGATHERDQ xmm, [xm32], xmm 22|1.67->20|1.5 clks
- VPGATHERDD ymm, [ym32], ymm throughput 0.2 -> 0.33/clock
- VPGATHERQQ ymm, [ym64], ymm throughput 0.33 -> 0.50/clock
Regressions, Slower instructions (vs Cypress Cove):
- Store-to-Load-Forward 128b 5->7, 256b 6->7 clocks
- PAUSE latency 140->160 clocks
- LEA with scale latency 2->3 clocks
- (I)DIV r8 latency 15->17 clocks
- FXCH throughput 2->1/clock
- LFENCE latency 6->12 clocks
- VBLENDV(B/PS/PD) xmm, ymm 2->3 clocks
- (V)AESKEYGEN latency 12->13 clocks
- VCVTPS2PH/PH2PS latency 5->6 clocks
- BZHI throughput 2->1/clock
- VPGATHERDD ymm, [ym32], ymm latency 22->24 clocks
- VPGATHERQQ ymm, [ym64], ymm latency 21->23 clocks
E-core: Gracemont vs Tremont
Microarchitecture Changes:
- Dual 128b store port (works with every GPR, PUSH, MMX, SSE, AVX, non-temporal m32, m64, m128)
- Zen2-like memory renaming with GPRs
- New zeroing idioms
- SUB r32, r32
- SUB r64, r64
- CDQ, CQO
- (V)PSUBB/W/D/Q/SB/SW/USB/USW
- (V)PCMPGTB/W/D/Q
- New ones idiom: (V)PCMPEQB/W/D/Q
- MOV elimination: MOV; MOVZX; MOVSX r32, r64
- NOP elimination: NOP, 1-4 0x66 NOP throughput 3->5/clock, LNOP 3, LNOP 4, LNOP 5
Faster GPR instructions (vs Tremont)
- PAUSE latency 158->62 clocks
- MOVSX; SHL/R r, 1; SHL/R r,imm8 tp 1->0.25
- ADD; SUB; CMP; AND; OR; XOR; NEG; NOT; TEST; MOVZX; BSWAP; LEA [r+r]; LEA [r+disp8/32] throughput 3->4 per clock
- CMOV* throughput 1->2 per clock
- RCR r, 1 10|10 -> 2|2
- RCR/RCL r, imm/cl 13|13->11|11
- SHLD/SHRD r1_32, r1_32, imm8 2|2 -> 2|0.5
- MOVBE latency 1->0.5 clocks
- (I)MUL r32 3|1 -> 3|0.5
- (I)MUL r64 5|2 -> 5|0.5
- REP STOSB/STOSW/STOSD/STOSQ 15/8/12/11 bytes/clock -> 15/15/15/15 bytes/clock
Faster SIMD instructions (vs Tremont)
- A lot of xmm SIMD throughput is 4/clock instead of theoretical maximum(?) of 3/clock, not sure how this is possible
- MASKMOVQ throughput 1 per 104 clocks -> 1 per clock
- PADDB/W/D; PSUBB/W/D; PAVGB/PAVGW 1|0.5 -> 1|.33
- PADDQ/PSUBQ/PCMPEQQ mm, xmm: 2|1 -> 1|.33
- PShift (x)mm, (x)mm 2|1 -> 1|.33
- PMUL*, PSADBW mm, xmm 4|1 -> 3|1
- ADD/SUB/CMP/MAX/MINPS/PD 3|1 -> 3|0.5
- MULPS/PD 4|1 -> 4|0.5
- CVT*, ROUND xmm, xmm 4|1 -> 3|1
- BLENDV* xmm, xmm 3|2 -> 3|0.88
- AES, GF2P8AFFINEQB, GF2P8AFFINEINVQB xmm 4|1 -> 3|1
- SHA256RNDS2 5|2 -> 4|1
- PHADD/PHSUB* 6|6 -> 5|5
Regressions, Slower (vs Tremont):
- m8, m16 load latency 4->5 clocks
- ADD/MOVBE load latency 4->5 clocks
- LOCK ADD 16|16->18|18
- XCHG mem 17|17->18|18
- (I)DIV +1 clock
- DPPS 10|1.5 -> 18|6
- DPPD 6|1 -> 10|3.5
- FSIN/FCOS +12% slower
474 Comments
Spunjji - Friday, November 5, 2021 - link
This. People often get halfway through the analogy and then give up when they think it's made their argument for them.

Dribble - Sunday, November 7, 2021 - link
Having lots of potential power and high power consumption is exactly what mobile phone and laptop CPUs do. That Intel does that in desktops too is not surprising.

Spunjji - Friday, November 5, 2021 - link
99% of users don't need a 12900K. Presumably the people who do are likely to use it for these tasks where it will actually show a performance improvement over a cheaper CPU (accepting that some people overspend for e-peen reasons and will buy one for gaming where a 12600K would do just as well).

lmcd - Friday, November 5, 2021 - link
99.9999999999% of users don't need 12900K peak performance constantly, even if they will use the peak performance sometimes, including times when it definitely counts.

I won't lie and say I have the best of the best, but Zen 2 vs Zen 1 cut down my build times noticeably. That helps keep me in flow, even if it's only saving me a few minutes per day. For people like me with ADHD or other attention-related issues, this can be a massive boon.
brucethemoose - Thursday, November 4, 2021 - link
Does efficiency really matter for top-end desktop SKUs? Intel/AMD tend to clock these near their voltage walls, WAY outside the "sweet spot" of a given architecture, and you can get a boatload of efficiency back just by dropping boost clocks 10% yourself.

Now, if the laptop SKUs end up being power hungry, that's a different story.
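The intuition in the comment above can be sketched with a back-of-the-envelope model (assuming dynamic power scales as f·V² and that voltage tracks frequency roughly linearly near the voltage wall — a common rule of thumb, not measured data):

```python
def relative_power(freq_scale, volt_scale=None):
    """Relative dynamic power under P ~ f * V^2.

    If volt_scale is omitted, assume voltage must scale linearly
    with frequency -- a rough model of operation near the V/f wall.
    """
    if volt_scale is None:
        volt_scale = freq_scale
    return freq_scale * volt_scale ** 2

# Dropping boost clocks 10% with a proportional voltage reduction:
# 0.9 * 0.9^2 = 0.729, i.e. roughly 27% less power for 10% less clock.
p = relative_power(0.9)
```

Real chips deviate from this (static leakage, fixed voltage floors), but it shows why small clock reductions recover disproportionate efficiency at the top of the V/f curve.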
Blastdoor - Thursday, November 4, 2021 - link
Same core design, same process. So.... I'm sure Intel will lower clocks for mobile and servers to get power usage down, but once they lower the clocks, how will the performance compare?

meacupla - Thursday, November 4, 2021 - link
For now, efficiency doesn't matter for desktops, but in a few years' time we are very likely to see laws passed that mandate high efficiency in high-end desktops.

There is already some legislation in the works that calls for exactly this, but it has not been passed yet.
TheinsanegamerN - Thursday, November 4, 2021 - link
And how, pray tell, are they going to legislate that? Max power usage for a CPU? We've already seen how California tried it, and predictably they made a mess of it.

INB4 Intel just refuses to sell anything but a Celeron to Californians and mysteriously tech resellers in Arizona get a bunch of Cali orders. Hmmmm.....
meacupla - Thursday, November 4, 2021 - link
Don't ask me, IDK how lawmakers will do it. Just be aware that there are some really dumb laws already in existence, and the world is going to be entering an age of power shortages, along with carbon-neutral incentives.

Considering how things are going currently, I think it'll just be a 100% tax on desktop CPUs that can't hit some efficiency metric that Apple has designed.
Wrs - Thursday, November 4, 2021 - link
Doubtful given how poorly the existing law works. All they do is measure computer idle wattage. The lawmakers aren't techies. And they're busy handling the blowback from carbon-neutrality bills that the public believes are related to power shortages and cost spikes.