Golden Cove Microarchitecture (P-Core) Examined

Section by Andrei Frumusanu

Within Alder Lake and Sapphire Rapids, we find Intel’s brand-new performance core (P-Core) microarchitecture, called Golden Cove. The new microarchitecture and core design follows the Willow Cove core in Tiger Lake, the Sunny Cove core in Ice Lake, and the derivative Cypress Cove core in Rocket Lake.

Although the new design is a follow-up to the last few years of core microarchitectures, Intel sees Golden Cove as a major step-function update, with massive revamps of the fundamental building blocks of the CPU, going as far as to say it enables performance for the next decade of compute. In other words, Intel is seemingly planning to use it as the basis for the evolution of its cores for at least a few years.

The last time we saw major upgrades to Intel's core front-end was in Ice Lake's Sunny Cove, which made notable increases to the microarchitecture's out-of-order execution capabilities and depth, while Willow Cove and Cypress Cove were more iterative designs focusing on the memory subsystem.

Golden Cove, in comparison, makes gigantic changes to the microarchitecture's front-end – in fact, Intel states that this is the largest microarchitectural upgrade in the core family in a decade, similar in magnitude to what Skylake brought many years ago.

Starting off with the most obvious change: Intel is moving from being a 4-wide decode machine to being a 6-wide microarchitecture, a first amongst x86 designs, and a major design focus point. Over the last few years there has been ongoing discussion about decoder widths and the nature of x86's variable-length instruction set, which makes it difficult to design wider decoders, compared to, say, a fixed-length ISA like Arm's, where adding decoders is relatively easier to do. Notably, last year AMD's Mike Clark noted that while it's not a fundamental limitation, going for decoders wider than 4 instructions can create practical drawbacks, in the form of added complexity and, most importantly, added pipeline stages. For Golden Cove, Intel has decided to push forward with these changes, and a compromise that had to be made is that the design now adds an additional stage to the mispredict penalty of the microarchitecture, so the best case goes up from 16 cycles to 17 cycles. We asked if there is still a special-case decoder layout as in previous generations (such as the 1 complex + 3 simple decoder setup), however the company wouldn't dive deeper into the details at this point in time. To feed the decoders, the fetch bandwidth going into them has been doubled from 16 bytes per cycle to 32 bytes per cycle.
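As a rough sketch of that trade-off, the average cycles lost to branches can be modelled as the mispredict rate times the penalty, so a one-cycle-deeper pipeline has to be paid for with better prediction accuracy. The numbers below are hypothetical workload figures, not Intel's:

```python
# Back-of-envelope model of the mispredict trade-off (hypothetical numbers):
# average cycles lost per branch = mispredict_rate * penalty_cycles.
def mispredict_cost(mispredict_rate, penalty_cycles):
    return mispredict_rate * penalty_cycles

old_cost = mispredict_cost(0.05, 16)      # assumed 5% miss rate, 16-cycle penalty
break_even_rate = old_cost / 17           # rate at which a 17-cycle penalty costs the same
improvement = 1 - break_even_rate / 0.05  # relative accuracy gain required

print(f"{break_even_rate:.4f}")           # 0.0471
print(f"{improvement:.1%}")               # 5.9% fewer mispredicts to break even
```

Under these assumed numbers, roughly a 6% reduction in mispredicts is enough to offset the extra stage, which puts the branch predictor improvements discussed below into context.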

Intel states that the decoders are clock-gated 80% of the time, with the core instead relying on the µOP cache. This structure has also seen extremely large changes this generation: it has almost doubled from 2.25K entries to 4K entries, mimicking the similarly large increase we saw in AMD's move from Zen to Zen 2, increasing the hit rate and further avoiding the more costly decoder path.
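If we take that 80% figure as the fraction of µOPs served from the µOP cache, the front-end's average supply rate can be sketched as a weighted mix of the two paths (an illustrative steady-state model; the real machine is far burstier):

```python
# Effective front-end µOP supply as a weighted mix of µOP-cache hits and
# legacy-decoder cycles (illustrative model, not a measured figure).
def effective_uops_per_cycle(uop_cache_rate, cache_width, decode_width):
    return uop_cache_rate * cache_width + (1 - uop_cache_rate) * decode_width

# Intel's stated 80% µOP-cache service rate, 8-wide cache path, 6-wide decoders:
print(round(effective_uops_per_cycle(0.80, 8, 6), 2))  # 7.6
```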

Although the I-cache has remained the same in size at 32KB, Intel has improved latencies for larger code footprints by increasing the L1 iTLB from 128 entries to 256 entries (16 → 32 for 2M/4M hugepages).

Because the general misprediction penalty has gone up this generation, Intel is counteracting this by massively improving branch prediction accuracy. First of all, the L2 BTB (branch target buffer) has more than doubled, growing from 5K entries to 12K entries, making it by far the biggest in the industry that we're aware of (Zen 3: 6.5K, Cortex-A77 and later: 8K, Apple: unknown). The branch predictors themselves are also said to be smarter, with improved accuracy, although vendors are always quite secretive about these designs, so we didn't have many details divulged.
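Why BTB capacity matters can be shown with a toy model: once a workload's branch footprint exceeds the buffer's capacity, an LRU-style structure thrashes and hit rates collapse. This is only a sketch with made-up branch counts; the real BTB's organisation and replacement policy are undisclosed:

```python
from collections import OrderedDict

# Toy LRU branch target buffer: illustrates why a larger BTB lifts hit rates
# once the branch working set exceeds the old capacity (sketch only).
class BTB:
    def __init__(self, entries):
        self.entries = entries
        self.table = OrderedDict()  # branch PC -> predicted target
    def lookup(self, pc):
        if pc in self.table:
            self.table.move_to_end(pc)  # refresh LRU position
            return True                 # hit: target predicted
        return False
    def install(self, pc, target):
        self.table[pc] = target
        if len(self.table) > self.entries:
            self.table.popitem(last=False)  # evict least-recently-used entry

def hit_rate(btb, branch_pcs):
    hits = 0
    for pc in branch_pcs:
        if btb.lookup(pc):
            hits += 1
        else:
            btb.install(pc, pc + 16)  # hypothetical target
    return hits / len(branch_pcs)

# 8K distinct branches cycled twice: a 5K-entry BTB thrashes, a 12K one doesn't.
stream = list(range(8000)) * 2
print(hit_rate(BTB(5000), stream), hit_rate(BTB(12000), stream))  # 0.0 0.5
```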

The decoders and the µOP cache feed a µOP queue. Beyond the 50% larger decode width, the µOP cache now also sees increased bandwidth, able to emit 8 µOPs per cycle instead of 6. The µOP queue itself has seen a structural change: for a single thread (only one SMT logical core loaded), the structure can be aggregated and the usable depth doubles from 70 entries to 144 entries, while with two threads active it has gone up slightly from 70 to 72 entries per thread.

In the mid-core, there have also been quite large changes. The allocation stage here increases from 5-wide to 6-wide, meaning that in theory it should now be a maximum 6 IPC microarchitecture, though we're talking about µOPs here and not actual instructions.

Intel notes that it is doing an increased amount of dependency resolution at the allocation stage, actually eliminating instructions that would otherwise have to be emitted to the back-end execution resources. Intel didn't want to elaborate much here, only stating that this capability has increased quite a lot generation over generation.
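The classic example of this kind of elimination is handling a register-to-register MOV purely at rename, by pointing both architectural registers at the same physical register. The sketch below illustrates that general technique; Intel hasn't disclosed which cases Golden Cove actually covers:

```python
# Sketch of move elimination at the allocation/rename stage: a reg-to-reg MOV
# is resolved by aliasing both architectural registers to the same physical
# register, so no µOP ever reaches the execution ports (illustrative only).
def rename(instructions):
    rename_map = {}  # architectural reg -> physical reg id
    next_phys = 0
    issued = []      # µOPs actually sent to the back-end
    for op, dst, src in instructions:
        if op == "mov" and src in rename_map:
            rename_map[dst] = rename_map[src]  # eliminated: zero execution cost
        else:
            rename_map[dst] = next_phys        # allocate a fresh physical reg
            next_phys += 1
            issued.append((op, dst, src))
    return issued

prog = [("add", "rax", "rbx"), ("mov", "rcx", "rax"), ("sub", "rdx", "rcx")]
issued = rename(prog)
print(len(issued))  # 2: the MOV never occupies an execution port
```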

The out-of-order instruction window of Golden Cove is massively larger than Sunny Cove's, with the re-order buffer (ROB) increasing from 352 to 512 entries. This is over double that of AMD's Zen 3 µarch, and really only second to Apple's core microarchitecture, which we've measured at around 630 instructions. Traditionally, we've always been told that increasing the ROB has diminishing returns and correlates with quite an increase in power consumption – but on the other hand we've seen Apple achieve extremely large OOO windows. It's not clear how exactly Intel managed to grow its window by such a large degree in a single jump, but it's a very impressive and admittedly unexpected design characteristic of the Golden Cove cores.
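A Little's-law view helps explain why the ROB had to grow along with the width: to keep an N-wide core fed while waiting on a long-latency event, you need roughly width × latency µOPs in flight, so a wider core consumes its window faster:

```python
# Little's-law view of the reorder buffer: a window of rob_entries µOPs can
# cover roughly rob_entries / core_width cycles of stall on an N-wide core.
def cycles_hidden(rob_entries, core_width):
    return rob_entries / core_width

print(cycles_hidden(352, 5))  # Sunny Cove, 5-wide: ~70 cycles of latency coverable
print(cycles_hidden(512, 6))  # Golden Cove, 6-wide: ~85 cycles, despite being wider
```

So the 45% larger ROB buys only about 21% more stall coverage once the wider allocation stage is accounted for, which makes the increase look less extravagant than the raw entry count suggests.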

Moving onto the back-end, Intel has increased the execution width from 10 ports to 12 ports. Intel continues to use the merged execution port / reservation station design that characterises its cores, in contrast to other µarchs from the competition, or even its own E-core microarchitectures, which use separate integer and FP/vector ports and pipelines.

On the integer side of things, there's now a fifth execution port and pipeline with simple ALU and LEA capabilities, which in theory makes this the widest x86 core in raw ALU throughput, although in mixed code the 5 shared pipelines are still somewhat narrower in comparison.

On the FP capabilities side of things, we see added dedicated FADD capabilities, which are far more efficient and lower latency than using the FMA units (when applicable). The FMA units have also seen added support for FP16 data types, although as this falls under the AVX-512 umbrella, consumer cores won't see the capability.

In general, besides the added 5th ALU port/pipeline, the back-end execution resources of Golden Cove appear to be somewhat more conservative compared to the more substantial front-end changes of the core.

On the load & store front, we see the addition of a dedicated execution port with a load AGU, increasing the possible number of loads per cycle from 2 to 3, matching the upgrades we've seen from many other microarchitectures over the last few years.

For AVX-512-enabled implementations of the µarch, the core is able to load up to 1Kb/cycle from the L1, while consumer cores still see a 50% improvement for AVX2 256-bit loads, at 3 per cycle.
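To put those per-cycle figures into bandwidth terms, assuming the 1Kb/cycle figure corresponds to two 512-bit loads and picking a hypothetical 5 GHz clock (not an announced frequency):

```python
# Peak L1D load bandwidth implied by the port counts.
# Clock speed here is a hypothetical 5 GHz, not an announced figure.
def l1_load_bw_gbs(loads_per_cycle, bytes_per_load, ghz):
    # bytes/cycle * cycles/ns = GB/s
    return loads_per_cycle * bytes_per_load * ghz

print(l1_load_bw_gbs(2, 64, 5.0))  # AVX-512 server core: 2 x 512-bit = 640.0 GB/s
print(l1_load_bw_gbs(3, 32, 5.0))  # AVX2 consumer core: 3 x 256-bit = 480.0 GB/s
```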

Intel, like many others, has noted that modern workloads exhibit increased memory-level parallelism, so the company has increased the number of outstanding memory transactions from the core, growing the L1D fill buffers from 12 to 16 and increasing the load and store buffers by undisclosed amounts. Similarly to AMD, to better address sparse memory access workloads, the µarch has seen a doubling of page walkers from 2 to 4, able to service page lookups on TLB misses.
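The reason fill buffers matter is that a single core's achievable DRAM bandwidth is bounded by how many misses it can keep in flight: bandwidth ≈ outstanding misses × line size / memory latency. The 80 ns latency below is an illustrative round number, not a measured figure:

```python
# Per-core bandwidth ceiling from memory-level parallelism:
# outstanding_misses cache lines in flight, each taking latency_ns to return.
def core_bw_gbs(outstanding_misses, line_bytes, latency_ns):
    return outstanding_misses * line_bytes / latency_ns  # bytes/ns = GB/s

print(core_bw_gbs(12, 64, 80))  # 12 fill buffers at 80 ns: 9.6 GB/s ceiling
print(core_bw_gbs(16, 64, 80))  # 16 fill buffers: 12.8 GB/s at the same latency
```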

The L1 DTLB has grown from 64 entries to 96 entries – Intel didn't mention the L2 TLB, which would mean it's still at 2048 entries.
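In practical terms, the DTLB entry count translates directly into how much of the address space can be touched before a page walk is needed:

```python
# Address reach of the L1 DTLB before a page walk is required.
def tlb_reach_kb(entries, page_kb):
    return entries * page_kb

print(tlb_reach_kb(64, 4))  # Sunny Cove: 256 KB of 4K pages
print(tlb_reach_kb(96, 4))  # Golden Cove: 384 KB of 4K pages
```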

Intel has improved the prefetchers, noting things such as "better stride prefetching in L1", though beyond that the company hasn't divulged many other details. We've seen a lot of advances in the industry in terms of prefetchers, so it'll be interesting to see how Golden Cove compares to the competition here.

The L2 cache in Golden Cove remains at 1.25MB for consumer core variants, while the server variant grows to 2MB, compared to 1.25MB in the current Ice Lake-SP chips. The number of outstanding requests the memory subsystem can track in parallel has grown from 32 misses to 48, further helping to increase the MLP capabilities of the core.

Intel also notes there have been prefetching improvements in the L2. What the company calls "full-line-write predictive bandwidth optimisation" is likely the same mechanism we saw in our review of Ice Lake-SP, where the core can greatly improve bandwidth by avoiding RFO reads of cache lines that are going to be fully rewritten; this should greatly speed up common generic operations such as memory copies.
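The arithmetic behind that claim is simple: with conventional writes, copying N bytes moves roughly 3N bytes over the memory bus (read source, RFO-read destination, write destination); skipping the RFO read cuts that to 2N, a ~1.5× effective bandwidth gain for copies:

```python
# DRAM traffic for an N-byte memory copy (simple sketch, ignores caches):
# conventional stores first read the destination lines for ownership (RFO).
def memcpy_traffic(n_bytes, rfo_avoided):
    read_src, write_dst = n_bytes, n_bytes
    rfo_read = 0 if rfo_avoided else n_bytes
    return read_src + rfo_read + write_dst

print(memcpy_traffic(1_000_000, False))  # 3000000 bytes on the bus
print(memcpy_traffic(1_000_000, True))   # 2000000 bytes: ~1.5x effective bandwidth
```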

The aggregated changes of the new Golden Cove microarchitecture amount to a geomean IPC increase of 19% compared to Cypress Cove (Rocket Lake), measured over a set of workloads including SPEC CPU 2017, SYSmark 25, Crossmark, PCMark 10, WebXPRT3, and Geekbench 5.4.1. We can see in the graph that there are outlier workloads with up to +60% IPC, but also low outliers where the new design doesn't improve things much or even sees regressions, which is odd.
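It's worth noting how a geomean figure is computed, since it dampens the influence of those outliers compared to an arithmetic average. The per-workload ratios below are made up to illustrate the method, not Intel's actual per-test data:

```python
import math

# Geometric mean of per-workload IPC uplift ratios: the standard way to
# aggregate speedups, as a single outlier can't dominate the result.
def geomean(ratios):
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-workload IPC ratios (new/old), including one regression:
uplifts = [1.60, 1.25, 1.18, 1.10, 0.97]
print(f"{geomean(uplifts) - 1:.1%}")  # well below the +60% outlier's pull
```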

The front-end changes of Golden Cove indeed look like a major step-function: 50% wider decode, 25% wider µOP cache output with double the capacity, a 45% deeper out-of-order window, and massive branch prediction improvements. It's possible that the more conservative back-end execution improvements lowered the overall end results.

19% IPC is still a very impressive figure, and one of Intel's largest jumps in a decade, but the company is also comparing it to Cypress Cove, rather than its latest-generation design, which is Willow Cove inside the Tiger Lake chips. We would have liked Intel to at least give us both comparisons, so that we would get an idea of both the desktop and the mobile improvements.


  • mode_13h - Thursday, August 19, 2021 - link

    Indeed. But, remember that it's a Skylake from 2015, fabbed on Intel's original 14 nm node, and it's an integer workload they measured. If they measured vector or FPU workloads, the results would probably be rather different.
  • Spunjji - Monday, August 23, 2021 - link

    Indeed. Based on how Intel usually do their marketing, I'm not expecting anything revolutionary from those cores. Maybe I'll be surprised, but I'm expecting mild disappointment.
  • mode_13h - Tuesday, August 24, 2021 - link

    Having already bought into the "Atom" series at Apollo Lake, for a little always-on media streaming server, I'm already thrilled! Tremont was already a bigger step up than I expected.
  • Spunjji - Tuesday, August 24, 2021 - link

    Fair - I've just been a bit burned! Last time I used an Atom device was Bay Trail, and at the time there was a big noise about its performance being much better than previous Atom processors. The actual experience was not persuasive!
  • Silver5urfer - Thursday, August 19, 2021 - link

    Too many changes in the CPU x86 topology. They are making this CPU a heavily dependent one of the OS side with such insane changes to the Scheduler system, like P, E and then the Hyperthreading of P cores. On top of all this the DRAM system must be good else all that 19% big IPC boost will be wasted just like on Rocket Lake. Finally Windows 11 only ? dafaq.

    I have my doubts on this Intel IDT and the whole ST performance along with gaming SMT/HT performance. Until the CPU is out it's hard to predict the things. Also funny they are simply adding the older Skylake cores to the processor in a small format without HT, while claiming this ultra hybrid nonsense, seems like mostly tuned for a mobile processor than a Desktop system which is why there's no trash cores on the HEDT Sapphire Rapids Xeon. And which Enterprise wants to shift to this new nonsense of x86 landscape. On top we have Zen 4 peaking at 96Core 192T hyperbeast Genoa which also packs AVX512.

    I'm waiting Intel, also AMD for their 3D V Cache Zen 3 refresh. Plus whoever owns any latest processors from Intel or AMD should avoid this Hardware like plague, it's too much of a beta product and OS requirements, DRAM pricing problems will be there for Mobos and RAM kits and PCIe5.0 is just damn new and has no usage at all right now It all feels just like Zen when AMD came with NUMA system and refined it well by the Zen 3. I doubt AMD will have any issue with this design. But one small good news is some competition ?
  • Silver5urfer - Thursday, August 19, 2021 - link

    Also scalable lol. This design is peaked out at 8C16T and 8 small cores while Sapphire Rapids is at 56Cores 112T. AMD's Zen 4 ? 96C/192T lmao that battle is going to be good. Intel is really done with x86 is what I get from this, copying everything from AMD and ARM. Memory Interconnects, Big Little nonsense. Just release the CPU and let it rip Intel, we want to see how it works against 10900Ks and 5900Xs.
  • mode_13h - Friday, August 20, 2021 - link

    > Also funny they are simply adding the older Skylake cores
    > to the processor in a small format without HT

    They're not Skylake cores, of course. They're smaller & more power-efficient, but also a different uArch. 3+3-wide decode, instead of 4-wide, and no proper uop cache. Plus, the whole thing about 17 dispatch ports.

    If you back-ported these to 14 nm, they would lose their advantages over Skylake. If they forward-ported Skylake to "Intel 7", it would probably still be bigger and more power-hungry. So, these are different, for good reasons.
  • vyor - Friday, August 20, 2021 - link

    I believe they have a uOP cache though?
  • mode_13h - Saturday, August 21, 2021 - link

    No, Tremont and Gracemont don't have a uop cache. And if Goldmont didn't, then it's probably safe to say that none of the "Atom" cores did.

    The article does mention that some aspects of the instruction cache make it sound a bit like a uop cache.
  • Silver5urfer - Saturday, August 21, 2021 - link

    I see the only reason - Intel was forced to shrink the SKL and shove them in this designs because their node Fabs are busted. Their Rocket Lake is a giant power hog. Insane power draw. Intel really shined until 10900K, super fast cores, ultra strong IMC that can handle even 5000MHz and any DRAM. Solid SMT. High power but it's a worth trade off.

    With RKL, backport Intel lost - IMC leadership, SMT performance, ST performance (due to Memory latency) AND efficiency PLUS Overclockability. That was the time I saw Intel's armor cracking. HEDT was dead so was Xeon but the only reason Mainstream LGA1200 stood was super strong ring bus even on RKL.

    Now FF to 10SF or Intel 7 whatever they call it. No more high speed IMC now even double whammy due to the dual ring system and the ring is shared by all the cores connected, I doubt these SKL cores can manage the highspeed over 3800MHz DDR4 RAM, which is why they are mentioning Dynamic Clocking for Memory, this will have Gearing memory system for sure. High amount of efficiency focus due to the Laptop market from Apple and AMD pressure. No more big core SMT/HT performance. Copying ARMs technology onto x86 is pathetic. ARM processors never did SMT x86 had this advantage. But Intel is losing it because their 10nm is a dud. Look at the leaked PL1,2,4 numbers. It doesn't change at all, they crammed 8 phone cores and still it's higher and higher.

    Look at HEDT, Sapphire Rapids, tile approach, literally copied everything they could from AMD and tacked on HBM for HPC money. And I bet the power consumption would be insanely high due to no more phone cores cheating only big x86 real cores. Still they are coming back. At this point Intel would have released "Highest Gaming Performance" marketing for ADL, so far none and release is just 2 months. RKL had that campaign before 2 months and CFL, CML all of them had. This one doesn't and they are betting on I/O this time.

    Intel has to show the performance. And it's not like AMD doesn't know this, which is why Lisa Su showed off a strong 15% gaming boost. And remember when AMD showcases the CPUs ? Direct benchmarks against Intel's top - 9900K, 10900Ks all over the place. No sign of 5900X or 5950X comparisons from Intel.
