Section by Andrei Frumusanu

The New Zen 3 Core: Front-End Updates

Moving on, let’s see what makes the Zen3 microarchitecture tick and detail how it actually improves things compared to its predecessor design, starting off with the front-end of the core, which includes branch prediction, decode, the OP-cache path, the instruction cache, and the dispatch stage.

From a high-level overview, Zen3’s front-end looks the same as Zen2’s, at least from a block-diagram perspective. The fundamental building blocks are the same, starting off with the branch-predictor unit, which AMD calls state-of-the-art. This feeds into a 32KB instruction cache which forwards instructions into a 4-wide decode block. The flow into the OP-queue remains two-way: instructions that have previously been decoded are stored in the OP-cache, from which they can later be retrieved with greater bandwidth (8 Mops/cycle) and lower power consumption.

Improvements in the actual blocks of the Zen3 core here include a faster branch predictor which is able to predict more branches per cycle. AMD wouldn’t exactly detail what this means, but we suspect it could allude to two branch predictions per cycle instead of just one. This is still a TAGE-based design, as introduced in Zen2, and AMD does say that it has been able to improve the accuracy of the predictor.

Amongst the branch unit structure changes, we’ve seen a rebalancing of the BTBs, with the L1 BTB now doubling in size from 512 to 1024 entries. The L2 BTB has seen a slight reduction from 7K to 6.5K entries, but this allowed the structure to be more efficient. The indirect target array (ITA) has also seen a more substantial increase from 1024 to 1536 entries.
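To make the two-level BTB arrangement concrete, here is a minimal toy model of an L1/L2 BTB in Python. The entry counts match the article's figures, but the LRU replacement, the L1-to-L2 eviction path, and the one-cycle bubble on an L2 hit are all simplifying assumptions of ours, not disclosed Zen3 behaviour.

```python
from collections import OrderedDict

class TwoLevelBTB:
    """Toy two-level branch target buffer: small fast L1, larger slower L2."""
    def __init__(self, l1_entries=1024, l2_entries=6656):
        self.l1 = OrderedDict()   # hit here: target available with no bubble
        self.l2 = OrderedDict()   # larger backing structure
        self.l1_cap, self.l2_cap = l1_entries, l2_entries

    def insert(self, pc, target):
        self.l1[pc] = target
        if len(self.l1) > self.l1_cap:          # evict LRU L1 entry into L2
            old_pc, old_target = self.l1.popitem(last=False)
            self.l2[old_pc] = old_target
            if len(self.l2) > self.l2_cap:
                self.l2.popitem(last=False)

    def lookup(self, pc):
        """Return (target, extra bubble cycles), or (None, None) on a miss."""
        if pc in self.l1:
            self.l1.move_to_end(pc)             # LRU refresh
            return self.l1[pc], 0
        if pc in self.l2:
            target = self.l2.pop(pc)
            self.insert(pc, target)             # promote back into L1
            return target, 1                    # assumed 1-cycle bubble
        return None, None                       # predictor has no target
```

The point of the sketch is the capacity/latency trade-off: a branch that falls out of the small L1 structure is not lost, it is merely found a cycle later in the L2.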

If there is a misprediction, the new design reduces the cycle latency required to get a new stream going. AMD wouldn’t detail the exact absolute misprediction cycles or how much faster it is in this generation, but a reduced misprediction penalty would indeed be a significant performance boost to the overall design.
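A back-of-envelope model shows why the misprediction penalty matters so much. The numbers below (base IPC, branch frequency, misprediction rate, penalty cycles) are purely illustrative values of ours, not AMD's figures:

```python
def effective_ipc(base_ipc, branch_freq, mispredict_rate, penalty_cycles):
    """Amortize the pipeline-flush cost over all instructions.

    Average cycles per instruction is the ideal 1/base_ipc plus the
    expected flush cycles contributed by each instruction.
    """
    cpi = 1.0 / base_ipc + branch_freq * mispredict_rate * penalty_cycles
    return 1.0 / cpi

# With 20% branches, a 5% mispredict rate, and a hypothetical 18-cycle
# penalty, an ideal 4-wide machine drops to ~2.33 IPC; shaving the
# penalty to 15 cycles recovers it to 2.5 IPC.
```

Even a few cycles off the redirect path translates into a visible IPC gain, which is why this is one of the more valuable front-end improvements.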

AMD claims no bubbles on most predictions due to the increased branch predictor bandwidth. Here I can see parallels to what Arm introduced with the Cortex-A77, where a similarly doubled-up branch predictor bandwidth is able to run ahead of subsequent pipeline stages and thus fill bubble gaps before they hit the execution stages and potentially stall the core.

On the side of the instruction cache, we didn’t see a change in the size of the structure, as it’s still a 32KB 8-way block, however AMD has improved its utilisation. Prefetchers are now said to be more efficient and aggressive in actually pulling data out of the L2 ahead of it being used in the L1. We don’t know exactly what kind of pattern AMD alludes to having improved here, but if the L1I behaves the same as the L1D, then adjacent cache lines would be pulled into the L1I here as well. AMD wasn’t willing to divulge more details on the better utilisation, but we suspect a new cache line replacement policy to be a key aspect of this improvement.
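The adjacent-line behaviour we speculate about above can be sketched in a few lines. This is a toy next-line prefetcher of our own construction, assuming 64-byte lines; it is not a description of AMD's actual prefetch algorithm:

```python
class L1ICache:
    """Toy L1 instruction cache with a next-line prefetcher."""
    LINE = 64  # bytes per cache line

    def __init__(self):
        self.lines = set()        # set of resident line numbers
        self.demand_misses = 0    # misses that actually stall fetch

    def access(self, addr):
        line = addr // self.LINE
        if line not in self.lines:
            self.demand_misses += 1     # fetch stalls, line comes from L2
            self.lines.add(line)
        self.lines.add(line + 1)        # prefetch the adjacent line from L2
```

For a straight-line instruction stream, only the very first access misses; every subsequent line has already been prefetched by the time fetch reaches it, which is the kind of utilisation win AMD appears to be claiming.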

Being an x86 core, one of the difficulties of the ISA is the fact that instructions are of a variable length, with encodings varying from 1 byte to 15 bytes. This is a legacy side-effect of the continuous extensions to the instruction set over the decades, and as modern CPU microarchitectures have become wider in their execution throughput, it has become an issue for architects to design efficient wide decoders. For Zen3, AMD opted to remain with a 4-wide design, as going wider would have meant additional pipeline cycles which would have reduced the performance of the whole design.
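The core difficulty is that you cannot know where instruction N+1 starts until you have determined the length of instruction N. The sketch below uses an invented toy encoding (first byte = total instruction length) to show that serial dependence; real x86 length decoding is vastly messier, involving prefixes, ModRM/SIB bytes, and displacement/immediate fields:

```python
def find_boundaries(code: bytes):
    """Find instruction start offsets in a toy variable-length ISA.

    In this made-up encoding the first byte of each instruction gives
    its total length (clamped to x86's real 1..15 byte range). The loop
    is inherently serial: each boundary depends on the previous length,
    which is exactly what makes wide parallel x86 decode hard.
    """
    boundaries, pc = [], 0
    while pc < len(code):
        boundaries.append(pc)
        length = code[pc]
        assert 1 <= length <= 15, "x86 encodings span 1..15 bytes"
        pc += length
    return boundaries
```

A fixed-length ISA could compute all boundaries in parallel with simple arithmetic; an x86 decoder must either speculate on lengths or cache the results of past decodes, which is where the OP-cache discussed next comes in.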

Bypassing the decode stage through a structure such as the Op-cache is nowadays the preferred method to solve this issue, with the first-generation Zen microarchitecture being the first AMD design to implement such a block. However, such a design also brings problems, such as one set of instructions residing in the instruction cache and its target residing in the OP-cache, whose target might in turn be found in the instruction cache again. AMD found this to be quite a large inefficiency in Zen2, and thus evolved the design to better handle instruction flows from both the I-cache and the OP-cache and to deliver them into the µOP-queue. AMD’s researchers seem to have published a more in-depth paper addressing the improvements.

On the dispatch side, Zen3 remains a 6-wide machine, emitting up to 6-Macro-Ops per cycle to the execution units, meaning that the maximum IPC of the core remains at 6. The Op-cache being able to deliver 8 Macro-Ops into the µOp-queue would serve as a mechanism to further reduce pipeline bubbles in the front-end – as the full 8-wide width of that structure wouldn’t be hit at all times.
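The buffering effect described above is easy to demonstrate with a small simulation: an over-wide supply into a queue drained at 6 per cycle lets the queue absorb cycles where the front-end delivers little or nothing. The queue capacity and supply pattern below are illustrative assumptions, not Zen3's actual parameters:

```python
def dispatched_per_cycle(supply, drain_width=6, queue_cap=72):
    """Simulate a µOp-queue filled by `supply` ops/cycle, drained 6-wide.

    Returns the number of ops dispatched each cycle. Cycles where the
    front-end supplies more than 6 ops build up queue occupancy that
    covers later cycles where it supplies fewer.
    """
    queue, out = 0, []
    for supplied in supply:
        queue = min(queue + supplied, queue_cap)
        dispatched = min(queue, drain_width)
        queue -= dispatched
        out.append(dispatched)
    return out
```

With a bursty 8-wide supply such as `[8, 8, 0, 8, 0, 8]`, the queue keeps dispatch mostly at the full 6-wide rate even through the zero-supply cycles, which is precisely the bubble-hiding role of the 8 Macro-Op OP-cache path.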

On the execution engine side of things, we’ve seen a larger overhaul of the design as the Zen3 core has seen a widening of both the integer and floating-point issue width, with larger execution windows and lower latency execution units.

Starting off in more detail on the integer side, the one larger change in the design has been a move from individual schedulers for each of the execution units to a more consolidated design of four schedulers issuing into two execution units each. These new 24-entry schedulers should be more power efficient than having separate smaller schedulers, and the entry capacity also grows slightly from 92 to 96.

The physical register file has seen a slight increase from 180 entries to 192 entries, allowing for a slight increase in the integer OOO-window, with the actual reorder-buffer of the core growing from 224 instructions to 256 instructions, which in the context of competing microarchitectures such as Intel’s 352-entry ROB in Sunny Cove or Apple’s giant ROB still seems relatively small.
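One reason ROB size matters is memory-level parallelism: the core can only overlap as many long-latency loads as fit inside its out-of-order window. A rough Little's-law style estimate, with the instruction mix as our own illustrative assumption:

```python
def max_overlapping_loads(rob_entries, insts_between_loads):
    """Upper bound on loads in flight, limited by reorder-buffer capacity.

    If one load appears every `insts_between_loads + 1` instructions,
    the ROB can hold at most this many loads simultaneously, capping
    how much cache-miss latency can be overlapped.
    """
    return rob_entries // (insts_between_loads + 1)
```

By this crude measure, growing the ROB from 224 to 256 entries buys roughly 14% more potential load overlap, which helps explain why competing designs keep pushing these structures larger.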

The overall integer execution unit issue width has grown from 7 to 10. The breakdown here is that while the core still has 4 ALUs, we’ve now seen one of the branch ports separate into its own dedicated unit, whilst the other unit still shares the same port as one of the ALUs, allowing for the unshared ALU to dedicate itself more to actual arithmetic instructions. Not depicted here is an additional store unit, as well as a third load unit, which is what brings us to 10 issue units in total on the integer side.

On the floating-point side, the dispatch width has been increased from 4 µOps to 6 µOps. Similar to the integer pipelines, AMD has opted to disaggregate some of the pipelines’ capabilities, such as moving the floating-point store and floating-point-to-integer conversion units onto their own dedicated ports and units, so that the main execution pipelines are able to see higher utilisation with actual compute instructions.

One of the bigger improvements in the instruction latencies has been the shaving off of a cycle from 5 to 4 for fused multiply accumulate operations (FMAC). The scheduler on the FP side has also seen an increase in order to handle more in-flight instructions as loads on the integer side are fetching the required operands, although AMD here doesn’t disclose the exact increases.
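The impact of the FMA latency cut depends on whether the code is latency-bound or throughput-bound, which a simple model makes clear. The two-pipe assumption matches Zen's pair of FMA units, but the model itself is our own rough sketch:

```python
def fma_cycles(chains, ops_per_chain, latency, pipes=2):
    """Estimate cycles to run `chains` independent dependent-FMA chains.

    Ops within one chain serialize on the FMA latency; independent
    chains overlap, up to the issue throughput of `pipes` FMAs/cycle.
    """
    latency_bound = ops_per_chain * latency
    throughput_bound = -(-chains * ops_per_chain // pipes)  # ceiling division
    return max(latency_bound, throughput_bound)
```

A single dependent chain of 100 FMAs takes 400 cycles at 4-cycle latency versus 500 at 5-cycle latency, a straight 20% speedup; with many independent chains the workload becomes throughput-bound and the latency reduction matters far less.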

339 Comments

  • halcyon - Tuesday, November 10, 2020 - link

    1. Ryzen 9 5xxx series dominate most gaming benchmarks in CPU bound games up to 720p
    2. However at 1440P/4K Intel, esp. 10850K pull ahead.

    Can somebody explain this anomaly? As Games become more GPU bound at higher res, why does Intel pull ahead (with worse single/multi-thread CPU perf)? Is it a bandwidth/latency issue? If so, where exactly (RAM? L3? somewhere else)? Can't be PCIe, can it?
  • feka1ity - Saturday, November 14, 2020 - link

    RAM. anandtech uses shitty ram for intel systems
  • Makste - Monday, November 16, 2020 - link

    I think the game optimizations for intel processors become clear at those resolutions. AMD has been a non-factor in gaming for so long. These games have been developed on and mostly optimised to work better on intel machines
  • Silma - Wednesday, November 11, 2020 - link

    At 4K, the 3700X beats the 5600X quite often.
  • Samus - Friday, November 13, 2020 - link

    Considering Intel just released a new generation of CPU's, it's astonishing that at their current generation-over-generation IPC trajectory, it will take them two more generations to surpass Zen 3. That's almost 2 years.

    Wow.
  • ssshenoy - Tuesday, December 15, 2020 - link

    I don't think this article compares the latest generation from Intel - the Willow Cove core in Tiger Lake, which has launched only for notebooks. The comparison here seems to be with the ancient Skylake generation on 14 nm.
  • abufrejoval - Friday, November 13, 2020 - link

    Got my Ryzen 7 5800X on a new Aorus X570 mainboard and finally working, too.

    It turbos to 4850MHz without any overclocking, so I'd hazard 150MHz "bonus" are pretty much the default across the line.

    At the wall plug 210 Watts was the biggest load I observed for pure CPU loads. HWinfo never reporting anything in excess of 120 Watts on the CPU from internal sensors.

    "finally working": I want ECC with this rig, because I am aiming for 64GB or even 128GB RAM and 24x7 operation. Ordered DDR4-3200 ECC modules from Kingston to go with the board. Those seem a little slow coming so I tried to make do with pilfering some DIMMs from other systems, that could be shut down for a moment. DDR4-2133 ECC and DDR4-2400 ECC modules where candidates, but wouldn't boot...

    Both were 2Rx4, dual rank, nibble not byte organized modules, unbuffered and unregistered but not the byte organized DIMMs that the Gigabyte documentation seemed to prescribe... Asus, MSI and ASrock don't list such constraints, but I had to go with availability...

    I like to think of RAM as RAM, it may be slower or faster, but it shouldn't be tied to one specific system, right?

    So while I await the DDR4-3200 ECC 32GB modules to arrive, I got myself some DDR4-4000 R1x8 (no ECC, 8GB) DIMMs to fill the gap: But would that X570 mainboard, which might have been laying on shelves for months actually boot a Ryzen 5000?

    No, it wouldn't.

    But yes, it would update the BIOS via Q-Flash Plus-what-shall-we-call-it and then, yes, it did indeed recognize both the CPU and those R1x8 DIMMs just fine after the update.

    I haven't yet tried those R2x4 modules again, because I am still exploring the bandwidth high-end, but I want to report just how much I am impressed by the compatibility of the AM4 platform, fully aware that Zen 3 will be the last generation in this "sprint".

    I vividly remember how I had to get Skylake CPUs in order to get various mainboard ready for Kaby Lake...

    I have been using AMD x86 CPUs from 80486DX4. I owned every iteration of K6-II and K6-III, omitted all Slot-A variants, got back with socket-A, 754, 939, went single, quad, and hexa (Phenom II x4+x6), omitted Bulldozer, but did almost every APU but between Kaveri and Zen 3, AMD simply wasn't compelling enough.

    I would have gotten a Ryzen 9 5950x, if it had been available. But I count myself lucky for the moment to have snatched a Ryzen 7 5800X: It sure doesn't disappoint.

    AMD a toast! You have done very well indeed and you can count me impressed!

    Of course I'll nag about missing SVE/MKTME support day after tomorrow, but in the mean-time, please accept my gratitude.
  • feka1ity - Saturday, November 14, 2020 - link

    Interesting, my default 9700k with 1080ti does 225fps avg - Borderlands 3, 360p, very low settings and anantech testers poop 175fps avg with 10900k and 2080ti?!? And this favoritize amede products. Fake stuff, sorry.
  • Spunjji - Monday, November 16, 2020 - link

    "Fake stuff"

    Thanks for labelling your post
  • feka1ity - Monday, November 16, 2020 - link

    Fake stuff is not a label, it's an epicrisis. Go render stuff, spunji
