Zen 4 Execution Pipeline: Familiar Pipes With More Caching

Finally, let’s take an in-depth look at the Zen 4 microarchitecture’s execution flow. As we noted before, AMD is seeing a 13% IPC improvement over Zen 3. So how did they do it?

There is no single radical change anywhere in the Zen 4 architecture. Zen 4 does make a few notable changes, but the basics of the instruction flow are unchanged, especially in the back-end execution pipelines. Rather, many (if not most) of the IPC improvements in Zen 4 come from enlarging caches and buffers in one respect or another.

Starting with the front end, AMD has made a few important improvements here. The branch predictor, a common target for improvements given the payoffs of correct predictions, has been further iterated upon for Zen 4. While it still predicts 2 branches per cycle (the same as Zen 3), AMD has increased the L1 Branch Target Buffer (BTB) cache size by 50%, to 2 x 1.5k entries. Similarly, the L2 BTB has been increased to 2 x 7k entries (though this is just an ~8% capacity increase). The net result is that the branch predictor’s accuracy is improved by being able to look over a longer history of branch targets.
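
Those growth figures are easy to sanity check. Below is a minimal C sketch of the arithmetic; note that the Zen 3 baselines (2 x 1k L1 entries, 2 x 6.5k L2 entries) are inferred from the stated percentages rather than quoted figures, so treat them as assumptions.

    #include <stdio.h>

    int main(void) {
        /* Zen 3 baselines inferred from the quoted growth percentages;
           assumptions for illustration, not vendor-confirmed figures */
        double l1_old = 2 * 1000.0, l1_new = 2 * 1500.0; /* 2 x 1k -> 2 x 1.5k */
        double l2_old = 2 * 6500.0, l2_new = 2 * 7000.0; /* 2 x 6.5k -> 2 x 7k */

        printf("L1 BTB growth: %+.0f%%\n", (l1_new / l1_old - 1.0) * 100.0); /* +50% */
        printf("L2 BTB growth: %+.1f%%\n", (l2_new / l2_old - 1.0) * 100.0); /* +7.7% */
        return 0;
    }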

Meanwhile, the front end’s op cache has been improved more significantly. The op cache is not only 68% larger than before (now storing 6.75k ops), but it can now deliver up to 9 macro-ops per cycle, up from 6 on Zen 3. So in scenarios where the branch predictor is doing especially well at its job and the micro-op queue can consume additional instructions, it’s possible to get up to 50% more ops out of the op cache per cycle. Besides the performance improvement, this also benefits power efficiency, since tapping cached ops requires a lot less power than decoding new ones.

With that said, the output of the micro-op queue itself has not changed. The final stage of the front end can still only issue 6 micro-ops per clock, so the improved op cache transfer rate is chiefly useful in scenarios where the micro-op queue would otherwise be running low on ops to dispatch.
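
To make that interaction concrete, here is a toy C model (not AMD’s actual design) of a queue drained at up to 6 ops per cycle while being filled at either 6 or 9; the 72-entry queue capacity is an assumed figure purely for illustration. With the higher fill rate, the queue builds a cushion that absorbs short fetch bubbles instead of letting dispatch starve.

    #include <stdio.h>

    /* Toy model of the micro-op queue: dispatch drains up to 6 ops/cycle,
       while the op cache refills at up to 9 (Zen 4-like) or 6 (Zen 3-like).
       The 72-entry capacity is an assumed figure for illustration. */
    static void simulate(int fill_rate) {
        const int capacity = 72, drain = 6;
        int occupancy = 0;
        printf("fill=%d ops/cycle:", fill_rate);
        for (int cycle = 0; cycle < 12; cycle++) {
            /* cycles 3-5 model a fetch bubble: nothing arrives */
            int arriving = (cycle >= 3 && cycle <= 5) ? 0 : fill_rate;
            occupancy += arriving;
            if (occupancy > capacity) occupancy = capacity;
            int dispatched = occupancy < drain ? occupancy : drain;
            occupancy -= dispatched;
            printf(" %d", dispatched); /* ops dispatched this cycle */
        }
        printf("\n");
    }

    int main(void) {
        simulate(6); /* never builds a cushion: the bubble stalls dispatch for 3 cycles */
        simulate(9); /* the surplus 3 ops/cycle cover most of the bubble */
        return 0;
    }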

Switching to the back-end of the Zen 4 execution pipeline, things are once again relatively unchanged. There are no pipeline or port changes to speak of; Zen 4 can still (only) schedule up to 10 Integer and 6 Floating Point operations per clock. Similarly, the fundamental floating point op latencies remain unchanged: 3 cycles for FADD and FMUL, and 4 cycles for FMA.
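
Latencies like these can be spot-checked with a dependent-chain microbenchmark, sketched below. Each fmaf() consumes the previous result, so the loop can only run at one FMA per FMA latency. This sketch assumes an x86 compiler that emits a fused vfmadd for the loop body (e.g. gcc -O2 -mfma), and note that __rdtsc() counts fixed-rate TSC ticks rather than core clocks, so the printed figure only approximates the 4-cycle latency.

    #include <stdio.h>
    #include <math.h>
    #include <x86intrin.h>

    int main(void) {
        const long iters = 100000000;  /* long enough to dwarf timing overhead */
        volatile float seed = 1.0f;    /* volatile read blocks constant folding */
        float x = seed;

        unsigned long long t0 = __rdtsc();
        for (long i = 0; i < iters; i++)
            x = fmaf(x, 1.0f, 1e-9f);  /* serial: each FMA needs the last result */
        unsigned long long t1 = __rdtsc();

        /* TSC ticks per FMA; approximates cycles only near the TSC rate */
        printf("x=%f, ~%.2f ticks per dependent FMA\n",
               x, (double)(t1 - t0) / iters);
        return 0;
    }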

Instead, AMD’s improvements to the back-end of Zen 4 have likewise focused on larger caches and buffers. Of note, the retire queue/reorder buffer is 25% larger, now 320 instructions deep, giving the CPU a wider window of instructions to look through to extract performance via out-of-order execution. Similarly, the Integer and FP register files have each been enlarged by roughly 20%, to 224 and 192 registers respectively, in order to accommodate the larger number of instructions now in flight.
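
As a rough rule of thumb, a W-entry window on a machine dispatching R ops per cycle can hide about W/R cycles of stall before the back-end runs dry. Here is a minimal sketch of that arithmetic, assuming Zen’s 6-wide dispatch and the 256-entry Zen 3 reorder buffer implied by the 25% figure:

    #include <stdio.h>

    int main(void) {
        /* Assumed inputs: 6-wide dispatch, and a 256-entry Zen 3 ROB
           back-computed from "25% larger" and Zen 4's 320 entries */
        const double rob_zen3 = 256.0, rob_zen4 = 320.0, width = 6.0;

        printf("ROB growth: %+.0f%%\n", (rob_zen4 / rob_zen3 - 1.0) * 100.0);
        printf("stall coverage: ~%.0f cycles (Zen 3) -> ~%.0f cycles (Zen 4)\n",
               rob_zen3 / width, rob_zen4 / width); /* ~43 -> ~53 cycles */
        return 0;
    }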

The only other notable change here is AVX-512 support, which we touched upon earlier. AVX execution takes place in AMD’s floating point ports, and as such, those have been beefed up to support the new instructions.
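
For readers who haven’t seen the new instructions in the wild, below is a minimal AVX-512 example in C using the standard Intel intrinsics: a single fused multiply-add across 16 packed floats. The array contents are purely illustrative, and the code assumes a compiler targeting the AVX-512F feature level that Zen 4 exposes (e.g. gcc -O2 -mavx512f).

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        float a[16], b[16], c[16], out[16];
        for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f; c[i] = 1.0f; }

        /* One 512-bit FMA: out = a * b + c across 16 single-precision lanes */
        __m512 va = _mm512_loadu_ps(a);
        __m512 vb = _mm512_loadu_ps(b);
        __m512 vc = _mm512_loadu_ps(c);
        _mm512_storeu_ps(out, _mm512_fmadd_ps(va, vb, vc));

        for (int i = 0; i < 16; i++)
            printf("%g ", out[i]); /* prints 1 3 5 ... 31 */
        printf("\n");
        return 0;
    }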

Moving on, the load/store units within each CPU core have also received larger buffers. The load queue is 22% deeper, now holding 88 loads. And according to AMD, they’ve made some unspecified changes to reduce port conflicts with their L1 data cache. Otherwise, load/store throughput remains unchanged at 3 loads and 2 stores per cycle.

Finally, let’s talk about AMD’s L2 cache. As previously disclosed by the company, the Zen 4 architecture doubles the size of the L2 cache on each CPU core, taking it from 512KB to a full 1MB. As with AMD’s lower-level buffer improvements, the larger L2 cache is designed to further improve performance/IPC by keeping more relevant data closer to the CPU cores, as opposed to it ending up in the L3 cache, or worse, main memory. Beyond that, the L3 cache remains unchanged at 32MB for an 8-core CCX, functioning as a victim cache for each CPU core’s L2 cache.
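
The payoff of a larger L2 is easy to visualize with a pointer-chasing sketch like the one below: dependent loads walked over a shuffled ring should show a step up in per-load latency once the working set spills past 1MB into the L3. Timings are in TSC ticks rather than core clocks, and real results will vary with frequency and prefetcher behavior.

    #include <stdio.h>
    #include <stdlib.h>
    #include <x86intrin.h>

    /* Average ticks per dependent load over a shuffled ring of the given size */
    static double chase(size_t bytes) {
        size_t n = bytes / sizeof(size_t);
        size_t *ring = malloc(n * sizeof(size_t));
        size_t *perm = malloc(n * sizeof(size_t));

        /* Build a random cyclic permutation so prefetchers can't follow it */
        for (size_t i = 0; i < n; i++) perm[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand() % (i + 1);
            size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }
        for (size_t i = 0; i < n; i++)
            ring[perm[i]] = perm[(i + 1) % n];

        size_t idx = 0;
        const long steps = 10000000;
        unsigned long long t0 = __rdtsc();
        for (long s = 0; s < steps; s++)
            idx = ring[idx]; /* each load depends on the previous one */
        unsigned long long t1 = __rdtsc();

        double ticks = (double)(t1 - t0) / steps + (idx & 1) * 1e-12; /* keep idx live */
        free(ring);
        free(perm);
        return ticks;
    }

    int main(void) {
        /* Expect a latency step between the 1024KB and 2048KB points on Zen 4 */
        for (size_t kb = 256; kb <= 4096; kb *= 2)
            printf("%4zu KB: %.1f ticks/load\n", kb, chase(kb * 1024));
        return 0;
    }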

All told, we aren’t seeing very many major changes in the Zen 4 execution pipeline, and that’s okay. Increasing cache and buffer sizes is another tried and true way to improve the performance of an architecture, by keeping an existing design filled and working more often, and that’s what AMD has opted to do for Zen 4. Especially coming in conjunction with the jump from TSMC 7nm to 5nm and the resulting increase in transistor budget, this is a good way to put those additional transistors to use while AMD works on a more significant overhaul to the Zen architecture for Zen 5.

Comments

  • jakky567 - Monday, September 26, 2022

    I'm confused by USB 2, do you mean USB 2.0 or USB 4v2, or what?
  • Ryan Smith - Monday, September 26, 2022

    Yes, USB 2.0.

    USB 4v2 was just announced. We're still some time off from it showing up in any AMD products.
  • Myrandex - Thursday, September 29, 2022

    lol did they share any reason why to give a single USB 2.0 port?
  • Ryan Smith - Friday, September 30, 2022

    Basic, low complexity I/O. Implementing a USB 2.x port is relatively simple these days. It's a bit of a failsafe, really.
  • LuxZg - Monday, September 26, 2022

    One question and one observation.

    Q: ECO mode says 170W -> 105W but tested CPU was 170W -> 65W. Is that a typo or was that just to show off? I wish that sample graph showed 7600X at 105W and 65W in addition to 7950X at 170/105/65W.

    Observation: 5800X is 260$ on Amazon. So with cheaper DDR4, cheaper MBOs, and cheaper CPU, it will be big competition inside AMD's own house. At least for those that don't "need" PCIe 5.0 or future proofing.
  • andrewaggb - Monday, September 26, 2022

    I was confused by that as well.
    The way I read the paragraph suggested 170w eco mode is 105w but then it's stated the cpu was tested at 65w. Was it meant to say 105w or can a 170w be dialed down to 65w and the test is correctly labelled?
  • Otritus - Monday, September 26, 2022

    By default, while under 95°C (203°F, 368.15K), the 7950X will have a TDP of 170 watts and use up to 230 watts of power. You can think of it like TDP and Turbo Power on Intel. Eco mode will reduce TDP to 105 watts (and use up to 142 watts??). You can manually set the power limits, and AnandTech set them to 65 watts to demonstrate efficiency. Meaning the 7950X was not in eco mode, but a manual mode more efficient than eco mode.
  • uefi - Monday, September 26, 2022

    Just by supporting Microsoft's cloud-connected hardware DRM, the 7000 series is vastly inferior to all current Intel CPUs.
  • Makaveli - Monday, September 26, 2022

    So you are saying Intel is not going to implement this in any of their future processors?

    If the Raptor Lake review shows it supports that also, I'm going to come back to this message.
  • socket420 - Monday, September 26, 2022

    I don't understand where these "intel rulez because they don't use pluton!!" people are coming from - one, the Intel Management Engine... exists, and two, Microsoft explicitly stated that Pluton was developed with the support of AMD, Intel and Qualcomm back in 2020. Intel is clearly on-board with it and I expect to see Pluton included in Raptor Lake or Meteor Lake, they're just late to the party because that's what Intel does best, I guess?
