Fetch

For Zen, AMD has implemented a decoupled branch predictor. This allows the front end to speculate on incoming instruction pointers to fill a queue, as well as look up direct and indirect targets. The branch target buffer (BTB) for Zen is described as ‘large’ but with no numbers as of yet, though it uses a hierarchical L1/L2 arrangement. For comparison, Bulldozer afforded a 512-entry, 4-way L1 BTB with single-cycle latency, and a 5120-entry, 5-way L2 BTB with additional latency; AMD doesn’t state that Zen’s is larger, just that it is large and supports dual branches. The 32-entry return stack for indirect targets is similarly short on further detail at this point.
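
As a rough illustration of the idea, a hierarchical BTB can be modeled as a small one-cycle array backed by a larger, slower one. The sketch below uses Bulldozer’s published 512/5120-entry split (Zen’s sizes are undisclosed), a direct-mapped layout instead of the real set-associative one, and made-up latencies:

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy two-level BTB: a small, fast L1 backed by a larger, slower L2.
 * Sizes are Bulldozer's published figures, not Zen's; real BTBs are
 * set-associative, this one is direct-mapped for brevity. */
#define L1_ENTRIES 512
#define L2_ENTRIES 5120

typedef struct { uint64_t tag, target; bool valid; } btb_entry;

static btb_entry l1_btb[L1_ENTRIES];
static btb_entry l2_btb[L2_ENTRIES];

/* Predict the target of a branch at 'pc'; report the lookup latency. */
uint64_t predict_target(uint64_t pc, int *latency)
{
    btb_entry *e1 = &l1_btb[pc % L1_ENTRIES];
    if (e1->valid && e1->tag == pc) {
        *latency = 1;              /* L1 hit: single cycle, as on Bulldozer */
        return e1->target;
    }
    btb_entry *e2 = &l2_btb[pc % L2_ENTRIES];
    if (e2->valid && e2->tag == pc) {
        *latency = 3;              /* L2 hit: extra cycles (value made up) */
        *e1 = *e2;                 /* promote into L1 for next time */
        return e2->target;
    }
    *latency = 1;
    return pc + 4;                 /* miss: predict fall-through */
}
```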

The decoupled branch predictor also allows it to run ahead of instruction fetches and fill the queues based on its internal algorithms. Going too far down a branch that turns out to be mispredicted will obviously incur a power penalty, but successes help with latency and memory parallelism.
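
A minimal sketch of what ‘decoupled’ means here: the predictor pushes predicted fetch addresses into a queue, and the fetch unit drains it independently. The queue depth and the sequential placeholder predictor are assumptions, not AMD’s figures:

```c
#include <stdint.h>

/* Fetch target queue: the predictor fills it running ahead of fetch. */
#define FTQ_DEPTH 16
static uint64_t ftq[FTQ_DEPTH];
static int head, tail, count;

/* Placeholder next-address predictor: sequential 32B fetch blocks. */
static uint64_t predict_next(uint64_t pc) { return pc + 32; }

void predictor_run_ahead(uint64_t pc)
{
    while (count < FTQ_DEPTH) {        /* run ahead until the queue fills */
        pc = predict_next(pc);
        ftq[tail] = pc;
        tail = (tail + 1) % FTQ_DEPTH;
        count++;
    }
}

int fetch_next(uint64_t *pc)           /* the fetch unit consumes entries */
{
    if (count == 0) return 0;          /* queue empty: fetch stalls */
    *pc = ftq[head];
    head = (head + 1) % FTQ_DEPTH;
    count--;
    return 1;
}
```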

The Translation Lookaside Buffer (TLB) in the branch prediction unit looks up recent virtual-to-physical address translations to reduce load latency, and operates in three levels: an L0 with 8 entries of any page size, an L1 with 64 entries of any page size, and an L2 with 512 entries supporting 4K and 256K pages only. The L2 won’t support 1G pages because the L1 can already hold 64 of them, and implementing 1G support at the L2 level would be a more complex addition (there may also be power/die area benefits).
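
The lookup order can be sketched as a simple three-level probe; the entry counts match the article, while the fully associative scan and the missing insertion/eviction logic are simplifications:

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy three-level TLB probe with the entry counts quoted above.
 * Page-size checks are reduced to comments. */
typedef struct { uint64_t vpn, pfn; bool valid; } tlb_entry;

static tlb_entry l0_tlb[8];     /* any page size */
static tlb_entry l1_tlb[64];    /* any page size, including 1G */
static tlb_entry l2_tlb[512];   /* 4K and 256K pages only */

static bool probe(const tlb_entry *tlb, int n, uint64_t vpn, uint64_t *pfn)
{
    for (int i = 0; i < n; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) { *pfn = tlb[i].pfn; return true; }
    return false;
}

bool translate(uint64_t vpn, uint64_t *pfn)
{
    if (probe(l0_tlb, 8, vpn, pfn))   return true;  /* fastest level */
    if (probe(l1_tlb, 64, vpn, pfn))  return true;
    if (probe(l2_tlb, 512, vpn, pfn)) return true;
    return false;                     /* miss: fall back to a page walk */
}
```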

When an instruction comes through as a recently used one, it acquires a micro-tag and is serviced via the op cache; otherwise it is placed into the instruction cache for decode. The L1 instruction cache can also accept 32 bytes/cycle from the L2 cache, while other instructions move through the load/store unit and cycle back around for execution.
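
The micro-tag check can be pictured as a small table of recently decoded fetch addresses that steers instructions toward the op cache or the decode path. The table size and tagging scheme below are hypothetical:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy micro-tag table: remembers recently decoded fetch addresses so a
 * repeat hit can be served from the op cache instead of re-decoding. */
#define TAGS 64
static uint64_t microtags[TAGS];

void front_end_fetch(uint64_t pc)
{
    uint64_t *slot = &microtags[pc % TAGS];
    if (*slot == pc) {
        printf("op-cache hit: serve decoded uops for %#llx\n",
               (unsigned long long)pc);
    } else {
        *slot = pc;               /* tag it for next time */
        printf("I-cache path: fetch and decode %#llx\n",
               (unsigned long long)pc);
    }
}
```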

Decode

The instruction cache then sends the data through the decoder, which can decode four instructions per cycle. As mentioned previously, the decoder can fuse operations together in a fast path, such that a single micro-op goes through to the micro-op queue but still represents two instructions, with the pair split again when it hits the schedulers. This lets the system fit more into the micro-op queue and achieve higher throughput when possible.
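
A toy model of that fast path: the decoder packs a fusible pair (cmp+jcc is the classic candidate, though AMD hasn’t listed which pairs Zen fuses) into one queue entry, and the scheduler unpacks it:

```c
#include <stdbool.h>

typedef enum { OP_CMP, OP_JCC, OP_ADD, OP_OTHER } opcode;
typedef struct { opcode first, second; bool fused; } uop;

/* Decode: pack a fusible pair into a single micro-op queue entry. */
uop decode_pair(opcode a, opcode b)
{
    if (a == OP_CMP && b == OP_JCC)
        return (uop){ a, b, true };      /* one entry, two instructions */
    return (uop){ a, OP_OTHER, false };  /* unfused: b decodes separately */
}

/* Scheduler: fused entries split back into two operations here. */
int schedule(uop u, opcode out[2])
{
    out[0] = u.first;
    if (u.fused) { out[1] = u.second; return 2; }
    return 1;
}
```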

The new Stack Engine comes into play between the queue and dispatch, allowing for low-power address generation when the address is already known from previous cycles. This saves the power of sending the operation through the AGU and cycling back around to the caches.
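
The principle can be sketched as a side counter that accumulates the fixed rsp adjustments from push/pop at decode time, handing later consumers a precomputed offset instead of occupying an AGU; the code below is purely illustrative:

```c
#include <stdint.h>

static int64_t sp_delta;   /* accumulated offset, known at decode time */

void on_push(void) { sp_delta -= 8; }   /* push: rsp -= 8, no AGU needed */
void on_pop(void)  { sp_delta += 8; }   /* pop:  rsp += 8, no AGU needed */

/* When a later instruction reads rsp directly, fold the pending delta
 * in once ("sync") instead of computing an address on every push/pop. */
uint64_t resolve_rsp(uint64_t arch_rsp)
{
    uint64_t effective = arch_rsp + sp_delta;
    sp_delta = 0;
    return effective;
}
```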

Finally, the dispatch unit can send up to six micro-ops per cycle, at a maximum rate of 6/cycle to the INT scheduler or 4/cycle to the FP scheduler. We confirmed with AMD that the dispatch unit can simultaneously dispatch to both INT and FP inside the same cycle, which maximizes throughput (the alternative would be to alternate each cycle, which reduces efficiency). We are told that the operations used in Zen for the uOp cache are ‘pretty dense’, and equivalent to x86 operations in most cases.
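
Putting the stated rates together, a single dispatch cycle might be modeled like this; the 6/4 scheduler caps come from the article, while the in-order stall-on-full routing is a guess:

```c
#include <stdbool.h>

/* Toy dispatch cycle: up to six micro-ops per cycle overall, with
 * per-cycle caps of 6 to the INT scheduler and 4 to the FP scheduler,
 * both reachable within the same cycle. */
typedef struct { bool is_fp; } micro_op;

/* Returns how many micro-ops left the queue this cycle. */
int dispatch_cycle(const micro_op *queue, int n)
{
    int total = 0, to_int = 0, to_fp = 0;
    for (int i = 0; i < n && total < 6; i++) {
        if (queue[i].is_fp) {
            if (to_fp == 4) break;    /* FP scheduler cap reached */
            to_fp++;
        } else {
            if (to_int == 6) break;   /* INT scheduler cap reached */
            to_int++;
        }
        total++;                      /* INT and FP dispatch can mix */
    }
    return total;
}
```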

Comments

  • eldakka - Wednesday, August 24, 2016 - link

    The first page link, AMD Server CPUs and Motherboard Analysis, is wrong; it actually links to the ARM v8-A article.
  • atlantico - Friday, August 26, 2016 - link

    Yes, it's also wrong here: http://www.anandtech.com/show/10585/unpacking-amds...

    Sigh.
  • TristanSDX - Wednesday, August 24, 2016 - link

    Zen does not support transactional memory, a big disadvantage compared to Intel.
  • Senti - Wednesday, August 24, 2016 - link

    And how much does it matter? TSX is a great thing, no doubt there. But the adoption? What real software can you name that uses it and gets a significant benefit from it?

    I blame Intel's stupid marketing for cutting TSX from too many versions and killing the adoption.
  • coder111 - Wednesday, August 24, 2016 - link

    As far as I know, Azul JVMs do support transactional memory. So if you have a Java app, you can use it.

    Other than that, yes, I haven't seen TSX used much...
  • 68k - Wednesday, August 24, 2016 - link

    Isn't the version of glibc in recent Linux distributions using the lock elision feature of TSX?

    https://lwn.net/Articles/534758/
    https://01.org/blogs/tlcounts/2014/lock-elision-gl...

    If so, then essentially every single Linux program does make use of TSX when present.
  • looncraz - Wednesday, August 24, 2016 - link

    One of the most important features of TSX is checkpoints. Zen supports checkpoints in its execution pipeline. Otherwise, I've not seen anything that says Zen does or does not support TSX, not that the tech is widely used at this time.

    From there, you just need tagging and a few other features to add support. It's something that could be included in Zen+ if Zen does not have it.
  • silverblue - Wednesday, August 24, 2016 - link

    It looks like Zen was developed to accelerate the vast majority of software and to rely on core count for everything else. It might explain the lack of focus on AVX.

    If cache stats were any indication of performance, it would appear that Zen was destined to compete with Broadwell, but not quite match the Lake CPUs; Zen+ would perhaps close the gap albeit a bit late. Bulldozer was hamstrung by half-speed writes and horrific L3 latency - would it be remiss to assume that they've at least fixed those two issues?

    I'm not sure anybody can truly predict performance, however, even with a Blender demonstration, and certainly not enough to work out prospective Cinebench or SuperPi performance. You could have a monster of an architecture, but if the software isn't optimised for it, the results won't be representative of its true performance.
  • wumpus - Wednesday, August 24, 2016 - link

    I'd still want the TSX instructions before even thinking about the server market. I guess they surrendered that before the overall architecture was finished. Although considering how badly it has worked out for Intel (essentially turned off after errata were noted in the first generation), maybe it wasn't worth the risk.
  • Alexvrb - Sunday, August 28, 2016 - link

    Yeah, they need to take their time. A faulty implementation would do more harm than good at this point.
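
For context on the lock elision mechanism raised in the comments above: glibc's elision path builds on the RTM pattern sketched below, attempting the critical section as a hardware transaction first and falling back to the real lock on abort. The shadow flag and the absence of a retry policy are simplifications.

```c
#include <immintrin.h>   /* Intel RTM intrinsics; build with -mrtm */
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static volatile int lock_taken;   /* shadow flag so a transaction can
                                     detect a concurrent fallback holder */
static long counter;

void increment(void)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        if (lock_taken)           /* someone holds the real lock: abort */
            _xabort(0xff);
        counter++;                /* speculative, no lock acquired */
        _xend();                  /* commit atomically */
        return;
    }
    /* Transaction aborted or unsupported: take the lock for real. */
    pthread_mutex_lock(&lock);
    lock_taken = 1;
    counter++;
    lock_taken = 0;
    pthread_mutex_unlock(&lock);
}
```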
