Fetch

For Zen, AMD has implemented a decoupled branch predictor. This allows the core to speculate on incoming instruction pointers to fill a queue, as well as to look up both direct and indirect targets. The branch target buffer (BTB) for Zen is described as ‘large’ but with no numbers as of yet; however, there is an L1/L2 hierarchical arrangement for the BTB. For comparison, Bulldozer afforded a 512-entry, 4-way L1 BTB with single-cycle latency and a 5120-entry, 5-way L2 BTB with additional latency. AMD doesn’t state that Zen is larger, just that it is large and supports dual branches. The 32-entry return stack for indirect targets is likewise short on further detail at this point.
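
As a rough illustration of the hierarchical arrangement, the following Python sketch models a two-level BTB lookup. The entry counts reuse the Bulldozer figures quoted above, and the hit latencies and eviction policy are assumptions for illustration only; Zen's actual sizes and policies have not been disclosed.

```python
# Toy model of a two-level branch target buffer (BTB) lookup.
# Entry counts use the Bulldozer figures quoted above (512-entry L1,
# 5120-entry L2); latencies and eviction are illustrative assumptions.

class TwoLevelBTB:
    def __init__(self, l1_entries=512, l2_entries=5120):
        self.l1 = {}                    # pc -> predicted target (small, fast)
        self.l2 = {}                    # pc -> predicted target (large, slower)
        self.l1_entries = l1_entries
        self.l2_entries = l2_entries

    def predict(self, pc):
        """Return (predicted target, latency in cycles), or (None, 0) on a miss."""
        if pc in self.l1:
            return self.l1[pc], 1       # single-cycle L1 hit
        if pc in self.l2:
            target = self.l2[pc]
            self._fill_l1(pc, target)   # promote to L1 on an L2 hit
            return target, 2            # extra-latency L2 hit (assumed value)
        return None, 0                  # miss: fall through until decode resolves

    def update(self, pc, target):
        """Train the BTB after a taken branch resolves."""
        self._fill_l1(pc, target)
        if len(self.l2) >= self.l2_entries:
            self.l2.pop(next(iter(self.l2)))    # crude FIFO eviction
        self.l2[pc] = target

    def _fill_l1(self, pc, target):
        if len(self.l1) >= self.l1_entries:
            self.l1.pop(next(iter(self.l1)))
        self.l1[pc] = target


btb = TwoLevelBTB()
btb.update(0x401000, 0x402000)
print(btb.predict(0x401000))   # hits in the L1 BTB with 1-cycle latency
```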

The decoupled design also allows the branch predictor to run ahead of instruction fetch and fill its queues based on internal algorithms. Running too far down a branch that turns out to be mispredicted obviously incurs a power penalty, but successful predictions help with latency and memory-level parallelism.
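
To illustrate what running ahead means here, below is a minimal sketch of a decoupled front end in which the predictor fills a fetch-address queue that the fetch unit later drains. The queue depth, fetch granule, and BTB contents are invented for the example, not AMD figures.

```python
from collections import deque

# Sketch of a decoupled front end: the predictor runs ahead of fetch,
# pushing predicted fetch addresses into a queue that the fetch unit drains.
FETCH_QUEUE_DEPTH = 8          # illustrative depth, not an AMD number

def predict_next(pc, btb):
    """Follow a BTB hit if present, otherwise fall through sequentially."""
    target = btb.get(pc)
    return target if target is not None else pc + 16   # 16B fetch granule assumed

def run_ahead(start_pc, btb, queue):
    """Fill the fetch-address queue ahead of the fetch unit."""
    pc = start_pc
    while len(queue) < FETCH_QUEUE_DEPTH:
        queue.append(pc)
        pc = predict_next(pc, btb)
    return pc

fetch_queue = deque()
toy_btb = {0x1000: 0x2000}     # one predicted-taken branch
run_ahead(0x1000, toy_btb, fetch_queue)
print([hex(a) for a in fetch_queue])
```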

The Translation Lookaside Buffer (TLB) in the branch prediction unit caches recent virtual-to-physical address translations to reduce lookup latency, and operates in three levels: L0 with 8 entries of any page size, L1 with 64 entries of any page size, and L2 with 512 entries supporting 4K and 2M pages only. The L2 won’t support 1G pages, as the L1 can already hold 64 of them, and implementing 1G support at the L2 level is a more complex addition (there may also be power/die-area benefits).
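
A toy model of the three-level lookup might look like the following; the replacement policy and the way page sizes are routed between levels are simplifying assumptions, not AMD's implementation.

```python
# Toy lookup over the three-level branch-prediction TLB described above:
# L0 (8 entries, any page size), L1 (64 entries, any page size),
# L2 (512 entries, smaller pages only). Replacement is a crude FIFO.

L0, L1, L2 = {}, {}, {}             # virtual page -> physical page
CAPACITY = {0: 8, 1: 64, 2: 512}

def tlb_lookup(vpage):
    """Probe L0, then L1, then L2; return (translation, level) or (None, None)."""
    for level, table in enumerate((L0, L1, L2)):
        if vpage in table:
            return table[vpage], level
    return None, None                # miss: walk the page tables

def tlb_fill(vpage, ppage, page_size):
    """Install a translation; huge pages stay out of the L2 in this model."""
    levels = (L0, L1) if page_size == "1G" else (L0, L1, L2)
    for level, table in zip(range(3), levels):
        if len(table) >= CAPACITY[level]:
            table.pop(next(iter(table)))     # FIFO eviction
        table[vpage] = ppage

tlb_fill(0x7F00, 0x1200, "4K")
print(tlb_lookup(0x7F00))   # hits in L0
```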

When an incoming instruction has been used recently, it acquires a micro-tag and is serviced via the op cache; otherwise, it is placed into the instruction cache for decode. The L1 instruction cache can also accept 32 bytes/cycle from the L2 cache as other instructions pass through the load/store unit for another trip around for execution.
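
Conceptually, the split between the op-cache path and the instruction-cache/decode path can be sketched as below. The micro-tag hash, the data structures, and the stand-in decode function are illustrative only, not AMD's implementation.

```python
# Sketch of the split front-end path: recently seen instructions are
# micro-tagged and served from the op cache (already-decoded micro-ops),
# everything else goes through the L1 instruction cache and the decoders.

op_cache = {}        # micro-tag -> decoded micro-ops
icache = {}          # fetch address -> raw instruction bytes

def micro_tag(fetch_addr):
    return fetch_addr & 0xFFFF        # toy tag derived from the fetch address

def decode(raw_bytes):
    return [("uop", b) for b in raw_bytes]        # stand-in for the real decoders

def front_end(fetch_addr):
    tag = micro_tag(fetch_addr)
    if tag in op_cache:
        return op_cache[tag]                      # op-cache path: skip decode
    raw = icache.get(fetch_addr, b"\x90")         # miss -> fill from L2 (not modelled)
    uops = decode(raw)
    op_cache[tag] = uops                          # micro-tag it for next time
    return uops

print(front_end(0x401000))   # first pass decodes...
print(front_end(0x401000))   # ...second pass hits the op cache
```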

Decode

The instruction cache then sends the data through the decoder, which can decode four instructions per cycle. As mentioned previously, the decoder can fuse operations together on a fast-path, such that a single micro-op goes into the micro-op queue but still represents two instructions; these are split apart when they hit the schedulers. The purpose is to fit more into the micro-op queue and afford higher throughput when possible.
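
A minimal sketch of that fast-path behaviour, assuming a compare-plus-branch pair as the fused case, is shown below; the instruction names and fusion rule are placeholders rather than AMD's actual fusion cases.

```python
# Sketch of fast-path fusion: a compare and its dependent branch occupy a
# single micro-op queue slot, then split back apart at the scheduler.

def decode_with_fusion(instructions):
    """Fuse cmp+jcc pairs so they take one micro-op queue slot."""
    uop_queue, i = [], 0
    while i < len(instructions):
        if (i + 1 < len(instructions)
                and instructions[i][0] == "cmp"
                and instructions[i + 1][0].startswith("j")):
            uop_queue.append(("fused", instructions[i], instructions[i + 1]))
            i += 2
        else:
            uop_queue.append(("single", instructions[i]))
            i += 1
    return uop_queue

def split_at_scheduler(uop):
    """Fused micro-ops are broken apart again before scheduling."""
    return list(uop[1:]) if uop[0] == "fused" else [uop[1]]

queue = decode_with_fusion([("cmp", "eax", "ebx"), ("jne", "loop"), ("add", "ecx", 1)])
print(len(queue))                                 # 2 queue slots for 3 instructions
print([split_at_scheduler(u) for u in queue])     # back to 3 operations at the scheduler
```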

The new Stack Engine comes into play between the queue and dispatch, allowing for low-power address generation when the address is already known from previous cycles. This saves power by avoiding a trip through the AGU and back around to the caches.
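
The concept can be sketched as follows: because pushes and pops adjust the stack pointer by a known constant, a running delta tracked near decode can produce the address without a trip through the AGU. This illustrates the general stack-engine idea under those assumptions, not AMD's specific design.

```python
# Sketch of the stack-engine idea: track a running stack-pointer delta so
# push/pop addresses are known early, without issuing an add through the AGU.

class StackEngine:
    def __init__(self, initial_sp):
        self.sp_base = initial_sp   # architectural SP at the last sync point
        self.delta = 0              # accumulated offset known at decode time

    def push(self, size=8):
        self.delta -= size
        return self.sp_base + self.delta    # address produced without the AGU

    def pop(self, size=8):
        addr = self.sp_base + self.delta
        self.delta += size
        return addr

    def sync(self):
        """Fold the delta back in when the real SP is needed (e.g. SP-relative math)."""
        self.sp_base += self.delta
        self.delta = 0
        return self.sp_base

se = StackEngine(0x7FFF_F000)
print(hex(se.push()))    # 0x7fffeff8
print(hex(se.push()))    # 0x7fffeff0
print(hex(se.pop()))     # 0x7fffeff0
```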

Finally, the dispatch unit can send six operations per cycle, at a maximum rate of 6/cycle to the INT scheduler or 4/cycle to the FP scheduler. We confirmed with AMD that the dispatch unit can dispatch to both INT and FP within the same cycle, which maximizes throughput (the alternative would be to alternate each cycle, which reduces efficiency). We are told that the operations used in Zen’s micro-op cache are ‘pretty dense’ and equivalent to x86 operations in most cases.
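
The quoted widths can be modelled with a simple per-cycle dispatch loop, shown below; the in-order, head-of-queue behaviour is a simplification for illustration, not how the hardware actually picks operations.

```python
# Toy dispatch model for the widths quoted above: up to six micro-ops per
# cycle in total, at most six to the INT scheduler and four to the FP
# scheduler, and both domains can be fed in the same cycle.

DISPATCH_WIDTH, INT_MAX, FP_MAX = 6, 6, 4

def dispatch_cycle(uop_queue):
    """Pop micro-ops off the queue for one cycle, respecting the per-domain caps."""
    sent = {"int": 0, "fp": 0}
    dispatched = []
    while uop_queue and len(dispatched) < DISPATCH_WIDTH:
        domain = uop_queue[0]
        cap = INT_MAX if domain == "int" else FP_MAX
        if sent[domain] >= cap:
            break                      # head-of-queue stall in this toy model
        sent[domain] += 1
        dispatched.append(uop_queue.pop(0))
    return dispatched

queue = ["fp", "fp", "int", "int", "fp", "fp", "fp", "int"]
print(dispatch_cycle(queue))   # mixes INT and FP in the same cycle, six ops total
print(queue)                   # remaining micro-ops wait for the next cycle
```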


106 Comments


  • Tucker Smith - Thursday, August 25, 2016 - link

    I hear much regarding the potential of Zen in comparison to Intel's HEDT procs, but, given AMD's touting of Zen's scalability, can we glean insight into how it will compete in the $100 range against the i3? People have been clamoring for an unlocked 2c/4t. The excitement over the potential to OC via BCLK on the Skylake was huge, the disappointment when Intel reneged on it even larger.

    The Kaveri-based Athlon x4 860k and the Carrizo Athlon, the 845, were fine chips under $100, but the limited cache and platform options kinda turned me off. A small Zen proc with one of the new, nicer cooling solutions they're offering on a modern mobo sounds incredibly compelling.

    I hear much regarding 8c/16t chips, a lot about potential APUs, but what about that broad middle ground?
  • iranterres - Thursday, August 25, 2016 - link

    Tucker Smith, you made an excellent point. But I think they will launch Zen-based stuff to compete all across the board.
  • fanofanand - Thursday, August 25, 2016 - link

    Zen is the architecture, not necessarily the name of the processor family. They have mentioned the scalability up and down the chain, indicating that they will indeed populate their entire processor line with the Zen architecture. It's impossible to know how well they will scale until they are in independent testers' hands, but I would imagine they have learned quite a bit from their Jaguar cores and should be able to put together a compelling offering in the sub-$100 range.
  • Outlander_04 - Thursday, August 25, 2016 - link

    AMD already sells APUs with disabled graphics cores, as well as a range of 2-module APUs with minimal graphics.
    That is surely the ground you are talking about?
  • alpha754293 - Tuesday, August 30, 2016 - link

    It WOULD be interesting to see how they perform in floating-point-intensive benchmarks compared to their Intel counterparts, given the architectural differences between the two companies' approaches.
  • tipoo - Wednesday, August 31, 2016 - link

    Last table - >2MB/cire
