Zen: New Core Features

Since August, AMD has been slowly releasing microarchitecture details about Zen. It began with a formal disclosure timed around Intel’s annual developer event, followed by a paper at Hot Chips, more details at the ‘New Horizon’ event in December, and most recently a talk at ISSCC. The Zen Tech Day just before launch gave us a chance to get some of those questions answered.

First up, let’s dive right in to the high-level block diagram:

In this diagram, the core is split into the ‘front-end’ in blue, with the rest of the core forming the ‘back-end’. The front-end is where instructions come into the core, branch predictors are consulted, and instructions are decoded into micro-ops (micro-operations) before being placed into a micro-op queue. In red is the part of the back-end that deals with integer (INT) instructions, such as integer math, loops, loads and stores. In orange is the floating-point (FP) part of the back-end, typically focused on various forms of math compute. The INT and FP segments each have their own separate execution port scheduler.

If it looks somewhat similar to other high-performance CPU cores, you’d be correct: there seems to be a high-level way of ‘doing things’ when it comes to x86, with three levels of cache, multi-level TLBs, instruction coalescing, a set of decoders that dispatch a combined 4-5+ micro-ops per cycle, a very large micro-op queue (150+), shared retire resources, AVX support, and simultaneous multi-threading.

What’s New to AMD

First up, and the most important, is the inclusion of the micro-op cache. This allows recently used instructions to be called up to the micro-op queue rather than being decoded again, saving a trip through the decoders and caches. Typically micro-op caches are still relatively small: Intel’s version can support 1,536 micro-ops with 8-way associativity. We learned (after much asking) at AMD’s Tech Day that the micro-op cache for Zen can support ‘2K’ (i.e. 2048) micro-ops, with up to eight ops per cache line. This is good for AMD, although as I discussed with Mike Clark: if AMD had said ‘512’, on one hand I’d be asking why it is so small, and on the other wondering whether they would have done something differently to account for the performance adjustments. But ‘2K’ fits in with what we would expect.
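
The mechanism can be illustrated with a toy model: decoded micro-ops are stored keyed by fetch address, and a hit lets the front-end skip the decoders entirely. A minimal LRU sketch using the capacities quoted above (everything else here is simplified and illustrative, not Zen’s actual replacement policy):

```python
from collections import OrderedDict

class MicroOpCache:
    """Toy micro-op cache: maps an instruction address to its
    already-decoded micro-ops, evicting least-recently-used lines."""
    def __init__(self, capacity_uops=2048, uops_per_line=8):
        self.capacity_lines = capacity_uops // uops_per_line  # 256 lines
        self.lines = OrderedDict()

    def lookup(self, addr):
        line = self.lines.get(addr)
        if line is not None:
            self.lines.move_to_end(addr)  # refresh LRU position
        return line  # None means a miss: fall back to the decoders

    def fill(self, addr, uops):
        if len(self.lines) >= self.capacity_lines:
            self.lines.popitem(last=False)  # evict the LRU line
        self.lines[addr] = uops

cache = MicroOpCache()
cache.fill(0x1000, ["add", "load"])
assert cache.lookup(0x1000) == ["add", "load"]  # hit: no re-decode needed
assert cache.lookup(0x2000) is None             # miss: decode again
```

The point of the model is the win on a hit: the decode pipeline stages are skipped entirely, which is why a larger cache (2K vs 1.5K micro-ops) matters for loops and hot code paths.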

Secondly is the cache structure. We were given details for the L1, L2 and L3 cache sizes, along with associativity, to compare it to former microarchitectures as well as Intel’s offering.

In this case, AMD has given Zen a 64KB L1 instruction cache per core with 4-way associativity, alongside a lop-sided 32KB L1 data cache per core with 8-way associativity. The size and associativity determine how frequently a cache line is missed, and they are typically a trade-off against die area and power (larger caches require more die area, and more associativity usually costs power). The instruction cache can perform a 32-byte fetch per cycle, while the data cache allows for two 16-byte loads and one 16-byte store per cycle. AMD stated that allowing two D-cache loads per cycle is more representative of most workloads, which end up with more loads than stores.
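
Assuming a 64-byte cache line (an assumption here; the line size is not restated above), those figures imply 256 sets for the instruction cache and 64 sets for the data cache. The arithmetic, as a quick sketch:

```python
def cache_sets(size_bytes, ways, line_bytes=64):
    """Number of sets = capacity / (associativity * line size)."""
    return size_bytes // (ways * line_bytes)

assert cache_sets(64 * 1024, 4) == 256  # Zen L1I: 64KB, 4-way
assert cache_sets(32 * 1024, 8) == 64   # Zen L1D: 32KB, 8-way
```

The same formula also tells you how many address bits index the cache, which is why size and associativity get traded off together rather than independently.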

The L2 is a large 512 KB, 8-way cache per core. This is double the size of Intel’s 256 KB 4-way cache in Skylake or 256 KB 8-way cache in Broadwell. Typically, doubling the cache size reduces the miss rate by a factor of roughly 1.414 (the square root of two), reducing the need to go further out to find data, but it comes at the expense of die area. This will have a big impact on a lot of performance metrics, and AMD is promoting faster cache-to-cache transfers than previous generations. Both the L1 and L2 caches are write-back caches, improving over the L1 write-through cache in Bulldozer.

The L3 cache is an 8 MB 16-way cache, although at the time of the Tech Day last week it was not specified how many cores this was shared across. From the data released today, we can confirm rumors that this 8 MB cache is shared by a four-core module, affording 2 MB of L3 cache per core, or 16 MB of L3 cache across the whole 8-core Zen CPU. These two 8 MB caches are separate, so each acts as a last-level cache for its four-core module, with the appropriate hooks into the other L3 to determine if data is needed there. As part of today’s talk we also learned that the L3 is a pure victim cache for L1/L2 victims, rather than a cache for prefetch/demand data, which tempers expectations a little, but the large L2 should make up for this. We’ll discuss it as part of today’s announcement.
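
The practical difference of a pure victim cache is the fill path: the L3 is populated only by lines evicted from L2, never directly by demand fetches from memory. A minimal sketch of that policy (the interface names here are illustrative, not AMD’s):

```python
class VictimL3:
    """Toy model: the L3 holds only lines evicted from L2,
    never lines filled directly on a demand fetch from memory."""
    def __init__(self):
        self.lines = set()

    def on_l2_eviction(self, addr):
        self.lines.add(addr)  # victims are the only fill path

    def on_demand_fill_from_memory(self, addr):
        pass                  # demand/prefetch data bypasses the L3

    def probe(self, addr):
        return addr in self.lines

l3 = VictimL3()
l3.on_demand_fill_from_memory(0xA0)
assert not l3.probe(0xA0)  # a demand fill did not populate the L3
l3.on_l2_eviction(0xA0)
assert l3.probe(0xA0)      # but the evicted line did
```

The upshot is that data only reaches the L3 after it has already lived in L2, which is why the unusually large 512 KB L2 softens the impact of this design.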

AMD is also playing with SMT, or simultaneous multi-threading. We’ve covered this with Intel extensively under the heading ‘HyperThreading’. At a high level both terms mean essentially the same thing, although the implementations may differ. Adding SMT to a core design has the potential to increase throughput by allowing a second thread (or third, or fourth, or, like IBM, up to eight) on the same core to have access to the same execution ports, queues and caches. However, SMT requires hardware-level support: not all structures can be dynamically shared between threads, and those that cannot are either algorithmically partitioned (prefetch), statically partitioned (micro-op queue), or used in alternate cycles (retire queue).
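
The static-partitioning case can be sketched with a toy model: each thread owns a fixed fraction of the structure, so one thread filling its half can never starve the other. (The queue size below is illustrative, not Zen’s actual micro-op queue depth.)

```python
class StaticallyPartitionedQueue:
    """Toy model of static partitioning under SMT: a shared queue is
    split into fixed, equal slices, one per hardware thread."""
    def __init__(self, total_entries, threads=2):
        self.per_thread_capacity = total_entries // threads
        self.queues = {t: [] for t in range(threads)}

    def push(self, thread, uop):
        q = self.queues[thread]
        if len(q) >= self.per_thread_capacity:
            return False  # this thread's slice is full: it must stall
        q.append(uop)
        return True

# With 4 total entries, each of two threads owns exactly 2 slots
q = StaticallyPartitionedQueue(total_entries=4)
assert q.push(0, "uop-a") and q.push(0, "uop-b")
assert not q.push(0, "uop-c")  # thread 0 is full...
assert q.push(1, "uop-x")      # ...but thread 1 is unaffected
```

A dynamically shared structure would instead let thread 1 borrow thread 0’s idle slots, which is the design trade-off the partitioning choices above are navigating.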

We also have dual schedulers, one for INT and another for FP, which is different from Intel’s unified scheduler/buffer implementation.

Comments

  • mapesdhs - Thursday, March 2, 2017 - link

    It would be bizarre if they weren't clocked a lot higher, since there'll be a greater thermal limit per core, which is why the 4820K is such a fun CPU (high-TDP socket, 40 PCIe lanes, but only 4 cores so oc'ing isn't really limited by thermals compared to 6-core SB-E/IB-E) that can beat the 5820K in some cases (multi-GPU/compute).
  • Meteor2 - Friday, March 3, 2017 - link

    ...Silverblue, look at the PDF opening test. What comes top? It's not an AMD chip.
  • Cooe - Sunday, February 28, 2021 - link

    Lol, because opening PDFs is where people need/will notice more performance? -_-

    CPUs have been able to open PDFs fast enough for it to be irrelevant since around the turn of the century...
  • rarson - Thursday, March 2, 2017 - link

    "AMD really isn't offering anything much for the mid range or regular desktop user either."

    So I'd HIGHLY recommend you wait 3 months, or overpay for Intel stuff. Because the lower-core Zen chips will no doubt provide the same performance-per-dollar that the high-end Ryzen chips are offering right now.
  • rarson - Thursday, March 2, 2017 - link

    "their $499 CPU is often beaten by an i3."

    It's clear that you're looking at raw benchmark numbers and not real-world performance for what the chip is designed. If all you need is i3 performance, then why the hell are you looking at an 8-core processor that runs $329 or more?
  • Ratman6161 - Friday, March 3, 2017 - link

    It's all academic to me. As I posted elsewhere, my i7-2600K is still offering me all the performance I need. So I'm just reading this out of curiosity. I also really, really want to like AMD CPUs because I still have a lot of nostalgia for the good old days of the Athlon 64 - when AMD was actually beating Intel in both performance and price. And sometimes I like to tinker around with the latest toys even if I don't particularly need them. I have a home lab with two VMware ESXi systems built on FX-8320s because at the time they were the cheapest way to get to 8 threads - running a lot of VMs but with each VM doing light work.
    I also run an IT department so I'm always keeping tabs on what might be coming down the pike when I get ready to update desktops. But there is a sharp divide between what I buy for myself at home and what I buy for users at work. At work, most of our users actually would do fine with an i3. But I'm also keeping an eye out for what AMD has on offer in this range.
  • Notmyusualid - Tuesday, March 7, 2017 - link

    @ Jimster480

    Sorry pal, but that is false, or inaccurate information.

    ALL BUT the lowest model of CPU on the 2011-v3 platform have 40 PCIe lanes. Again, only the entry-level chip (6800K) has 28 lanes:

    http://www.anandtech.com/show/10337/the-intel-broa...

    But I do agree with you, that this is competing against the HEDT line.

    Peace.
  • slickr - Thursday, March 2, 2017 - link

    I'm sorry, but that sounds just like Intel PR. I don't usually call people shills, but your reply seems to be straight out of Intel's PR book! First of all, more and more games are taking advantage of more cores; you can easily see this especially with DX12 titles, which will take advantage of even 16 cores if you have them.

    So having 8 cores for $330 to $500 is incredible value! We also see that the Ryzen chips are all competitive compared to the $1100 6900k which is where the comparison should be. Performance on 8 cores.

    And as I've found out, real-world performance on 8 cores compared to 4 cores is like night and day. Have you tried running a demanding game, streaming it through OBS to Twitch, with the browser open to read Twitch chat and check other stuff in the process, while also having MusicBee open playing your songs and a separate program to read Twitch donations and text, etc.?

    This is where 4 cores struggle a lot, while 8-core responsiveness is perfect. I can't use my PC if I decide to transcode a video down to a smaller size on a 4-core chip. Even when all 8 cores are fully taken advantage of, you can always do other stuff like watch a movie or surf the internet without it struggling to keep up.

    But even if games are your holy grail and what you base your opinion on, then Ryzen does really well. It's equal to or slightly slower than the much, much more optimized Intel processors. But you have to keep in mind that a lot of game code is optimized solely for Intel. That is what most gamers use; in fact over 80% of gamers are on Intel, but developers will optimize for AMD now that they have a competitor on their hands.

    We see this all the time, with game developers optimizing for RX 400 series a lot, even though Nvidia has the large majority of share in the market. So I expect to see anywhere from 10% to 25% more performance in games and programs that are also optimized for AMD hardware.
  • lmcd - Thursday, March 2, 2017 - link

    How can you call someone a shill and post this without any self-awareness? Your real-world task is GPU-constrained anyway, since you should be using a GPU capable of both video encode and rendering simultaneously. If not, you can consider excellent features like Intel's Quick Sync, which works even with a primary GPU in use these days.
  • Meteor2 - Friday, March 3, 2017 - link

    Game code is optimised for x86.
