Zen: New Core Features

Since August, AMD has been slowly releasing microarchitecture details about Zen. It started with a formal disclosure during Intel's annual developer event, followed by a paper at HotChips, some more details at the 'New Horizon' event in December, and most recently a talk at ISSCC. The Zen Tech Day just before launch gave us a chance to get some of the outstanding questions answered.

First up, let’s dive right into the high-level block diagram:

In this diagram, the core is split into the ‘front-end’ in blue, with the rest of the core forming the ‘back-end’. The front-end is where instructions come into the core, branch predictors are activated, and instructions are decoded into micro-ops (micro-operations) before being placed into a micro-op queue. In red is the part of the back-end that deals with integer (INT) instructions, such as integer math, loops, loads and stores. In orange is the floating-point (FP) part of the back-end, typically focused on different forms of math compute. The INT and FP segments have their own separate execution port schedulers.

If it looks somewhat similar to other high-performance x86 CPU cores, you’d be correct: there seems to be a high-level way of ‘doing things’ when it comes to x86, with three levels of cache, multi-level TLBs, instruction coalescing, a set of decoders that dispatch a combined 4-5+ micro-ops per cycle, a very large micro-op queue (150+), shared retire resources, AVX support, and simultaneous multi-threading.

What’s New to AMD

First up, and the most important, was the inclusion of the micro-op cache. This allows instructions that were recently used to be called up into the micro-op queue rather than being fetched and decoded again, saving a trip through the front of the core and the caches. Micro-op caches are typically still relatively small: Intel’s version can support 1536 micro-ops with 8-way associativity. We learned (after much asking) at AMD’s Tech Day that the micro-op cache for Zen can support ‘2K’ (i.e. 2048) micro-ops with up to 8 ops per cache line. This is good for AMD, and I discussed it with Mike Clark: if AMD had said ‘512’, on one hand I would be asking why it is so small, and on the other wondering whether they had done something different to account for the performance adjustments. But ‘2K’ fits in with what we would expect.
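For a rough sense of scale, here is a minimal sketch (in Python) comparing the two structures. It assumes every cache line is filled to its stated maximum and uses the commonly cited figure of 6 micro-ops per line for Intel's design; neither assumption comes from AMD's disclosure, and real occupancy depends on the instruction mix.

    # Back-of-the-envelope comparison of micro-op cache geometry.
    # Assumptions (not from AMD): lines are fully occupied, and Intel's
    # cache holds up to 6 micro-ops per line.

    def uop_cache_lines(total_uops, uops_per_line):
        """Number of cache lines needed to hold total_uops at full occupancy."""
        return total_uops // uops_per_line

    zen_lines = uop_cache_lines(2048, 8)     # Zen: '2K' micro-ops, 8 per line
    intel_lines = uop_cache_lines(1536, 6)   # Intel: 1536 micro-ops, 6 per line (assumed)

    print(f"Zen micro-op cache:   {zen_lines} lines of up to 8 micro-ops")
    print(f"Intel micro-op cache: {intel_lines} lines of up to 6 micro-ops")

Under those assumptions both work out to the same number of lines; the difference is in how many micro-ops each line can hold.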

Secondly is the cache structure. We were given details for the L1, L2 and L3 cache sizes, along with associativity, to compare it to former microarchitectures as well as Intel’s offering.

In this case, AMD has given Zen a 64KB L1 instruction cache per core with 4-way associativity, paired with a lop-sided 32KB L1 data cache per core with 8-way associativity. The size and associativity determine how frequently a cache line is missed, and they are typically a trade-off against die area and power (larger caches require more die area, while more associativity usually costs power). The instruction cache can perform a 32-byte fetch per cycle, while the data cache allows for two 16-byte loads and one 16-byte store per cycle. AMD stated that allowing two D-cache loads per cycle is more representative of most workloads, which tend to issue more loads than stores.
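To see how size and associativity interact, here is a small sketch that derives the set count and index bits for each L1 cache, assuming the usual 64-byte cache line (a detail not stated above).

    import math

    def cache_geometry(size_bytes, ways, line_bytes=64):
        """Return (sets, index_bits) for a set-associative cache."""
        lines = size_bytes // line_bytes
        sets = lines // ways
        return sets, int(math.log2(sets))

    # Zen L1 caches, assuming 64-byte lines
    for name, size_kb, ways in [("L1-I", 64, 4), ("L1-D", 32, 8)]:
        sets, index_bits = cache_geometry(size_kb * 1024, ways)
        print(f"{name}: {sets} sets x {ways} ways, {index_bits} index bits")

Fewer sets with more ways (the L1-D arrangement) means more tag comparisons on every lookup, which is part of the power trade-off mentioned above.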

The L2 is a large 512 KB, 8-way cache per core. This is double the size of Intel’s 256 KB 4-way cache in Skylake or 256 KB 8-way cache in Broadwell. Typically, doubling the cache size reduces the miss rate by a factor of roughly 1.414 (the square root of two), lessening the need to go further out to find data, but it comes at the expense of die area. This will have a big impact on a lot of performance metrics, and AMD is promoting faster cache-to-cache transfers than previous generations. Both the L1 and L2 caches are write-back caches, improving over the L1 write-through cache in Bulldozer.
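The 1.414 figure follows from the common rule of thumb that miss rate scales roughly with the inverse square root of capacity. A quick illustration, using a purely hypothetical 4% baseline miss rate:

    import math

    def scaled_miss_rate(base_miss_rate, size_ratio):
        """Approximate new miss rate after scaling cache capacity by size_ratio,
        using the rule of thumb miss_rate ~ 1/sqrt(capacity)."""
        return base_miss_rate / math.sqrt(size_ratio)

    base = 0.04                                  # hypothetical miss rate for a 256 KB L2
    doubled = scaled_miss_rate(base, 512 / 256)  # Zen's 512 KB L2 vs a 256 KB L2
    print(f"256 KB L2: {base:.2%} misses -> 512 KB L2: ~{doubled:.2%} misses")

It is only a rule of thumb; the actual gain depends heavily on the working set of the workload.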

The L3 cache is an 8MB 16-way cache, although as of last week it was not specified over how many cores this was shared. From the data released today, we can confirm rumors that this 8 MB cache is split over a four-core module, affording 2 MB of L3 cache per core, or 16 MB of L3 cache for the whole 8-core Zen CPU. These two 8 MB caches are separate, so each acts as a last-level cache per 4-core module, with the appropriate hooks into the other L3 to determine if data is needed. As part of the talk today we also learned that the L3 is a pure victim cache for L1/L2 victims, rather than a cache for prefetch/demand data, which tempers expectations a little, but the large L2 will make up for this. We’ll discuss it as part of today’s announcement.
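To make 'pure victim cache' concrete, here is a toy model: the L3 is only ever filled by lines evicted from the L2, never directly by demand fetches or prefetches. This is a conceptual sketch only, not a description of Zen's actual replacement or coherence logic.

    from collections import OrderedDict

    class VictimL3:
        """Toy model: the L3 holds only lines evicted ('victims') from the L2."""
        def __init__(self, capacity_lines):
            self.capacity = capacity_lines
            self.lines = OrderedDict()          # address -> data, in insertion order

        def insert_victim(self, address, data):
            # Called only when the L2 evicts a line; demand fills bypass the L3.
            if address in self.lines:
                self.lines.move_to_end(address)
            self.lines[address] = data
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)  # drop the oldest victim

        def lookup(self, address):
            # On a hit the line moves back up towards the core; in this toy
            # model it simply leaves the L3.
            return self.lines.pop(address, None)

The upshot is that data only lands in the L3 after it has already passed through the L1/L2, which is why the large L2 matters so much here.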

AMD is also playing with SMT, or simultaneous multi-threading. We’ve covered this with Intel extensively, under the heading ‘HyperThreading’. At a high level both terms are essentially saying the same thing, although their implementations may differ. Adding SMT to a core design has the potential to increase throughput by allowing a second thread (or third, or fourth, or, like IBM, up to eight) on the same core to have the same access to execution ports, queues and caches. However, SMT requires hardware-level support: not all structures can be dynamically shared between threads, and those that cannot are instead algorithmically partitioned (prefetch), statically partitioned (micro-op queue), or used in alternate cycles (retire queue).
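As a rough illustration of two of those sharing schemes, here is a small sketch of a statically partitioned queue (each thread owns a fixed share of the entries) next to a resource used in alternate cycles; again a conceptual model, not AMD's implementation.

    class StaticallyPartitionedQueue:
        """Each SMT thread owns a fixed slice of the entries (e.g. the micro-op queue)."""
        def __init__(self, total_entries, num_threads=2):
            self.per_thread = total_entries // num_threads
            self.queues = [[] for _ in range(num_threads)]

        def push(self, thread_id, uop):
            if len(self.queues[thread_id]) >= self.per_thread:
                return False                      # this thread's partition is full
            self.queues[thread_id].append(uop)
            return True

    class AlternatingResource:
        """A structure (e.g. retire logic) used by each thread on alternate cycles."""
        def __init__(self, num_threads=2):
            self.num_threads = num_threads

        def owner(self, cycle):
            return cycle % self.num_threads       # thread 0 on even cycles, thread 1 on odd

With static partitioning, a stalled sibling thread cannot steal the other thread's entries; with alternate-cycle use, each thread gets the whole structure but only every other cycle.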

We also have dual schedulers, one for INT and another for FP, which is different to Intel’s joint scheduler/buffer implementation. 

574 Comments

  • mapesdhs - Sunday, March 5, 2017 - link

    If you have a Q6600, I can understand that, but the QX9650 ain't too bad. ;)
  • Marburg U - Monday, March 6, 2017 - link

    I'm on a Q9550 running at 3.8 for the past 6 years. I could still run modern games at 1050p with an R9 270X, but that's the best I can squeeze out of it. Mind that I'm still on DDR2 (my motherboard turns 10 in a few months). I really want to embrace an ultra-wide monitor.
  • mapesdhs - Monday, March 6, 2017 - link

    Moving up to 2560x1440 may indeed benefit from faster RAM, but it probably depends on the game. Likewise, CPU dependencies vary, and they can lessen at higher resolutions, though this isn't always the case. Still, good point about DDR2 there. To what kind of GPU were you thinking of upgrading? High-end like a 1080 Ti? Mid-range? Used GTX 980s are a good deal these days, and a bunch of used 980 Tis will likely hit the market shortly. I've tested 980 SLI with older platforms, actually not too bad, though I've not done tests with my QX9650 yet, having started off at the low end to get through the pain. :D (P4/3.4 on an ASUS Striker II Extreme, it's almost embarrassing)

    Ian.
  • Meditari - Monday, March 6, 2017 - link

    I'm actually using a Q9550 that's running at 3.8 as well. I have a 980ti and it can do 4k, albeit at 25-30fps in newer games like Witcher 3. Fairly certain a 1080ti would work great with a Q9550, but I feel like the time for these chips is coming to an end. Still incredible that an 8-year-old chip can still hold its own by just upgrading the GPU.
  • mapesdhs - Tuesday, March 7, 2017 - link

    Intriguing! Many people don't even try to use such a card on an older mbd; they just assume from site reviews that it's not worth doing. Can you run 3DMark11/13? What results do you get? You won't be able to cite the URLs here directly, but you can mention the submission numbers and I can compare them to my 980 Ti running on newer CPUs (the first tests I do with every GPU I obtain are with a 5GHz 2700K, at which speed it has the same multithreaded performance as a stock 6700K).

    What do you get for CB 11.5 and CB R15 single/multi?

    What mbd are you using? I ask because some later S775 mbds did use DDR3, albeit not at quite the speeds possible with Z68, etc. In other words, you could move the parts onto a better mbd as an intermediate step, though finding such a board could be difficult. Hmm, given the value often placed on such boards, it'd probably be easier to pick up a used 3930K and a board to go with it; that would be fairly low cost.

    Or of course just splash for a 1700X. 8)

    Ian.
  • Notmyusualid - Tuesday, March 7, 2017 - link

    Welcome to the 21:9 fan club brother.

    But be careful of the 1920x1080 screens; my brother's 21:9 doesn't look half as good as my 3440x1440 screen. It just needs that little bit more vertical resolution.

    My pal's 4K screen is lovely, and brings his 4GB 980 GTX to its knees. Worse aspect ratio (in my opinion), and too many pixels (for now) to draw.

    Careful of second-hand purchases too: there are many panels with backlight-bleed issues out there, and they are returns for that reason, again in my opinion.
  • AnnonymousCoward - Monday, March 6, 2017 - link

    Long story short:
    20% lower single-thread than Intel
    70% higher multi-thread due to 8 cores
    $330-$500
  • Mugur - Tuesday, March 7, 2017 - link

    Actually, on average -6.8% IPC versus Kaby Lake (at the same frequency) - I believe this came directly from AMD. Add to this a lower-grade 14nm process (GF again) that is biting AMD again and again (see last year's RX 480), motherboard issues (memory, HPET), and OS/application issues (SMT, lack of optimizations).

    All in all, I'm really impressed by what they achieved with such obstacles.
  • AnnonymousCoward - Tuesday, March 7, 2017 - link

    Just looking at CineBench at a given TDP and price, AMD is 20% lower. That's the high level answer, regardless of IPC * clock frequency. I agree it's a huge win for AMD, and for users who need multicore performance.
  • Cooe - Monday, March 1, 2021 - link

    Maybe compare to Intel's Broadwell-E chips with actually similar core counts.... -_-
