Improvements to the Cache Hierarchy

The biggest under-the-hood change for the Ryzen 2000-series processors is in cache latency. AMD claims that it was able to knock one cycle off the L1 and L2 caches, shave several cycles from the L3, and improve DRAM performance. Because pure core IPC is intimately intertwined with the caches (their size, latency, and bandwidth), these new numbers lead AMD to claim that the new processors offer a +3% IPC gain over the previous generation.

The numbers AMD gives are:

  • 13% Better L1 Latency (1.10ns vs 0.95ns)
  • 34% Better L2 Latency (4.6ns vs 3.0ns)
  • 16% Better L3 Latency (11.0ns vs 9.2ns)
  • 11% Better Memory Latency (74ns vs 66ns at DDR4-3200)
  • Increased DRAM Frequency Support (DDR4-2666 vs DDR4-2933)
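
For reference, these percentages are the reduction relative to the old figure: the L1 number works out as 1 − (0.95 / 1.10) ≈ 13.6%, and the L2 number as 1 − (3.0 / 4.6) ≈ 34.8%, which AMD rounds down to 13% and 34%.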

It is interesting that in the official slide deck AMD quotes latency measured as time, although in private conversations at our briefing it was discussed in terms of clock cycles. Ultimately latency measured as time can take advantage of other internal enhancements, such as a higher clock frequency; a pure engineer, however, prefers to discuss clock cycles.
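
To illustrate why the distinction matters, here is a minimal sketch of the conversion; the clock frequencies below are hypothetical round numbers, not AMD's disclosed test clocks:

```c
#include <stdio.h>

/* Latency in nanoseconds = cycles / frequency in GHz */
static double cycles_to_ns(double cycles, double freq_ghz) {
    return cycles / freq_ghz;
}

int main(void) {
    /* The same 11-cycle L2 looks "faster" in time terms purely from a
     * clock bump, which is why time-based figures can flatter a part. */
    printf("11 cycles @ 4.0 GHz = %.2f ns\n", cycles_to_ns(11.0, 4.0)); /* 2.75 ns */
    printf("11 cycles @ 4.3 GHz = %.2f ns\n", cycles_to_ns(11.0, 4.3)); /* 2.56 ns */
    return 0;
}
```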

Naturally we went ahead and tested the two aspects of this equation: are the cache metrics actually lower, and do we get an IPC uplift?

Cache Me Ousside, How Bow Dah?

For our testing, we use a memory latency checker over the stride range of the cache hierarchy of a single core (a minimal sketch of this kind of test follows the list below). For this test we used the following:

  • Ryzen 7 2700X (Zen+)
  • Ryzen 5 2400G (Zen APU)
  • Ryzen 7 1800X (Zen)
  • Intel Core i7-8700K (Coffee Lake)
  • Intel Core i7-7700K (Kaby Lake)
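
Our tool is a full memory latency checker; as a flavor of how such a test works, here is a minimal pointer-chase sketch. The buffer sizes, fixed 64-byte stride, and iteration count are our own illustrative choices, and unlike real tools it does not randomize the chain to defeat hardware prefetchers:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk a dependent chain of pointers through a buffer: each load must
 * complete before the next can issue, so the time per hop approximates
 * the latency of whichever cache level the buffer fits into. */
static double chase_ns_per_load(size_t bytes, size_t iters) {
    size_t n = bytes / sizeof(void *);
    size_t stride = 64 / sizeof(void *);            /* one 64 B cache line */
    void **buf = malloc(n * sizeof(void *));

    /* Strided ring over the buffer, touching one cache line per hop. */
    for (size_t i = 0; i < n; i++)
        buf[i] = (void *)&buf[(i + stride) % n];

    void **p = buf;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;                            /* dependent load */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    if (!p) puts("unreachable");                    /* defeat dead-code elimination */
    free(buf);
    return ns / (double)iters;
}

int main(void) {
    /* Sweep from well inside the L1 out past the L3 into DRAM. */
    for (size_t kb = 16; kb <= 64 * 1024; kb *= 2)
        printf("%6zu KB: %.2f ns/load\n", kb,
               chase_ns_per_load(kb * 1024, 20000000));
    return 0;
}
```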

The most obvious comparison is between the AMD processors. Here we have the Ryzen 7 1800X from the initial launch, the Ryzen 5 2400G APU that pairs Zen cores with Vega graphics, and the new Ryzen 7 2700X processor.

[Graph: single-core cache latency in core clocks against test size for the three AMD processors; both axes are logarithmic.]

This graph shows that in every phase of the cache design, the newest Ryzen 7 2700X requires fewer core clocks. The biggest difference is on the L2 cache latency, but L3 has a sizeable gain as well. The reason that the L2 gain is so large, especially between the 1800X and 2700X, is an interesting story.

When AMD first launched the Ryzen 7 1800X, the L2 latency was tested and listed at 17 clocks. This was a little high – it turns out that the engineers had initially intended for the L2 latency to be 12 clocks, but ran out of time to tune the firmware and layout before sending the design off to be manufactured, leaving 17 cycles as the best compromise the design was capable of without causing issues. With Threadripper and the Ryzen APUs, AMD tweaked the design enough to hit an L2 latency of 12 cycles, which was not specifically promoted at the time despite the benefits it provides. Now with the Ryzen 2000-series, AMD has reduced it further to 11 cycles. We were told that this was due both to the new manufacturing process and to additional tweaks made to ensure signal coherency. In our testing, we actually saw an average L2 latency of 10.4 cycles, down from 16.9 cycles on the Ryzen 7 1800X.

The L3 difference is a little unexpected: AMD stated a 16% better latency, 11.0 ns down to 9.2 ns. We saw a change from 10.7 ns to 8.1 ns, which was a drop from 39 cycles to 30 cycles.
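
As a sanity check, the cycle and time figures are self-consistent: dividing cycles by nanoseconds gives the implied test clocks, 39 / 10.7 ≈ 3.6 GHz for the Ryzen 7 1800X and 30 / 8.1 ≈ 3.7 GHz for the Ryzen 7 2700X, both plausible sustained single-core frequencies for these parts.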

Of course, we could not go without comparing AMD to Intel, and this is where it got very interesting. The cache configurations of the Ryzen 7 2700X and Core i7-8700K are different:

CPU Cache uArch Comparison

              AMD                        Intel
              Zen (Ryzen 1000) /         Kaby Lake (Core 7000) /
              Zen+ (Ryzen 2000)          Coffee Lake (Core 8000)
  L1-I Size   64 KB/core                 32 KB/core
  L1-I Assoc  4-way                      8-way
  L1-D Size   32 KB/core                 32 KB/core
  L1-D Assoc  8-way                      8-way
  L2 Size     512 KB/core                256 KB/core
  L2 Assoc    8-way                      4-way
  L3 Size     8 MB/CCX (2 MB/core)       2 MB/core
  L3 Assoc    16-way                     16-way
  L3 Type     Victim                     Write-back

AMD has a larger L2 cache; however, the AMD L3 cache is a non-inclusive victim cache, which, unlike the Intel L3 cache, cannot be prefetched into.
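
To illustrate the distinction, here is a deliberately tiny sketch with a one-slot "cache" per level. It is purely conceptual, showing only where a newly fetched line lands under each policy, not how either design is actually implemented:

```c
#include <stdbool.h>
#include <stdio.h>

/* One-slot "cache" per level, just to show where a fetched line lands. */
typedef struct { int line; bool valid; } Cache;

/* Write-back L3 in the Intel style: the fetched line is installed in
 * the L3 on its way to the L2, so a prefetcher can fill the L3 ahead
 * of use. */
static void fill_writeback(Cache *l2, Cache *l3, int line) {
    *l3 = (Cache){ line, true };   /* L3 gets a copy immediately */
    *l2 = (Cache){ line, true };
}

/* Non-inclusive victim L3 in the Zen style: the fetched line bypasses
 * the L3; the L3 only ever receives lines evicted from the L2, which
 * is why it cannot be prefetched into. */
static void fill_victim(Cache *l2, Cache *l3, int line) {
    if (l2->valid)
        *l3 = *l2;                 /* the evicted L2 line is the "victim" */
    *l2 = (Cache){ line, true };
}

int main(void) {
    Cache l2 = {0, false}, l3 = {0, false};
    fill_victim(&l2, &l3, 100);    /* first fill: L3 stays empty */
    printf("victim L3 valid after first fill: %d\n", l3.valid);     /* 0 */
    fill_victim(&l2, &l3, 200);    /* line 100 is evicted into L3 */
    printf("victim L3 now holds line %d\n", l3.line);               /* 100 */

    l2.valid = l3.valid = false;
    fill_writeback(&l2, &l3, 100); /* first fill: L3 already has a copy */
    printf("write-back L3 valid after first fill: %d\n", l3.valid); /* 1 */
    return 0;
}
```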

This was an unexpected result, but we can clearly see that AMD has a latency advantage across the L2 and L3 caches. There is a sizable difference in DRAM latency; however, the metrics that matter for core performance are here in the lower caches.

We can expand this out to include the three AMD chips, as well as Intel’s Coffee Lake and Kaby Lake cores.

This graph uses cycles rather than time: Intel has a small L1 advantage, but the larger L2 caches in AMD's Zen designs mean that Intel has to hit its higher-latency L3 earlier. Intel makes quick work of DRAM cycle latency, however.

545 Comments

  • bryanlarsen - Thursday, April 19, 2018 - link

    Just because transistors can be 15% smaller doesn't mean that they have to be. Every IC design includes transistors of many different sizes. GF is saying that the minimum transistor size is 15% smaller than the previous minimum transistor size. And it seems that AMD chose not to use them, electing instead to use a larger, higher-performance transistor that happens to be the same size as their previous transistor.
  • bryanlarsen - Thursday, April 19, 2018 - link

    And you confirm that in the next paragraph. "AMD confirmed that they are using 9T transistor libraries, also the same as the previous generation, although GlobalFoundries offers a 7.5T design as well." So please delete your very misleading transistor diagram and accompanying text.
  • danjw - Friday, April 20, 2018 - link

    I think you are misreading that part of the article. AMD shrunk the size of the processor blocks, giving them more "dark silicon" between the blocks. This allowed better thermal isolation between blocks, and thus higher clocks.
  • The Hardcard - Thursday, April 19, 2018 - link

    “Cache Me Ousside, How Bow Dah?“

    Very low hanging fruit, yet still so delicious.
  • msroadkill612 - Thursday, April 19, 2018 - link

    "Intel is expected to have a frequency and IPC advantage
    AMD’s counter is to come close on frequency and offer more cores at the same price

    It is easy for AMD to wave the multi-threaded crown with its internal testing, however the single thread performance is still a little behind."

    If so, why is it given such emphasis - it's increasingly a corner-case benefit as game devs begin to use the new mainstream multi-core platforms. Until recently, the norm was probably 2 cores, so that's what they coded for - THEN.

    This minor advantage, compared to Intel getting absolutely smashed on increasingly multi-threaded apps at any price point, is rarely mentioned in proximity, where it deserves to be in a balanced analysis.
  • Ratman6161 - Thursday, April 19, 2018 - link

    "its increasingly a corner xase benefit as game devs begin to use the new mainstream multi core platforms" As I often do, I'd like to remind people that not all readers of this article are gamers or give a darn about games. I am one of those i.e. game performance is meaningless to me.
  • 0ldman79 - Thursday, April 19, 2018 - link

    Agreed.

    I am a gamer, but the gaming benchmarks are nearly irrelevant at this point.

    Almost every CPU (ignoring Atom) can easily feed a modern video card and keep the framerate above 60fps. I'm running an FX 6300 and I still run everything at 1080p with a GTX 970 and hardly ever see a framerate drop.

    Gaming benches are somewhat less important than in days gone by. Everything on the market hits the minimum requirement and then some. It's primarily fuel for the fanboys: "OMG!!! AMD sucks!!! Intel is faster at gaming!!!"

    Well, considering Intel is running 200fps and AMD is hitting 175fps I'm *thinking* they're both playable.
  • Akkuma - Thursday, April 19, 2018 - link

    Gaming + streaming benchmarks, as done by GamersNexus, are exactly the kind of relevant and important benchmarks more sites need to be doing. Those numbers you don't care about are much more important when you start trying to do streaming.

    Your 60fps? That isn't even what most gamers care about now that high refresh rate monitors do 144Hz+. Add in streaming, where you take a decent FPS hit, and that difference between 200 and 175 fps is suddenly the difference between maintaining 144Hz and not.
  • Vesperan - Thursday, April 19, 2018 - link

    Yea but.. of all the people interested in gaming, those with high refresh rate monitors and/or streaming online is what - 10% of the market? Tops?

    Sure, the GamersNexus reviews have relevance.. to that distinct minority of people out there. Condemning/praising CPU architectures for gaming in general due to these corner cases is nonsensical.

    Like Oldman79 said, damn near any of these CPUs is fine for gaming - unless you happen to be one of the corner cases.
  • Akkuma - Friday, April 20, 2018 - link

    You're pulling a number out of thin air and building an entire argument around it. 72% of Steam users have 1080p monitors. What percentage of those are high refresh rate is unknown, but 120Hz monitors have existed for at least 5 years now, maybe even longer. At this stage, arguing around 60fps is like arguing about the sound quality of cassettes today; we are long past it.
