Improvements to the Cache Hierarchy

The biggest under-the-hood change for the Ryzen 2000-series processors is in the cache latency. AMD claims it was able to shave one cycle from the L1 and L2 caches and several cycles from the L3, as well as improve DRAM latency. Because pure core IPC is intimately intertwined with the caches (their size, latency, and bandwidth), these new numbers lead AMD to claim that the new processors offer a +3% IPC gain over the previous generation.

The numbers AMD gives are:

  • 13% Better L1 Latency (1.10ns vs 0.95ns)
  • 34% Better L2 Latency (4.6ns vs 3.0ns)
  • 16% Better L3 Latency (11.0ns vs 9.2ns)
  • 11% Better Memory Latency (74ns vs 66ns at DDR4-3200)
  • Increased DRAM Frequency Support (DDR4-2666 vs DDR4-2933)

It is interesting that in the official slide deck AMD quotes latency measured as time, although in private conversations during our briefing it was discussed in terms of clock cycles. Latency measured as time can fold in other internal enhancements, such as clock speed increases; a pure engineer, however, prefers to discuss clock cycles.

Naturally we went ahead to test the two aspects of this equation: are the cache metrics actually lower, and do we get an IPC uplift?

Cache Me Ousside, How Bow Dah?

For our testing, we used a memory latency checker, sweeping access strides across the cache hierarchy of a single core. The test covered the following processors:

  • Ryzen 7 2700X (Zen+)
  • Ryzen 5 2400G (Zen APU)
  • Ryzen 7 1800X (Zen)
  • Intel Core i7-8700K (Coffee Lake)
  • Intel Core i7-7700K (Kaby Lake)

The most obvious comparison is between the AMD processors. Here we have the Ryzen 7 1800X from the initial launch, the Ryzen 5 2400G APU that pairs Zen cores with Vega graphics, and the new Ryzen 7 2700X processor.

This graph is logarithmic in both axes.

This graph shows that in every phase of the cache design, the newest Ryzen 7 2700X requires fewer core clocks. The biggest difference is on the L2 cache latency, but L3 has a sizeable gain as well. The reason that the L2 gain is so large, especially between the 1800X and 2700X, is an interesting story.

When AMD first launched the Ryzen 7 1800X, the L2 latency was tested and listed at 17 clocks. This was a little high – it turns out that the engineers had initially intended for the L2 latency to be 12 clocks, but ran out of time to tune the firmware and layout before sending the design off to be manufactured, leaving 17 cycles as the best compromise the design was capable of without causing issues. With Threadripper and the Ryzen APUs, AMD tweaked the design enough to hit an L2 latency of 12 cycles, which was not specifically promoted at the time despite the benefits it provides. Now with the Ryzen 2000-series, AMD has reduced it further to 11 cycles. We were told that this was due both to the new manufacturing process and to additional tweaks made to ensure signal coherency. In our testing, we actually saw an average L2 latency of 10.4 cycles, down from 16.9 cycles on the Ryzen 7 1800X.

The L3 difference is a little unexpected: AMD stated a 16% better latency: 11.0 ns to 9.2 ns. We saw a change from 10.7 ns to 8.1 ns, which was a drop from 39 cycles to 30 cycles.

Of course, we could not go without comparing AMD to Intel, and this is where it gets very interesting: the cache configurations of the Ryzen 7 2700X and Core i7-8700K are quite different:

CPU Cache uArch Comparison

               AMD                      Intel
               Zen (Ryzen 1000)         Kaby Lake (Core 7000)
               Zen+ (Ryzen 2000)        Coffee Lake (Core 8000)
  L1-I Size    64 KB/core               32 KB/core
  L1-I Assoc   4-way                    8-way
  L1-D Size    32 KB/core               32 KB/core
  L1-D Assoc   8-way                    8-way
  L2 Size      512 KB/core              256 KB/core
  L2 Assoc     8-way                    4-way
  L3 Size      8 MB/CCX (2 MB/core)     2 MB/core
  L3 Assoc     16-way                   16-way
  L3 Type      Victim                   Write-back

AMD has a larger L2 cache; however, AMD's L3 is a non-inclusive victim cache, which means that, unlike Intel's L3, it cannot be pre-fetched into.

This was an unexpected result, but we can see clearly that AMD has a latency advantage across the L2 and L3 caches. There is also a sizable difference at DRAM, but the core performance metrics are decided here in the lower-level caches.

We can expand this out to include the three AMD chips, as well as Intel’s Coffee Lake and Kaby Lake cores.

This graph uses cycles rather than time: Intel has a small L1 advantage; however, the larger L2 caches in AMD's Zen designs mean that Intel has to hit its higher-latency L3 earlier. Intel makes quick work of DRAM latency, though.

545 Comments

  • YukaKun - Saturday, April 21, 2018 - link

    Oh, I'm actually curious about your experience with all the systems.

I'm still running my i7 2700K at ~4.6GHz. I do agree it doesn't feel like a ~2012 CPU and it still does everything pretty damn well, but I'd like to know if you have noticed a difference between the new AMD and your Sandy Bridge. Same for when you assemble the 2700X.

    I'm trying to find an excuse to get the 2700X, but I just can't find one, haha.

    Cheers!
  • Luckz - Monday, April 23, 2018 - link

The once-in-a-lifetime chance to largely keep your CPU name (2700K => 2700X) should be all the excuse you need.
  • YukaKun - Monday, April 23, 2018 - link

    That is so incredibly superficial and dumb... I love it!

    Cheers!
  • mapesdhs - Monday, April 23, 2018 - link

    YukaKun, your 2700K is only at 4.6? Deary me, should be 5.0 and proud, doable with just a basic TRUE and one fan. 8) For reference btw, a 2700K at 5GHz gives the same threaded performance as a 6700K at stock.

    And I made a typo in my earlier reply, mentioned the wrong XEON model, should have been the 2680 V2.
  • YukaKun - Tuesday, April 24, 2018 - link

For daily usage and stability, I found that 4.6GHz worked best in terms of noise/heat/power ratios.

    I also did not disable any power saving features, so it does not work unnecessarily when not under heavy load.

    I'm using AS5 with a TT Frio (the original one) on top, so it's whisper quiet at 4.6GHz and I like it like that. When I made it run at 5GHz, I found I had to have the fans near 100%, so it wasn't something I'd like, TBH.

    But, all of this to say: yes, I've done it, but settled on 4.6GHz.

    Cheers!
  • mapesdhs - Friday, March 29, 2019 - link

    (an old thread, but in case someone comes across it...)

I use dynamic vcore so I still get the clock/voltage drops when idle. I'm using a Corsair H80 with 2x NDS 120mm PWM, so it's also quiet even at full load; no need for such OTT cooling to handle the load heat, but using an H80 means one can have low noise as well. An ironic advantage of the lower thermal density of the older process sizes. Modern CPUs with the same TDP dump it out in a smaller area, making it more difficult to keep cool.

Having said that, I've been recently pondering an upgrade to have much better general idle power draw and a decent bump for threaded performance. Considering a Ryzen 5 2600 or 7 2700, but might wait for Zen2, not sure yet.
  • moozooh - Sunday, April 22, 2018 - link

No, it might have to do with the fact that the 8350K has 1.5x the cache size and beastly per-thread performance that is also sustained at all times—so it doesn't have to switch from a lower-powered state (which the older CPUs were slower at), nor does it taper off as other cores get loaded, which is most noticeable on the things Samus mentioned, i.e. "boot times, app launches and gaming". Boot times and app launches are both essentially single-thread tasks with no prior context, and gaming is where a CPU upgrade like that will improve worst-case scenarios by at least an order of magnitude, which is really what's most noticeable.

For instance, if your monitor is 60Hz and your average framerate is 70, you won't notice the difference between 60 and 70—you will only notice the time spent under 60. Even a mildly overclocked 8350K is still one of the best gaming CPUs for this reason, easily rivaling or outperforming previous-gen Ryzens in most cases and often being on par with the much more expensive 8700K where thread count isn't as important as per-thread performance for responsiveness and eliminating stutters. When pushed to or above 5 GHz, I'm reasonably certain it will still give many of the newer, more expensive chips a run for their money.
  • spdragoo - Friday, April 20, 2018 - link

    Memory prices? Memory prices are still pretty much the way they've always been:
    -- faster memory costs (a little) more than slower memory
    -- larger memory sticks/kits cost (a little) more than smaller sticks/kits
    -- last-gen RAM (DDR3) is (very slightly) cheaper than current-gen RAM (DDR4)

    I suppose you can wait 5 billion years for the Sun to fade out, at which point all RAM (or whatever has replaced it by then) will have the same cost ($0...since no one will be around to buy or sell it)...but I don't think you need to worry about that.
  • Ferrari_Freak - Friday, April 20, 2018 - link

You didn't write anything about price there... All you've said is that relative pricing for things is the same as it has always been, and that's no surprise.

    The $$$ cost of any given stick is more than it was a year or two ago. A 2x8GB DDR4-3200 G.Skill Ripjaws V kit is $180 on Newegg today. It was $80 two years ago. Clearly not the way they've always been...
  • James5mith - Friday, April 20, 2018 - link

    2x16GB Crucial DDR4-2400 SO-DIMM kit.

    https://www.amazon.com/gp/product/B019FRCV9G/

    November 29th 2016 (when I purchased): $172

    Current Amazon price for exact same kit: $329
