Improvements to the Cache Hierarchy

The biggest under-the-hood change for the Ryzen 2000-series processors is in cache latency. AMD claims it was able to shave one cycle from the L1 and L2 caches, several cycles from the L3, and improve DRAM performance as well. Because pure core IPC is intimately intertwined with the caches (their size, latency, and bandwidth), these numbers lead AMD to claim that the new processors offer a +3% IPC gain over the previous generation.

The numbers AMD gives are:

  • 13% Better L1 Latency (1.10ns vs 0.95ns)
  • 34% Better L2 Latency (4.6ns vs 3.0ns)
  • 16% Better L3 Latency (11.0ns vs 9.2ns)
  • 11% Better Memory Latency (74ns vs 66ns at DDR4-3200)
  • Increased DRAM Frequency Support (DDR4-2666 vs DDR4-2933)

It is interesting that in the official slide deck AMD quotes latency measured as time, although in private conversations during our briefing it was discussed in terms of clock cycles. Latency measured as time folds in frequency gains as well as other internal enhancements, whereas a pure engineer prefers to discuss clock cycles, which isolate the design itself.
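For illustration, converting between the two is simple arithmetic: cycles = nanoseconds × clock frequency in GHz. A minimal sketch is below; the 4.3 GHz figure and the 13-cycle example are assumed values for demonstration, not numbers from AMD's deck.

```c
#include <stdio.h>

int main(void) {
    /* Illustration only: converting between latency in nanoseconds and core
     * clock cycles. The 4.3 GHz clock is an assumed example value. */
    const double clock_ghz = 4.3;   /* assumed core clock = cycles per ns */
    const double l2_ns     = 3.0;   /* AMD's quoted L2 latency in time    */

    double l2_cycles = l2_ns * clock_ghz;   /* ns * (cycles/ns) = cycles  */
    printf("L2: %.1f ns at %.1f GHz = %.1f cycles\n", l2_ns, clock_ghz, l2_cycles);

    /* The same cycle count at a higher clock reports as a lower time, which
     * is why time-based figures fold in frequency gains as well as design
     * changes. 13 cycles is just an example count. */
    printf("13 cycles at 4.0 GHz = %.2f ns, at 4.3 GHz = %.2f ns\n",
           13 / 4.0, 13 / 4.3);
    return 0;
}
```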

Naturally we went ahead and tested both aspects of this equation: are the cache latencies actually lower, and do we get an IPC uplift?

Cache Me Ousside, How Bow Dah?

For our testing, we use a memory latency checker over the stride range of the cache hierarchy of a single core (a minimal sketch of how such a test works follows the list below). For this test we used the following:

  • Ryzen 7 2700X (Zen+)
  • Ryzen 5 2400G (Zen APU)
  • Ryzen 7 1800X (Zen)
  • Intel Core i7-8700K (Coffee Lake)
  • Intel Core i7-7700K (Kaby Lake)
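As a point of reference, below is a minimal sketch in C of how a pointer-chase latency test of this kind works. It is not the tool used for this review, and a production tool randomizes the chain order to defeat the hardware prefetchers, but it shows the principle: sweep a chain of dependent loads across buffer sizes that land in L1, L2, L3, and finally DRAM, and report the average time per load.

```c
/* Minimal pointer-chase latency sketch (not the tool used in this review).
 * Builds a circular chain of pointers with a fixed stride through a buffer,
 * then times how long each dependent load takes on average. Compile with
 * optimizations, e.g. cc -O2 chase.c -o chase (hypothetical file names). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t sizes[] = { 16 << 10, 256 << 10, 4 << 20, 64 << 20 }; /* bytes */
    const size_t stride  = 64;                 /* one cache line per step       */
    const size_t iters   = 10 * 1000 * 1000;   /* dependent loads per test size */

    for (size_t s = 0; s < sizeof(sizes) / sizeof(sizes[0]); s++) {
        size_t bytes = sizes[s];
        size_t n     = bytes / stride;
        void **buf   = malloc(n * stride);
        if (!buf) return 1;

        /* Link element i to element i+1 (wrapping), one stride apart, so
         * every load depends on the result of the previous one. */
        char *base = (char *)buf;
        for (size_t i = 0; i < n; i++) {
            void **slot = (void **)(base + i * stride);
            *slot = base + ((i + 1) % n) * stride;
        }

        void **p = buf;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < iters; i++)
            p = (void **)*p;                   /* the dependent load chain */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%8zu KB: %.2f ns/load (final p=%p)\n",
               bytes >> 10, ns / iters, (void *)p); /* print p so the chase
                                                       is not optimized away */
        free(buf);
    }
    return 0;
}
```

Plotting nanoseconds per load (or cycles, by multiplying by the core clock) against buffer size produces the characteristic staircase seen in the graphs on this page.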

The most obvious comparison is between the AMD processors. Here we have the Ryzen 7 1800X from the initial launch, the Ryzen 5 2400G APU that pairs Zen cores with Vega graphics, and the new Ryzen 7 2700X processor.

This graph is logarithmic in both axes.

This graph shows that at every level of the cache hierarchy, the new Ryzen 7 2700X requires fewer core clocks. The biggest difference is in L2 cache latency, but the L3 shows a sizeable gain as well. The reason the L2 gain is so large, especially between the 1800X and 2700X, is an interesting story.

When AMD first launched the Ryzen 7 1800X, the L2 latency was tested and listed at 17 clocks. This was a little high: it turns out that the engineers had originally intended for the L2 latency to be 12 clocks, but ran out of time to tune the firmware and layout before sending the design off to be manufactured, leaving 17 cycles as the best compromise the design was capable of without causing issues. With Threadripper and the Ryzen APUs, AMD tweaked the design enough to hit an L2 latency of 12 cycles, which was not specifically promoted at the time despite the benefits it provides. Now with the Ryzen 2000-series, AMD has reduced it further to 11 cycles. We were told that this was due both to the new manufacturing process and to additional tweaks made to ensure signal coherency. In our testing, we actually saw an average L2 latency of 10.4 cycles, down from 16.9 cycles on the Ryzen 7 1800X.

The L3 difference is a little unexpected: AMD stated a 16% better latency (11.0 ns down to 9.2 ns), but we saw a change from 10.7 ns to 8.1 ns, which was a drop from 39 cycles to 30 cycles.
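As a quick sanity check, the cycle and time figures we measured are consistent with each other; the snippet below simply derives the implied effective core clock (cycles per nanosecond) and the percentage improvement in both units. This is illustration only, using our own numbers from above.

```c
#include <stdio.h>

int main(void) {
    /* Our measured L3 figures: 39 cycles in 10.7 ns on the 1800X,
     * 30 cycles in 8.1 ns on the 2700X. */
    double old_cyc = 39.0, old_ns = 10.7;
    double new_cyc = 30.0, new_ns = 8.1;

    /* Implied effective core clock during the test (cycles per ns = GHz). */
    printf("implied clock: 1800X %.2f GHz, 2700X %.2f GHz\n",
           old_cyc / old_ns, new_cyc / new_ns);

    /* Improvement in both units, versus AMD's claimed 16%. */
    printf("L3 latency drop: %.0f%% in time, %.0f%% in cycles\n",
           100.0 * (1.0 - new_ns / old_ns),
           100.0 * (1.0 - new_cyc / old_cyc));
    return 0;
}
```

Counted either way, the improvement we measured comes out above AMD's quoted 16%.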

Of course, we could not go without comparing AMD to Intel, and this is where it got very interesting. The cache configurations of the Ryzen 7 2700X and Core i7-8700K differ in several ways:

CPU Cache uArch Comparison

                AMD                          Intel
                Zen (Ryzen 1000) /           Kaby Lake (Core 7000) /
                Zen+ (Ryzen 2000)            Coffee Lake (Core 8000)
  L1-I Size     64 KB/core                   32 KB/core
  L1-I Assoc    4-way                        8-way
  L1-D Size     32 KB/core                   32 KB/core
  L1-D Assoc    8-way                        8-way
  L2 Size       512 KB/core                  256 KB/core
  L2 Assoc      8-way                        4-way
  L3 Size       8 MB/CCX (2 MB/core)         2 MB/core
  L3 Assoc      16-way                       16-way
  L3 Type       Victim                       Write-back

AMD has a larger L2 cache; however, the AMD L3 cache is a non-inclusive victim cache, which means that, unlike the Intel L3 cache, it cannot be prefetched into.

This was an unexpected result, but we can see clearly that AMD has a latency advantage across the L2 and L3 caches. There is a sizable difference in DRAM latency as well; however, the core performance metrics are found here, in the lower-level caches.

We can expand this out to include the three AMD chips, as well as Intel’s Coffee Lake and Kaby Lake cores.

This graph uses cycles rather than time: Intel has a small L1 advantage; however, the larger L2 caches in AMD's Zen designs mean that Intel has to go out to its higher-latency L3 earlier. Intel makes quick work of DRAM cycle latency, however.
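A rough illustration of why the crossover happens where it does: using the per-core capacities from the table above (and treating the 2700X's L3 as the single 8 MB slice available to one CCX, which is a simplification), a single-core sweep falls out of Intel's 256 KB L2 at half the working-set size at which it falls out of AMD's 512 KB L2.

```c
#include <stdio.h>

int main(void) {
    /* Spill-over points implied by the cache-size table above. Totals are a
     * simplification: one 8 MB L3 slice per CCX on the 2700X, and 6 cores x
     * 2 MB of shared L3 on the i7-8700K. */
    struct { const char *cpu; int l1_kb, l2_kb, l3_kb; } chips[] = {
        { "Ryzen 7 2700X (one CCX)", 32, 512, 8 * 1024 },
        { "Core i7-8700K",           32, 256, 6 * 2 * 1024 },
    };
    for (int i = 0; i < 2; i++)
        printf("%-25s leaves L1 above %4d KB, L2 above %4d KB, L3 above %5d KB\n",
               chips[i].cpu, chips[i].l1_kb, chips[i].l2_kb, chips[i].l3_kb);
    return 0;
}
```

Between 256 KB and 512 KB, Intel is already paying L3 latency while AMD is still in its L2, which is the crossover visible in the cycles graph.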

Comments

  • DisoRDeR4 - Thursday, April 19, 2018 - link

    Thanks for the review, but I noticed a minor error -- your AMD Ryzen Cache Clocks graph on the 3rd page shows data for the 2700X, but in the preceding text it is referred to as the 2800X.
  • IGTrading - Thursday, April 19, 2018 - link

    AMD wins all gaming benchmarks, hands down and does this at a real 105W TDP.

    In my opinion, it is not fair to say that Intel "wins" the single threaded scenarios as long as we see clearly that the 8700 and the 8700K have the "multi-core enhancement" activated and the motherboard allows them to draw 120W on a regular basis, like your own graphs show.

    Allow AMD's Ryzen to draw 120W max and auto-overclock, and only then would we have a fair comparison.

    In the end, I guess that all those that bought the 7700K and the 8700K "for gaming" are now very pissed off.

    The former have a 100% dead/un-upgradeable platform while the latter spent a ton of money on a platform that was more expensive, consumes more power and will surely be rendered un-upgradeable soon by Intel :) while AMD already rendered it obsolete (from the "best of the best" POV) or at least the X370+8700K is now the clear second-best in 99% of the tests @ the same power consumption while losing all price/performance comparisons.

    IMHO ... allowing the 8700 & 8700K to draw 120W instead of 65W / 95W and allowing auto-overclocking while the AMD Ryzen is not tested with equivalent settings is maybe the only thing that needs to be improved with regards to the fairness of this review.

    Thank you for your work Ian!
  • Luckz - Monday, April 23, 2018 - link

    The 2700X draws so much more than its fake on-paper TDP it's not funny. With XFR2 and PB2 of course.

    PBO can add even more.
  • Ninjawithagun - Thursday, April 19, 2018 - link

    Incorrect comparison. Why does every review keep making the same mistake?? It has nothing to do with price. Comparing like CPU architectures is the only logical course of action. 6 core/12 thread vs 8 core/16 thread makes no sense. Comparing the Intel 8700K 6 core/12 thread @ $347 to the AMD 2600X 6 core/12 thread @ $229.99 makes the most sense here. Once the proper math is done, AMD destroys Intel in performance vs. cost, especially when you game at any resolution higher than 1080p. The GPU becomes the bottleneck at that point, negating any IPC benefits of the Intel CPUs. I know this how? Simple. I also own an 8700K gaming PC ;-)
  • SmCaudata - Thursday, April 19, 2018 - link

    I'd like to see more scatterplots with performance versus cost. Also, total cost (MB+CPU+cooler if needed) would be ideal. Even an overall average of 99th percentile 4k scores in gaming (one chart) would be interesting.... hmmm maybe a project for the afternoon.
  • Luckz - Monday, April 23, 2018 - link

    The English-language version of the Tomshardware review has a million plots on the last page (14). 4K is completely irrelevant for plotting though, since you're GPU-limited there.
  • Krysto - Thursday, April 19, 2018 - link

    Wrong. Performance at a given price level is absolutely a metric chip buyers care about - if not the MOST important metric.

    People usually think "Okay, I have this $300 budget for a CPU, which is the best CPU I can get for that money?" - It's irrelevant whether one has 4 cores or 8 cores or 16 cores. They will get the best CPU for the money, regardless of cores and threads.

    Comparing core vs core or thread vs thread is just a synthetic and academic comparison. People don't actually buy based on that kind of thinking. If X chip has 15% better gaming performance than the Y chip for the same amount of money, they'll get the X chip, regardless of cores, threads, caches, and whatnot.
  • Ninjawithagun - Thursday, April 19, 2018 - link

    Incorrect. Cost vs. Cost is only one of many factors to consider, but is not a main one, especially if the competition has a processor of equal quality for much less cost. Comparing an Intel 6 core/12 thread CPU to an AMD 8 cores/16 thread CPU makes absolutely no sense if you are measuring cost vs. performance. Your argument makes no sense, sorry.
  • fallaha56 - Thursday, April 19, 2018 - link

    Ok by your rationale we should compare Threadripper to 8700k too
  • Ninjawithagun - Thursday, April 19, 2018 - link

    Now you are just being stupid.
