Cache and Infinity Fabric

If it hasn’t been hammered in already, the big change in the cache hierarchy is the L1 instruction cache, which has been reduced from 64 KB to 32 KB while its associativity has increased from 4-way to 8-way. The space freed up enabled AMD to double the micro-op cache from 2K entries to 4K entries, a trade-off AMD felt gives a better performance balance with how modern workloads are evolving.
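
To make the geometry concrete: with a fixed line size (64 bytes is assumed here, as is standard for these cores), halving the capacity while doubling the associativity cuts the number of sets by a factor of four. A minimal sketch:

```c
#include <stdio.h>

/* Sets = capacity / (associativity x line size). The 64-byte line size
   is an assumption; the capacities and ways are from the article. */
static unsigned cache_sets(unsigned size_bytes, unsigned ways, unsigned line_bytes)
{
    return size_bytes / (ways * line_bytes);
}

int main(void)
{
    printf("Zen   L1-I (64 KB, 4-way): %3u sets\n", cache_sets(64 * 1024, 4, 64));
    printf("Zen 2 L1-I (32 KB, 8-way): %3u sets\n", cache_sets(32 * 1024, 8, 64));
    return 0;
}
```

With 64 sets of 64-byte lines, the index and offset bits now fit within a 4 KB page, which is one plausible side benefit for a virtually-indexed L1.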

The L1-D cache is still 32 KB and 8-way, while the L2 cache is still 512 KB and 8-way. The L3 cache, which is non-inclusive (unlike the inclusive L2), has now doubled in size to 16 MB per core complex, up from 8 MB. AMD manages its L3 as a 16 MB block shared within each CCX, rather than allowing any core to access any slice of L3 on the chip.
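
A minimal sketch of what that sharing arrangement implies, assuming the Zen 2 grouping of four cores per CCX (the function names are illustrative, not AMD's):

```c
#include <stdbool.h>
#include <stdio.h>

#define CORES_PER_CCX 4 /* Zen 2 groups four cores per core complex */

/* Which CCX, and hence which 16 MB L3 slice, a core belongs to. */
static int ccx_of_core(int core) { return core / CORES_PER_CCX; }

/* Two cores share an L3 slice only when they sit in the same CCX;
   otherwise shared data has to travel over the Infinity Fabric. */
static bool shares_l3(int a, int b) { return ccx_of_core(a) == ccx_of_core(b); }

int main(void)
{
    printf("cores 1 and 3 share L3? %d\n", shares_l3(1, 3)); /* 1: same CCX */
    printf("cores 3 and 4 share L3? %d\n", shares_l3(3, 4)); /* 0: crosses CCXs */
    return 0;
}
```

This is why thread placement matters on these parts: a workload split across CCXs pays fabric latency for shared data that would otherwise stay inside one 16 MB slice.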

Because of the increase in the size of the L3, latency has risen slightly. L1 is still 4 cycles and L2 is still 12 cycles, but L3 has increased from ~35 cycles to ~40 cycles; larger caches characteristically come with slightly higher latency, and it is an interesting trade-off to measure. AMD has also stated that it has increased the size of the queues handling L1 and L2 misses, although it hasn’t elaborated on how big they now are.
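
Latencies like these are typically recovered with a pointer-chasing microbenchmark: a dependent chain of loads over a buffer sized to sit inside a given cache level, so each load must wait for the one before it. A minimal sketch follows; the buffer sizes and iteration count are arbitrary choices for illustration, not AMD's methodology:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

volatile size_t sink; /* keeps the chase from being optimized away */

/* Walk a random cyclic permutation of indices: every load depends on the
   previous one, so the time per step approximates load-to-load latency
   at whichever cache level the buffer fits into. */
static double ns_per_load(size_t bytes, size_t iters)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(*next));
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {   /* Sattolo's shuffle: one big */
        size_t j = (size_t)rand() % i;     /* cycle, no short sub-loops  */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++) p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = p;
    free(next);
    return ((double)(t1.tv_sec - t0.tv_sec) * 1e9
            + (double)(t1.tv_nsec - t0.tv_nsec)) / (double)iters;
}

int main(void)
{
    /* Sizes chosen to land inside L1, L2, and L3 respectively. */
    size_t sizes[] = { 16 << 10, 256 << 10, 8 << 20 };
    for (int i = 0; i < 3; i++)
        printf("%8zu bytes: %.2f ns/load\n", sizes[i], ns_per_load(sizes[i], 10000000));
    return 0;
}
```

Dividing the measured nanoseconds by the core's cycle time converts these into the cycle counts quoted above.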

Infinity Fabric

With the move to Zen 2, we also move to the second generation of Infinity Fabric. One of the major updates with IF2 is support for PCIe 4.0, and with it an increase in the bus width from 256-bit to 512-bit.
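
As a rough sketch of what the width change means per fabric clock (assuming, as a simplification, one full-width transfer per FCLK cycle; real links have separate read and write paths and protocol overhead not modeled here):

```c
#include <stdio.h>

/* Peak bytes per second for a link of the given width, one transfer per
   fabric clock. The example clocks correspond to DDR4-2933 (Zen+) and
   DDR4-3600 (Zen 2) memory, since the fabric runs at the memory clock. */
static double link_gbs(unsigned width_bits, double fclk_mhz)
{
    return (width_bits / 8.0) * fclk_mhz * 1e6 / 1e9;
}

int main(void)
{
    printf("IF1, 256-bit @ 1467 MHz: %5.1f GB/s\n", link_gbs(256, 1467.0));
    printf("IF2, 512-bit @ 1800 MHz: %5.1f GB/s\n", link_gbs(512, 1800.0));
    return 0;
}
```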

The overall efficiency of IF2 has improved by 27% according to AMD, leading to lower power per bit transferred. As EPYC moves to more IF links, this will become very important as data is transferred between the chiplets and the IO die.

One of the features of IF2 is that its clock has been decoupled from the main DRAM clock. In Zen and Zen+, the IF frequency was coupled to the DRAM frequency, which led to some interesting scenarios where the memory could go a lot faster, but limitations in the IF meant that both were held back by the lock-step nature of the shared clock. For Zen 2, AMD has introduced ratios for IF2, enabling a normal 1:1 ratio or a 2:1 ratio that halves the IF2 clock.

This ratio should automatically come into play at around DDR4-3600 or DDR4-3800, but it does mean that the IF2 clock drops to half speed, which has a knock-on effect on bandwidth. It should be noted that even if the DRAM frequency is high, the slower IF frequency will likely limit the raw performance gain from that faster memory. AMD recommends keeping the ratio at 1:1 up to around DDR4-3600, and instead optimizing sub-timings at that speed.
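
The arithmetic behind that recommendation is straightforward to sketch: MEMCLK is half the DDR4 transfer rate, and FCLK either matches it (1:1) or runs at half of it (2:1). The DDR4-3600 switch point below follows the guidance above; the exact auto-switch threshold is an assumption, as it varies by firmware:

```c
#include <stdio.h>

/* Fabric clock implied by a DDR4 speed grade under the two IF2 ratios.
   MEMCLK = transfer rate / 2 (double data rate); FCLK = MEMCLK at 1:1,
   or MEMCLK / 2 at 2:1. */
static double fclk_mhz(unsigned ddr_rate, int half_ratio)
{
    double memclk = ddr_rate / 2.0;
    return half_ratio ? memclk / 2.0 : memclk;
}

int main(void)
{
    unsigned rates[] = { 3200, 3600, 4000, 4400 };
    for (int i = 0; i < 4; i++) {
        int half = rates[i] > 3600; /* assumed auto-switch point */
        printf("DDR4-%u: MEMCLK %4.0f MHz, FCLK %4.0f MHz (%s)\n",
               rates[i], rates[i] / 2.0, fclk_mhz(rates[i], half),
               half ? "2:1" : "1:1");
    }
    return 0;
}
```

The jump from DDR4-3600 to DDR4-4000 in this table shows the problem: the memory gets 11% faster while the fabric clock drops by 44%, which is why staying at 1:1 and tightening sub-timings is usually the better play.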

Comments

  • nandnandnand - Tuesday, June 11, 2019 - link

    Shouldn't we be looking at highest transistors per square millimeter plotted over time? The Wikipedia article helpfully includes die area for most of the processors, but the graph near the top just plots number of transistors without regard to die size. If Intel's Xe hype is accurate, they will be putting out massive GPUs (1600 mm^2?) made of multiple connected dies, and AMD already does something similar with CPU chiplets.

    I know that the original Moore's law did not take into account die size, multi chip modules, etc. but to ignore that seems cheaty now. Regardless, performance is what really matters. Hopefully we see tight integration of CPU and L4 DRAM cache boosting performance within the next 2-3 years.
  • Wilco1 - Wednesday, June 12, 2019 - link

    Moore's law is about transistors on a single integrated chip. But yes density matters too, especially actual density achieved in real chips (rather than marketing slides). TSMC 7nm does 80-90 million transistors/mm^2 for A12X, Kirin 980, Snapdragon 8cx. Intel is still stuck at ~16 million transistors/mm^2.
  • FunBunny2 - Wednesday, June 12, 2019 - link

    enough about Moore, unless you can get it right. Moore said nothing about transistors. He said that compute capability was doubling about every second year. This is what he actually wrote:

    "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. "

    [the wiki]

    the main reason the Law has slowed is just physics: Xnm is little more (teehee) than propaganda for some years, at least since the end of agreed dimensions of what a 'transistor' was. couple that with the coalescing of the maths around 'the best' compute algorithms; complexity has run into the limiting factor of the maths. you can see it in these comments: gimme more ST, I don't care about cores. and so on. Mother Nature's Laws are fixed and immutable; we just don't know all of them at any given moment, but we're getting closer. in the old days, we had the saying 'doing the easy 80%'. we're well into the tough 20%.
  • extide - Monday, June 17, 2019 - link

    "The complexity for minimum component costs..."

    He was directly referring to transistor count with the word "complexity" in your quote -- so yes he was literally talking about transistor count.
  • crazy_crank - Tuesday, June 11, 2019 - link

    Actually the number of cores doesn't matter AFAIK, as Moore's Law originally was only about transistor density, so all you need to compare is transistors per square millimeter. Looked at like this, it actually doesn't even look that bad.
  • chada - Wednesday, June 12, 2019 - link

    Moore's law specifically talks about density doubling. If they can fit 6 cores into the same footprint, you can absolutely consider 6 cores for a density comparison. That being said, we have been off this pace for a while.
  • III-V - Wednesday, June 12, 2019 - link

    >Moore's law specifically talks about density doubling.

    No it doesn't.

    Jesus Christ, why is Moore's Law so fucking hard for people to understand?
  • LordSojar - Thursday, June 13, 2019 - link

    Why it ever became known as a "law" is totally beyond me. More like Moore's Theory (and that's pushing it, as he made a LOT of suppositions about things he couldn't possibly predict, not being an expert in those areas. ie material sciences, quantum mechanics, etc)
  • sing_electric - Friday, June 14, 2019 - link

    This. He wasn't describing something fundamental about the way nature works - he was looking at technological advancements in one field over a short time frame. I guess "Moore's Observation" just didn't sound as good.

    And the reason why no one seems to get it right is that Moore wrote and said several different things about it over the years - he'd OBSERVED that the number of transistors you could get on an IC was increasing at a certain rate, and from there, that this led to performance increases, so both the density AND performance arguments have some amount of accuracy behind them.

    And almost no one points out that it's ultimately just a function of geometry: as process decreases linearly (say, 10 units to 7 units), you get a geometric increase in the # of transistors because you get to multiply that by two dimensions. Other benefits - like decreased power use per transistor, etc. - ultimately flow largely from that as well (or they did, before we had to start using more and more exotic materials to get shrinks to work).
  • FunBunny2 - Thursday, June 13, 2019 - link

    "Jesus Christ, why is Moore's Law so fucking hard for people to understand?"

    because, in this era of truthiness, simplistic is more fun than reality. Moore made his observation in 1965, at which time IC fabrication had not even reached LSI levels. IOW, the era when node size was dropping like a stone and frequency was rising like a Saturn rocket; performance increases with each new iteration of a device were obvious to even the most casual observer. just like prices in the housing market before the Great Recession, the simpleminded still think that both vectors will continue forevvvvaaahhh.
