Memory Subsystem: Latency

AMD chose to share a single core design among mobile, desktop, and server for scalability and economic reasons. The Core Complex (CCX) is still used in Rome, just as it was in the previous generation.

What has changed is that each CCX now communicates with a central IO hub, instead of four dies communicating in a 4-node NUMA layout. (That option is still available via the NPS4 switch, which keeps each CCD local to its quadrant of the sIOD and to that quadrant's memory controllers, avoiding the hops between sIOD quadrants that incur a slight latency penalty.) Since the performance of modern CPUs depends heavily on the cache subsystem, we were more than curious to see what kind of latency a server thread experiences as it accesses more and more pages across the cache hierarchy.
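
As an aside for readers who want to experiment with these modes on Linux: below is a minimal sketch, assuming libnuma is installed (link with -lnuma), of keeping a worker and its memory within a single NPS4 quadrant. The node number and buffer size are illustrative, not anything from our test setup.

```c
// Minimal sketch: keep a worker and its buffer on one NUMA node (one sIOD
// quadrant under NPS4). Assumes Linux with libnuma; compile with -lnuma.
// Node 0 is illustrative.
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not available on this system\n");
        return 1;
    }
    // Under NPS1 a socket reports one node; under NPS4, four nodes.
    printf("configured NUMA nodes: %d\n", numa_num_configured_nodes());

    const size_t len = 256 * 1024 * 1024; // illustrative 256MB buffer
    int node = 0;                         // illustrative: first quadrant

    // Run on the chosen node and allocate from its local memory
    // controllers, avoiding hops across sIOD quadrants.
    numa_run_on_node(node);
    char *buf = numa_alloc_onnode(len, node);
    if (!buf) { perror("numa_alloc_onnode"); return 1; }

    memset(buf, 1, len); // touch the pages so they are actually placed

    numa_free(buf, len);
    return 0;
}
```

Under NPS1 the whole socket is exposed as a single node, and the same code simply lands everything on node 0.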

We're using our own in-house latency test. In particular, what we're interested in publishing is the estimated structural latency of the processors, meaning we try to account for TLB misses and exclude them from these numbers; the exception is the DRAM latencies, where measurements get a bit more complex between platforms and we revert to full random figures. (A simplified sketch of the pointer-chasing technique this kind of tool is built on follows the results table.)

| Mem Hierarchy | AMD EPYC 7742 (DDR4-3200, ns @ 3.4GHz) | AMD EPYC 7601 (DDR4-2400, ns @ 3.2GHz) | Intel Xeon 8280 (DDR4-2666, ns @ 2.7GHz) |
|---|---|---|---|
| L1 Cache | 32KB: 4 cycles, 1.18ns | 32KB: 4 cycles, 1.25ns | 32KB: 4 cycles, 1.48ns |
| L2 Cache | 512KB: 13 cycles, 3.86ns | 512KB: 12 cycles, 3.76ns | 1024KB: 14 cycles, 5.18ns |
| L3 Cache | 16MB / CCX (4C), 256MB total: ~34 cycles (avg), ~10.27ns | 16MB / CCX (4C), 64MB total | 38.5MB shared (28C): ~46 cycles (avg), ~17.5ns |
| DRAM, 128MB full random | ~122ns (NPS1), ~113ns (NPS4) | ~116ns | ~89ns |
| DRAM, 512MB full random | ~134ns (NPS1), ~125ns (NPS4) | | ~109ns |

Update 2019/10/1: We've discovered inaccuracies in our originally published latency numbers, and have subsequently updated the article with more representative figures from a new testing tool.
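
Our tool itself isn't public, but numbers like these rest on the classic pointer-chasing technique: build a random cyclic permutation through a buffer so the prefetchers cannot predict the next line, then time a long chain of dependent loads. What follows is a minimal sketch of that technique, not our actual tool (which additionally uses huge pages and structured access patterns to factor TLB misses out of the structural figures); the sizes and iteration counts are illustrative.

```c
// Minimal pointer-chase latency sketch. A production tool would chase at
// cache-line granularity and use huge pages; this only shows the idea.
// Compile with: cc -O2 chase.c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Build a random cyclic permutation over the buffer, then walk it and
// time the dependent loads: each load's address comes from the previous
// load, so the chain cannot be overlapped or prefetched predictably.
static double chase_ns(size_t bytes, size_t iters) {
    size_t n = bytes / sizeof(void *);
    void **arr = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {        // Fisher-Yates shuffle
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)              // link into a single cycle
        arr[idx[i]] = &arr[idx[(i + 1) % n]];
    free(idx);

    void **p = arr;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;                        // dependent load chain
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    if (p == NULL) printf("unreachable\n");     // keep the chase alive
    free(arr);
    return ns / (double)iters;
}

int main(void) {
    // Sweep the working set across the hierarchy: L1 -> L2 -> L3 -> DRAM.
    for (size_t kb = 16; kb <= 512 * 1024; kb *= 2)
        printf("%8zu KB : %6.2f ns/load\n",
               kb, chase_ns(kb * 1024, 10 * 1000 * 1000));
    return 0;
}
```

Sweeping the working set this way reproduces the characteristic staircase in the table above: a step after 32KB (L1), another after 512KB (L2), and a final climb toward DRAM once the CCX-local 16MB L3 is exhausted.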

Things get really interesting when we start to look at cache depths beyond the L2. Naturally, for Intel this happens at 1MB while for AMD it is after 512KB; however, AMD's smaller L2 has a speed advantage over Intel's larger cache.

Where AMD has an even clearer speed advantage is in the L3 caches, which are significantly faster than those of Intel's chips. The big difference here is that AMD's L3 is local to a CCX of 4 cores – for the EPYC 7742 this has now doubled to 16MB, up from 8MB on the 7601.

Currently this is a double-edged sword for the AMD platforms. On one hand, the EPYC processors have significantly more total cache, coming in at a whopping 256MB for the 7742: quadruple the 64MB of the 7601, and a lot more than Intel's platforms, which come in at 38.5MB for the Xeon 8180, 8176, and 8280, and a larger 55MB for the Xeon E5-2699 v4.

The disadvantage for AMD is that while it has more total cache, the EPYC 7742 really consists of 16 CCXs, each with its own very fast 16MB L3. Although the 64 cores now form one big NUMA node, the 64-core chip is basically 16x 4 cores, each group with a 16MB L3 cache. Once you get beyond that 16MB, the prefetchers can soften the blow, but you will be accessing main DRAM.

A little weird is the fact that accessing data that resides on the same die (CCD) but not within the same CCX is just as slow as accessing data on a totally different die. This is because regardless of where the other CCX is, whether nearby on the same die or on the other side of the chip, the data access still has to go over the Infinity Fabric to the IO die and back again.
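
One rough way to observe this yourself is a core-to-core "ping-pong" microbenchmark: pin two threads to specific cores and bounce a cache line between them. Pairs within one CCX should show much lower round-trip times than pairs on different CCXs, whether or not they share a CCD. The sketch below uses illustrative core IDs; which logical CPUs share a CCX depends on how the kernel enumerates them (lstopo or lscpu can tell you).

```c
// Rough sketch: bounce a cache line between two pinned threads and time
// the round trips. Same-CCX pairs stay inside one L3; cross-CCX pairs pay
// the Infinity Fabric hop through the IO die. Core IDs are illustrative.
// Compile with: cc -O2 -pthread pingpong.c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 1000000

static _Atomic int flag = 0; // the contended cache line

static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *pong(void *arg) {
    pin_to_cpu(*(int *)arg);
    for (int i = 0; i < ROUNDS; i++) {
        while (atomic_load(&flag) != 1) ;  // wait for ping
        atomic_store(&flag, 0);            // answer
    }
    return NULL;
}

int main(void) {
    // Illustrative pair: try both a same-CCX and a cross-CCX combination.
    int cpu_a = 0, cpu_b = 4;
    pthread_t t;
    pthread_create(&t, NULL, pong, &cpu_b);
    pin_to_cpu(cpu_a);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        atomic_store(&flag, 1);            // ping
        while (atomic_load(&flag) != 0) ;  // wait for pong
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("cores %d<->%d: %.1f ns per round trip\n",
           cpu_a, cpu_b, ns / ROUNDS);
    return 0;
}
```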

Is that necessarily a bad thing? The answer: most of the time it is not. First of all, in most applications only a small percentage of accesses miss the L1 and L2 and must be answered by the L3 cache. Secondly, each core on a CCX has no less than 4MB of L3 available on average, far more than the 1.375MB Intel cores have at their disposal. The prefetchers thus have a lot more room to make sure that data is there before it is needed.

But database performance might still suffer somewhat. For example, keeping a large part of the index in cache improves performance, and OLTP accesses in particular tend to be quite random. Secondly, the relatively slow communication over a central hub slows down synchronization between cores. That this is a real effect is shown by the fact that Intel states the HammerDB OLTP benchmark runs 60% faster on a 28-core Xeon 8280 than on the EPYC 7601. We were not able to verify this before the deadline, but it seems reasonable.

But the vast majority of these high-end CPUs will be running many parallel applications: microservices, Docker containers, virtual machines, map/reduce over smaller chunks of data, and parallel HPC jobs. In almost all of those cases, 16MB of L3 for 4 cores is more than enough.

Although, come to think of it, an 8-core virtual machine has to span two CCXs, so there might be small corner cases where performance suffers a little.

In short, AMD still leaves a bit of performance on the table by not using a larger 8-core CCX. We await what future platforms bring.

Comments

  • wrkingclass_hero - Sunday, August 11, 2019 - link

    What does AMD have to do to get a Gold or Platinum recommendation?
  • oRAirwolf - Thursday, August 15, 2019 - link

    This is a good question
  • imaskar - Sunday, August 11, 2019 - link

    Single thread performance is very important for those who live in the cloud. A quick example: suppose I provision a 2 core/4 gig VM (these are of course hyperthreads). On AWS I have a choice between m5 and m5a, where AMD is cheaper. What do I sacrifice? Not really throughput, because you don't run your prod workloads at 100% CPU. But there is the latency. If those cores are clocked lower, I get the same number of responses, but slower. And since in the microservice world you have a chain of calls, you get this decrease 10 times over. Is it worth it?
    That was the case for 1st gen EPYC. Will 2nd gen have latency parity?
  • notashill - Sunday, August 11, 2019 - link

    It's hard to say until the cloud instances actually launch.

    The current m5a instances are using a custom SKU which is clocked at 2.5GHz max boost.

    Rome's IPC is ~15% higher and clock speeds are all around higher so single threaded performance should be quite a bit better, but ultimately the exact numbers will depend on which SKUs the cloud vendors decide to use and how high they clock.
  • duploxxx - Tuesday, August 13, 2019 - link

    did you actually ever work with hypervisors?

    there are other things than raw clock speed.... it's all about scheduling, and when there are more cores per socket available the scheduling is more relaxed, less ready time..... EPYC generation 1 is already awesome for hypervisors, a way better choice than most Intel counterparts, for sure if you look at socket cost... but then again I am probably talking to a typical retard ****
  • JoeBraga - Wednesday, August 14, 2019 - link

    Can you explain better? Isn't the license bought by the number of cores or per socket?
  • imaskar - Wednesday, August 14, 2019 - link

    He's probably talking about VMware, which is licensed per socket, not per core. So with EPYC gen2 you need half as many licenses for the same cloud capacity (assuming cores are equal).
  • JoeBraga - Wednesday, August 14, 2019 - link

    Now I understand
  • imaskar - Wednesday, August 14, 2019 - link

    Rather than calling others retards, you could first dig a little deeper into the issue. No, I don't work with hypervisors directly, I'm from the other side. I write software and I want good latency (not an insane one like for HFT, but still a good one). Because for throughput we could just spin up one more instance. You can't buy latency horizontally.
    I'm not pulling numbers out of thin air. There is a benchmark of AMD instances vs Intel instances on AWS. I'm not sure if we are allowed to post links to other resources here. Put this string into Google and you will surely find it: "A Look At The AMD EPYC Performance On The Amazon EC2 Cloud". Despite that article being very enthusiastic about those instances, you can clearly see that per-core performance on Intel is better, meaning better latencies for web apps.
    I will probably write my own set of benchmarks, because that one seems to completely ignore web servers. I am very enthusiastic about AMD instances, but they are definitely not a no-brainer.
  • quadibloc - Tuesday, August 13, 2019 - link

    The new Ryzen chips compete well with what Intel is currently producing. But while they doubled AVX2 support, so as to match what Intel has, Ice Lake will double that again - as has been known for some time. So if this is what AMD thought would be competitive with Ice Lake, as Forrest Norrod said, AMD was not trying hard enough - and they're just lucky Ice Lake was late. AMD's position relative to Intel with its previous generations of Ryzens seems to be the limit of their ambitions. Combine that with Intel reacting to its current issues, and it looks to me like AMD will have to rethink some aspects of its strategy to avoid Intel being ahead when it comes time for next year's chips from both companies.
