Memory Subsystem: Latency

AMD chose to share a single core design among mobile, desktop, and server for scalability and economic reasons. The Core Complex (CCX) carries over from the previous generation into Rome.

What has changed is that each CCX now communicates with a central IO hub, instead of four dies communicating in a 4-node NUMA layout. (A similar mode is still available via the NPS4 setting, which keeps each CCD local to its quadrant of the sIOD and to that quadrant's memory controllers, avoiding the hops between sIOD quadrants which incur a slight latency penalty.) Since the performance of modern CPUs depends heavily on the cache subsystem, we were more than curious to see what kind of latency a server thread experiences as it accesses more and more pages across the cache hierarchy.

We're using our own in-house latency test. In particular, what we're interested in publishing is the estimated structural latency of the processors, meaning we try to account for TLB misses and exclude them from these numbers. The exception is the DRAM results, where latency measurements get more complex between platforms, and we revert to fully random figures.

| Mem Hierarchy | AMD EPYC 7742 (DDR4-3200, ns @ 3.4GHz) | AMD EPYC 7601 (DDR4-2400, ns @ 3.2GHz) | Intel Xeon 8280 (DDR4-2666, ns @ 2.7GHz) |
|---|---|---|---|
| L1 Cache | 32KB / 4 cycles / 1.18ns | 32KB / 4 cycles / 1.25ns | 32KB / 4 cycles / 1.48ns |
| L2 Cache | 512KB / 13 cycles / 3.86ns | 512KB / 12 cycles / 3.76ns | 1024KB / 14 cycles / 5.18ns |
| L3 Cache | 16MB per CCX (4C), 256MB total / ~34 cycles (avg) / ~10.27ns | 8MB per CCX (4C), 64MB total | 38.5MB shared (28C) / ~46 cycles (avg) / ~17.5ns |
| DRAM (128MB full random) | ~122ns (NPS1) / ~113ns (NPS4) | ~116ns | ~89ns |
| DRAM (512MB full random) | ~134ns (NPS1) / ~125ns (NPS4) | | ~109ns |

Update 2019/10/1: We've discovered inaccuracies with our originally published latency numbers, and have subsequently updated the article with more representative figures obtained with a new testing tool.

Things get really interesting when we start looking at cache depths beyond the L2. Naturally, for Intel this happens at 1MB, while for AMD it is after 512KB; notably, AMD's smaller L2 holds a speed advantage over Intel's larger cache.

Where AMD has an even clearer speed advantage is in the L3 caches, which are significantly faster than those of Intel's chips. The big difference here is that AMD's L3 is local to a CCX of 4 cores – for the EPYC 7742 this is now doubled to 16MB, up from 8MB on the 7601.

Currently this is a double-edged sword for the AMD platforms. On one hand, the EPYC processors have significantly more total cache, coming in at a whopping 256MB for the 7742 – quadruple the 64MB of the 7601, and far more than Intel's platforms, which offer 38.5MB on the Xeon 8180, 8176, and 8280, and a larger 55MB on the Xeon E5-2699 v4.

The disadvantage for AMD is that, while it has more cache overall, the EPYC 7742 consists of 16 CCXs, each with its own very fast 16MB L3. Although the 64 cores now form one big NUMA node, the 64-core chip is essentially 16x 4 cores, each group with its own 16MB L3 cache. Once a working set grows beyond that 16MB, the prefetchers can soften the blow, but you will be accessing main DRAM.

A little bit weird is the fact that accessing data residing on the same die (CCD) but not within the same CCX is just as slow as accessing data on a completely different die. This is because, regardless of whether the other CCX is nearby on the same die or on the other side of the chip, the access still has to travel over the Infinity Fabric to the IO die and back again.

Is that necessarily a bad thing? Most of the time, it is not. First of all, in most applications only a small percentage of accesses must be served by the L3 cache. Secondly, each core on a CCX has no less than 4MB of L3 available, which is far more than the Intel cores have at their disposal (1.375MB per core). The prefetchers thus have a lot more space to make sure that the data is there before it is needed.

But database performance might still suffer somewhat. For example, keeping a large part of the index in cache improves performance, and OLTP accesses in particular tend to be quite random. Secondly, the relatively slow communication over a central hub slows down synchronization between cores. That this is a real effect is shown by Intel's claim that the OLTP benchmark HammerDB runs 60% faster on a 28-core Xeon 8280 than on the EPYC 7601. We were not able to verify this before the deadline, but it seems plausible.

But the vast majority of these high-end CPUs will be running many parallel applications: microservices, Docker containers, virtual machines, map/reduce over smaller chunks of data, and parallel HPC jobs. In almost all of these cases, 16MB of L3 for 4 cores is more than enough.

Come to think of it, though, when running an 8-core virtual machine there might be small corner cases where performance suffers a little.

In short, AMD still leaves a bit of performance on the table by not using a larger 8-core CCX. We will have to wait and see what happens in future platforms.

Comments (184)

  • cyberguyz - Thursday, August 8, 2019 - link

    I was also a senior software engineer (retired after 30 years) supporting mostly Fortune 1000 companies. I have to tell you that the vast majority of the ones I have dealt with use a mixed server environment of Windows Server, Linux (RHEL), zLinux, and AIX, with Java as the language of choice and JavaScript as the web interface language. This experience comes from digging through their heap and system dumps, poring through thousands of lines of server source code, and building/releasing middleware server development software for those companies. Except for those on zLinux, the rest are on multiprocessor x86 systems.
  • Null666666 - Friday, August 9, 2019 - link

    Hardly, but then what do I know, only been tuning corporate large scale databases since '91..

    Linux is for any scale any size.

    Friends don't let friends do Windows. Admittedly, it's gotten better. But for high availability you just can't do "the Windows solution": power off, power on.
    Reply
  • sleepeeg3 - Friday, August 9, 2019 - link

    Um... is your background in Windows Server? That might skew your bias. Reply
  • eek2121 - Saturday, August 10, 2019 - link

    This is 100% false, even Microsoft themselves has stated as much. Linux owns the internet. Windows owns the office. Reply
  • Vatharian - Saturday, August 17, 2019 - link

    Not every server in existence is meant to carry and forward mails from accounting to marketing. Most IT in non-IT-focused enterprises is indeed meant as office backend and will run Windows Server, but virtually every single workhorse beside that will be running Linux. Between hosting, compute, and big data, Windows has no place simply because of too high overhead, no flexibility for low-level optimization, and extremely high cost of initial driver development. I.e. hardware my company makes (specialized accelerators) has 3x the time to market on the Windows platform. We are now shifting to FPGA, and we dropped support for Windows because of bugs that our vendor can't fix for months. Not to mention that some of our clients run IBM, therefore, Linux.
  • healthymosquito - Wednesday, October 2, 2019 - link

    Being part of a 10-figure company's infrastructure team, I can say that what you are saying is patently false for electronics manufacturing. Sure, Windows has most of the office desktops, but all engineer stations, as well as all heavy-lifting servers in my corp, run Linux globally. That isn't counting our 100% Linux AWS and Google Cloud presence. Having worked in hosting recently as a side gig, web presence for Windows is just as dismal. No one is paying money for an IIS server or MSQL to run websites. Windows numbers on the Internet are extremely low.
  • nobodyblog - Thursday, August 8, 2019 - link

    Windows is used in military..
    Additionally, about Java, I doubt it is as good as .NET even in 2019. And Linux is the norm in big companies OR the embedded market only. Medium/small size are all on Windows - FACT. Additionally, there is no real antivirus for Linux, and open-source software isn't very reliable..

    Thanks!
    Reply
  • Arnulf - Thursday, August 8, 2019 - link

    Antivirus? How old are you?

    I work for a small/medium business (8 figures in EUR) and we have same usage profile as described by Deshi - Linux is running all our key stuff while we have a lone Windows server for AD and related crap.
    Reply
  • FreckledTrout - Thursday, August 8, 2019 - link

    Say what no Antivirus for Linux? Two I know of in use at corporations right now are ESET and Trendmicro. Reply
  • zmatt - Thursday, August 8, 2019 - link

    Completely baseless claims. I have worked large scale government and military IT, and Windows servers are the most common by far. There were some Linux boxes, but they were a minority. Where you see Linux thrive in servers is cloud providers and companies that provide primarily web-based products. Microsoft even offers their own Linux options through Azure, and everyone knows about AWS and their own totally-not-a-ripoff-of-RHEL distro. But cloud infrastructure doesn't have to be Windows; people don't usually use it for the same thing.

    Linux still doesn't have an equivalent to Active Directory and that has been in my experience one of the largest infrastructure uses in self hosted environments. Domain controllers and servers that support them made up and continue to make up the bulk. Until Linux has a competitor to it (and I doubt they will because most Linux devs refuse to "copy" anything Microsoft does) then Windows servers will stick around.
    Reply
