AMD Rome Second Generation EPYC Review: 2x 64-core Benchmarked
by Johan De Gelas on August 7, 2019 7:00 PM EST

Memory Subsystem: Latency
AMD chose to share a core design among mobile, desktop and server for scalability and economic reasons. The Core Complex (CCX) is still used in Rome like it was in the previous generation.
What has changed is that each CCX now communicates with a central IO hub, instead of four dies communicating in a 4-node NUMA layout. (The old behavior is still available via the NPS4 setting, which keeps each CCD local to its quadrant of the sIOD and to that quadrant's memory controllers, avoiding the hops between sIOD quadrants that incur a slight latency penalty.) Since the performance of modern CPUs depends heavily on the cache subsystem, we were more than curious what kind of latency a server thread sees as it accesses more and more pages in the cache hierarchy.
We're using our own in-house latency test. What we're interested in publishing is the estimated structural latency of the processors, meaning that we try to account for TLB misses and disregard them in these numbers. The exception is the DRAM latencies, where measurements get more complex to compare between platforms, and we revert to full random figures.
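The core idea behind such a test is a dependent pointer chase. Below is a minimal sketch in C (our own illustration, not AnandTech's actual tool): the buffer is linked into one random cycle, so every load depends on the previous one and neither the prefetchers nor out-of-order execution can hide the latency.

```c
// Minimal structural-latency sketch: chase pointers through a randomly
// shuffled ring so each load depends on the previous one.
// Build: gcc -O2 chase.c -o chase
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    size_t n = (128u << 20) / sizeof(void *);     // 128MB working set
    void **ring = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));
    if (!ring || !idx) return 1;

    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {          // Fisher-Yates shuffle
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)                // link shuffled order into one cycle
        ring[idx[i]] = &ring[idx[(i + 1) % n]];

    size_t iters = 50 * 1000 * 1000;
    void **p = ring;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;                          // serialized dependent loads
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per load (%p)\n", ns / (double)iters, (void *)p);
    return 0;
}
```

With the 128MB working set above, this corresponds to the "DRAM 128MB Full Random" row in the table below; shrinking n keeps the chase inside the L1, L2 or L3.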
Mem Hierarchy | AMD EPYC 7742 DDR4-3200 (ns @ 3.4GHz) | AMD EPYC 7601 DDR4-2400 (ns @ 3.2GHz) | Intel Xeon 8280 DDR4-2666 (ns @ 2.7GHz)
L1 Cache | 32KB, 4 cycles, 1.18ns | 32KB, 4 cycles, 1.25ns | 32KB, 4 cycles, 1.48ns
L2 Cache | 512KB, 13 cycles, 3.86ns | 512KB, 12 cycles, 3.76ns | 1024KB, 14 cycles, 5.18ns
L3 Cache | 16MB / CCX (4C), 256MB total, ~34 cycles (avg), ~10.27ns | 8MB / CCX (4C), 64MB total | 38.5MB shared (28C), ~46 cycles (avg), ~17.5ns
DRAM 128MB Full Random | ~122ns (NPS1), ~113ns (NPS4) | ~116ns | ~89ns
DRAM 512MB Full Random | ~134ns (NPS1), ~125ns (NPS4) | | ~109ns
Update 2019/10/1: We've discovered inaccuracies in our originally published latency numbers, and have updated the article with more representative figures from a new testing tool.
Things get really interesting when we start to look at cache depths beyond the L2. Naturally, for Intel this happens at 1MB while for AMD it is after 512KB; however, AMD's smaller L2 has a speed advantage over Intel's larger cache.
Where AMD has an even clearer speed advantage is in the L3 caches, which are significantly faster than those of Intel's chips. The big difference here is that AMD's L3 is local to a CCX of 4 cores – for the EPYC 7742 the CCX L3 is now doubled to 16MB, up from 8MB on the 7601.
Currently this is a double-edged sword for the AMD platforms. On one hand, the EPYC processors have significantly more total cache, coming in at a whopping 256MB for the 7742 – quadruple the 64MB of the 7601, and a lot more than Intel's platforms, which come in at 38.5MB for the Xeon 8180, 8176, and 8280, and a larger 55MB for the Xeon E5-2699 v4.
The disadvantage for AMD is that, while it has more total cache, the EPYC 7742 really consists of 16 CCXs, each with its own very fast 16 MB L3. Although the 64 cores now present themselves as one big NUMA node, the 64-core chip is basically 16x 4 cores, each group with a 16 MB L3 cache. Once a working set grows beyond those 16 MB, the prefetchers can soften the blow, but you will be accessing main DRAM.
A little weird is the fact that accessing data that resides on the same die (CCD) but not within the same CCX is just as slow as accessing data on a totally different die. This is because, regardless of whether the other CCX is nearby on the same die or on the other side of the chip, the access still has to travel over the Infinity Fabric to the IO die and back again.
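A rough way to observe this yourself on Linux is a cache-line ping-pong between two pinned threads. The sketch below is our own, and the core numbering is an assumption: cores 0-3 typically share a CCX on Rome, but the actual enumeration varies by BIOS and kernel, so check lscpu -e first. The round-trip time should jump once the peer core sits in another CCX, and stay roughly the same whether that CCX is on the same CCD or not.

```c
// Cache-line ping-pong between two pinned threads (our own sketch).
// ASSUMPTION: cores 0-3 share a CCX; verify the topology before trusting it.
// Build: gcc -O2 -pthread pingpong.c -o pingpong
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000

static _Atomic int flag = 0;

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *pong(void *arg) {
    pin_to_core(*(int *)arg);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1) ; // wait for ping
        atomic_store_explicit(&flag, 0, memory_order_release);           // answer
    }
    return NULL;
}

int main(void) {
    int peer = 4;   // try 1 (likely same CCX) vs 4 (likely another CCX)
    pthread_t t;
    pin_to_core(0);
    pthread_create(&t, NULL, pong, &peer);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);            // ping
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0) ;  // wait for pong
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("core 0 <-> core %d round trip: %.1f ns\n", peer, ns / ITERS);
    return 0;
}
```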
Is that necessarily a bad thing? The answer: most of the time it is not. First of all, in most applications only a low percentage of accesses must be answered by the L3 cache. Secondly, each core on the CCX has no less than 4 MB of L3 available, which is far more than the Intel cores have at their disposal (1.375 MB). The prefetchers have a lot more space to make sure that the data is there before it is needed.
But database performance might still suffer somewhat. For example, keeping a large part of the index in the cache improves performance, and OLTP accesses in particular tend to be quite random. Secondly, the relatively slow communication over a central hub slows down synchronization traffic between cores. That this is a real concern is suggested by Intel's claim that the HammerDB OLTP benchmark runs 60% faster on a 28-core Xeon 8280 than on the EPYC 7601. We were not able to verify this before the deadline, but it seems plausible.
The vast majority of these high-end CPUs, however, will be running many parallel applications: microservices, Docker containers, virtual machines, map/reduce jobs on smaller chunks of data, and parallel HPC jobs. In almost all of these cases, 16 MB of L3 for 4 cores is more than enough.
Although, come to think of it, when running an 8-core virtual machine, which has to span at least two CCXs, there might be small corner cases where performance suffers a little bit.
In short, AMD still leaves a bit of performance on the table by not using a larger 8-core CCX. We will have to wait and see what future platforms bring.
Comments
JoeBraga - Wednesday, August 14, 2019
It can happen if Intel uses the new Sunny Cove architecture and an MCM/chiplet design instead of a monolithic design.

SanX - Thursday, August 15, 2019
7zip is not a legacy test; it is important for anyone who sends big data over an always-slow network. Do you know that all those ZIPs, GZs and other zippers which people mostly use compress at turtle speeds as low as 20 MB/s, even on supercomputers? 7-Zip, though, parallelizes that nicely. So do not diminish this good test by calling it "legacy".

imaskar - Friday, August 16, 2019
7zip is a particular program doing LZMA in parallel; that's why it is faster than, let's say, gzip. But on a server you often do not want to parallelize things, because the other cores are doing other jobs and switching is costly. There are a lot of compression algorithms which are better in certain situations, and LZMA rarely fits. More often it is LZ4 or zstd for "generate once, consume many", or basic gzip (DEFLATE) for "generate once, consume once". Yes, you would be surprised, but the very basic 30-year-old DEFLATE is still the king if you care about the sum of compress, send and decompress AND your nodes are inside one datacenter (which is most of the time).
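For readers who want to try the "compress, send, decompress" path described above, here is a minimal sketch of a DEFLATE round trip using zlib's one-shot API; the payload and buffer sizes are our own arbitrary illustration.

```c
// "Compress, send, decompress" round trip with zlib's one-shot DEFLATE API.
// Build: gcc -O2 roundtrip.c -lz -o roundtrip
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void) {
    const char *msg = "a payload that would normally be a large buffer";
    uLong srcLen = (uLong)strlen(msg) + 1;
    Bytef packed[256], unpacked[256];
    uLongf packedLen = sizeof(packed), unpackedLen = sizeof(unpacked);

    if (compress(packed, &packedLen, (const Bytef *)msg, srcLen) != Z_OK)
        return 1;                                  // compress on the sender
    if (uncompress(unpacked, &unpackedLen, packed, packedLen) != Z_OK)
        return 1;                                  // decompress on the receiver
    printf("%lu -> %lu bytes, round trip ok: %s\n",
           srcLen, packedLen, (char *)unpacked);
    return 0;
}
```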
SanX - Thursday, August 15, 2019

What can you say about Ian's own test, which he developed to demonstrate the AVX-512 speed boost and which shows some crazy speedups of up to 3-4x or more? Does your Molecular Dynamics test tell us that Ian's test is mostly irrelevant to such huge speed improvements in real-life complex programs?

imaskar - Friday, August 16, 2019
Probably because you can't use ONLY AVX-512. You still need regular things like jumps and conditions. And this is only the best case. Usually you also need to process part of the vector differently. For example, your vector has size 20, but your vector width is 16. You either do another vector pass or 4 regular computations. Often the second option is faster, or simply the only option.
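A sketch of that tail problem in C with AVX-512 intrinsics (our own hypothetical example, not code from the article): a 16-lane loop handles the full vectors, and the leftover elements, 4 of them when n is 20, fall back to scalar code.

```c
// Sum n floats with 16-wide AVX-512 passes plus a scalar tail.
// Build: gcc -O2 -mavx512f sum512.c -c
#include <immintrin.h>
#include <stddef.h>

float sum_avx512(const float *a, size_t n) {
    __m512 acc = _mm512_setzero_ps();
    size_t i = 0;
    for (; i + 16 <= n; i += 16)                   // full 16-lane vector passes
        acc = _mm512_add_ps(acc, _mm512_loadu_ps(a + i));
    float total = _mm512_reduce_add_ps(acc);       // horizontal sum of the lanes
    for (; i < n; i++)                             // scalar tail, e.g. 4 leftover
        total += a[i];                             //   elements when n == 20
    return total;
}
```

AVX-512's mask registers can instead absorb the tail in one masked pass, but either way the remainder handling is extra code that the best-case "up to 3-4x" numbers don't reflect.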
realbabilu - Sunday, August 18, 2019

Most finite element software uses Intel MKL to squeeze every last bit of performance out of the processor, and it works for Intel chips, not for AMD. AMD's math kernel library is not as heavily optimized, and anyway is Linux-only. Other third parties like GotoBLAS and OpenBLAS are still trying hard to detect the cache and core type of Zen 2. I mean, for workstation floating point it is still hard for AMD.
peevee - Monday, August 19, 2019
Prices per core-GHz (presumably list price divided by cores times base clock; e.g. the 7282 works out to $650 / (16 × 2.8GHz) ≈ $14.51):

EPYC 7742 $48.26
EPYC 7702 $50.39
EPYC 7642 $43.25
EPYC 7552 $38.12
EPYC 7542 $36.64
EPYC 7502 $32.50
EPYC 7452 $26.93
EPYC 7402 $26.53
EPYC 7352 $24.46
EPYC 7302 $20.38
EPYC 7282 $14.51
EPYC 7272 $17.96
EPYC 7262 $22.46
EPYC 7252 $19.15
Value in this 7282 is INSANE.
peevee - Tuesday, August 20, 2019
"Even though our testing is not the ideal case for AMD (you would probably choose 8 or even 16 back-ends), the EPYC edges out the Xeon 8176. Using 8 JVMs increases the gap from 1% to 4-5%."1%? 36917 / 27716 = 1.3319...
33%. Without 8 JVMs.
YB1064 - Wednesday, August 28, 2019
Looks like Intel has been outclassed, out-priced and completely out-maneuvered by AMD. What a disaster!