AMD Rome Second Generation EPYC Review: 2x 64-core Benchmarked
by Johan De Gelas on August 7, 2019 7:00 PM EST

Single-Thread SPEC CPU2006 Estimates
While it may have been superseded by SPEC CPU2017, we have built up a lot of experience with SPEC CPU2006. Considering the trouble we experienced with our datacenter infrastructure, it was our best first-round option for raw performance analysis.
Single-threaded performance continues to be very important, especially in maintenance and setup situations. Examples include running a massive bash script, trying out a very complex SQL query, or configuring new software: there are plenty of times when a user simply does not use all the cores.
Even though SPEC CPU2006 is more HPC- and workstation-oriented, it contains a good variety of integer workloads. It is our conviction that we should try to mimic how performance-critical software is compiled, rather than trying to achieve the highest scores. To that end, we:
- use 64-bit gcc: by far the most used compiler on Linux for integer workloads, and a good all-round compiler that does not try to "break" benchmarks (libquantum...) or favor a certain architecture
- use gcc versions 7.4 and 8.3: the standard compilers with Ubuntu 18.04 LTS and 19.04
- use -Ofast -fno-strict-aliasing optimization: a good balance between performance and keeping things simple
- added "-std=gnu89" to the portability settings to resolve the issue that some tests would not compile
The ultimate objective is to measure performance in non-aggressively optimized applications where – as is frequently the case – a multi-thread-unfriendly task keeps us waiting. The disadvantage is that there are still quite a few situations where gcc generates suboptimal code, which causes quite a stir when compared to ICC or AOCC results that are tuned to apply specific optimizations to SPEC code.
First, the single-threaded results. It is important to note that thanks to turbo technology, all CPUs will run at higher clock speeds than their base clock speed.
- The Xeon E5-2699 v4 ("Broadwell") is capable of boosting up to 3.6 GHz. Note: these are old results compiled with GCC 5.4.
- The Xeon 8176 ("Skylake-SP") is capable of boosting up to 3.8 GHz.
- The EPYC 7601 ("Naples") is capable of boosting up to 3.2 GHz.
- The EPYC 7742 ("Rome") is capable of boosting up to 3.4 GHz. Results are compiled with GCC 7.4 and 8.3.
Unfortunately we could not test the Intel Xeon 8280 in time for this article. However, the Xeon 8280 should deliver very similar results; the main difference is that it runs at a 5% higher boost clock (4.0 GHz vs 3.8 GHz). So we basically expect its results to be 3-5% higher than the Xeon 8176's.
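That extrapolation can be sketched as a back-of-the-envelope calculation, assuming single-thread performance scales linearly with the single-core turbo clock (an upper bound, since real workloads rarely scale perfectly); the input score is the 8176's perlbench estimate from our table:

```shell
# Estimate a Xeon 8280 (4.0 GHz boost) score from a Xeon 8176 (3.8 GHz
# boost) score, assuming performance tracks the turbo clock linearly.
# The input score (400.perlbench, 46.4) comes from the table below.
awk 'BEGIN {
  score_8176 = 46.4
  est_8280   = score_8176 * 4.0 / 3.8
  printf "clock uplift: %+.1f%%, estimated 8280 score: %.1f\n",
         (4.0 / 3.8 - 1) * 100, est_8280
}'
```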
As per SPEC licensing rules, as these results have not been officially submitted to the SPEC database, we have to declare them as Estimated Results.
Subtest | Application Type | Xeon E5-2699 v4 | EPYC 7601 | Xeon 8176 | EPYC 7742 | EPYC 7742 |
--- | --- | --- | --- | --- | --- | --- |
Frequency | | 3.6 GHz | 3.2 GHz | 3.8 GHz | 3.4 GHz | 3.4 GHz |
Compiler | | gcc 5.4 | gcc 7.4 | gcc 7.4 | gcc 7.4 | gcc 8.3 |
400.perlbench | Spam filter | 43.4 | 31.1 | 46.4 | 41.3 | 43.7 |
401.bzip2 | Compression | 23.9 | 24.0 | 27.0 | 26.7 | 27.2 |
403.gcc | Compiling | 23.7 | 35.1 | 31.0 | 42.3 | 42.6 |
429.mcf | Vehicle scheduling | 44.6 | 40.1 | 40.6 | 39.5 | 39.6 |
445.gobmk | Game AI | 28.7 | 24.3 | 27.7 | 32.8 | 32.7 |
456.hmmer | Protein seq. | 32.3 | 27.9 | 35.6 | 30.3 | 60.5 |
458.sjeng | Chess | 33.0 | 23.8 | 32.8 | 27.7 | 27.6 |
462.libquantum | Quantum sim | 97.3 | 69.2 | 86.4 | 72.7 | 72.3 |
464.h264ref | Video encoding | 58.0 | 50.3 | 64.7 | 62.2 | 60.4 |
471.omnetpp | Network sim | 44.5 | 23.0 | 37.9 | 23.0 | 23.0 |
473.astar | Pathfinding | 26.1 | 19.5 | 24.7 | 25.4 | 25.4 |
483.xalancbmk | XML processing | 64.9 | 35.4 | 63.7 | 48.0 | 47.8 |
A SPEC CPU analysis is always complicated: the results are a mix of what kind of code the compiler produces and of the CPU architecture itself.
Subtest | Application Type | EPYC 7742 (2nd gen) vs 7601 (1st gen) | EPYC 7742 vs Intel Xeon Scalable | gcc 8.3 vs gcc 7.4 |
--- | --- | --- | --- | --- |
400.perlbench | Spam filter | +33% | -11% | +6% |
401.bzip2 | Compression | +11% | -1% | +2% |
403.gcc | Compiling | +21% | +28% | +1% |
429.mcf | Vehicle scheduling | -1% | -3% | 0% |
445.gobmk | Game AI | +35% | +18% | +0% |
456.hmmer | Protein seq. analyses | +9% | -15% | +100% |
458.sjeng | Chess | +16% | -16% | -1% |
462.libquantum | Quantum sim | +5% | -16% | -1% |
464.h264ref | Video encoding | +24% | -4% | -3% |
471.omnetpp | Network sim | +0% | -39% | 0% |
473.astar | Pathfinding | +30% | +3% | 0% |
483.xalancbmk | XML processing | +36% | -25% | 0% |
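The deltas above are plain score ratios. As a sanity check, the 400.perlbench row can be recomputed from the raw single-thread estimates in the first table (a sketch; the variable names are our own):

```shell
# Recompute the 400.perlbench deltas from the raw single-thread estimates:
# EPYC 7742 (gcc 7.4) vs EPYC 7601, vs Xeon 8176, and gcc 8.3 vs gcc 7.4.
awk 'BEGIN {
  rome = 41.3; naples = 31.1; xeon = 46.4; rome_gcc8 = 43.7
  printf "vs EPYC 7601:    %+.0f%%\n", (rome / naples - 1) * 100
  printf "vs Xeon 8176:    %+.0f%%\n", (rome / xeon - 1) * 100
  printf "gcc 8.3 vs 7.4:  %+.0f%%\n", (rome_gcc8 / rome - 1) * 100
}'
```

The rounded results (+33%, -11%, +6%) match the table row above.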
The most interesting datapoint is that the code generated by gcc 8 seems to have improved vastly for the EPYC processors. We repeated the single-threaded test three times, and the rate numbers show the same thing: the result is very consistent.
hmmer is one of the more branch-intensive benchmarks, and the two other workloads where branch prediction has a higher impact (a somewhat higher percentage of branch misses) – gobmk and sjeng – perform consistently better on the second-generation EPYC with its new TAGE predictor.
Why the low-IPC omnetpp ("network sim") does not show any improvement is a mystery to us; we expected the larger L3 cache to help. However, this is a test that loves very large caches, and as a result the Intel Xeons have the advantage (38.5 to 55 MB of L3).
The video encoding benchmark h264ref also relies somewhat on the L3 cache, but it leans much more heavily on DRAM bandwidth. The fact that the EPYC 7002 has higher DRAM bandwidth is clearly visible here.
The pointer-chasing benchmarks – XML processing and pathfinding – performed suboptimally on the previous EPYC generation (compared to the Xeons), but show very significant improvements on EPYC 7002.