Murphy's Law

Anything That Can Go Wrong, Will Go Wrong

For those of you who may not know, I am an Academic Director of MCT at Howest University here in Belgium. I perform research in our labs on big data analytics, virtualization, cloud computing, and server technology in general. We do all of our testing in the lab, and I also do launch article testing for AnandTech.

Like most academic institutions, we have a summer vacation during which our labs are locked and we are told to get some sunlight. AMD's Rome launch happened just as our lab closure started, so I had the Rome server delivered to my home lab instead. The only issue was that our corresponding Intel server was still in the academic lab. Normally this isn't really a problem - even when the lab is open, I issue tests through remote access, rebooting the systems, running benchmarks, and processing the data that way. If a hardware change is needed, I have to be physically there, but usually this isn't a problem.

However, as Murphy's Law would have it, our Domain Controller also crashed during testing for this review, while our labs were closed, and we could no longer reach our older servers. This limited our testing somewhat: while I could test the Rome system during normal hours at the home lab (I can't really run it overnight, as it is a server and therefore loud), I couldn't issue any benchmarks to our Naples / Cascade Lake systems in the lab.

As a result, our only option was to limit ourselves to the benchmarks already done on the EPYC 7601, Skylake, and Cascade Lake machines. Rest assured that we will be back with our usual Big Data/AI and other real world tests once we can get our complete testing infrastructure up and running. 

Benchmark Configuration and Methodology

All of our testing was conducted on Ubuntu Server 18.04 LTS, except for the EPYC 7742 server, which was running Ubuntu 19.04. The reason was simple: we were told that 19.04 had validated support for Rome, and with two weeks of testing time, we wanted to complete what was possible. Support (including X2APIC/IOMMU patches to utilize 256 threads) for Rome is available with Linux Kernel 4.19 and later. 
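For anyone setting up a similar Rome system, a quick sanity check is to confirm the kernel release and the number of logical CPUs the OS actually exposes. The short Python sketch below merely illustrates that check; the 256-thread figure is our assumption for a dual EPYC 7742 with SMT enabled, not something taken from AMD's or Canonical's tooling.

    # Rough sanity check (illustrative sketch only): Rome wants kernel 4.19 or newer,
    # and a dual EPYC 7742 with SMT enabled should expose 256 logical CPUs.
    import os
    import platform

    MIN_KERNEL = (4, 19)
    EXPECTED_THREADS = 256  # assumption: 2 sockets x 64 cores x 2 threads

    release = platform.release()  # e.g. "5.0.0-25-generic" on Ubuntu 19.04
    major, minor = (int(x) for x in release.split("-")[0].split(".")[:2])

    print("Kernel", release, "-",
          "OK" if (major, minor) >= MIN_KERNEL else "too old for Rome")
    print("Logical CPUs:", os.cpu_count(), "-",
          "OK" if os.cpu_count() == EXPECTED_THREADS else "check X2APIC/IOMMU support")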

You will notice that the DRAM capacity varies among our server configurations. This is of course a result of the fact that the Xeons have access to six memory channels while the EPYC CPUs have eight. With one 32 GB DIMM per channel on both sockets, that works out to 2 x 8 x 32 GB = 512 GB for the EPYC systems versus 2 x 6 x 32 GB = 384 GB for the Xeon system. As far as we know, all of our tests fit in 128 GB, so DRAM capacity should not have much influence on performance.

 

AMD Daytona - Dual EPYC 7742

AMD sent us the "Daytona XT" server, a reference platform built by ODM Quanta (D52BQ-2U).

CPU: Two AMD EPYC 7742 (2.25 GHz, 64c, 256 MB L3, 225W)
RAM: 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks: SAMSUNG MZ7LM240 (boot disk), Micron 9300 3.84 TB (data)
Motherboard: Daytona reference board (S5BQ)
PSU: PWS-1200

Although the 225W TDP CPUs need extra heatpipes and heatsinks, they are still running on air cooling...

AMD EPYC 7601 (2U Chassis)

CPU: Two EPYC 7601 (2.2 GHz, 32c, 8x8 MB L3, 180W)
RAM: 512 GB (16x32 GB) Samsung DDR4-2666 @ 2400
Internal Disks: SAMSUNG MZ7LM240 (boot disk), Intel SSD3710 800 GB (data)
Motherboard: AMD Speedway
PSU: 1100W PSU (80+ Platinum)

Intel's Xeon "Purley" Server – S2P2SY3Q (2U Chassis)

CPU: Two Intel Xeon Platinum 8280 (2.7 GHz, 28c, 38.5 MB L3, 205W)
     or two Intel Xeon Platinum 8176 (2.1 GHz, 28c, 38.5 MB L3, 165W)
RAM: 384 GB (12x32 GB) Hynix DDR4-2666
Internal Disks: SAMSUNG MZ7LM240 (boot disk), Micron 9300 3.84 TB (data)
Motherboard: Intel S2600WF (Wolf Pass baseboard)
Chipset: Intel Wellsburg B0
PSU: 1100W PSU (80+ Platinum)

We enabled hyper-threading and Intel virtualization acceleration.
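For completeness, both settings are easy to double-check from within Linux via the standard sysfs and /proc interfaces; the small Python sketch below is just an illustration of that check, not part of our benchmarking scripts.

    # Illustrative sketch: verify SMT/Hyper-Threading is active and that hardware
    # virtualization ("vmx" for Intel VT-x, "svm" for AMD-V) is exposed to the OS.
    from pathlib import Path

    smt_file = Path("/sys/devices/system/cpu/smt/active")
    smt_active = smt_file.read_text().strip() == "1" if smt_file.exists() else None

    flags = set(Path("/proc/cpuinfo").read_text().split())
    virt_exposed = "vmx" in flags or "svm" in flags

    print("SMT active:", smt_active)
    print("Hardware virtualization exposed:", virt_exposed)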

Comments

  • AnonCPU - Friday, August 9, 2019 - link

    The gain in hmmer on EPYC with GCC8 is not due to the TAGE predictor.
    Hmmer gains a lot on EPYC only because of GCC8.
    The vectorizer has been improved in GCC8, and hmmer now gets vectorized heavily, while that was not the case with GCC7. The same run on an Intel machine would have shown the same kind of improvement.
  • JohanAnandtech - Sunday, August 11, 2019 - link

    Thanks, do you have a source for that? Interested in learning more!
  • AnonCPU - Monday, August 12, 2019 - link

    That should be due to the improvements on loop distribution:
    https://gcc.gnu.org/gcc-8/changes.html

    "The classic loop nest optimization pass -ftree-loop-distribution has been improved and enabled by default at -O3 and above. It supports loop nest distribution in some restricted scenarios;"

    There are also some references here in what was missing for hmmer vectorization in GCC some years ago:
    https://gcc.gnu.org/ml/gcc/2017-03/msg00012.html

    And a page where you can see that LLVM was missing (at least in 2015) a good loop distribution algo useful for hmmer:

    https://www.phoronix.com/scan.php?page=news_item&a...
  • AnonCPU - Monday, August 12, 2019 - link

    And more:
    https://community.arm.com/developer/tools-software...
  • just4U - Friday, August 9, 2019 - link

    I guess the question to ask now is: can they churn these puppies out like there's no tomorrow? Is the demand there? What about other hardware? Motherboards and the like...

    Do they have 100,000 of these ready to go? The window of opportunity for AMD is always fleeting... and if they're going to capitalize on this they need to be able to put the product out there.
  • name99 - Friday, August 9, 2019 - link

    No obvious reason why not. The chiplets are not large and TSMC ships 200 million Apple chips a year on essentially the same process. So yields should be there.
    Manufacturing the chiplet assembly also doesn't look any different from the Naples assembly (details differ, yes, but no new envelopes being pushed: no much higher frequency signals or denser traces -- the flip side to that is that there's scope there for some optimization come Milan...)

    So it seems like there is nothing to obviously hold them back...
  • fallaha56 - Saturday, August 10, 2019 - link

    Perhaps Hyper-Threading should be off on the Intel systems to better reflect e.g. Google's reality / proper security standards, now that we know Intel isn't secure?
  • Targon - Monday, August 12, 2019 - link

    That is why Google is going to be buying many Epyc based servers going forward. Mitigations do not mean a problem has been fixed.
  • imaskar - Wednesday, August 14, 2019 - link

    Why do you think AWS, GCP, Azure, etc. mitigated the vulnerabilities? They only patched Meltdown at most. All the other mitigations are too costly and hard to execute. They just don't care that much about your data. Lose 2x cloud capacity for that? No way. And for security-conscious, serious customers they offer private clusters, so your workloads run on separate servers.
  • ballsystemlord - Saturday, August 10, 2019 - link

    Spelling and grammar errors:

    "This happened in almost every OS, and in some cases we saw reports that system administrators and others had to do quite a bit optimization work to get the best performance out of the EPYC 7001 series."
    Missing "of":
    "This happened in almost every OS, and in some cases we saw reports that system administrators and others had to do quite a bit of optimization work to get the best performance out of the EPYC 7001 series."

    "...to us it is simply is ridiculous that Intel expect enterprise users to cough up another few thousand dollars per CPU for a model that supports 2 TB,..."
    Excess "is" and missing "s":
    "...to us it is simply ridiculous that Intel expects enterprise users to cough up another few thousand dollars per CPU for a model that supports 2 TB,..."

    "Although the 225W TDP CPUs needs extra heatspipes and heatsinks, there are still running on air cooling..."
    Excess "s" and incorrect "there",
    "Although the 225W TDP CPUs need extra heatspipes and heatsinks, they're still running on air cooling..."

    "The Intel L3-cache keeps latency consistingy low as long as you stay within the L3-cache."
    "consistently" not "consistingy":
    "The Intel L3-cache keeps latency consistently low as long as you stay within the L3-cache."

    "For example keeping a large part of the index in the cache improve performance..."
    Missing comma and missing "s" (you might also consider making cache plural, but you seem to be talking strictly about the L3):
    "For example, keeping a large part of the index in the cache improves performance..."

    "That is a real thing is shown by the fact that Intel states that the OLTP hammerDB runs 60% faster on a 28-core Intel Xeon 8280 than on EPYC 7601."
    Missing "it":
    "That it is a real thing is shown by the fact that Intel states that the OLTP hammerDB runs 60% faster on a 28-core Intel Xeon 8280 than on EPYC 7601."
    In general, the beginning of the sentence appears quite poorly worded; how about:
    "That L3 cache latency is a matter for concern is shown by the fact that Intel states that the OLTP hammerDB runs 60% faster on a 28-core Intel Xeon 8280 than on EPYC 7601."

    "In NPS4, the NUMA domains are reported to software in such a way as it chiplets always access the near (2 channels) DRAM."
    Missing "s":
    "In NPS4, the NUMA domains are reported to software in such a way as its chiplets always access the near (2 channels) DRAM."

    "The fact that the EPYC 7002 has higher DRAM bandwidth is clearly visible."
    Wrong numbers (maybe you meant the series?):
    "The fact that the EPYC 7742 has higher DRAM bandwidth is clearly visible."

    "...but show very significant improvements on EPYC 7002."
    Wrong numbers (maybe you meant the series?):
    "...but show very significant improvements on EPYC 7742."

    "Using older garbage collector because they happen to better at Specjbb"
    Badly worded.
    "Using an older garbage collector because it happens to be better at Specjbb"

    "For those with little time: at the high end with socketed x86 CPUs, AMD offers you up to 50 to 100% higher performance while offering a 40% lower price."
    "Up to" requires 1 metric, not 2. Try:
    "For those with little time: at the high end with socketed x86 CPUs, AMD offers you from 50 up to 100% higher performance while offering a 40% lower price."
