Murphy's Law

Anything That Can Go Wrong, Will Go Wrong

For those of you who may not know, I am the Academic Director of MCT at Howest University in Belgium. I perform research in our labs on big data analytics, virtualization, cloud computing, and server technology in general. We do all of our testing in that lab, and I also do launch article testing for AnandTech.

Like most academic institutions, we have a summer vacation, during which our labs are locked and we are told to go get some sunlight. AMD's Rome launch happened just as our lab closure started, so I had the Rome server delivered to my home lab instead. The only issue was that our corresponding Intel server was still in the academic lab. Normally this isn't really a problem: even when the lab is open, I run the testing through remote access, rebooting the systems and processing the data that way. If a hardware change is needed I have to be physically present, but usually this isn't an issue.

However, as Murphy's Law would have it, our domain controller also crashed during testing for this review, while the labs were closed, and we could no longer reach our older servers. This limited our testing somewhat: while I can test the Rome system during normal hours at the home lab (I can't really run it overnight, as it is a server and therefore loud), I couldn't issue any benchmarks to our Naples and Cascade Lake systems in the lab.

As a result, our only option was to limit ourselves to the benchmarks already done on the EPYC 7601, Skylake, and Cascade Lake machines. Rest assured that we will be back with our usual Big Data/AI and other real world tests once we can get our complete testing infrastructure up and running. 

Benchmark Configuration and Methodology

All of our testing was conducted on Ubuntu Server 18.04 LTS, except for the EPYC 7742 server, which was running Ubuntu 19.04. The reason was simple: we were told that 19.04 had validated support for Rome, and with only two weeks of testing time, we wanted to complete what was possible. Support for Rome (including the X2APIC/IOMMU patches needed to utilize all 256 threads) is available in Linux kernel 4.19 and later.
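As a quick illustration (not part of our benchmarking scripts), a few lines of Python along the lines of the sketch below can verify that the installed kernel meets that 4.19 requirement and that all 256 logical CPUs of a dual EPYC 7742 setup are actually visible to the scheduler; the expected thread count is the only assumption baked in.

# Hypothetical sanity check: kernel new enough for Rome, and all threads visible.
import os
import platform

EXPECTED_THREADS = 256                      # 2 sockets x 64 cores x 2 SMT threads

release = platform.release()                # e.g. "5.0.0-25-generic" on Ubuntu 19.04
major, minor = (int(x) for x in release.split(".")[:2])
if (major, minor) < (4, 19):
    raise SystemExit(f"Kernel {release} predates the Rome X2APIC/IOMMU patches")

visible = os.cpu_count()                    # logical CPUs exposed to the OS
print(f"Kernel {release}: {visible}/{EXPECTED_THREADS} expected threads visible")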

You will notice that the DRAM capacity varies among our server configurations. This is simply because the Xeons have access to six memory channels, while the EPYC CPUs have eight. As far as we know, all of our tests fit within 128 GB, so DRAM capacity should not have much influence on performance.
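For reference, the capacity gap follows directly from populating one 32 GB DIMM per channel on every socket; the quick back-of-the-envelope calculation below is purely illustrative (the DIMM size matches what we installed, the rest is just counting channels).

# Illustrative DIMM arithmetic: why 8 channels yields 512 GB and 6 channels yields 384 GB.
DIMM_GB = 32                                               # one 32 GB DIMM per channel

def total_capacity_gb(sockets, channels_per_socket, dimms_per_channel=1):
    return sockets * channels_per_socket * dimms_per_channel * DIMM_GB

print(total_capacity_gb(2, 8))   # dual EPYC (8 channels/socket)  -> 512 (16 x 32 GB)
print(total_capacity_gb(2, 6))   # dual Xeon (6 channels/socket)  -> 384 (12 x 32 GB)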


AMD Daytona - Dual EPYC 7742

AMD sent us the "Daytona XT" server, a reference platform built by ODM Quanta (D52BQ-2U).

CPU: Two AMD EPYC 7742 (2.25 GHz, 64c, 256 MB L3, 225W)
RAM: 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks: Samsung MZ7LM240 (boot disk), Micron 9300 3.84 TB (data)
Motherboard: Daytona reference board S5BQ
PSU: PWS-1200

Although the 225W TDP CPUs need extra heatpipes and heatsinks, they are still running on air cooling...

AMD EPYC 7601 (2U Chassis)

CPU: Two AMD EPYC 7601 (2.2 GHz, 32c, 8x8 MB L3, 180W)
RAM: 512 GB (16x32 GB) Samsung DDR4-2666 @ 2400
Internal Disks: Samsung MZ7LM240 (boot disk), Intel DC S3710 800 GB (data)
Motherboard: AMD Speedway
PSU: 1100W PSU (80+ Platinum)

Intel's Xeon "Purley" Server – S2P2SY3Q (2U Chassis)

CPU: Two Intel Xeon Platinum 8280 (2.7 GHz, 28c, 38.5 MB L3, 205W), or
     Two Intel Xeon Platinum 8176 (2.1 GHz, 28c, 38.5 MB L3, 165W)
RAM: 384 GB (12x32 GB) Hynix DDR4-2666
Internal Disks: Samsung MZ7LM240 (boot disk), Micron 9300 3.84 TB (data)
Motherboard: Intel S2600WF (Wolf Pass baseboard)
Chipset: Intel Wellsburg B0
PSU: 1100W PSU (80+ Platinum)

We enabled hyper-threading and Intel virtualization acceleration.

Comments

  • krumme - Thursday, August 8, 2019 - link

    Because he is fed by another hand.
    Enjoy the objectivity from Johan, as it is very rare these days. It's not easy for AT to post this stuff. So kudos to them.
  • hoohoo - Thursday, August 8, 2019 - link

    Nice review, but tbh I think you should run the AMD system as such, not limit its RAM to what the Intel system maxes out at. I would not buy a system and configure it to the limits of the competition: I would configure it to its actual limits.
  • yankeeDDL - Thursday, August 8, 2019 - link

    Wow. "Blasted" is the only word that comes to mind. Good job AMD.
  • eastcoast_pete - Thursday, August 8, 2019 - link

    Thanks Johan and Ian! Impressive results, glad to see that AMD is once again making Intel sweat, all of which can only be good for us.
    Question: A bit out of left field, but why does AMD put the 7 nm dies in these close pairs, as opposed to leaving a little more space between them? Wouldn't thermals be better if each chip gets a little more "reserved" lid space? Just curious. Thanks!
  • sharath.naik - Thursday, August 8, 2019 - link

    Now that we are finally entering the era where a single server (yes, backup is in addition) is enough for most smaller organizations, there is one thing that is needed: built-in OS limits/zones that can cap CPUs and memory, instead of using VMs. This would save a lot of the resources wasted on booting up an entire OS for individual applications. Linux has the ability to target specific CPUs, but I'm not sure Windows has it. There is a need for a standardized way to limit resources by process and by user.
  • mdriftmeyer - Thursday, August 8, 2019 - link

    Agreed.
  • quorm - Thursday, August 8, 2019 - link

    Is it possible you haven't heard of docker?
  • abufrejoval - Sunday, August 11, 2019 - link

    or OpenVZ/Virtuozzo or quite simply cgroups. Can even nest them, including with VMs.
  • DillholeMcRib - Thursday, August 8, 2019 - link

    destruction … Intel sat on their proverbial hands too long. It's over.
  • crotach - Friday, August 9, 2019 - link

    Bye Intel!!
