Murphy's Law

Anything That Can Go Wrong, Will Go Wrong

For those of you who may not know, I am an Academic Director of MCT at Howest University here in Belgium. I perform research in our labs on big data analytics, virtualization, cloud computing, and server technology in general. We do all of our testing in that lab, and I also do launch article testing for AnandTech.

Like most academic institutions, we have a summer vacation, during which our labs are locked and we are told to get some sunlight. AMD's Rome launch happened just as our lab closed for the summer, so I had the Rome server delivered to my home lab instead. The only issue was that our corresponding Intel server was still in the academic lab. Normally this isn't really a problem: even when the lab is open, I work through remote access to reboot the systems, run the tests, and process the data. If a hardware change is needed I have to be physically present, but usually that isn't an issue.

However, as Murphy's Law would have it, our domain controller crashed in the middle of testing for this review, while the labs were closed, and we could no longer reach our older servers. This limited our testing somewhat: while I could test the Rome system during normal hours in the home lab (running it overnight isn't really an option, as it is a server and therefore loud), I couldn't issue any new benchmarks to our Naples and Cascade Lake systems in the lab.

As a result, our only option was to limit ourselves to the benchmarks already done on the EPYC 7601, Skylake, and Cascade Lake machines. Rest assured that we will be back with our usual Big Data/AI and other real-world tests once our complete testing infrastructure is up and running again.

Benchmark Configuration and Methodology

All of our testing was conducted on Ubuntu Server 18.04 LTS, except for the EPYC 7742 server, which was running Ubuntu 19.04. The reason was simple: we were told that 19.04 had validated support for Rome, and with two weeks of testing time, we wanted to complete what was possible. Support (including X2APIC/IOMMU patches to utilize 256 threads) for Rome is available with Linux Kernel 4.19 and later. 
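
As a quick sanity check before benchmarking, a few lines of Python can confirm both points: that the running kernel is at least 4.19, and that all 256 threads of a dual EPYC 7742 setup are actually online. This is just a minimal sketch of our own (it assumes a standard Linux /proc and /sys layout and is not part of any vendor tooling):

    #!/usr/bin/env python3
    # Minimal sanity check: kernel new enough for Rome, and all logical CPUs online.
    import os
    import platform

    # Kernel 4.19+ carries the X2APIC/IOMMU work needed to address 256 threads.
    MIN_KERNEL = (4, 19)

    release = platform.release()  # e.g. "5.0.0-25-generic" on Ubuntu 19.04
    major, minor = (int(x) for x in release.split(".")[:2])
    if (major, minor) < MIN_KERNEL:
        print(f"Kernel {release} is older than 4.19; Rome may not expose all 256 threads")
    else:
        print(f"Kernel {release} looks OK")

    # On a dual EPYC 7742 with SMT enabled we expect 2 sockets x 64 cores x 2 threads = 256.
    with open("/sys/devices/system/cpu/online") as f:
        print("Online CPUs:", f.read().strip())  # e.g. "0-255"
    print("Logical CPUs visible to the OS:", os.cpu_count())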

You will notice that the DRAM capacity varies among our server configurations. This is of course a result of the fact that the Xeons have access to six memory channels per socket, while the EPYC CPUs have eight. As far as we know, all of our tests fit in 128 GB, so DRAM capacity should not have much influence on performance.
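
With one 32 GB DIMM per channel, that works out to 2 sockets x 8 channels x 32 GB = 512 GB on the EPYC machines, versus 2 sockets x 6 channels x 32 GB = 384 GB on the Xeon system.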


AMD Daytona - Dual EPYC 7742

AMD sent us the "Daytona XT" server, a reference platform built by ODM Quanta (D52BQ-2U).

CPU: Two AMD EPYC 7742 (2.25 GHz, 64c, 256 MB L3, 225W)
RAM: 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks: Samsung MZ7LM240 (boot disk)
Micron 9300 3.84 TB (data)
Motherboard: Daytona reference board S5BQ
PSU: PWS-1200

Although the 225W TDP CPUs need extra heatpipes and heatsinks, they are still running on air cooling...

AMD EPYC 7601 (2U Chassis)

CPU: Two AMD EPYC 7601 (2.2 GHz, 32c, 8x8 MB L3, 180W)
RAM: 512 GB (16x32 GB) Samsung DDR4-2666 (running at 2400)
Internal Disks: Samsung MZ7LM240 (boot disk)
Intel SSD DC S3710 800 GB (data)
Motherboard: AMD Speedway
PSU: 1100W (80+ Platinum)

Intel's Xeon "Purley" Server – S2P2SY3Q (2U Chassis)

CPU: Two Intel Xeon Platinum 8280 (2.7 GHz, 28c, 38.5 MB L3, 205W), or
Two Intel Xeon Platinum 8176 (2.1 GHz, 28c, 38.5 MB L3, 165W)
RAM: 384 GB (12x32 GB) Hynix DDR4-2666
Internal Disks: Samsung MZ7LM240 (boot disk)
Micron 9300 3.84 TB (data)
Motherboard: Intel S2600WF (Wolf Pass baseboard)
Chipset: Intel Wellsburg B0
PSU: 1100W (80+ Platinum)

We enabled hyper-threading and Intel virtualization acceleration.

Comments

  • Kevin G - Wednesday, August 7, 2019 - link

    Clock speeds. AMD is being very aggressive on clocks here but the Ryzen 3000 series were still higher. I would expect new Threadripper chips to clock closer to their Ryzen 3000 cousins.

    AMD *might* differentiate Threadripper by cache amounts. While the CPU cores work, they may end up binning Threadripper based upon the amount of cache that wouldn't pass memory tests.

    Last thing would be price. The low end Epyc chips are not that expensive but suffer from low cores/low clocks. Threadripper can offer more for those prices.
  • quorm - Wednesday, August 7, 2019 - link

    Here's hoping we see a 16 core threadripper with a 4ghz base clock.
  • azfacea - Wednesday, August 7, 2019 - link

    Half the memory channels, half the PCIe lanes. Also, I think with EPYC AMD spends more on support and system development. I can see 48c/64c Threadripper coming in 30-40% lower and not affecting EPYC.
  • twtech - Wednesday, August 7, 2019 - link

    If they gimp the memory access again, it mostly defeats the purpose of TR as a workstation chip. You'd want an Epyc anyway.
  • quorm - Wednesday, August 7, 2019 - link

    Well, on the plus side, the i/o die should solve the asymmetric memory access problem.
  • ikjadoon - Wednesday, August 7, 2019 - link

    Stunning.
  • aryonoco - Wednesday, August 7, 2019 - link

    Between 50% and 100% higher performance while costing 40% to 50% less. Stunning!

    I remember the sad days of Opteron in 2012 and 2013. If you'd told me that by the end of the decade AMD would be in this position, I'd have wanted to know what you're on.

    Everyone at AMD deserves a massive cheer, from the technical and engineering team all the way to Lisa Su, who is redefining what "execution" means.

    Also thanks for the testing Johan, I can imagine testing this server at home with Europe's recent heatwave would have not been fun. Good to see you writing frequently for AT again, and looking forward to more of your real world benchmarks.
  • twtech - Wednesday, August 7, 2019 - link

    It's as much about Intel having dropped the ball over the past few years as it is about AMD's execution.

    According to Intel's old roadmaps, they ought to be transitioning past 10nm on to 7nm by now, and AMD's recent releases in that environment would have seemed far less impressive.
  • deltaFx2 - Wednesday, August 7, 2019 - link

    Yeah, except I don't remember anyone saying Intel was going great guns because AMD dropped the ball in the bulldozer era. AMD had great bulldozer roadmaps too, it didn't matter much. If bulldozer had met its design targets maybe Nehalem would not be as impressive... See, nobody ever says that. It's almost like if AMD is doing well, it's not because they did a good job but intel screwed up.

    Roadmaps are cheap. Anyone can cobble together a powerpoint slide.
  • Lord of the Bored - Thursday, August 8, 2019 - link

    Well, it is a little of both on both sides.
    Intel's been doing really well in part because AMD bet hard on Bulldozer and it didn't pay out.

    Similarly, when AMD's made really good processors but Intel was on their game, it didn't much matter. The Athlon and the P2/3 traded blows in the Megahertz wars, but in the end AMD couldn't actually break Intel because Intel made crooked business deals*backspace* because AMD was great, but not actually BETTER.

    The Athlon 64 was legendary because AMD was at the top of their game and Intel was riding THEIR Bulldozer into the ground at the same time. If the Pentium Mobile hadn't existed, thus delaying a Netburst replacement, things would be very different right now.
