Test Bed and Setup - Compiler Options

For the rest of our performance testing, we’re disclosing the details of the various test setups:

AMD - Dual EPYC 7763 / 7713 / 75F3 / 7662

In terms of testing the new EPYC 7003 series CPUs, unfortunately, due to our malfunctioning Daytona server, we weren’t able to get first-hand experience with the hardware. AMD graciously gave us remote access to one of their server clusters, with full control of the system in terms of both the BMC and BIOS settings.

CPU: 2x AMD EPYC 7763 (2.45-3.500 GHz, 64c, 256 MB L3, 280W) /
2x AMD EPYC 7713 (2.00-3.675 GHz, 64c, 256 MB L3, 225W) /
2x AMD EPYC 75F3 (2.95-4.000 GHz, 32c, 256 MB L3, 280W) /
2x AMD EPYC 7662 (2.00-3.300 GHz, 64c, 256 MB L3, 225W)
RAM: 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks: Varying
Motherboard: Daytona reference board: S5BQ
PSU: PWS-1200

Software-wise, we ran Ubuntu 20.10 images with the latest release 5.11 Linux kernel. Performance settings both in the OS and in the BIOS were left at their defaults, including the regular schedutil-based frequency governor and the CPUs running in performance determinism mode at their respective default TDPs, unless otherwise indicated.
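As a quick sanity check, the active governor can be read from sysfs; a minimal sketch below uses the standard Linux cpufreq path, which may be absent in VMs or containers:

```shell
# Read the active cpufreq governor for core 0 (standard Linux sysfs path;
# not every environment exposes cpufreq, so guard the read).
gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov_file" ]; then
    cat "$gov_file"   # "schedutil" on our default-configured test systems
else
    echo "cpufreq interface not available"
fi
```

The same path family (`scaling_min_freq`, `scaling_max_freq`) is useful for confirming that no frequency caps were inadvertently left in place between benchmark runs.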

AMD - Dual EPYC 7742

Our local AMD EPYC 7742 system, due to the aforementioned issues with the Daytona hardware, is running on a SuperMicro H11DSI Rev 2.0.

CPU: 2x AMD EPYC 7742 (2.25-3.40 GHz, 64c, 256 MB L3, 225W)
RAM: 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks: Crucial MX300 1TB
Motherboard: SuperMicro H11DSI0
PSU: EVGA 1600 T2 (1600W)

As an operating system we’re using Ubuntu 20.10 with no further optimisations. In terms of BIOS settings we’re using complete defaults, including retaining the default 225W TDP of the EPYC 7742s, and leaving further CPU configurables on auto, except for the NPS settings, where we explicitly state the configuration in the results.
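The NPS (NUMA nodes per socket) BIOS setting changes how many NUMA nodes each socket exposes to the OS, which can be verified from userspace. A minimal sketch, assuming `numactl` may or may not be installed:

```shell
# Count the NUMA nodes the kernel sees: on a dual-socket EPYC system,
# NPS1 yields 2 nodes, NPS4 yields 8. Falls back to sysfs if numactl
# is not installed.
if command -v numactl >/dev/null 2>&1; then
    numactl --hardware | head -n 1    # e.g. "available: 2 nodes (0-1)"
else
    ls -d /sys/devices/system/node/node* 2>/dev/null | wc -l
fi
```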

The system has all relevant security mitigations activated against speculative store bypass and Spectre variants.
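On Linux, the kernel’s view of these mitigations can be listed directly; a minimal sketch using the standard sysfs vulnerabilities interface (available on kernel 4.15 and later):

```shell
# List each known CPU vulnerability and the kernel's mitigation status,
# one line per entry, e.g. "spectre_v2: Mitigation: ...".
vuln_dir=/sys/devices/system/cpu/vulnerabilities
if [ -d "$vuln_dir" ]; then
    grep -r . "$vuln_dir"
else
    echo "vulnerabilities interface not available"
fi
```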

Ampere "Mount Jade" - Dual Altra Q80-33

For the Ampere Altra we’re using the Mount Jade server as provided and configured by Ampere. The system features two Altra Q80-33 processors on Ampere’s Mount Jade DVT motherboard.

In terms of memory, we’re using the bundled 16 DIMMs of 32 GB Samsung DDR4-3200, for a total of 512 GB, or 256 GB per socket.

CPU: 2x Ampere Altra Q80-33 (3.3 GHz, 80c, 32 MB L3, 250W)
RAM: 512 GB (16x32 GB) Samsung DDR4-3200
Internal Disks: Samsung MZ-QLB960NE 960GB / Samsung MZ-1LB960NE 960GB
Motherboard: Mount Jade DVT Reference Motherboard
PSU: 2000W (94%)

The system came preinstalled with CentOS 8, and we continued using that OS. It’s worth noting that the server is Arm SBSA compliant, so it can run any standard Linux distribution.

The only other note to make is that the OS is running with 64KB pages rather than the usual 4KB pages. This can be seen either as a testing discrepancy or as an advantage of the Arm system: the next page size step on x86 is 2MB huge pages, which aren’t feasible for general use-case testing and are something deployments would have to explicitly enable.
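The page size the kernel exposes to userspace can be queried with a one-liner, which is how this discrepancy shows up in practice:

```shell
# Print the MMU page size visible to userspace: 4096 on the x86 systems,
# 65536 on the Altra's 64KB-page CentOS install.
getconf PAGESIZE
```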

The system has all relevant security mitigations activated, including SSBS (Speculative Store Bypass Safe) against Spectre variants.

Intel - Dual Xeon Platinum 8280

For the Intel system we’re also using a test-bench setup with the same SSD and OS image as on the EPYC 7742 system.

Because the Xeons have only six memory channels per socket, maximum capacity is limited to 384 GB of the same Micron memory, running at 2933 MHz to remain in spec with the processor’s capabilities.

CPU: 2x Intel Xeon Platinum 8280 (2.7-4.0 GHz, 28c, 38.5 MB L3, 205W)
RAM: 384 GB (12x32 GB) Micron DDR4-3200 (running at 2933 MHz)
Internal Disks: Crucial MX300 1TB
Motherboard: ASRock EP2C621D12 WS
PSU: EVGA 1600 T2 (1600W)

The Xeon system was similarly run on BIOS defaults on an ASRock EP2C621D12 WS with the latest firmware available.

The system has all relevant security mitigations activated against the various vulnerabilities.

Compiler Setup

For compiled tests, we’re using the release version of GCC 10.2. The toolchain was compiled from scratch on both the x86 systems as well as the Altra system. We’re using shared binaries with the system’s libc libraries.

It’s to be noted that for AMD’s latest Zen3-based EPYC 7003 CPUs, GCC 10.2 does not yet support the relevant -march=znver3 CPU target. As our goal is to keep apples-to-apples comparisons between the various systems, we resorted to using the same -znver2 binaries on the new 3rd generation EPYC parts.
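Whether a given GCC build accepts the Zen 3 target can be probed cheaply before choosing flags; a minimal sketch (later GCC releases, 10.3 and 11 onwards, added `-march=znver3`):

```shell
# Probe the installed GCC for znver3 support by preprocessing an empty
# translation unit; fall back to the znver2 target used in this article.
if gcc -march=znver3 -E -x c /dev/null >/dev/null 2>&1; then
    MARCH="-march=znver3"
else
    MARCH="-march=znver2"
fi
echo "using $MARCH"
```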

AMD notes performance benefits from its new LLVM 11-based AOCC 3.0 compiler, which features Zen3 optimisations. That compiler version was only being released at the time of publishing, so we haven’t had the opportunity to verify these claims.


Comments

  • mode_13h - Saturday, March 20, 2021 - link

    Okay, thanks for confirming with them.
  • mode_13h - Saturday, March 20, 2021 - link

    It's not the easiest thing to confirm with a test, since you'd have to come along behind the writer and observe that a write that SHOULD still be in cache isn't.
  • CBeddoe - Monday, March 15, 2021 - link

    I'm excited by AMD's continuing design improvements.
    Can't wait to see what happens with the next node shrink. Intel has some catching up to do.
  • Ppietra - Tuesday, March 16, 2021 - link

Can someone please explain how it is possible that the power consumption of the whole package is so much higher than the power consumption of the actual cores doing the work?
  • Spunjji - Friday, March 19, 2021 - link

    Because the I/O die is running on an older 14nm process and is servicing all of the cores. In a 64-core CPU, the per-core power use of the I/O die is less than 2W. Still too much, of course, but in context not as obscene as it looks when you look at the total power.
  • Elstar - Tuesday, March 16, 2021 - link

    Lest it go unsaid, I really appreciate the "compile a big C++ project" benchmark (i.e. LLVM). Thank you!
  • Spunjji - Tuesday, March 16, 2021 - link

    "To that end, all we have to compare Milan to is Intel’s Cascade Lake Xeon Scalable platform, which was the same platform we compared Rome to."

    Says it all, really. Good work AMD, and cheers to the team for the review!
  • Hifihedgehog - Tuesday, March 16, 2021 - link

    Sysadmin: Ram? Rome?

    AMD: Milan, darling, Milan...
  • Ivan Argentinski - Tuesday, March 16, 2021 - link

Congrats on going more in-depth on per-core performance! For many enterprise buyers, this is the most (only?) important metric. I do suspect that in this regard, the 8 core 72F3 will actually be the best 3rd gen EPYC!

But to better understand this, we need more tests and per-core comparisons. I would suggest comparing:
    * All current AMD fast/frequency optimized CPUs - EPYC 72F3, 73F3, ...
    * Previous gen AMD fast/frequency CPUs like EPYC 7F32, ...
    * Intel Frequency optimized CPUs like Xeon Gold 6250, 6244, ...

    The only metric that matters is per-core performance under full *sustained* load.

    Exploring the dynamic TDP of AMD EPYC 3rd gen is also an interesting option. For example, I am quite curious about configuring 72F3 with 200W instead of the default 180W.
  • Andrei Frumusanu - Saturday, March 20, 2021 - link

    If we get more SKUs to test, I'll be sure to do so.
