Test Bed and Setup - Compiler Options

For the rest of our performance testing, we’re disclosing the details of the various test setups:

AMD - Dual EPYC 7763 / 7713 / 75F3 / 7662

In terms of testing the new EPYC 7003 series CPUs, unfortunately, due to our malfunctioning Daytona server, we weren’t able to get first-hand experience with the hardware. AMD graciously gave us remote access to one of their server clusters, where we had full control of the system, including the BMC as well as the BIOS settings.

CPU: 2x AMD EPYC 7763 (2.45-3.50 GHz, 64c, 256 MB L3, 280W) /
     2x AMD EPYC 7713 (2.00-3.675 GHz, 64c, 256 MB L3, 225W) /
     2x AMD EPYC 75F3 (2.95-4.00 GHz, 32c, 256 MB L3, 280W) /
     2x AMD EPYC 7662 (2.00-3.30 GHz, 64c, 256 MB L3, 225W)
RAM: 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks: Varying
Motherboard: Daytona reference board (S5BQ)
PSU: PWS-1200

Software-wise, we ran Ubuntu 20.10 images with the latest release 5.11 Linux kernel. Performance settings both in the OS as well as in the BIOS were left at their defaults, including the regular schedutil frequency governor and the CPUs running in performance determinism mode at their respective default TDPs, unless otherwise indicated.
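For reference, the active frequency governor is easy to double-check from the OS through the kernel's standard cpufreq sysfs interface. A minimal sketch (assuming a Linux system with cpufreq enabled):

    /* Print the cpufreq scaling governor of each CPU via the standard
     * Linux sysfs interface. */
    #include <stdio.h>

    int main(void)
    {
        char path[128], gov[64];

        for (int cpu = 0; ; cpu++) {
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor", cpu);
            FILE *f = fopen(path, "r");
            if (!f)
                break;                          /* no more CPUs (or cpufreq absent) */
            if (fgets(gov, sizeof(gov), f))
                printf("cpu%d: %s", cpu, gov);  /* e.g. "cpu0: schedutil" */
            fclose(f);
        }
        return 0;
    }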

AMD - Dual EPYC 7742

Our local AMD EPYC 7742 system, due to the aforementioned issues with the Daytona hardware, is running on a SuperMicro H11DSI Rev 2.0.

CPU: 2x AMD EPYC 7742 (2.25-3.40 GHz, 64c, 256 MB L3, 225W)
RAM: 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks: Crucial MX300 1TB
Motherboard: SuperMicro H11DSi Rev 2.0
PSU: EVGA 1600 T2 (1600W)

As an operating system we’re using Ubuntu 20.10 with no further optimisations. In terms of BIOS settings we’re using complete defaults, including retaining the default 225W TDP of the EPYC 7742s, as well as leaving further CPU configurables on auto, except for the NPS settings, where we explicitly state the configuration in the results.
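The NPS setting in use is straightforward to verify from within the OS, since it directly determines how many NUMA nodes the kernel exposes (for example, a dual-socket EPYC system shows 2 nodes in NPS1 and 8 nodes in NPS4). A minimal sketch, assuming the standard Linux sysfs layout:

    /* Count the NUMA nodes the kernel exposes, to confirm the NPS setting. */
    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>
    #include <dirent.h>

    int main(void)
    {
        DIR *d = opendir("/sys/devices/system/node");
        if (!d) {
            perror("opendir");
            return 1;
        }

        int nodes = 0;
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            /* entries of interest are named "node0", "node1", ... */
            if (strncmp(e->d_name, "node", 4) == 0 &&
                isdigit((unsigned char)e->d_name[4]))
                nodes++;
        }
        closedir(d);

        printf("NUMA nodes visible to the OS: %d\n", nodes);
        return 0;
    }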

The system has all relevant security mitigations activated against speculative store bypass and Spectre variants.

Ampere "Mount Jade" - Dual Altra Q80-33

For the Ampere Altra system we’re using the Mount Jade server as provided and configured by Ampere. The system features two Altra Q80-33 processors on Ampere’s Mount Jade DVT motherboard.

In terms of memory, we’re using the bundled 16 DIMMs of 32GB Samsung DDR4-3200, for a total of 512GB, or 256GB per socket.

CPU: 2x Ampere Altra Q80-33 (3.3 GHz, 80c, 32 MB L3, 250W)
RAM: 512 GB (16x32 GB) Samsung DDR4-3200
Internal Disks: Samsung MZ-QLB960NE 960GB
                Samsung MZ-1LB960NE 960GB
Motherboard: Mount Jade DVT Reference Motherboard
PSU: 2000W (94%)

The system came preinstalled with CentOS 8, and we continued using that OS. It’s to be noted that the server is Arm SBSA compliant, so essentially any standard Linux distribution can be run on it.

The only other note to make about the system is that the OS is running with 64KB pages rather than the usual 4KB pages. This can be seen either as a testing discrepancy or as an inherent advantage of the Arm system, given that the next page-size step for x86 systems is 2MB, which isn’t feasible for general use-case testing and is something deployments would have to decide to enable explicitly.
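The active page size is trivial to query at runtime; a minimal sketch (on the Altra setup described here this reports 65536 bytes, on the x86 systems 4096):

    /* Query the base page size the kernel is using. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("base page size: %ld bytes\n", sysconf(_SC_PAGESIZE));
        return 0;
    }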

The system has all relevant security mitigations activated, including SSBS (Speculative Store Bypass Safe) against Spectre variants.

Intel - Dual Xeon Platinum 8280

For the Intel system we’re also using a test-bench setup with the same SSD and OS image as on the EPYC 7742 system.

Because the Xeons only have 6-channel memory, their maximum capacity is limited to 384GB of the same Micron memory, running at a default 2933MHz to remain in-spec with the processor’s capabilities.

CPU: 2x Intel Xeon Platinum 8280 (2.7-4.0 GHz, 28c, 38.5 MB L3, 205W)
RAM: 384 GB (12x32 GB) Micron DDR4-3200 (running at 2933MHz)
Internal Disks: Crucial MX300 1TB
Motherboard: ASRock EP2C621D12 WS
PSU: EVGA 1600 T2 (1600W)
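To put the six-channel versus eight-channel difference into perspective, here is a quick back-of-the-envelope calculation of theoretical peak DRAM bandwidth per socket (each DDR4 channel is 64 bits wide, i.e. 8 bytes per transfer):

    /* Theoretical peak DRAM bandwidth per socket:
     * channels x transfer rate (MT/s) x 8 bytes per 64-bit DDR4 channel. */
    #include <stdio.h>

    int main(void)
    {
        double xeon = 6 * 2933e6 * 8 / 1e9;   /* 6ch DDR4-2933 */
        double epyc = 8 * 3200e6 * 8 / 1e9;   /* 8ch DDR4-3200 (EPYC, Altra) */

        printf("Xeon 8280   : %.1f GB/s per socket\n", xeon);  /* ~140.8 GB/s */
        printf("EPYC / Altra: %.1f GB/s per socket\n", epyc);  /* ~204.8 GB/s */
        return 0;
    }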

The Xeon system was similarly run on BIOS defaults on an ASRock EP2C621D12 WS with the latest firmware available.

The system has all relevant security mitigations activated against the various vulnerabilities.
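On all of the systems, the kernel’s view of the mitigation status can be double-checked through the standard sysfs interface; a minimal sketch:

    /* Dump the kernel's reported status for each known CPU vulnerability. */
    #include <stdio.h>
    #include <dirent.h>

    int main(void)
    {
        const char *base = "/sys/devices/system/cpu/vulnerabilities";
        DIR *d = opendir(base);
        if (!d) {
            perror("opendir");
            return 1;
        }

        struct dirent *e;
        char path[512], line[256];
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;                       /* skip "." and ".." */
            snprintf(path, sizeof(path), "%s/%s", base, e->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            if (fgets(line, sizeof(line), f))
                printf("%-24s %s", e->d_name, line);  /* e.g. "spectre_v2  Mitigation: ..." */
            fclose(f);
        }
        closedir(d);
        return 0;
    }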

Compiler Setup

For compiled tests, we’re using the release version of GCC 10.2. The toolchain was compiled from scratch on both the x86 systems as well as the Altra system. We’re using shared binaries with the system’s libc libraries.

It’s to be noted that for AMD’s latest Zen3-based EPYC 7003 CPUs, GCC 10.2 did not yet offer support for the relevant -march=znver3 CPU target. Given our goal of keeping apples-to-apples comparisons between the various systems, we resorted to using the same -march=znver2 binaries on the new 3rd generation EPYC parts.

AMD notes performance benefits with its new LLVM 11 based AOCC 3.0 compiler, which features Zen3 optimisations. That compiler version is only being released at the time of publishing, and thus we haven’t had the opportunity to verify these claims.
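As a sanity check that a binary really was built for the intended target, one can rely on GCC’s convention of predefining a macro named after the selected -march value (e.g. -march=znver2 defines __znver2__). A minimal sketch, assuming that macro-naming behaviour:

    /* Report which -march target this translation unit was built for,
     * based on GCC's architecture-name macros. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__znver3__)
        puts("built with -march=znver3");
    #elif defined(__znver2__)
        puts("built with -march=znver2");   /* the target used for all EPYC parts here */
    #elif defined(__znver1__)
        puts("built with -march=znver1");
    #else
        puts("built with another / generic -march target");
    #endif
        return 0;
    }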

Comments

  • aryonoco - Tuesday, March 16, 2021 - link

    Thanks for the excellent article Andrei and Ian. Really appreciate your work.

    Just wondering, is Johan no longer involved in server reviews? I'll really miss him.
  • Andrei Frumusanu - Saturday, March 20, 2021 - link

    Johan is no longer part of AT.
  • SanX - Tuesday, March 16, 2021 - link

    In summary, the difference in performance of 9 vs 8 (Milan vs Rome) means they are EQUAL. Not a single specific application shows more than that. So much for the many months of hype and blah blah.
  • tyger11 - Tuesday, March 16, 2021 - link

    Okay, now give us the new Zen 3 Threadripper Pro!
  • AusMatt - Wednesday, March 17, 2021 - link

    Page 4 text: "a 255 x 255 matrix" should read: "a 256 x 256 matrix".
  • hmw - Friday, March 19, 2021 - link

    What was the stepping for the Milan CPUs? B0? or B1?
  • mkbosmans - Saturday, March 20, 2021 - link

    These inter-core synchronisation latency plots are slightly misleading, or at least not representative of "real software". By fixing the cache line that is used to the first core in the system and then ping-ponging it between two other cores, you do not measure core-to-core latency, but rather core-to-cacheline-to-core latency, as expressed in the article. This is not how inter-thread communication usually works (in well-designed software).
    Allocating the cache line on the memory local to one of the ping-pong threads would make the plot more informative (although a bit more boring).
  • mode_13h - Saturday, March 20, 2021 - link

    Are you saying a single memory address is used for all combinations of core x core?

    Ultimately, I wonder if it makes any difference which NUMA domain the address is in, for a benchmark like this. Once it's in L1 cache, that's what you're measuring, no matter the physical memory address.

    Also, I take issue with the suggestion that core-to-core communication necessarily involves memory in one of the core's NUMA domains. A lot of cases where real-world software is impacted by core-to-core latency involve global mutexes and atomic counters that won't necessarily be local to either core.
  • mkbosmans - Saturday, March 20, 2021 - link

    Yes, otherwise the SE quadrant (socket 2 to socket 2 communication) would look identical to the NW quadrant, right?

    It does matter on which NUMA node the address is in, this is exactly what is addressed later in the article about Xeon having a better cache coherency protocol where this is less of an issue.

    From the software side, I was more thinking of HPC applications where a situation of threads exchanging data that is owned by one of them is the norm, e.g. using OpenMP or MPI. That is indeed a different situation from contention on global mutexes.
  • mode_13h - Saturday, March 20, 2021 - link

    How often is MPI used for communication *within* a shared-memory domain? I tend to think of it almost exclusively as a solution for inter-node communication.
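For readers who want to experiment with the ping-pong methodology discussed in this thread, below is a minimal sketch of such a core-to-core latency loop: two threads pinned to chosen cores bounce a single atomic flag back and forth and the average round-trip time is reported. The core numbers and iteration count are arbitrary examples, and note that the shared flag here simply lives in the binary's data segment, which is exactly the placement question raised above.

    /* Cache-line ping-pong between two pinned cores.
     * Build with: gcc -O2 -pthread pingpong.c */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdatomic.h>
    #include <pthread.h>
    #include <sched.h>
    #include <time.h>

    #define ITERS 1000000

    static _Atomic int flag = 0;                /* the cache line being bounced */

    static void pin_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    static void *pong(void *arg)
    {
        pin_to_core(*(int *)arg);
        for (int i = 0; i < ITERS; i++) {
            while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
                ;                               /* wait for ping */
            atomic_store_explicit(&flag, 0, memory_order_release);
        }
        return NULL;
    }

    int main(void)
    {
        int core_a = 0, core_b = 1;             /* example core pair */
        pthread_t t;
        pthread_create(&t, NULL, pong, &core_b);
        pin_to_core(core_a);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++) {
            atomic_store_explicit(&flag, 1, memory_order_release);
            while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
                ;                               /* wait for pong */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        pthread_join(t, NULL);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("core %d <-> core %d: %.1f ns per round trip\n",
               core_a, core_b, ns / ITERS);
        return 0;
    }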
