Test Bed and Setup - Compiler Options

For the rest of our performance testing, we’re disclosing the details of the various test setups:

Ampere "Mount Jade" - Dual Altra Max M128-30 / Altra Q80-33

For the Ampere system, we’re still using our Mount Jade server, manufactured by Wiwynn, featuring Ampere’s Mount Jade DVT reference motherboard, testing both the Q80-33 and the new M128-30.

In terms of memory, we’re using the bundled 16 DIMMs of 32GB Samsung DDR4-3200 for a total of 512GB – 256GB per socket.

CPU              2x Ampere Altra Max M128-30 (3.0 GHz, 128c, 16 MB L3, 250W) /
                 2x Ampere Altra Q80-33 (3.3 GHz, 80c, 32 MB L3, 250W)
RAM              512 GB (16x32 GB) Samsung DDR4-3200
Internal Disks   Samsung MZ-QLB960NE 960GB
                 Samsung MZ-1LB960NE 960GB
Motherboard      Mount Jade DVT Reference Motherboard
PSU              2000W (94%)

We’re running an Ubuntu 21.04 server image – naturally, because the system is Arm SBSA compliant, you’re able to boot any generic Linux distribution on it.

The system has all relevant security mitigations activated against the various vulnerabilities, including SSBS (Speculative Store Bypass Safe) against Spectre variants.
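
What the kernel actually reports for these mitigations can be verified through the standard sysfs interface. A minimal sketch in Python, assuming a Linux kernel recent enough (4.15+) to expose the vulnerabilities directory:

    # Print the kernel's reported mitigation status for each known
    # CPU vulnerability (spectre_v1, spectre_v2, spec_store_bypass, ...).
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        # Each file holds a one-line status such as
        # "Mitigation: Speculative Store Bypass disabled via prctl and seccomp"
        print(f"{entry.name}: {entry.read_text().strip()}")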

AMD - Dual EPYC 7763 / 75F3 / 7443 / 7343 / 72F3 

Our AMD platform for the Milan generation is GIGABYTE’s MZ72-HB0 rev. 3.0 board, serving as the primary test platform for the EPYC 7763, 75F3, 7443, 7343 and 72F3. The system is running under full default settings, meaning performance or power determinism as configured by AMD in their default SKU fuse settings.

CPU              2x AMD EPYC 7763 (2.45-3.50 GHz, 64c, 256 MB L3, 280W) /
                 2x AMD EPYC 75F3 (3.20-4.00 GHz, 32c, 256 MB L3, 280W) /
                 2x AMD EPYC 7443 (2.85-4.00 GHz, 24c, 128 MB L3, 200W) /
                 2x AMD EPYC 7343 (3.20-3.90 GHz, 16c, 128 MB L3, 190W) /
                 2x AMD EPYC 72F3 (3.70-4.10 GHz, 8c, 256 MB L3, 180W)
RAM              512 GB (16x32 GB) Micron DDR4-3200
Internal Disks   Crucial MX300 1TB
Motherboard      GIGABYTE MZ72-HB0 (rev. 3.0)
PSU              EVGA 1600 T2 (1600W)

Software-wise, we ran Ubuntu 20.10 images with the latest release 5.11 Linux kernel. Performance settings, both in the OS and in the BIOS, were left at their defaults, including the regular schedutil-based frequency governor, with the CPUs running in performance determinism mode at their respective default TDPs unless otherwise indicated.
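
The active frequency governor can be double-checked at runtime through sysfs. A minimal sketch in Python, assuming the standard cpufreq interface is present:

    # Confirm every core is running the schedutil frequency governor.
    import glob
    from pathlib import Path

    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")):
        governor = Path(path).read_text().strip()
        cpu = path.split("/")[5]  # e.g. "cpu0"
        if governor != "schedutil":
            print(f"{cpu}: running '{governor}' instead of schedutil")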

AMD - Dual EPYC 7713 / 7662

A few performance figures for the EPYC Milan 7713 and Rome 7662 parts were obtained on AMD’s Daytona system – unfortunately we no longer have access to these SKUs, which is why the platform differs.

CPU              2x AMD EPYC 7713 (2.00-3.365 GHz, 64c, 256 MB L3, 225W) /
                 2x AMD EPYC 7662 (2.00-3.30 GHz, 64c, 256 MB L3, 225W)
RAM              512 GB (16x32 GB) Micron DDR4-3200
Internal Disks   Varying
Motherboard      Daytona reference board: S5BQ
PSU              PWS-1200

AMD - Dual EPYC 7742

Our local AMD EPYC 7742 system is running on a SuperMicro H11DSI rev. 2.0 board.

CPU              2x AMD EPYC 7742 (2.25-3.40 GHz, 64c, 256 MB L3, 225W)
RAM              512 GB (16x32 GB) Micron DDR4-3200
Internal Disks   Crucial MX300 1TB
Motherboard      SuperMicro H11DSI (rev. 2.0)
PSU              EVGA 1600 T2 (1600W)

As an operating system we’re using Ubuntu 20.10 with no further optimisations. In terms of BIOS settings we’re using complete defaults, including retaining the default 225W TDP of the EPYC 7742s, as well as leaving further CPU configurables on auto, except for NPS settings, where we explicitly state the configuration in the results.
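
Since NPS (nodes per socket) is a BIOS-level setting, the effective configuration can be sanity-checked from the OS by counting the NUMA nodes the kernel exposes. A minimal sketch, assuming a dual-socket system as tested here:

    # Infer the effective NPS mode from the number of exposed NUMA nodes:
    # a dual-socket EPYC shows 2 nodes in NPS1, 4 in NPS2 and 8 in NPS4.
    import glob

    nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
    sockets = 2  # assumption: dual-socket system
    print(f"{len(nodes)} NUMA nodes -> NPS{len(nodes) // sockets}")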

The system has all relevant security mitigations activated against speculative store bypass and Spectre variants.

Intel - Dual Xeon Platinum 8380

For our new Ice Lake test system based on the Whitley platform, we’re using Intel’s SDP (Software Development Platform) 2SW3SIL4Q, featuring a 2-socket Intel server board (Coyote Pass).

The system is an airflow optimised 2U rack unit with otherwise little fanfare.

Our review setup solely includes the new Intel Xeon 8380 with 40 cores, a 2.3GHz base clock, 3.0GHz all-core boost, and 3.4GHz peak single-core boost. What’s unusual about this part, as noted in the intro, is that it’s running at a default 270W TDP, which is above what we’ve seen from previous generation non-specialised Intel SKUs.

CPU              2x Intel Xeon Platinum 8380 (2.3-3.4 GHz, 40c, 60 MB L3, 270W)
RAM              512 GB (16x32 GB) SK Hynix DDR4-3200
Internal Disks   Intel SSD P5510 7.68TB
Motherboard      Intel Coyote Pass (Server System S2W3SIL4Q)
PSU              2x Platinum 2100W

The system came with several SSDs, including Optane SSD P5800X units; however, we ran our test suite on the P5510 – not that our current benchmarks are I/O bound anyhow.

As per Intel guidance, we’re using the latest BIOS available with the 270 release microcode update.

Intel - Dual Xeon Platinum 8280

For the older Cascade Lake Intel system we’re also using a test-bench setup with the same SSD and OS image as on the EPYC 7742 system.

Because the Xeons only have six memory channels, their maximum capacity is limited to 384GB of the same Micron memory, running at a default 2933MHz to remain in spec with the processor’s capabilities.

CPU              2x Intel Xeon Platinum 8280 (2.7-4.0 GHz, 28c, 38.5 MB L3, 205W)
RAM              384 GB (12x32 GB) Micron DDR4-3200 (running at 2933MHz)
Internal Disks   Crucial MX300 1TB
Motherboard      ASRock EP2C621D12 WS
PSU              EVGA 1600 T2 (1600W)

The Xeon system was similarly run at BIOS defaults on the ASRock EP2C621D12 WS with the latest firmware available.

Compiler Setup

For compiled tests, we’re using the release version of GCC 10.2. The toolchain was compiled from scratch on both the x86 systems as well as the Altra system. We’re using shared binaries with the system’s libc libraries.
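
As a sanity check that the resulting binaries really are dynamically linked against the system libc, something along these lines can be used – a minimal sketch, where the bench.c source file and the -Ofast flag are illustrative assumptions, not our exact build configuration:

    # Build a test binary and verify it links against the shared libc.
    import subprocess

    subprocess.run(["gcc", "-Ofast", "-o", "bench", "bench.c"], check=True)
    ldd = subprocess.run(["ldd", "./bench"], capture_output=True, text=True)
    assert "libc.so" in ldd.stdout, "not dynamically linked against the shared libc"
    print(ldd.stdout)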

60 Comments

  • dullard - Thursday, October 7, 2021 - link

    Far too many people mistakenly think it is AMD vs Intel. In reality it is ARM vs (AMD + Intel together).
  • TheinsanegamerN - Thursday, October 7, 2021 - link

    In reality it's AMD VS INTEL, with ARM the red headed stepchild with 3 extra chromosomes drooling in the corner. x86 still commands 99% of the server market.
  • DougMcC - Thursday, October 7, 2021 - link

    And the reason is price/performance. These chips are pricey for what they deliver, and it shows in amazon instance costs. We looked at moving to graviton 2 instances in aws and even with the in-house pricing advantage there we would be losing 55+% performance for <25% price advantage.
  • eastcoast_pete - Thursday, October 7, 2021 - link

    Was/is it really that bad? Wow! I thought AWS is making a value play for their gravitons, your example suggests that isn't working so great.
  • mode_13h - Thursday, October 7, 2021 - link

    Could be that demand is simply outstripping their supply. Amazon isn't immune from chip shortages either, you know?
  • DougMcC - Thursday, October 7, 2021 - link

    It was for us. Could be that there are workload issues specific to us, though as a pretty basic j2ee app it's somewhat hard for me to imagine that we are unique.
  • lightningz71 - Friday, October 8, 2021 - link

    It is VERY workload dependent.
  • lemurbutton - Friday, October 8, 2021 - link

    Graviton2 is now 50% of all new instances at AWS.
  • DougMcC - Friday, October 8, 2021 - link

    Not super surprising. Even with the massive loss of performance, it's still cheaper. If you don't need performance, why wouldn't you choose the cheapest thing?
  • Wilco1 - Friday, October 8, 2021 - link

    In most cases Graviton is not only cheaper but also significantly faster. It's easy to find various examples:

    https://docs.keydb.dev/blog/2020/03/02/blog-post/
    https://about.gitlab.com/blog/2021/08/05/achieving...
    https://yegorshytikov.medium.com/aws-graviton-2-ar...
    https://www.instana.com/blog/best-practices-for-an...
