Disclaimer June 25th: The benchmark figures in this review have been superseded by our second follow-up Milan review article, where we observe improved performance figures on a production platform compared to AMD’s reference system in this piece.

Conclusion: AMD's Return to High Performance Compute

On the back of this launch, AMD has shown that it can once again supply enterprise processors to the market in volume. After a decade or more in which its Opteron brand first found success and then faded away, the EPYC product line has marked out a clear roadmap for AMD to re-enter this space. Back at the launch of the first generation EPYC in June 2017, AMD promised an ambitious three-year roadmap of significant performance improvements and a return to the high-performance x86 compute space, culminating in today’s launch.

The goal throughout that time was to bring customers back into the fold – to show them that AMD has ambitious roadmaps, and that the company can execute and deliver while offering competitive products. As a result, AMD’s lead OEM partners are now shipping sizeable volume, over 10% market share, and AMD is scoring big wins in major computing contracts, including two of the three announced US exascale systems, Frontier and El Capitan. Frontier, as we learned in our interview with AMD’s Forrest Norrod, is using a custom EPYC Milan-based processor called ‘Trento’, while El Capitan will be built around the next generation of EPYC after Milan, called Genoa.

Two Sides of a Coin

Milan is an evolution and iteration of the design principles that made Rome, with the new chip defined by its use of the newer Zen3 microarchitecture in its chiplet design, including larger characteristic changes such as the new unified 32MB L3 cache shared amongst the 8 cores of a single CCX/CCD. Where we see the direct results of these improvements is in the great uplifts in single-threaded and per-thread performance, with figures routinely reaching +20-25% across a wide variety of workloads. The new Milan parts have cores that better take advantage of the larger caches, and higher boost frequencies across the whole stack mean that per-core performance has seen big gains.

In particular, new chips such as the EPYC 75F3, with 32 cores boosting to 4.0 GHz, offer differentiation unlike anything else in the market right now, and AMD is sure to see a lot of success in use-cases which are either limited by per-core software licensing, or bound by service-level agreements that require higher per-core performance than the denser-core-count SKUs can deliver.

Where things aren’t quite as positive is in the generational peak performance metrics under full load of all cores. The problem here appears to be a generational regression in the power consumption of Milan's 'un-core' parts, i.e. everything that isn't a core – most likely the new, faster IOD, or possibly the new L3 cache design, is increasing the base power. This means idle power is higher, and the power available to the cores at full load falls behind, decreasing socket efficiency compared to Rome. So, while AMD has invested in a smaller redesign of the IOD in Milan to achieve better latencies and higher memory performance, it has come at a cost of socket efficiency and performance for some of the parts. There’s no real silver lining to the situation, and it's easily Milan’s glass jaw, hindering the chip from achieving even better performance.

Looking ahead, if Genoa is able to ditch the 14nm IOD in favour of a more modern process node, employ advanced packaging technologies such as X3D, and adopt more efficient power management, then even a 50 W reduction in un-core power would represent roughly a +50% increase in the power envelope available to the cores, and would also help AMD enable lower total-power offerings below 155 W on the latest generation cores.
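To put rough numbers on that claim: a 50 W un-core saving is only a ~+50% uplift to the core power budget if the power left over for the cores is in the region of 100 W to begin with. The figures in the short sketch below are illustrative assumptions (a 225 W socket TDP and a 125 W un-core), not measured values from this review.

#include <stdio.h>

int main(void)
{
    /* Illustrative, assumed figures -- not measured values from the review. */
    const double tdp_w       = 225.0;             /* assumed socket TDP           */
    const double uncore_w    = 125.0;             /* assumed IOD + L3 + I/O power */
    const double core_budget = tdp_w - uncore_w;  /* 100 W left for the cores     */
    const double saving_w    = 50.0;              /* hypothetical un-core saving  */

    printf("core power budget: %.0f W -> %.0f W (+%.0f%%)\n",
           core_budget, core_budget + saving_w,
           100.0 * saving_w / core_budget);       /* prints a +50% uplift */
    return 0;
}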

AMD Retains x86 Performance Leadership

From a competitive standpoint, Milan maintains a stark, one-sided performance advantage over its biggest competitor, Intel. Rome already offered more raw socket performance than the best Intel had to offer at the time, and the gap has only grown as Intel has not updated its line-up since. Intel has stated that its Ice Lake Xeon-SP family will arrive soon, but unless it manages to close the core-count gap, AMD looks to be in very good shape.

Meanwhile, as AMD focuses on Intel, the Arm competition has also entered the market with force through 2020, and designs such as the Ampere Altra are able to outperform the new top Milan SKUs in many throughput-bound workloads. AMD still has very clear advantages, such as far better memory performance thanks to its huge caches, and vastly superior per-thread performance with its specialised frequency-optimised SKUs. Still, it leaves AMD unable to claim outright performance leadership in every scenario, and gives the company another generational target to consider as it develops future cores.

AMD sets its own bar quite high with Milan: by aggressively emphasising its performance gains in the middle of the product stack, the general enterprise market will look on these parts very favorably. There is always room for improvement, but if AMD equips the next generation with a good I/O update, EPYC could stand to gain better-than-generational performance in the future. As it stands, the product is a very solid offering in light of the competition in the market.


120 Comments


  • aryonoco - Tuesday, March 16, 2021 - link

    Thanks for the excellent article Andrei and Ian. Really appreciate your work.

    Just wondering, is Johan no longer involved in server reviews? I'll really miss him.
  • Andrei Frumusanu - Saturday, March 20, 2021 - link

    Johan is no longer part of AT.
  • SanX - Tuesday, March 16, 2021 - link

    In summary, the difference in performance 9 vs 8 for (Milan vs Rome) means they are EQUAL. Not a single specific application which shows more than that. So much for the many months of hype and blahblah.
  • tyger11 - Tuesday, March 16, 2021 - link

    Okay, now give us the new Zen 3 Threadripper Pro!
  • AusMatt - Wednesday, March 17, 2021 - link

    Page 4 text: "a 255 x 255 matrix" should read: "a 256 x 256 matrix".
  • hmw - Friday, March 19, 2021 - link

    What was the stepping for the Milan CPUs? B0? or B1?
  • mkbosmans - Saturday, March 20, 2021 - link

    These inter-core synchronisation latency plots are slightly misleading, or at least not representative of "real software". By fixing the cache line that is used to the first core in the system and then ping-ponging it between two other cores, you do not measure core-to-core latency, but rather core-to-cacheline-to-core latency, as expressed in the article. This is not how inter-thread communication usually works (in well-designed software).
    Allocating the cache line in the memory local to one of the ping-pong threads would make the plot more informative (although a bit more boring).
  • mode_13h - Saturday, March 20, 2021 - link

    Are you saying a single memory address is used for all combinations of core x core?

    Ultimately, I wonder if it makes any difference which NUMA domain the address is in, for a benchmark like this. Once it's in L1 cache, that's what you're measuring, no matter the physical memory address.

    Also, I take issue with the suggestion that core-to-core communication necessarily involves memory in one of the cores' NUMA domains. A lot of cases where real-world software is impacted by core-to-core latency involve global mutexes and atomic counters that won't necessarily be local to either core.
  • mkbosmans - Saturday, March 20, 2021 - link

    Yes, otherwise the SE quadrant (socket 2 to socket 2 communication) would look identical to the NW quadrant, right?

    It does matter which NUMA node the address is in; this is exactly what is addressed later in the article, where the Xeon's better cache coherency protocol makes this less of an issue.

    From the software side, I was more thinking of HPC applications where a situation of threads exchanging data that is owned by one of them is the norm, e.g. using OpenMP or MPI. That is indeed a different situation from contention on global mutexes.
  • mode_13h - Saturday, March 20, 2021 - link

    How often is MPI used for communication *within* a shared-memory domain? I tend to think of it almost exclusively as a solution for inter-node communication.
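For readers curious about the ping-pong methodology debated in the comments above, a minimal sketch of that kind of measurement on Linux with pthreads is shown below. This is purely illustrative and is not the benchmark code used in the review; the pin_to_core() helper, the choice of cores 0 and 1, and the iteration count are all arbitrary assumptions. The point raised by mkbosmans concerns where the shared flag's cache line is homed: here it is a plain global first touched by the main thread, whereas a NUMA-aware variant would place it explicitly (for example with libnuma's numa_alloc_onnode()) either on a fixed node or local to one of the two participating threads.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000

/* Shared flag; as a global it is physically allocated on first touch (by main here). */
static _Atomic int flag;

static void pin_to_core(int core)          /* pin the calling thread to one core */
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg)
{
    pin_to_core((int)(long)arg);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ;                                                    /* wait for "ping" */
        atomic_store_explicit(&flag, 0, memory_order_release);   /* send "pong"     */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, ponger, (void *)(long)1);  /* responder on core 1 (arbitrary) */
    pin_to_core(0);                                     /* initiator on core 0 (arbitrary) */

    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < ITERS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);   /* send "ping"     */
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
            ;                                                    /* wait for "pong" */
    }
    clock_gettime(CLOCK_MONOTONIC, &b);
    pthread_join(t, NULL);

    double ns = (b.tv_sec - a.tv_sec) * 1e9 + (double)(b.tv_nsec - a.tv_nsec);
    printf("average round-trip: %.1f ns over %d iterations\n", ns / ITERS, ITERS);
    return 0;
}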
