AMD EPYC Milan Review Part 2: Testing 8 to 64 Cores in a Production Platform
by Andrei Frumusanu on June 25, 2021 9:30 AM EST
Conclusion & End Remarks
Today’s review article is in a sense very much a follow-up piece to our original Milan review back in March. There are two sides to today’s new performance numbers: the improved positioning and fixed power behaviour of the high-core-count SKUs, and the addition of the 16- and 24-core units to the competitive landscape.
Starting off with the platform change, the very odd power behaviour that we initially encountered on AMD’s Daytona platform back in March does not exhibit itself on our new GIGABYTE test platform. This means that the high idle power figures of +100W are gone, and that power budget went back towards the actual CPU cores, allowing the EPYC 7763 and EPYC 75F3 we’ve retested today to perform quite a bit better than what we had initially published in our Milan and follow-up Ice Lake SP reviews. The 7763’s socket performance increased +7.9% in SPECint2017, with many core-bound compute workloads seeing increases of +13%.
In general, this is a very welcome resolution to the one thorn in our side we initially encountered with Milan – it now represents a straight-up upgrade over Rome in every aspect, without compromises.
The second part of today’s review revolves around the lower core count SKUs in the Milan line-up.
Starting off with the 8-core 72F3, this is admittedly quite the odd part, and will not be of use to anybody but the most specialised deployments. The one thing that makes the chip stand out is its excellent per-thread performance. Its use-cases are those where per-core licences play a large role in the total cost of ownership, and there the chip should fill that role well.
The 24-core EPYC 7443 and 16-core EPYC 7343 are more interesting parts in the mid-stack given their excellent performance. Naturally, socket performance is lower than that of the higher core count SKUs, but it scales down sub-linearly with core count, meaning the lower-core parts retain stronger per-core performance.
The most interesting comparisons today pitted the 24- and 16-core Milan parts against Intel’s newest 28-core Xeon 6330, based on the new Ice Lake SP microarchitecture. The AMD parts are in the same price range as Intel’s chip, at $2010 and $1565 versus $1894. The 16-core chip actually mostly matches the performance of its 28-core competitor in many workloads while still showcasing a large per-thread performance advantage, while the 24-core part, being 6% more expensive, more notably showcases both a large +26% total throughput lead and a +47% per-thread performance lead. Database workloads are admittedly still AMD’s weakness here, but in every other scenario it’s clear which is the better value proposition.
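As a quick sanity check on the price positioning cited above, the deltas follow directly from the list prices in the text ($2010 for the EPYC 7443, $1565 for the EPYC 7343, $1894 for the Xeon 6330); the part names and helper function below are purely illustrative:

```python
# List prices (USD) as quoted in the article.
prices = {"EPYC 7443": 2010, "EPYC 7343": 1565, "Xeon 6330": 1894}

def price_delta_pct(a: str, b: str) -> float:
    """Percentage by which part `a` is more expensive (+) or cheaper (-) than part `b`."""
    return (prices[a] / prices[b] - 1) * 100

# The 24-core EPYC 7443 comes out roughly 6% above the Xeon 6330,
# while the 16-core EPYC 7343 undercuts it by a wider margin.
print(f"EPYC 7443 vs Xeon 6330: {price_delta_pct('EPYC 7443', 'Xeon 6330'):+.1f}%")
print(f"EPYC 7343 vs Xeon 6330: {price_delta_pct('EPYC 7343', 'Xeon 6330'):+.1f}%")
```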