Conclusion & End Remarks

Today’s review article is in many ways a follow-up piece to our original Milan review back in March. There are two sides to today’s new performance numbers: the improved positioning and fixed power behaviour of the high core count SKUs, as well as the addition of the 16- and 24-core units into the competitive landscape.

Starting off with the platform change: the very odd power behaviour that we initially encountered on AMD’s Daytona platform back in March does not exhibit itself on our new GIGABYTE test platform. This means that the high idle power figures of +100W are gone, and that power budget is directed back to the actual CPU cores, allowing the EPYC 7763 and EPYC 75F3 we’ve retested today to perform quite a bit better than what we had initially published in our Milan and follow-up Ice Lake SP reviews. The 7763’s socket performance increased +7.9% in SPECint2017, with many core-bound compute workloads seeing increases of +13%.

In general, this is a very welcome resolution to the one thorn in the side we initially encountered with Milan – it now represents a straight-up upgrade over Rome in every aspect, without compromises.

The second part of today’s review revolves around the lower core count SKUs in the Milan line-up.

Starting off with the 8-core 72F3: this is admittedly quite the odd part, and will not be of use to anybody but the most specialised deployments. The one thing that makes the chip stand out is its excellent per-thread performance. The use-cases for the chip are those where per-core licenses play a large role in the total cost of ownership, and here the chip should fill that role well.

The 24-core EPYC 7443 and 16-core EPYC 7343 are more interesting parts in the mid-stack given their excellent performance. Naturally, socket performance is lower than that of the higher core count SKUs, but it scales down sub-linearly with core count.

The most interesting comparisons today were pitting the 24- and 16-core Milan parts against Intel’s newest 28-core Xeon 6330, based on the new Ice Lake SP microarchitecture. The AMD parts are also in the same price range as Intel’s chip, at $2010 and $1565 versus $1894. The 16-core chip actually mostly matches the performance of the 28-core competitor in many workloads while still showcasing a large per-thread performance advantage, while the 24-core part, at 6% more expensive, more notably showcases both a large +26% throughput leadership and a large +47% per-thread performance leadership. Database workloads are admittedly still AMD’s weakness here, but in every other scenario, it’s clear which is the better value proposition.
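As a rough sanity check, the list prices and throughput deltas quoted above can be folded into a simple perf-per-dollar figure. This is only a sketch using the numbers in this paragraph, normalised to the Xeon 6330 as the 1.00 baseline; the assumption that the 16-core part roughly matches the Xeon's throughput follows the text above.

```python
# Rough price/performance comparison using the figures quoted above.
# Relative socket throughput is normalised to the 28-core Xeon 6330 (= 1.00).
parts = {
    # name: (list price in USD, relative socket throughput)
    "Xeon 6330 (28c)": (1894, 1.00),
    "EPYC 7443 (24c)": (2010, 1.26),  # +26% throughput vs. the Xeon
    "EPYC 7343 (16c)": (1565, 1.00),  # roughly matches the Xeon
}

for name, (price, perf) in parts.items():
    # Higher is better: relative throughput per $1000 of list price.
    value = perf / price * 1000
    print(f"{name}: {value:.3f} relative perf per $1000")
```

Even with the 24-core part's ~6% price premium over the Xeon, its +26% throughput still leaves it ahead on this metric, and the 16-core part wins simply on price.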

Comments

  • Threska - Sunday, June 27, 2021 - link

    Seems the only thing blunted is the economics of throwing more hardware at the problem. Actual technical development has taken off because all the chip-makers have multiple customers across many domains. That's why Anandtech and others are able to have articles like they have.
  • tygrus - Sunday, June 27, 2021 - link

    Reminds me of the inn keeper from Les Miserables. Nice to your face with lots of good promises but then tries to squeeze more money out of the customer at every turn.
  • tygrus - Sunday, June 27, 2021 - link

    I was of course referring to the SW, not the CPU.
  • 130rne - Tuesday, September 14, 2021 - link

    What the hell did I just read? Just came across this, I had no idea the enterprise side was this fucked. They are scalping the ungodly dog shit out of their own customers. So you obviously can't duplicate their software in house meaning you're forced to use their software to be competitive, that seems to be the gist. So I buy a stronger cpu, usually a newer model, yeah? And it's more power efficient, and I restrict the software to a certain number of threads on those cpus, they'll just switch the pricing model because I have a better processor. This would incentivize me to buy cheaper processors with less threads, yeah? Buy only what I need.
  • 130rne - Tuesday, September 14, 2021 - link

    Continued- basically gimping my own business, do I have that right? Yes? Ok cool, just making sure.
  • eachus - Thursday, July 15, 2021 - link

    There is a compelling use case that builders of military systems will be aware of. If you have an in-memory database and need real-time performance, this is your chip. Real-time doesn't mean really fast, it means that the performance of any command will finish within a specified time. So copy the database on initialization into the L3 cache, and assuming the process is handing the data to another computer for further processing, the data will stay in the cache. (Writes, of course, will go to main memory as well, but that's fine. You shouldn't be doing many writes, and again the time will be predictable--just longer.)

    I've been retired for over a decade now, so I don't have any knowledge of systems currently being developed.

    Who would use a system like this? A good example would be a radar recognition and countermeasures database. The fighter (or other aircraft) needs that data within milliseconds, microseconds is better.
  • hobbified - Thursday, August 19, 2021 - link

    At the time I was involved in that (~2010) it was per-core, with multiple cores on a package counting as "half a CPU" — that is, 1 core = 1CPU license, two 1-core packages = 2CPU license, one 2-core package = 1CPU license, 4 cores total = 2CPU license, etc.

    I'm told they do things in a completely different (but no less money-hungry) way these days.
  • lemurbutton - Friday, June 25, 2021 - link

    Can we get some metrics on $/performance as well as power/performance? I think the Altra part would be better value there.
  • schujj07 - Friday, June 25, 2021 - link

    "Database workloads are admittedly still AMD’s weakness here, but in every other scenario, it’s clear which is the better value proposition." I find this conclusion a bit odd. In MultiJVM max-jOPS the 2S 24c 7443 has ~70% the performance of the 2S 40c 8380 (SNC1 best result) despite having 60% the cores of the 8380. In the critical-jOPS the 7443's performance is between the 8380's SNC1 & SNC2 results despite the core disadvantage. To me that means that the DB performance of the Epyc isn't a weakness.

    I have personally run the SAP HANA PRD performance test on Epyc 7302's & 7401's. Both CPUs passed the SAP HANA PRD performance test requirements on ESXi 6.7 U3. However, I do not have scores from Intel based hosts for comparison of scores.
  • schujj07 - Friday, June 25, 2021 - link

    The DB conclusion also contradicts what I have read on other sites. https://www.servethehome.com/amd-epyc-7763-review-... Look at the MariaDB numbers for an explanation of what is being analyzed. Their 32c Epyc 7543P vs Xeon 6314U is also a nice core count vs core count comparison. https://www.servethehome.com/intel-xeon-gold-6314u... In that one the Epyc is ~20%+ faster in MariaDB than the Xeon.
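The per-core licensing scheme hobbified describes a few comments up (each core counts as half a CPU, rounded up per package, with a minimum of one license per package) can be sketched as follows. This is only an illustration of the arithmetic in that comment, not any vendor's actual licensing terms.

```python
from math import ceil

def cpu_licenses(package_core_counts):
    # Each core counts as half a CPU; round up per package, with a
    # minimum of one license per package (scheme as described in the
    # comment above -- an illustrative sketch, not real vendor terms).
    return sum(max(1, ceil(cores / 2)) for cores in package_core_counts)

print(cpu_licenses([1]))     # one 1-core package   -> 1 license
print(cpu_licenses([1, 1]))  # two 1-core packages  -> 2 licenses
print(cpu_licenses([2]))     # one 2-core package   -> 1 license
print(cpu_licenses([4]))     # 4 cores, one package -> 2 licenses
```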
