SPEC - MT Performance (16xlarge 64vCPU)

While the core scaling figures are interesting from an academic standpoint, what's even more interesting is seeing the absolute throughput numbers compared to the competition. We're starting off with the SPECrate results from 64-rate runs, fully utilising the vCPUs of the EC2 16xlarge instances.

Again, there’s the conundrum of the apples-and-oranges comparison between the Graviton2’s 64 physical cores and the 32-core plus SMT setups of the AMD and Intel platforms, but that’s how Amazon is positioning these systems in terms of throughput capacity and instance pricing. You could argue that once you can parallelise your workload across a certain number of threads, it doesn’t matter whether you achieve the higher throughput through more cores or through mechanisms such as SMT. Remember, in terms of silicon die area, you could probably fit at least two N1 cores in the same area as an AMD Zen core or an Intel core (likely an even higher number in the latter comparison).
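
To make that framing concrete, here is a minimal way to express the area argument using our own symbols (these are not figures from Arm, AMD, or Intel): let P_N1 be the throughput of a single N1 core, P_SMT the throughput of one x86 core with both SMT threads loaded, and A the die area of that x86 core. If roughly two N1 cores fit into the area A, then on an area-efficiency basis:

```latex
\frac{2\,P_{\mathrm{N1}}}{A} \;>\; \frac{P_{\mathrm{SMT}}}{A}
\quad\Longleftrightarrow\quad
P_{\mathrm{N1}} \;>\; \tfrac{1}{2}\,P_{\mathrm{SMT}}
```

In other words, under that area assumption each N1 core only has to deliver more than half the throughput of a fully loaded SMT core to come out ahead per unit of silicon.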

SPECint2006 Rate Estimated Scores (64 vCPU)

The Graviton2’s performance is absolutely impressive across the board, beating the Intel Cascade Lake system by quite large margins in a lot of the workloads. AMD’s EPYC system here doesn’t fare well at all and is showing its age.

SPECfp2006(C/C++) Rate Estimated Scores (64 vCPU)

It’s particularly in the non-memory-bound workloads that the Graviton2 manages to position itself significantly ahead, and here the advantage of a two-fold physical core lead, with essentially double the execution resources, shows its benefits.

SPEC2006 Rate-64 Estimated Total (16xlarge)

In the overall SPECrate2006 results, the Graviton2 falls shy of Arm’s projection of a 1300 score, but the Amazon chip does clock in a bit lower and has less cache than what Arm had envisioned in its presentations a year ago.

Nevertheless, the Graviton2 takes the performance lead here even against the Intel Cascade Lake based EC2 instances, which is quite surprising given the latter’s cost structure, and an indicator of what’s to come later in the cost analysis.
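
For readers wanting to reproduce overall estimates like these from per-workload charts: SPECrate suite scores are, to a first approximation, aggregated as a geometric mean of the individual workload rate scores. A minimal sketch of that aggregation in Python (the subscores below are placeholders for illustration, not the measured values from our charts):

```python
from math import prod

def specrate_estimate(subscores):
    """Aggregate per-workload SPECrate scores into a suite score
    via the geometric mean, as SPEC does for its rate metrics."""
    return prod(subscores) ** (1.0 / len(subscores))

# Placeholder per-workload rate scores; substitute the results
# of an actual 64-copy run to reproduce a suite estimate.
int_rate_subscores = [410.0, 385.0, 512.0, 298.0]
print(f"Estimated suite score: {specrate_estimate(int_rate_subscores):.1f}")
```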

SPECint2017 Rate Estimated Scores (64 vCPU)

Arm’s physical core count advantage continues to show in the execution-intensive workloads of SPECint2017, producing some very large performance leads in many workloads. The lead on important workloads such as 502.gcc over the Intel system, for example, again isn’t all that great; Amazon and Arm definitely could have done better here if the chip had had more cache available.

SPECfp2017 Rate Estimated Scores (64 vCPU)

In SPECfp2017, there are more workloads in which the Xeon system’s 2-socket setup, with its 50% memory channel advantage, shows up: the additional available bandwidth gives the more memory-intensive workloads in this suite a good performance advantage over the Graviton2 system. Still, the Arm chip fares very competitively and puts the older AMD EPYC processor in its place, and again, we have to remind ourselves that things would be quite different here if we were able to include Rome in our charts.

SPEC2017 Rate-64 Estimated Total (16xlarge)

Overall, the Graviton2 system has an undisputed lead in the SPECint2017 suite, whilst just edging out the Xeon system on average in the FP suite, only losing out in situations where the Xeon’s higher memory bandwidth comes into play.
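
Ahead of the cost analysis, the metric that ultimately matters falls out of these numbers directly: estimated rate score divided by the instance’s hourly price. A minimal sketch with placeholder scores and prices (substitute the measured results and current EC2 on-demand pricing, which change over time):

```python
def score_per_dollar(rate_score: float, hourly_price_usd: float) -> float:
    """SPECrate-style throughput delivered per dollar of instance-hour."""
    return rate_score / hourly_price_usd

# Placeholder figures for illustration only; none of these are
# measured scores or actual EC2 prices.
instances = {
    "Graviton2 16xlarge":    (1000.0, 2.50),
    "Cascade Lake 16xlarge": (900.0, 3.80),
    "EPYC 16xlarge":         (700.0, 2.75),
}
for name, (score, price) in instances.items():
    print(f"{name}: {score_per_dollar(score, price):.0f} score per $/hour")
```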

Comments

  • eek2121 - Tuesday, March 10, 2020 - link

    It is worth noting AnandTech’s own numbers: https://www.anandtech.com/show/14694/amd-rome-epyc...
  • RallJ - Tuesday, March 10, 2020 - link

I understand that, but considering everything boils down to just $/vCPU/hr, I think a discussion around the new Xeon Gold R is warranted. For example, the existing dual-socket Xeon Amazon is using could be substituted with the new 6248R at a 60% lower price, while providing a modest turbo and base frequency improvement and a slight TDP reduction versus the existing Platinum they have. Unless Amazon decides to pocket the savings, that would have a massive impact on the $/vCPU comparison.

    https://www.anandtech.com/show/15542/intel-updates...
  • Andrei Frumusanu - Tuesday, March 10, 2020 - link

    Hyperscalers never pay full list price for their special SKUs, so comparisons to public new SKUs like the 6248R are not relevant.

    We're happy to update the landscape once EC2 introduces newer generation instances, but for now, these are the current prices and costs for what's available today and in the next few months.
  • Spunjji - Wednesday, March 11, 2020 - link

    I'm confused. Either you can think that everything boils down to $/vCPU/hr, in which case the only thing that's relevant is what Amazon actually offer, or you can think that "a discussion around the 'new' Xeon Gold R is warranted". They're mutually exclusive.
  • close - Tuesday, March 10, 2020 - link

    Great write-up Andrei. One question (I hope I didn't miss the answer in the article). Does Amazon's chip come out in front in the cost analysis because Amazon decided to take a loss or overcharge for the other options, or is it an organic difference where it's intrinsically better?
  • Andrei Frumusanu - Tuesday, March 10, 2020 - link

    We have no idea of Amazon's internal cost structure, so take the cost analysis from an end-user TCO perspective.
  • eek2121 - Tuesday, March 10, 2020 - link

    I suspect the TDP of this chip is likely in the 150 watt range. We also know nothing about the operating environment of any of the chips. For example, the chip is rated for DDR4 3200, but is it running at 3200 speeds? The EPYC chip likely is NOT. So many questions here...
  • Andrei Frumusanu - Tuesday, March 10, 2020 - link

    It is running 3200, Amazon confirmed that.

    They didn't comment on TDP, but given Arm and Ampere's figures, I think my estimate is correct.
  • Flunk - Thursday, April 9, 2020 - link

    They're comparing VMs with the same cost/hour. The number of cores/threads isn't really relevant.
  • autarchprinceps - Sunday, October 25, 2020 - link

    That’s exactly why they reserved the entire hardware. If you run only a single workload on SMT, that single thread can use the entire core. That’s kind of the point of SMT.
