SPEC - Multi-Threaded Performance - Aggregate

Switching over to the aggregate geomean scores for the suites, we see a more moderate view of the generational improvements of the Altra Max chip:

SPECint2017 Base Rate-N Estimated Performance

In the integer suite, the M128-30 only sees a 6-10% advantage over the Q80-33, depending on whether we look at the 1- or 2-socket scores. It’s a smidge faster than the EPYC 7763, but there are more considerations here than just the total scores.

SPECfp2017 Base Rate-N Estimated Performance

In the floating-point suite, the system also posts rather lacklustre figures, with only a 3-4% advantage for the M128-30 over the Q80-33.

The general problem these scores showcase is a trend of the new Altra Max design: it’s not as general-purpose as we tend to expect a CPU to be. Even though we see large gains of 30-45% in many individual workloads, the suite’s “base” scores require running all workloads with the same number of instances, which at 128 cores on the Altra Max inevitably leads to performance regressions in anything that is more demanding on memory and caches.
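
To illustrate how a handful of memory-bound regressions can drag down the aggregate despite large gains elsewhere, here is a minimal sketch of the geometric-mean aggregation that SPECrate uses for its overall score. The per-workload ratios are made-up placeholders for illustration, not measured results:

```python
from math import prod

def geomean(scores):
    """Geometric mean of per-workload SPECrate ratios."""
    return prod(scores) ** (1 / len(scores))

# Hypothetical generational ratios (new chip vs. old), purely illustrative:
# several compute-bound workloads gain ~40%, a few memory-bound ones regress ~20%.
ratios = [1.40] * 6 + [0.80] * 4

print(f"Aggregate ratio: {geomean(ratios):.2f}")  # ~1.12 -> a far more modest overall gain
```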

When we first heard that the Altra Max would only feature a 16MB cache, we were quite pessimistic about this aspect of the design; that was also true of the 32MB cache of the 80-core Altra, where performance in some workloads simply cannot scale beyond a certain core count due to shared resource contention.
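
As a rough back-of-the-envelope illustration of that contention, the sketch below divides the shared cache evenly across one SPECrate instance per core. This is a simplification (it ignores the private L1/L2 caches and any partitioning behaviour), using only the 16MB and 32MB figures mentioned above:

```python
# Simplistic view: with one rate instance per core, the shared cache is
# effectively split across all running copies of the workload.
configs = {
    "Altra Q80-33 (80 cores, 32MB shared cache)": (80, 32 * 1024),
    "Altra Max M128-30 (128 cores, 16MB shared cache)": (128, 16 * 1024),
}

for name, (cores, cache_kb) in configs.items():
    print(f"{name}: ~{cache_kb / cores:.0f} KB of shared cache per instance")
# Q80-33:  ~410 KB per instance
# M128-30: ~128 KB per instance -> roughly 3.2x less shared cache per copy
```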

Comments

  • dullard - Thursday, October 7, 2021 - link

    Far too many people mistakenly think it is AMD vs Intel. In reality it is ARM vs (AMD + Intel together).
  • TheinsanegamerN - Thursday, October 7, 2021 - link

    In reality it's AMD VS INTEL, with ARM the red headed stepchild with 3 extra chromosomes drooling in the corner. x86 still commands 99% of the server market.
  • DougMcC - Thursday, October 7, 2021 - link

    And the reason is price/performance. These chips are pricey for what they deliver, and it shows in amazon instance costs. We looked at moving to graviton 2 instances in aws and even with the in-house pricing advantage there we would be losing 55+% performance for <25% price advantage.
  • eastcoast_pete - Thursday, October 7, 2021 - link

    Was/is it really that bad? Wow! I thought AWS is making a value play for their gravitons, your example suggests that isn't working so great.
  • mode_13h - Thursday, October 7, 2021 - link

    Could be that demand is simply outstripping their supply. Amazon isn't immune from chip shortages either, you know?
  • DougMcC - Thursday, October 7, 2021 - link

    It was for us. Could be that there are workload issues specific to us, though as a pretty basic j2ee app it's somewhat hard for me to imagine that we are unique.
  • lightningz71 - Friday, October 8, 2021 - link

    It is VERY workload dependent.
  • lemurbutton - Friday, October 8, 2021 - link

    Graviton2 is now 50% of all new instances at AWS.
  • DougMcC - Friday, October 8, 2021 - link

    Not super surprising. Even with the massive loss of performance, it's still cheaper. If you don't need performance, why wouldn't you choose the cheapest thing?
  • Wilco1 - Friday, October 8, 2021 - link

    In most cases Graviton is not only cheaper but also significantly faster. It's easy to find various examples:

    https://docs.keydb.dev/blog/2020/03/02/blog-post/
    https://about.gitlab.com/blog/2021/08/05/achieving...
    https://yegorshytikov.medium.com/aws-graviton-2-ar...
    https://www.instana.com/blog/best-practices-for-an...
