Conclusion & End Remarks

We’ve been hearing about Arm in the server space for many years now, with many people claiming “it’s coming” and “it’ll be great”, only for the hype to fizzle out into relative disappointment once the chips’ performance was put under the microscope. Thankfully, this is not the case for the Graviton2: not only were Amazon and Arm able to deliver on all of their promises, but they’ve also hit it out of the park in terms of value against the incumbent x86 players.

The Graviton2 is the quintessential reference Neoverse N1 platform as envisioned by Arm, aiming for nothing less than disruption of the datacentre market and making Arm servers a competitive reality. The chip is not only able to compete in terms of raw throughput thanks to its 64 physical cores in a single socket, but it also manages to showcase competitive single-threaded performance, keeping pace with the AMD and Intel systems currently in the market.

The Amazon chip isn’t perfect: we definitely would have wanted to see more L3 cache integrated into the mesh interconnect, as the 32MB does seem quite mediocre for feeding 64 cores, and the chip does suffer from this in terms of its performance scaling in memory-heavy workloads. Only Amazon knows whether this is a real-world bottleneck for the chip and the kinds of workloads that are typical in the cloud.

Performance-wise, there’s a big empty outline of an elephant in the room that’s been missing from our data today, and that’s AMD’s new EPYC2 Rome processors. AMD has shown it was able to vastly scale up performance and do away with a lot of the limitations of the first-generation EPYC processors we saw today. Even if we can somewhat estimate the performance that Rome would deliver against the Graviton2, we have no idea what kind of pricing Amazon will launch the new c5a-type instances at.

In terms of value, the Graviton2 seemingly ends up with top grades and puts the competition to shame. This is due not only to the Graviton2’s performance and efficiency, but also to the fact that Amazon is now vertically integrated for its EC2 hardware platforms. If you’re an EC2 customer today, and unless you’re tied to x86 for whatever reason, you’d be stupid not to switch over to Graviton2 instances once they become available, as the cost savings will be significant.

What does this mean for non-Amazon users? Well, the Arm server has become a reality, and companies such as Ampere, with their new Altra server chips, are trying to quickly follow the same recipe as the Graviton2 and offer similar ready-made meals to the non-Amazons of the world. These chips, however, will have to compete with AMD’s Rome, and later in the year the new Milan, which won’t be easy. Meanwhile, Intel doesn’t seem to be a likely competitor in the short term while it attempts to resolve its manufacturing issues.

Long-term, things are looking bright for the Arm ecosystem. Arm themselves are aiming to maintain a 20-25% compound annual growth rate in performance, and Ampere has already stated they’re planning yearly hardware refreshes. We don’t know Amazon’s plans, but I imagine they’ll be similar, perhaps skipping some generations. Around the 2022 timeframe we should see Matterhorn-based products, built on Arm’s new Very Large™ CPU microarchitecture, which should again accelerate things dramatically. In a similar sense, the newly founded Nuvia has lofty goals for its entrance into the datacentre market, and it does have design talent with a track record that could deliver in a few years’ time.

The Graviton2 is a great product, and we’re looking forward to seeing more such successful designs from the Arm ecosystem.

Comments

  • Wilco1 - Friday, March 13, 2020 - link

    Developing a chip based on a standard Arm core is much cheaper. Arm chip volumes are much higher than Intel's and AMD's, so the costs are spread out over billions of chips.
  • ksec - Tuesday, March 10, 2020 - link

    ARM's licensing is, comparatively speaking, extremely cheap, even for their most expensive N1 core blueprint. The development cost is largely on ARM's side because of the platform model, so Amazon is really only paying for the cost to fab with TSMC; I would be surprised if those chips cost more than $300, which is at least a few thousand dollars less than Intel or even AMD.

    Amazon will have to pay for all the software costs though: making sure all their tools and software run on ARM. That is very expensive in engineering cost, but pays off in the long term.
  • extide - Friday, March 13, 2020 - link

    Actual production cost is going to be more like $50 or so. WAY less than $300.
  • ksec - Monday, March 30, 2020 - link

    The wafer cost alone would be $50+ per chip, assuming 100% yield, and that excludes licensing and additional R&D. At their volume I would not be surprised if it stacks up to $300.
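
To put rough numbers on the estimate being debated in the comments above: per-die silicon cost is roughly the wafer price divided by the number of good dies per wafer. The sketch below is illustrative only; the wafer price, die area and yield are assumptions for the sake of the arithmetic, not figures reported by Amazon, TSMC or this article.

    /* Back-of-the-envelope die-cost estimate. Every figure here is an
     * assumption for illustration, not a reported Amazon/TSMC number. */
    #include <stdio.h>

    int main(void) {
        const double wafer_price_usd = 9000.0;  /* assumed 7nm 300mm wafer price       */
        const double die_area_mm2    = 450.0;   /* assumed Graviton2-class die size    */
        const double usable_area_mm2 = 70000.0; /* rough usable area of a 300mm wafer  */
        const double yield           = 0.8;     /* assumed fraction of good dies       */

        double dies_per_wafer    = usable_area_mm2 / die_area_mm2;            /* ~155 */
        double cost_per_good_die = wafer_price_usd / (dies_per_wafer * yield);

        printf("Candidate dies per wafer: ~%.0f\n", dies_per_wafer);
        printf("Cost per good die:        ~$%.0f\n", cost_per_good_die);
        return 0;
    }

With those assumed inputs the raw silicon comes out at roughly $70-80 per good die, so "more than $50, well under $300" is plausible before licensing, packaging, test and R&D amortisation are added on top.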
  • FunBunny2 - Tuesday, March 10, 2020 - link

    "Vertical integration is powerful."

    I find it amusing that compute folks are reinventing Henry Ford's wheel: the River Rouge plant.
  • mrvco - Tuesday, March 10, 2020 - link

    It would be interesting to see how the AWS instances compare to performance-competitive Azure instances on a value basis.
  • kliend - Tuesday, March 10, 2020 - link

    Anecdotally, yes. Amazon is always trying to bring in users for little or no immediate profit.
  • skaurus - Tuesday, March 10, 2020 - link

    At scale, predictability is more important in infrastructure than cost. It may seem that if we have everything we need compiled for Arm, we can just switch over. But these things often look easier in theory than practice. I'd be wary of moving an existing service to Arm instances, or even starting a new one on them when I just want to iterate fast and be sure that the underlying layer doesn't have any new surprises.
    It will be fine if I have time to experiment, or later, when the dust settles. Right now, I doubt that switching over to these instances once they are available is actually an easy or even a smart decision.
  • FunBunny2 - Tuesday, March 10, 2020 - link

    "It may seem that if we have everything we need compiled for Arm, we can just switch over. But these things often look easier in theory than practice. "

    With language-compliant compilers, I don't buy that argument. It can certainly be true that RISC-ier processors yield larger binaries and slower performance, but real application failure has to be due to OS mismatches. C is the universal assembler.
  • mm0zct - Wednesday, March 11, 2020 - link

    Beware that in C, struct packing is ABI-dependent: if you write out a struct to disk on x86_64 and try to read it back in on AArch64, you might have a bad time unless you use the packed pragma and fixed-width types (a minimal sketch follows below). This is the sort of thing that might get you if you try to migrate between architectures.

    Also, many languages (including C) have hand-optimised math libraries with inline assembler, which might still be using plain-C fallbacks on other architectures. There was a good article discussing the migration to AArch64 at Cloudflare; they particularly encountered issues with Go not being optimised on AArch64 yet: https://blog.cloudflare.com/arm-takes-wing/
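
To make the struct-packing pitfall described above concrete, here is a minimal, hypothetical sketch in C (the struct and field names are invented for illustration). Padding between fields is chosen by the compiler and ABI, so an on-disk record format should use fixed-width types and explicit packing, shown here with the GCC/Clang packed attribute, so that the byte layout is pinned down by the source rather than by whichever architecture happened to write the file.

    /* Hypothetical example: a record that gets written to disk and may be
     * read back on a different architecture. */
    #include <stdint.h>
    #include <stdio.h>

    /* Risky for on-disk data: the compiler inserts ABI-defined padding
     * between 'tag' and 'value', so the byte layout is not pinned down by
     * the source code. */
    struct record_risky {
        char tag;
        long value;
    };

    /* Safer: fixed-width types plus explicit packing give a layout that is
     * fully determined by the source, on x86_64 and AArch64 alike. */
    struct __attribute__((packed)) record_portable {
        uint8_t tag;
        int64_t value;
    };

    int main(void) {
        printf("record_risky:    %zu bytes\n", sizeof(struct record_risky));
        printf("record_portable: %zu bytes\n", sizeof(struct record_portable));
        return 0;
    }

Packed layouts do give up natural alignment, and endianness is a separate concern, so for long-lived data an explicit serialisation step is usually safer than writing structs to disk directly.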
