Performance Targets: What Are The Numbers?

Naturally, all this talk about performance and efficiency needs to be substantiated with some concrete numbers. In the context of today’s announcement, most performance figures disclosed by Arm were relative improvements compared to the A72 Cosmos platform. That might not be the most relevant data-point when trying to actually place the N1 in the competitive landscape, but we also have some more concrete absolute figures that we’ll put into context shortly.

Compared to the A72 at the same frequency and in a similarly configured system with an SLC, the new N1 outright smashes its predecessor platform and microarchitecture. The figures here represent single-threaded performance in SPEC. In integer workloads we see PPC (performance per clock) and absolute performance gains of 60 to 70%. The floating point benchmarks are even more impressive, with gains ranging from 100 to 120%. These data-points represent modelled and emulated performance estimates; the actual real-life performance improvements will be higher due to other SoC-level improvements, as well as software improvements that aren’t available in existing A72 silicon products.

Arm again reiterates the very large compute performance improvement compared to existing solutions, achieving performance boosts beyond 2x in vector workloads. Naturally, the N1’s ARMv8.2 ISA implementation also means that it supports 8-bit dot product as well as FP16 half-precision instructions, which are particularly well suited to machine-learning workloads, achieving performance boosts of nearly 5x over the predecessor platform.
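
To give a rough idea of what these instructions look like from the software side, here's a minimal sketch of my own (not Arm's code) built around the ACLE vdotq_u32() intrinsic that compilers expose for the ARMv8.2 dot-product extension; it's the sort of inner loop a quantised machine-learning kernel would run, and assumes a build with something like -march=armv8.2-a+dotprod.

/* Hypothetical example: 8-bit integer dot product using the ARMv8.2
 * dot-product extension (UDOT), as exposed through the ACLE intrinsics.
 * Each vdotq_u32() call performs 16 multiply-accumulates, folding four
 * adjacent u8 pairs into each of the four u32 accumulator lanes. */
#include <arm_neon.h>
#include <stdint.h>

uint32_t dot_u8(const uint8_t *a, const uint8_t *b, int n) {
    uint32x4_t acc = vdupq_n_u32(0);
    /* tail elements past a multiple of 16 are ignored in this sketch */
    for (int i = 0; i + 16 <= n; i += 16) {
        uint8x16_t va = vld1q_u8(a + i);
        uint8x16_t vb = vld1q_u8(b + i);
        acc = vdotq_u32(acc, va, vb);   /* 16 u8*u8 MACs per instruction */
    }
    return vaddvq_u32(acc);             /* horizontal add of the four lanes */
}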

Overall, Arm’s comparison to the A72 makes sense in the context that this is its predecessor. However, we have to keep in mind that the Cortex A72 was a core first introduced back in 2015, with the first silicon products released late that year and into 2016, while the new Neoverse N1 in all likelihood isn’t something we’ll be seeing in products for another 12-18 months, resulting in a ~3-4 year time span between the two products.

Arm did also divulge absolute SPEC numbers, and here we can do some more interesting analysis against competing platforms:

For a Neoverse N1 64-core hyperscale reference design running at about 2.6GHz, Arm proclaims a single-threaded SPECint2006 Speed score of ~37, while reaching an estimated multi-threaded Rate score of 1310. The figures here are achieved within a quite low whole-server TDP of only 105W. The figures weren’t run on actual silicon, but rather estimated on Arm’s server farm in an emulation environment with RTL.

Arm made a big note that among the many efforts to improve performance for the Arm ecosystem, it isn’t only offering better hardware, but also better software. Over the last few years Arm has put a lot of effort into improving open-source tools and compilers, such as GCC. Comparing the latest GCC9 version to GCC5, we’re seeing improvements of 13-15% in integer and floating-point workloads. It’s to be noted that the optimisations made here are real-world use-case improvements, and not targeted changes meant to improve SPEC scores.

In order to put context around Arm’s numbers, I went ahead and compiled a set of binaries with GCC8 and had Ian run them on Intel’s and AMD’s latest and greatest, a Xeon W-3175X as well as an AMD Epyc 7601. It’s to be noted that the compiler flags weren’t exactly the same – both AnandTech’s and Arm’s builds were running under -Ofast, however Arm also added some minor flags which I hadn’t had the chance and time to cross-check, as well as enabling LTO. I’m not too concerned about the flag variations, however LTO will give Arm a 2-3% performance advantage over our internal numbers. It’s also to be noted that Arm’s single-threaded figures are marked as “Peak” scores, meaning each individual workload was run with the best performing compiler flags, while our internal figures are “Base” scores, meaning we’re running the same flags across all binaries and tests.

Edit: 25/02/2019: Arm have reached out to clarify that the performance scores were in fact Base runs and without LTO - the slides in question were mixing things up. Thus we have proper apples-to-apples comparisons in our numbers versus Arm's internal numbers.

As always, we have to disclose that the below figures are merely internal estimates as they’re not official SPEC submissions. SPEC CPU 2006 has also been deprecated in favour of SPEC CPU 2017. Arm stated that they’ve shared SPEC CPU 2006 figures as that’s still the industry standard at the moment and gives users and customers the best context, and over the coming year or so they’ll switch over to also sharing SPEC CPU 2017 numbers. As for us at AnandTech, I’ve prepared SPEC CPU 2017, and Ian and I will be adopting it in our benchmark suites for PC/server CPUs as well as mobile SoCs in the coming weeks and months.

SPECint2006 Speed Base - Estimates (GCC8) 

In terms of single-threaded performance, the N1 looks to be outright outstanding. With an estimated score of 37, it would beat the most recent and best-performing Arm server CPU, Cavium’s ThunderX2, by a significant margin. It’s to be noted that the real-world performance difference would be smaller than depicted in the above figures: GCC8 notably improved loop vectorisation in 456.hmmer which will give it a 1-2% overall score boost, and of course we have to take into account 2-3% difference due to Arm’s different compiler flags.

Intel’s W-3175X is hardly the most representative hyperscaler CPU, however it gives context as to what Intel’s top-end single-threaded performance is in its best multi-core CPUs. As a reminder, the W-3175X has a single-threaded boost clock of 4.5GHz, significantly above what we see in server SKUs such as the Xeon 8180. AMD’s Epyc 7601 is a more representative data-point for what a hyperscale design such as the N1 would compete against; as a reminder, this is a 3.2GHz single-threaded boost clock on the part of AMD’s first generation Zen core.

What surprised me the most about Arm’s quoted ST score of ~37 is that it’s significantly higher than what we measure on the Cortex A76, which scores in at about ~26 on actual hardware. Software and compiler considerations aside, one of the explanations for this huge 42% performance discrepancy could be the N1’s much better memory and cache subsystem. Here the full system bandwidth is 6x higher than on mobile SoCs, and naturally in a single-threaded workload the thread would have full access to the Neoverse N1’s 64MB SLC, a whopping 16x bigger than the L3 in current mobile Cortex A76 designs. If the performance difference is indeed explained by the memory subsystem, it just goes to show how important it is to the performance scaling of a core.
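
To illustrate why last-level cache size can matter this much, below is a small sketch of my own (purely illustrative, not part of Arm’s or our test methodology): it times a dependent pointer chase over a growing working set, and once the working set spills out of the last-level cache every access pays DRAM latency. A 64MB SLC pushes that cliff out much further than a 4MB mobile L3 does.

/* Hypothetical micro-benchmark: dependent pointer chase over working sets of
 * increasing size. Access latency jumps once the set no longer fits in the
 * last-level cache, illustrating why a 64MB SLC behaves very differently
 * from a 4MB L3 for a single latency-bound thread. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns(const size_t *next, size_t steps) {
    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < steps; i++)
        idx = next[idx];                      /* serialised, latency-bound loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    if (idx == (size_t)-1) puts("");          /* keep idx live */
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / (double)steps;
}

int main(void) {
    for (size_t mb = 1; mb <= 128; mb *= 2) {           /* sweep 1MB .. 128MB */
        size_t len = mb * 1024 * 1024 / sizeof(size_t);
        size_t *next = malloc(len * sizeof(size_t));
        for (size_t i = 0; i < len; i++) next[i] = i;
        /* Sattolo shuffle: a single full-length cycle, defeats the prefetchers */
        for (size_t i = len - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }
        printf("%4zu MB working set: %.1f ns/access\n", mb, chase_ns(next, 20000000));
        free(next);
    }
    return 0;
}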

SPECint2006 Rate Base - Estimates (GCC8)

Switching over to multi-threaded workloads, represented by SPECrate2006, we have to note that this is a best-case scaling scenario for all platforms, as there is no serialisation or inter-thread communication: the test suite simply runs multiple processes in parallel. Even with this in mind, Arm’s projected results for a 64-core N1 design are just outright impressive considering that we’re talking about TDPs much smaller than any of AMD’s and Intel’s solutions, creating a performance and efficiency gap that I have a hard time seeing the x86 solutions being able to compete against.
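
For reference, the “rate” methodology boils down to something like the sketch below (a simplified illustration of the concept, my own code rather than SPEC’s actual harness): N fully independent copies of the same workload are launched and timed, so nothing limits scaling other than shared resources such as memory bandwidth and the power budget.

/* Simplified sketch of the SPECrate idea (not the real SPEC harness):
 * launch N independent copies of a workload, one per core, and wait for
 * all of them. There is no inter-process communication, so scaling is
 * limited only by shared resources such as memory bandwidth and TDP. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void run_one_copy(int id) {
    volatile unsigned long sum = 0;               /* stand-in for a benchmark */
    for (unsigned long i = 0; i < 200000000UL; i++)
        sum += i;
    printf("copy %d finished\n", id);
}

int main(int argc, char **argv) {
    int copies = (argc > 1) ? atoi(argv[1]) : 64; /* e.g. one copy per core */
    for (int i = 0; i < copies; i++) {
        pid_t pid = fork();
        if (pid == 0) { run_one_copy(i); _exit(0); }
    }
    while (wait(NULL) > 0)                        /* wait for every copy */
        ;
    return 0;
}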

We have to remember that we’re comparing a 64-core platform against AMD’s and Intel’s current 32/28-core platforms. A fairer comparison would be AMD’s upcoming Rome with 64 CPU cores; even if AMD manages to outright double multi-threaded performance and match Arm’s projected MT numbers, I don’t see them being able to simultaneously lower the TDP to match Arm’s estimated 105W target (the Epyc 7601 has a TDP of 180W; Rome details haven’t been announced yet).

SPEC’s Rate benchmark scoring scales linearly with instance count. In this case, if we divide Arm’s 1310 figure by the 64 cores of the system, we get a per-instance score of ~20.5, which seems much more realistic and in line with the Cortex A76 results we measure on current mobile devices.

Arm’s performance predictions for the Cortex A76 were quite spot-on compared to what we measured on actual devices. We’re thus more inclined to give Arm credence and the benefit of the doubt in regards to today’s projected Neoverse N1 scores. The figures do make sense, and are in line with what we saw the microarchitecture achieve in mobile.

Naturally we shouldn’t come to any conclusions until we have the actual hardware in our hands, but the presented figures are certainly promising, if they can be realised by vendors implementing Neoverse N1 systems.

Comments

  • eastcoast_pete - Thursday, February 21, 2019 - link

    I stand corrected on IBM still selling x86 servers (they don't), and am actually glad to hear that Power is also used in smaller shops. It's just that I haven't run into too many Power systems around here. It's a very capable arch.
  • Kevin G - Thursday, February 21, 2019 - link

    The big Power users are also the big cloud providers. Google and Amazon have reportedly taken a liking to openPower hardware. Facebook has reportedly been looking into openPower as well. Granted, these are small scale compared to the number of x86 systems these companies have, but it was a much needed shot of energy into the Power platform.
  • nevcairiel - Wednesday, February 20, 2019 - link

    Microsoft already ported Windows Server to ARM, and their entire development stack has support for ARM and ARM64 now, so it's only a matter of time for the other server products to be made available.
  • HStewart - Wednesday, February 20, 2019 - link

    It's really funny that Microsoft did not trust that environment enough to create a Surface using an ARM processor.
  • GreenReaper - Wednesday, February 20, 2019 - link

    Uh . . . that's reportedly because Intel came begging them *not* to for the Surface Go 2018 (and probably cut them a very nice deal on the Pentium Gold as a result): https://www.techradar.com/uk/news/microsoft-surfac...

    As mentioned, you can also compile for 64-bit ARM in VS now. This is a major win for some apps which truly require native execution (which is not all of them, but enough to be a pain):
    https://blogs.windows.com/buildingapps/2018/11/15/...

    Will it actually become a viable platform as a result of all this? I suspect it still won't be the default in five years, but in cost-conscious areas it could end up with a foothold. Even if Microsoft doesn't go down that route, it may be open for others to do so for specific purposes, such as education.
  • eastcoast_pete - Wednesday, February 20, 2019 - link

    Not funny; rather, cautious. None of the A76 designs were in silicon when MS designed the current Surface. When you spec out a design like the Surface, you base it on what's available at that time, not what might be around next year. Otherwise, the chance of ending up with egg on one's face is uncomfortably high.
  • eastcoast_pete - Wednesday, February 20, 2019 - link

    I agree with you, but, as we all know, businesses buy the hardware that can run the software they want or need, not the other way around. In this regard, I am curious if Oracle and SAP are porting their offerings to ARM64 server. If both of those are on board, this design would have a great chance to get strong traction.
  • HStewart - Wednesday, February 20, 2019 - link

    One thing that concerns me in this article is that this chip is marked as Simulated in the charts, which to me is just a marketing term. Also, it's comparing against existing 2018 designs from both Intel and AMD. An actually fair comparison would be against a Sunny Cove based CPU with more units and such.

    I also think that just increasing cores is not the best way to handle performance. In today's world single core performance is still very important, though this depends on the market the chip is intended for; the important part is software compatibility.
  • Antony Newman - Wednesday, February 20, 2019 - link

    H.

    A Simulated vs Historical point was made in the article. Perhaps you need to reread?

    Also : Single Core performance is very important - especially when they are all running flat out.
    Intel has to throttle down their multi core beasts so the chips don’t catch fire at 14nm.
    At 10nm - Intel will be able to sustain a few more cores before throttling.
    And before Intel are at (Intel) 7nm, ARM will likely overtake Intel on the IPC front (assuming that ARM's prediction is as accurate as my own)

    AJ
  • eastcoast_pete - Wednesday, February 20, 2019 - link

    Single core is still important for client computers, but much less so for servers.
