Benchmarks Versus Reality

Ever run into the problem that your manager wants a short, clear answer, while the real story has lots of nuance? (ed: and hence AnandTech) The short but inaccurate answer almost always wins. It is human nature to ignore complex stories and to prefer easy-to-grasp answers. The graph below is a perfect illustration of that. Although this particular one was produced by Intel, almost everybody in the industry, including the ARM SoC companies, loves the simplicity it affords in describing the competitive situation.

The graph compares Intel's ICC-compiled, published SPECint_rate_base2006 results with some of the claimed (GCC-compiled?) results from the ARM server SoC vendors.

The graph shows two important performance vectors: throughput and single-core performance. The former (x-axis) is self-explanatory; the latter (y-axis) should give an indication of response times (latency). The two combined (an x, y coordinate) should give you an idea of how the SoC/CPU performs in the many applications that are not perfectly parallel. It is a very elegant way to give a short, crystal-clear answer to anyone with a technical or scientific background.

But there are many drawbacks. The main problem is "single-core performance". Since this is simply the rate score divided by the number of cores, it favors CPUs with some form of hardware multi-threading. But in many cases the extra threads only help with throughput, not with latency. For example, if a few heavy SQL requests are keeping you waiting, adding threads to a core does not help at all; quite the contrary. So the graph above hands the SMT-capable Intel cores a 20% advantage on the y-axis, even though Hyper-Threading is, most of the time, a throughput feature.
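To make that distortion concrete, here is a minimal sketch in Python of how the y-axis metric is derived and why SMT inflates it without speeding up a single heavy request. The numbers are hypothetical, chosen only to mirror the 20% figure quoted above; they are not real SPEC scores.

# Hypothetical illustration of the "rate score / core count" metric.
# The numbers below are made up and are not measured SPECint_rate results.

def per_core_score(rate_score: float, cores: int) -> float:
    """The y-axis of the vendor graph: total rate score divided by core count."""
    return rate_score / cores

cores = 8
single_thread_score = 10.0                 # what one latency-sensitive request actually sees

# Without SMT, rate-mode throughput is roughly cores x single-thread score.
rate_no_smt = cores * single_thread_score  # 80.0

# With 2-way SMT the extra threads add ~20% throughput (the advantage quoted
# above), but a single heavy SQL request still runs on just one thread.
rate_smt = rate_no_smt * 1.20              # 96.0

print(per_core_score(rate_no_smt, cores))  # 10.0 -> matches real single-thread speed
print(per_core_score(rate_smt, cores))     # 12.0 -> a 20% "single core" bonus on the graph
print(single_thread_score)                 # 10.0 -> latency of one request is unchanged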

Secondly, dividing throughput by the number of cores also means that you favor the architectures that are able to run many instances of SPECint well, and that is largely a matter of memory bandwidth and cache size. If a CPU does not scale well, the graph will show lower per-core performance. So this kind of graph creates the illusion of showing two performance parameters (throughput and latency), while it is in fact showing throughput and something that is more closely related to throughput (throughput normalized per core) than to latency. And of course, SPECint_rate is only a rough proxy for server compute performance: its IPC is higher than that of most server applications, and it places too much emphasis on cache size and memory bandwidth. Running 32 parallel instances of an application is totally different from running one application with 32 threads.
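The scaling effect works the same way. A quick sketch, again with made-up numbers, of two hypothetical chips with identical single-thread speed, one of which runs out of memory bandwidth when every core is loaded with a SPECint_rate copy:

# Two hypothetical chips, both scoring 10 points with a single thread.
# Chip A (8 cores) scales almost linearly when running 8 rate copies;
# chip B (48 cores) saturates its memory bandwidth and reaches only ~60%
# of perfect scaling with 48 copies. All numbers are illustrative.

def per_core_score(rate_score: float, cores: int) -> float:
    return rate_score / cores

chip_a_rate = 10.0 * 8 * 0.95   # ~76:  near-perfect scaling
chip_b_rate = 10.0 * 48 * 0.60  # ~288: bandwidth-limited scaling

print(per_core_score(chip_a_rate, 8))   # 9.5 -> looks like strong "single core" performance
print(per_core_score(chip_b_rate, 48))  # 6.0 -> looks ~37% slower per core, even though
                                        #        a single thread is exactly as fast

In other words, the y-axis punishes poor scaling and rewards SMT, neither of which tells you how quickly one request gets answered.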

This is definitely not written to defend or attack any particular vendor: many vendors publish and abuse these kinds of graphs to make their point. Our point is that this kind of graph very likely gives you an inaccurate and incomplete view of the competition.

But as the saying goes, the proof is in the pudding, so let's put together a framework for comparing these high-level overviews with real-world testing. First step: let's pretend the graph above is accurate. In that case the Cavium ThunderX has absolutely terrible single-threaded performance: one-fifth that of the best Xeon D, and not even close to any of the other ARM SoCs. A ThunderX core cannot even deliver half the performance of an ARM Cortex-A57 core (+/- 10 points per core), which would put it below even the humble Cortex-A53. It does not get any better: the throughput of a single ThunderX SoC is less than half that of the Xeon D-1581. The single-threaded performance of the Xeon D-1581 is in turn only 57% of the Xeon E5-2640's, and it cannot compete with the throughput of even a single Xeon E5-2640 (2S = 2.2 times the Xeon D-1581).

Second step: do some testing of our own instead of relying on vendor claims or published SPEC CPU2006 results. Third step: compare the graph above with our test results...

Comments

  • silverblue - Thursday, June 16, 2016 - link

    I think AMD themselves admitted that the Opteron A1100 was for testing the waters, with K12 being the first proper solution, but that was delayed to get Zen out the door. I imagine that both products will be on sale concurrently at some point, but even with AMD's desktop-first approach for Zen, it will probably still come to the server market before K12 (both are due in 2017).
  • junky77 - Thursday, June 16, 2016 - link

    Still, quite strange, no? AMD has been in the server business for years. I'm not talking about their ARM solution only; their other solutions also seem to be less interesting...
  • silverblue - Thursday, June 16, 2016 - link

    I am looking forward to both Zen and K12; there's very little chance that AMD will fail with both.
  • name99 - Wednesday, June 15, 2016 - link

    " It is the first time the Xeon D gets beaten by an ARM v8 SoC..."

    The Apple A9X in the 12" iPad Pro delivers 40GB/s on Stream...
    (That's the Stream built into Geekbench. Conceivably it's slightly different from what's being measured here, but it delivers around 25GB/s for standard desktop/laptop Intel CPUs, and for the A9 and the 9" iPad's A9X, so it seems in the same sort of ballpark.)
  • aryonoco - Thursday, June 16, 2016 - link

    Fantastic article as always Johan. Thank you so much for your very informative articles. I can only imagine how much time and effort writing this article took. It is very much appreciated.

    The first good showing by an ARMv8 server. Nearly 5 years later than expected, but they are getting there. This thing was still produced on 28nm HKMG. Give it one more year, a jump to 14nm, and a more mature software ecosystem, and I think the Xeons might finally have some competition on their hands.
  • JohanAnandtech - Thursday, June 16, 2016 - link

    Thank you, and indeed it was probably the most time consuming review ... since Calxeda. :-)
    Yes, there is potential.
  • iwod - Thursday, June 16, 2016 - link

    Even if the ThunderX were half the price of an equivalent Xeon, I would still buy the Intel Xeon instead. This isn't the smartphone market. In servers, the cost of memory, storage, networking, etc. adds up. Not only does it use a lot more power at idle, the total TCO and perf/watt still favour Intel.

    There is also the cost of switching software involved.
    And those who say single core / thread performance doesn't matter have absolutely no idea what they are talking about.

    As far as I can tell, the Xeon D offers a very decent value proposition even for the ARM SoC-minded vendors. This will likely continue to be the case as we move to 10nm. I just don't see how ARM is going to get the 20% market share by 2020 that they described in their shareholder meetings.
  • rahvin - Thursday, June 16, 2016 - link

    If you have to switch software on your servers because you switch architectures, you are doing something wrong and are far too dependent on proprietary products. I'm being a bit facetious here, but the only reason architecture should limit you is if you are using Microsoft products or are in a highly specialized computing field. Linux should dominate your general servers.
  • kgardas - Friday, June 17, 2016 - link

    Even if you are on Linux, stack support is still best on i386/amd64. Look at how much money IBM throws around to get somewhere with POWER8. ARM can't do that, so it's more on the vendors, and they are doing it a little more slowly. Anyway, AArch64 will mature in the LLVM/GCC toolchains, GNU libc, musl libc, the Linux kernel, etc., but it'll take some time...
  • tuxRoller - Thursday, June 16, 2016 - link

    AArch64 has very limited conditional execution support.
    http://infocenter.arm.com/help/index.jsp?topic=/co...
