Benchmarks Versus Reality

Ever run into the problem that your manager wants a clear and short answer, while the real story has lots of nuances? (ed: and hence AnandTech) The short but inaccurate answer almost always wins. It is human nature to ignore complex stories and to prefer easy-to-grasp answers. The graph below is a perfect illustration of that. Although this one has been produced by Intel, almost everybody in the industry, including the ARM SoC companies, loves the simplicity it affords in describing the competitive situation.

The graph compares the ICC-compiled and published results for SPECint_rate_base2006 with some of the claimed (gcc-compiled?) results of the ARM server SoC vendors.

The graph shows two important performance vectors: throughput and single-core performance. The former (x-axis) is self-explanatory; the latter (y-axis) should give an indication of response times (latency). The two combined (as an x, y coordinate) should give you an idea of how the SoC/CPU performs in most applications that are not perfectly parallel. It is a very elegant way to give a short and crystal-clear answer to anyone with a technical or scientific background.

But there are many drawbacks. The main problem is "single core performance". Since this is just dividing the score by the number of cores, it favors CPUs with some form of hardware multi-threading. But in many cases, the extra threads only help with throughput, not with latency. For example, if there are a few heavy SQL requests that keep you waiting, adding threads to a core does not help at all; on the contrary. So the graph above gives a roughly 20% advantage on the y-axis to Intel's SMT-capable cores, while Hyper-Threading is most of the time a feature that boosts throughput rather than single-request latency.
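
To make that normalization concrete, below is a minimal Python sketch (all numbers are hypothetical, not measurements) of how the y-axis figure is obtained and why SMT inflates it:

```python
# Minimal sketch, with hypothetical numbers, of how the "single core
# performance" axis is computed: SPECint_rate divided by the core count.
# An SMT core that gains ~20% throughput from its extra hardware thread
# therefore looks ~20% "faster per core", even though a single
# latency-sensitive request sees no benefit from that thread.

def per_core_score(spec_rate: float, cores: int) -> float:
    """Per-core figure as used on the y-axis: throughput / core count."""
    return spec_rate / cores

# Hypothetical 8-core CPU, once without and once with SMT enabled.
rate_without_smt = 100.0   # assumed SPECint_rate, one thread per core
rate_with_smt = 120.0      # ~20% higher rate thanks to the second thread

print(per_core_score(rate_without_smt, 8))  # 12.5 "per-core performance"
print(per_core_score(rate_with_smt, 8))     # 15.0, looks 20% better per core
```

The single-request latency is identical in both cases; only the per-core figure moves.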

Secondly, dividing throughput by the number of cores also means that you favor the architectures that are able to run many instances of SPECint well. In other words, it is all about memory bandwidth and cache size. So if a CPU does not scale well, the graph will show a lower per-core performance. Basically, this kind of graph creates the illusion of showing two performance parameters (throughput and latency), but it is in fact showing throughput and something that is more related to throughput (throughput normalized per core?) than to latency. And of course, SPECint_rate is only a very inaccurate proxy for server compute performance: its IPC is higher than in most server applications and there is too much emphasis on cache size and memory bandwidth. Running 32 parallel instances of an application is totally different from running one application with 32 threads.
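
As an illustration of that last point, here is a minimal, entirely hypothetical Python sketch (using processes for both cases, purely for simplicity) contrasting N fully independent copies of a job, which is what SPECint_rate runs, with one job split across N cooperating workers, which is closer to what a real server application does:

```python
# Minimal, hypothetical sketch of the two scenarios: SPECint_rate launches N
# fully independent copies of a program (no sharing at all), while a real
# server application typically runs one job whose N workers cooperate on
# shared data and must combine their results at the end.

from multiprocessing import Pool

def independent_copy(seed: int) -> int:
    # Stand-in for one SPEC "rate" copy: private data, no communication.
    return sum(i * seed for i in range(100_000))

def shared_work(chunk: range) -> int:
    # Stand-in for one worker of a single application: it handles a slice
    # of the *same* job, and the slices have to be merged afterwards.
    return sum(chunk)

if __name__ == "__main__":
    n = 8  # assumed core count

    with Pool(n) as pool:
        # "Rate" style: n unrelated jobs; scaling is limited only by shared
        # resources such as caches and memory bandwidth.
        rate_results = pool.map(independent_copy, range(1, n + 1))

        # "One application, n threads" style: one job split n ways, plus a
        # reduction step that the independent copies never need.
        chunks = [range(i, 8_000_000, n) for i in range(n)]
        total = sum(pool.map(shared_work, chunks))

    print(len(rate_results), total)
```

The independent copies never synchronize or share data; the cooperating workers do, which is exactly why 32 copies and 32 threads behave so differently.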

This is definitely not written to defend or attack any vendor: many vendors publish and abuse these kinds of graphs to make their point. Our point is that this kind of graph very likely gives you an inaccurate and incomplete view of the competition.

But as the saying goes, the proof is in the pudding, so let's put together a framework for comparing these high-level overviews with real-world testing. First step, let's pretend the graph above is accurate. In that case the Cavium ThunderX has absolutely terrible single-threaded performance: one-fifth that of the best Xeon D, not even close to any of the other ARM SoCs. A ThunderX core cannot even deliver half the performance of an ARM Cortex-A57 core (+/- 10 points per core), which is worse than the humble Cortex-A53. It does not get any better: the throughput of a single ThunderX SoC is less than half that of the Xeon D-1581. The single-threaded performance of the Xeon D-1581 is only 57% of the Xeon E5-2640's, and it cannot compete with the throughput of even a single Xeon E5-2640 (2S = 2.2 times the Xeon D-1581).

Second step, do some testing instead of believing vendor claims or published results from SPEC CPU2006. Third step, compare the graph above with our test results...

82 Comments

  • silverblue - Thursday, June 16, 2016 - link

    I'm not sure how this is relevant. Johan doesn't review graphics cards, other people at Anandtech do. I bet Guru3D has a much bigger team for that, and I imagine that they have a much narrower scope (i.e. no server stuff).

    I don't think I've looked at a review recently that hasn't had the comments section polluted with "where is the review for x".
  • UrQuan3 - Wednesday, June 15, 2016 - link

    Intel allows their Xeons to sometimes pull double their TDP? No wonder our new machines trip breakers long before I thought they would. I need to test instead of assuming accurate documentation.

    I can see why you chose C-Ray; I'm just sorry a more general ray tracer was not chosen. Still, not its intended market, though I am suddenly very interested. Ray-tracing and video encoding are my top two tasks.
  • Meteor2 - Thursday, June 16, 2016 - link

    The 'T' in 'TDP' is for thermal. It's a measure of the maximum waste heat which needs to be removed over a certain period of time.
  • UrQuan3 - Wednesday, June 22, 2016 - link

    Yes, it stands for thermal, but power consumed doesn't just disappear. Convert it to light, convert it to motion, convert it to heat, etc. In this case there is a small amount of motion (electrons) and the rest has to be heat. I expect much higher instantaneous pulls, but this was sustained power. Anyway, I will track down the AVX documentation mentioned below.

    I saw the h264ref. I'll be curious about x264 (Handbrake), as the authors have seemed interested in ARM over the last few years. Unsurprisingly, it is far less optimized than the x64 version. I benchmarked Handbrake on the Pi2, Pandaboard, and CI-20 last year, just to see what it would do.
  • JohanAnandtech - Thursday, June 16, 2016 - link

    C-Ray was just a placeholder to measure FPU energy consumption. I'll look into bringing a more potent raytracer into our benchmark suite (povray).

    Video encoding was in the review though, somewhat (h264ref).
  • patrickjp93 - Friday, June 17, 2016 - link

    ARM chips with vector extensions allow it as well. Intel provides separate documentation for AVX-workload TDPs.
  • Antony Newman - Wednesday, June 15, 2016 - link

    Fascinating article.

    Why would Cavium not try and use 54 x A73s in their next chip?

    If ARM are not in the business of making silicon, and ARM think the '1.2W Ares' will help them break into the server market... then why do we think ARM isn't working with the likes of Cavium to get a server SoC that rocks the Intel boat?

    Typos from memory: send -> sent. Through -> thought. There were a few others.

    AJ
  • name99 - Thursday, June 16, 2016 - link

    How do you know ARM aren't working with such a vendor?
    ARM has always said that they expect ARM server CPUs to only be marginally competitive (for very limited situations) in 2017, and to only be really competitive in 2020.

    That suggests, among other things, that if they are working with partners, they have a target launch between those two dates, and they regard all launches before 2017 as nice for PR and for building up the ecosystem, but essentially irrelevant for commercial purposes.
  • rahvin - Thursday, June 16, 2016 - link

    The problem, as pointed out early in this article, is that ARM keeps targeting Intel's current products, not the ones that will be out when they get their own products out. We've had almost a dozen vendors get to the point of releasing a chip and drop it because it is simply not competitive with Intel. Most of these ARM products were undertaken when Intel was targeting performance without regard to performance/watt. Now that Intel targets the latter metric, ARM server chips haven't been competitive with them.

    Fact is, Intel could decimate and totally take over all the markets ARM chips occupy, but to do it they'd have to cannibalize their existing high-profit sales. This is why they keep canceling Atom chips; the chips turned out so good they were worried they'd cannibalize much more expensive products. This is the reason Avoton is highly restricted in what products and price segments it's allowed into. If Intel opened the floodgates on Avoton, they would risk cannibalizing their own server profits.
  • junky77 - Wednesday, June 15, 2016 - link

    So, they did what AMD couldn't do for years? I'm trying to figure it out... their offering seems to be a lot more interesting than AMD's stuff currently.
