Single-Threaded Integer Performance: SPEC CPU2006

Even though SPEC CPU2006 is more HPC and workstation oriented, it contains a good variety of integer workloads and remains a good way to evaluate single-threaded (per-core) performance. The main problem is that officially submitted results are heavily "overengineered", which makes fair comparisons very hard.

So we wanted to keep the settings as "real world" as possible, and we welcome constructive criticism toward that goal. We used:

  • 64 bit gcc: the most used compiler on Linux, a good all-round compiler that does not try to "break" benchmarks (libquantum...)
  • -Ofast: a compiler optimization level that many developers may use
  • -fno-strict-aliasing: necessary to compile some of the subtests (see the sketch after this list)
  • base run: every subtest is compiled in the same way.
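To show why the last flag is needed: a fair amount of older C code in the suite reinterprets memory through pointers of different types, which the aliasing assumptions enabled at -Ofast can break. The fragment below is a minimal, hypothetical sketch of that pattern, not code from the SPEC sources; the compile line in the comment simply mirrors the settings listed above.

```c
/* Minimal sketch (not from the SPEC CPU2006 sources) of the kind of type
 * punning that strict-aliasing optimizations can break. Hypothetical compile
 * line mirroring the settings above:
 *
 *     gcc -Ofast -fno-strict-aliasing punning.c -o punning
 */
#include <stdio.h>

static unsigned int float_bits(float *f)
{
    /* Reinterpret the float's storage through an unsigned int pointer.
     * With strict aliasing in effect the compiler may assume these two
     * pointers never refer to the same object and reorder the accesses;
     * -fno-strict-aliasing tells gcc not to make that assumption. */
    unsigned int *p = (unsigned int *)f;
    return *p;
}

int main(void)
{
    float x = 1.0f;
    printf("bit pattern of 1.0f: 0x%08x\n", float_bits(&x));
    return 0;
}
```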

The ultimate objective is to measure performance in applications where for some reason – as is frequently the case – a "multi thread unfriendly" task keeps us waiting.

Nobody expects the ThunderX to be a single-threaded performance wonder. Cavium clearly stated that they deliberately went for a high core count with pretty simple cores. As a result, single-threaded performance was not a priority.

However, Facebook and other hyperscalers have indicated that they definitely prefer to get the single threaded performance of a Xeon D. So any competitor challenging Intel should try to keep up with the Xeon D in single threaded performance and offer a throughput-per-dollar/watt bonus. So it is very interesting to measure what single threaded performance the current ThunderX can offer.

SPEC CPU2006 Integer, single-threaded (base run)

| Subtest | Application Type | Cavium ThunderX 2.0 GHz | Xeon D-1557 1.5-2.1 GHz | Xeon D-1581 1.8-2.4 GHz | Xeon E5-2640 v4 2.4-2.6 GHz | Xeon E5-2690 v3 2.6-3.5 GHz | Xeon E5-2699 v4 2.2-3.6 GHz | Xeon E5-2699 v4 2.2-3.6 GHz (+HT) |
|---|---|---|---|---|---|---|---|---|
| 400.perlbench | Spam filter | 8.3 | 24.7 | 29 | 33.4 | 39 | 32.2 | 36.6 |
| 401.bzip2 | Compression | 6.5 | 15.1 | 17.2 | 19.8 | 24.2 | 19.2 | 25.3 |
| 403.gcc | Compiling | 10.8 | 23.1 | 27.2 | 30 | 37.2 | 28.9 | 33.3 |
| 429.mcf | Vehicle scheduling | 10.2 | 32.6 | 38.4 | 40.4 | 44.8 | 39 | 43.9 |
| 445.gobmk | Game AI | 9.2 | 17.4 | 20.2 | 22.7 | 28.1 | 22.4 | 27.7 |
| 456.hmmer | Protein seq. analysis | 4.8 | 19 | 21.7 | 25.1 | 28 | 24.2 | 28.4 |
| 458.sjeng | Chess | 8.8 | 19.8 | 22.8 | 25.6 | 31.5 | 24.8 | 28.3 |
| 462.libquantum | Quantum sim | 5.8 | 47.9 | 58.2 | 60.3 | 78 | 59.2 | 67.3 |
| 464.h264ref | Video encoding | 11.9 | 32 | 36.6 | 41.9 | 56 | 40.7 | 40.7 |
| 471.omnetpp | Network sim | 7 | 17.3 | 23 | 23.6 | 30.9 | 23.5 | 29.9 |
| 473.astar | Pathfinding | 7.9 | 14.7 | 17.2 | 19.8 | 24.4 | 18.9 | 23.6 |
| 483.xalancbmk | XML processing | 8.4 | 27.8 | 33.3 | 36.2 | 45.1 | 35.4 | 41.8 |

Although some of you have a mathematical mind and can easily decipher these kinds of tables, let the rest of us be lazy and translate the scores into percentages. We take the Xeon D-1581 as the baseline. The Xeon D-1557 represents more or less the single-threaded performance that important customers such as Facebook like to have.

SPEC CPU2006 Integer, single-threaded (Xeon D-1581 = 100%)

| Subtest | Application Type | Cavium ThunderX 2.0 GHz | Xeon D-1557 1.5-2.1 GHz | Xeon D-1581 1.8-2.4 GHz | Xeon E5-2640 v4 2.4-2.6 GHz |
|---|---|---|---|---|---|
| 400.perlbench | Spam filter | 29% | 85% | 100% | 115% |
| 401.bzip2 | Compression | 38% | 88% | 100% | 115% |
| 403.gcc | Compiling | 40% | 85% | 100% | 110% |
| 429.mcf | Vehicle scheduling | 27% | 85% | 100% | 105% |
| 445.gobmk | Game AI | 46% | 86% | 100% | 112% |
| 456.hmmer | Protein seq. analysis | 22% | 88% | 100% | 116% |
| 458.sjeng | Chess | 39% | 87% | 100% | 112% |
| 462.libquantum | Quantum sim | 10% | 82% | 100% | 104% |
| 464.h264ref | Video encoding | 33% | 87% | 100% | 114% |
| 471.omnetpp | Network sim | 30% | 75% | 100% | 103% |
| 473.astar | Pathfinding | 46% | 85% | 100% | 115% |
| 483.xalancbmk | XML processing | 25% | 83% | 100% | 109% |

First of all, single-threaded performance is somewhat better than we expected when we received the first architectural details (a very simple dual-issue core with a high-latency shared L2). However, it is still a fraction of the Xeon D's single-threaded performance, which means the ThunderX does not look very impressive to companies that feel single-threaded performance should not be lower than that of a low-end Xeon D. The latter is 2 to 4 times faster; on average, the Xeon D-1581 delivers 3 times the single-threaded performance of the ThunderX, but not 5 times.

SPEC CPU2006 also allows us to characterize the ThunderX core a bit better. We ignore libquantum because it has a very unusual profile: you can triple its score with specific compiler settings, but those same settings reduce performance by 2-30%(!) in some of the other subtests. Those settings optimize cache utilization by splitting the records of an array into separate arrays. Combine this with software loop prefetching and libquantum scores can indeed double or triple. Since libquantum is hardly relevant to the server world and is a well-known target for all kinds of benchmark trickery, we exclude it from our comparison.
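For readers unfamiliar with that transformation, the fragment below sketches the general idea in C. It is a simplified, hypothetical illustration whose names are only loosely inspired by libquantum's quantum register, not the actual benchmark code: the structure-of-arrays layout keeps the hot field densely packed, so a streaming loop touches far fewer cache lines, and software prefetching of that dense array becomes straightforward.

```c
/* Simplified, hypothetical sketch of splitting an array of records into
 * separate arrays (array-of-structures -> structure-of-arrays). Not the
 * actual libquantum code; the names are only loosely inspired by it. */
#include <stddef.h>

#define N (1 << 20)

/* Array of structures: every iteration drags the unused amplitude fields
 * into the cache alongside the 'state' field it actually needs. */
struct node_aos {
    unsigned long long state;
    float amp_re, amp_im;
};
static struct node_aos reg_aos[N];

/* Structure of arrays: the hot 'state' field lives in contiguous memory. */
static struct {
    unsigned long long state[N];
    float amp_re[N], amp_im[N];
} reg_soa;

unsigned long long toggle_aos(unsigned long long mask)
{
    unsigned long long acc = 0;
    for (size_t i = 0; i < N; i++) {      /* streams ~16 bytes per element */
        reg_aos[i].state ^= mask;
        acc += reg_aos[i].state & mask;
    }
    return acc;
}

unsigned long long toggle_soa(unsigned long long mask)
{
    unsigned long long acc = 0;
    for (size_t i = 0; i < N; i++) {      /* streams only the 8-byte field */
        reg_soa.state[i] ^= mask;
        acc += reg_soa.state[i] & mask;
    }
    return acc;
}
```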

Mcf exhibits a large number of data cache misses and heavy memory controller usage. Mcf is also notoriously low-IPC software, so a beefy execution backend does not help much. Despite those facts, the ThunderX does not do well in mcf. Mcf does a lot of pointer chasing, so the high-latency L2 cache and the high-latency DRAM accesses slow things down. That is probably also true for XML processing (xalancbmk) and the network simulator (omnetpp): those subtests have the highest data cache miss rates.
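The hypothetical fragment below, named only loosely after mcf's network arcs, shows why latency hurts so much here: each iteration's load depends on the address produced by the previous one, so on a miss every step pays a full L2 or DRAM round trip that no amount of out-of-order width can hide.

```c
/* Hypothetical pointer-chasing loop in the style of mcf's arc traversal
 * (not the actual benchmark code). The next load cannot start until the
 * previous one has returned, so cache and DRAM latency dominate. */
struct arc {
    long        cost;
    struct arc *next;   /* effectively a random address: frequent cache misses */
};

long chase(const struct arc *a, long steps)
{
    long total = 0;
    while (a != 0 && steps-- > 0) {
        total += a->cost;   /* trivial ALU work per step ...          */
        a = a->next;        /* ... serialized behind a dependent load */
    }
    return total;
}
```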

The shallow pipeline and relatively capable gshare branch predictor make the ThunderX a better-than-expected performer in chess (sjeng), pathfinding (astar), compiling (gcc) and AI (gobmk). Although gobmk has a relatively high branch misprediction rate on a gshare predictor (the highest of all the subtests), the ThunderX core can recover very quickly thanks to its 9-stage pipeline. Notice also that gobmk and gcc have relatively large instruction footprints, which gives the ThunderX and its 78 KB I-cache an advantage.
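Cavium has not published the exact predictor design, so the fragment below is only a generic textbook gshare sketch rather than the ThunderX implementation: the global branch history is XORed with low bits of the branch address to index a table of 2-bit saturating counters. A workload like gobmk, full of data-dependent branches, aliases heavily in such a table and mispredicts more often; the ThunderX simply pays a short pipeline refill for each miss.

```c
/* Generic textbook gshare sketch (not Cavium's implementation): the global
 * history is XORed with branch address bits to index a table of 2-bit
 * saturating counters. */
#include <stdint.h>

#define GSHARE_BITS 12u
#define GSHARE_SIZE (1u << GSHARE_BITS)

static uint8_t  counters[GSHARE_SIZE];  /* 2-bit counters, 0..3, zero-initialized */
static uint32_t history;                /* global branch history register */

int gshare_predict(uint32_t pc)
{
    uint32_t idx = (pc ^ history) & (GSHARE_SIZE - 1u);
    return counters[idx] >= 2;          /* predict taken when the counter is 2 or 3 */
}

void gshare_update(uint32_t pc, int taken)
{
    uint32_t idx = (pc ^ history) & (GSHARE_SIZE - 1u);
    if (taken  && counters[idx] < 3) counters[idx]++;
    if (!taken && counters[idx] > 0) counters[idx]--;
    /* shift the actual outcome into the global history */
    history = ((history << 1) | (uint32_t)(taken != 0)) & (GSHARE_SIZE - 1u);
}
```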

That is also true for perlbench, but that benchmark has a relatively high IPC and needs a beefier execution backend. Indeed, the more compute-intensive (and thus higher-IPC) subtests perlbench and hmmer perform badly relative to the Intel cores. In these benchmarks, the wide architecture of the Intel cores pays off.

Comments

  • TheinsanegamerN - Thursday, June 16, 2016

    While you are right on the actual age of the chip, if you don't compare efficiency on different nodes, how on earth would you know whether you made any progress?

    Unless you are suggesting that one should never compare one generation of chips to another, which is simply ludicrous. Where is this "you can't compare two different nodes" mindset coming from? I've seen it in the GPU forums as well, and it makes no sense.
  • shelbystripes - Wednesday, June 15, 2016

    The E5-2600 v3 is a Haswell part, meaning it's Intel's second ("tock") core design on 22nm. So not only is this a smaller process, it's a second-gen optimization on a smaller process.

    For a first-gen 28nm part that includes power-hungry features like multiple 10GbE, these are some very promising initial results. A 14nm die shrink should create some real improvements off the bat in terms of performance per watt, and further optimizations from there should make this thing really shine.

    Given that Intel hasn't cracked 10nm at all yet, and it'll take a while for 10nm Xeons to show up once they do, Cavium has room to play catch-up. I mean, hell, they're keeping up/surpassing Xeon D in some use cases NOW, and that's a 14nm part. What Cavium needs most is power optimization at this point, and I'm sure they'll get there in time.
  • Michael Bay - Thursday, June 16, 2016

    Good to know Intel is keeping you up to date with what's happening in their UV labs.
  • rahvin - Thursday, June 16, 2016

    Last I saw, Intel is already running their test fabs at 10nm. Once they perfect it in the test fabs, it only takes them about 6 months to roll it into a full-scale fab. Maybe you can point to the source that indicates Intel has failed at 10nm.
  • kgardas - Wednesday, June 15, 2016

    Nice article, but really looking forward to seeing tests of the ThunderX2 and X-Gene 3. Will be interesting, as Intel seems to be kind of struggling with single-threaded performance recently...
  • Drazick - Wednesday, June 15, 2016

    Just a question.
    You emphasized that the performance is 3x instead of 5x, but I bet Intel used Intel ICC for those tests.

    Intel works hard on their Compilers and anyone who wants to extract the best of Intel CPU uses them as well.

    Since CPUs also mean compilers, if Intel has an advantage in that department you should show that as well.

    Namely give us some results using Intel ICC.

    Thank You.
  • UrQuan3 - Wednesday, June 15, 2016

    Of course, if Anandtech uses ICC, they should use better flags in gcc for ARM/ThunderX as well (core specific flags, NEON, etc). Both ICC and targeted flags give improvements. Often large ones. This was a generic test.
  • JohanAnandtech - Thursday, June 16, 2016

    For integer workloads, ICC is not that much faster than gcc (see Andreas Stiller's work). And there is the fact that ICC requires licensing and other time-consuming stuff. From a Linux developer/administrator perspective, it is much easier to just use gcc: you simply install it from the repositories, with no licensing headaches and very decent performance (about 90% of ICC). So the vast majority of **NON-HPC** software is compiled with gcc. Our added value is that we show how the processors compare with the most popular compiler on Linux. That is the big difference between benchmarking to put a CPU in the best light and benchmarking to show what most people will probably experience.

    Until Intel makes ICC part of the typical linux ecosystem, it is not an advantage at all in most non-HPC software.
  • patrickjp93 - Friday, June 17, 2016

    His work is woefully incomplete, lacking any analysis of vectorized integer workloads, where Intel destroys GCC to the tune of a 40% lead.
  • phoenix_rizzen - Wednesday, June 15, 2016

    "The one disadvantage of all Supermicro boards remains their Java-based remote management system. It is a hassle to get it working securely (Java security is a user unfriendly mess), and it lacks some features like booting into the BIOS configuration system, which saves time."

    It's IPMI, you can use any IPMI client to connect to it. Once you give it an IP and password in the BIOS, you can connect to it using your IPMI client of choice. There's also a web interface that provides most of the features of their Java client (I think that uses Java as well, but just for the console).

    For our SuperMicro servers, I just use ipmitool from my Linux station and have full access to the console over the network, including booting it into the BIOS, managing the power states, and even connecting to the serial console over the network.

    Not sure why you'd consider a full IPMI 2.0 implementation a downside just because the default client sucks.
