Multi-Threaded Integer Performance: SPEC CPU2006

The value of SPEC CPU2006 int rate is questionable, as it puts too much emphasis on memory bandwidth and far too little on data synchronization. However, it does give some indication of the total "raw" integer compute power available.
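To make that concrete: a SPEC "rate" run simply launches N fully independent copies of the same benchmark, so the copies never share data or take locks; they only compete for caches, memory bandwidth, and other shared hardware. The sketch below illustrates that measurement model with a placeholder workload; it is not the actual SPEC harness.

```python
# Minimal sketch of the SPEC "rate" measurement model: N fully independent
# copies of the same workload, with no shared data and no synchronization
# between them. They only contend for caches, memory bandwidth and other
# shared hardware resources. (Illustrative only -- not the real SPEC harness.)
import multiprocessing as mp
import time

def one_copy(_):
    # Placeholder compute kernel standing in for a SPEC integer workload.
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

def rate_run(copies: int) -> float:
    start = time.time()
    with mp.Pool(processes=copies) as pool:
        pool.map(one_copy, range(copies))
    elapsed = time.time() - start
    return copies / elapsed  # work units completed per second

if __name__ == "__main__":
    for copies in (1, 4, 8):
        print(f"{copies:2d} copies -> throughput {rate_run(copies):.2f} copies/s")
```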

| Subtest (SPEC CPU2006 integer) | Application type | Cavium ThunderX 2 GHz | Xeon D-1587 1.8-2.4 GHz | Xeon E5-2640 v4 2.4-2.6 GHz |
|---|---|---|---|---|
| 400.perlbench | Spam filter | 372 | 394 | 322 |
| 401.bzip2 | Compression | 166 | 225 | 216 |
| 403.gcc | Compiling | 257 | 218 | 265 |
| 429.mcf | Vehicle scheduling | 110 | 130 | 224 |
| 445.gobmk | Game AI | 411 | 337 | 269 |
| 456.hmmer | Protein seq. analyses | 198 | 299 | 281 |
| 458.sjeng | Chess | 412 | 362 | 283 |
| 462.libquantum | Quantum sim | 139 | 126 | 231 |
| 464.h264ref | Video encoding | 528 | 487 | 421 |
| 471.omnetpp | Network sim | 121 | 127 | 172 |
| 473.astar | Pathfinding | 143 | 165 | 195 |
| 483.xalancbmk | XML processing | 227 | 219 | 266 |

On average, the ThunderX delivers the throughput of a Xeon D-1581 or a Xeon E5-2640. There are some noticeable differences between the subtests, though, especially when you look at scalability.

Scaling relative to a single copy:

| Subtest (SPEC CPU2006 integer) | Application type | Cavium ThunderX 2 GHz (48 copies) | Xeon D-1587 1.8-2.3 GHz (32 copies) | Xeon E5-2640 v4 2.4-2.6 GHz (20 copies) |
|---|---|---|---|---|
| 400.perlbench | Spam filter | 43x | 14x | 10x |
| 401.bzip2 | Compression | 25x | 13x | 11x |
| 403.gcc | Compiling | 22x | 8x | 9x |
| 429.mcf | Vehicle scheduling | 15x | 3x | 6x |
| 445.gobmk | Game AI | 41x | 17x | 12x |
| 456.hmmer | Protein seq. analyses | 42x | 14x | 11x |
| 458.sjeng | Chess | 47x | 16x | 11x |
| 462.libquantum | Quantum sim | 8x | 2x | 4x |
| 464.h264ref | Video encoding | 42x | 13x | 10x |
| 471.omnetpp | Network sim | 17x | 6x | 7x |
| 473.astar | Pathfinding | 16x | 10x | 10x |
| 483.xalancbmk | XML processing | 27x | 7x | 7x |
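A quick way to read this table is to divide each scaling factor by the number of copies run, which gives a rough parallel efficiency. The sketch below does that for a few subtests using the values from the table; keep in mind that the two Xeons run two copies per physical core (Hyper-Threading), so their per-copy efficiency is expected to sit well below 100%.

```python
# Parallel efficiency = measured scaling factor / number of copies run.
# Copy counts and scaling factors are taken from the table above.
copies = {"ThunderX": 48, "Xeon D-1587": 32, "Xeon E5-2640 v4": 20}

scaling = {
    "458.sjeng":      {"ThunderX": 47, "Xeon D-1587": 16, "Xeon E5-2640 v4": 11},
    "429.mcf":        {"ThunderX": 15, "Xeon D-1587": 3,  "Xeon E5-2640 v4": 6},
    "462.libquantum": {"ThunderX": 8,  "Xeon D-1587": 2,  "Xeon E5-2640 v4": 4},
}

for test, per_cpu in scaling.items():
    effs = ", ".join(
        f"{cpu}: {factor / copies[cpu]:.0%}" for cpu, factor in per_cpu.items()
    )
    print(f"{test:15s} {effs}")
# sjeng scales almost perfectly on the ThunderX (~98% of its 48 copies),
# while mcf and libquantum leave every chip far below its copy count.
```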

Mcf is memory latency bound, but if you run 32 threads on the Xeon D, you completely swamp its memory subsystem. The ThunderX and the Xeon E5 scale better simply because they can deliver more bandwidth... but one has to wonder whether this reflects what people who actually use mcf will experience, since mcf is mostly latency bound. It looks like a corner case.

The XML processing test is probably a lot closer to the real world: it is much easier to split XML (or JSON) processing into many parallel parts (one per request). This fits the ThunderX very well, and it edges out the best Xeon D here. The same is true for the video encoding test. This indicates that the ThunderX is most likely a capable Content Delivery Network (CDN) server.
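As a rough illustration of why per-request XML/JSON processing spreads so easily across many small cores, here is a minimal sketch using Python's standard json and ElementTree parsers with a process pool; the request payloads are made-up stand-ins, not a real CDN workload.

```python
# Minimal sketch of per-request parallelism: each XML/JSON "request" is parsed
# and handled independently, so the work spreads trivially across many cores.
# The documents below are hypothetical stand-ins for real requests.
import json
import xml.etree.ElementTree as ET
from concurrent.futures import ProcessPoolExecutor

REQUESTS = [
    json.dumps({"user": i, "items": list(range(50))}) for i in range(1000)
] + [
    f"<order id='{i}'><qty>{i % 7}</qty></order>" for i in range(1000)
]

def handle(payload: str) -> int:
    # One request; no shared state with any other request.
    if payload.lstrip().startswith("<"):
        root = ET.fromstring(payload)
        return int(root.find("qty").text)
    return len(json.loads(payload)["items"])

if __name__ == "__main__":
    # One worker per core; each request is an independent unit of work.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(handle, REQUESTS, chunksize=64))
    print(f"handled {len(results)} requests")
```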

Gcc and sjeng also scale well, and as a result the ThunderX really shines in these subtests.

Comments

  • TheinsanegamerN - Thursday, June 16, 2016 - link

    While you are right on the actual age of the chip, if you don't compare efficiency on different nodes, how on earth would you know if you made any progress?

    Unless you are suggesting that one should never compare one generation of chips to another, which is simply ludicrous. Where is this "you can't compare two different nodes" mindset coming from? I've seen it in the GPU forums as well, and it makes no sense.
  • shelbystripes - Wednesday, June 15, 2016 - link

    The E5-2600 v3 is a Haswell part, meaning it's Intel's second ("tock") core design on 22nm. So not only is this a smaller process, it's a second-gen optimization on a smaller process.

    For a first-gen 28nm part that includes power-hungry features like multiple 10GbE, these are some very promising initial results. A 14nm die shrink should create some real improvements off the bat in terms of performance per watt, and further optimizations from there should make this thing really shine.

    Given that Intel hasn't cracked 10nm at all yet, and it'll take a while for 10nm Xeons to show up once they do, Cavium has room to play catch-up. I mean, hell, they're keeping up/surpassing Xeon D in some use cases NOW, and that's a 14nm part. What Cavium needs most is power optimization at this point, and I'm sure they'll get there in time.
  • Michael Bay - Thursday, June 16, 2016 - link

    Good to know Intel is keeping you up to date with what's happening in their UV labs.
  • rahvin - Thursday, June 16, 2016 - link

    Last I saw, Intel is already running their test fabs at 10nm. Once they perfect it in the test fabs, it only takes them about six months to roll it into a full-scale fab. Maybe you can point to the source that indicates Intel has failed at 10nm.
  • kgardas - Wednesday, June 15, 2016 - link

    Nice article, but really looking forward to seeing testing of the ThunderX2 and X-Gene 3. It will be interesting, as Intel seems to be struggling somewhat with single-threaded performance recently...
  • Drazick - Wednesday, June 15, 2016 - link

    Just a question.
    You emphasized that the performance is 3x instead of 5x, but I bet Intel used Intel ICC for those tests.

    Intel works hard on their compilers, and anyone who wants to extract the best from an Intel CPU uses them as well.

    Since CPU performance is tied to compilers, if Intel has an advantage in that department you should show it as well.

    Namely, give us some results using Intel ICC.

    Thank You.
  • UrQuan3 - Wednesday, June 15, 2016 - link

    Of course, if Anandtech uses ICC, they should use better flags in gcc for ARM/ThunderX as well (core-specific flags, NEON, etc.). Both ICC and targeted flags give improvements, often large ones. This was a generic test.
  • JohanAnandtech - Thursday, June 16, 2016 - link

    For integer workloads, ICC is not that much faster than gcc (see Andreas Stiller's work). And there is the fact that ICC requires licensing and other time-consuming overhead. From a Linux developer/administrator perspective, it is much easier to just use gcc: you install it from the repositories, there are no licensing headaches, and performance is very decent (about 90% of ICC). So the vast majority of **non-HPC** software is compiled with gcc. Our added value is that we show how the processors compare with the most popular compiler on Linux. That is the big difference between benchmarking to put a CPU in the best light and benchmarking to show what most people will probably experience.

    Until Intel makes ICC part of the typical Linux ecosystem, it is not an advantage at all for most non-HPC software.
  • patrickjp93 - Friday, June 17, 2016 - link

    His work is woefully incomplete, lacking any analysis of vectorized integer workloads, in which ICC destroys GCC to the tune of a 40% lead.
  • phoenix_rizzen - Wednesday, June 15, 2016 - link

    "The one disadvantage of all Supermicro boards remains their Java-based remote management system. It is a hassle to get it working securely (Java security is a user unfriendly mess), and it lacks some features like booting into the BIOS configuration system, which saves time."

    It's IPMI; you can use any IPMI client to connect to it. Once you give it an IP and password in the BIOS, you can connect to it using your IPMI client of choice. There's also a web interface that provides most of the features of their Java client (I think that uses Java as well, but just for the console).

    For our SuperMicro servers, I just use ipmitool from my Linux station and have full access to the console over the network, including booting it into the BIOS, managing the power states, and even connecting to the serial console over the network.

    Not sure why you'd consider a full IPMI 2.0 implementation a downside just because the default client sucks.
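For reference, a minimal sketch of what the comment above describes: driving a Supermicro BMC with the standard ipmitool CLI from a script, with no Java client involved. The host address and credentials are placeholders; the ipmitool subcommands shown are standard IPMI 2.0 operations.

```python
# Minimal sketch of managing a Supermicro BMC without the Java client, using
# the standard ipmitool CLI over the network (lanplus = IPMI 2.0 / RMCP+).
# HOST, USER and PASSWORD are placeholders for your BMC's settings.
import subprocess

HOST, USER, PASSWORD = "10.0.0.42", "ADMIN", "changeme"
BASE = ["ipmitool", "-I", "lanplus", "-H", HOST, "-U", USER, "-P", PASSWORD]

def ipmi(*args: str) -> str:
    return subprocess.run(BASE + list(args), capture_output=True,
                          text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))   # query the current power state
ipmi("chassis", "bootdev", "bios")          # enter BIOS setup on the next boot
# ipmi("chassis", "power", "reset")         # uncomment to reboot the box
# Serial console over LAN is interactive: ipmitool ... sol activate
```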
