Multi-Threaded Integer Performance: SPEC CPU2006

The value of the SPEC CPU2006 integer rate metric is questionable, as it puts a lot of emphasis on memory bandwidth and far too little on data synchronization. Still, it gives some indication of the total "raw" integer compute power available.
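
The reason rate scores stress bandwidth rather than synchronization is that SPEC rate simply runs N fully independent copies of each subtest: the copies never share data or take locks. Below is a minimal Python sketch of that measurement style; the workload function is a made-up stand-in, not an actual SPEC subtest, and the "score" is a plain copies-per-second figure rather than SPEC's reference-normalized ratio.

```python
# Rate-style throughput measurement: N fully independent copies of the same
# workload, no shared data and no locks, so synchronization is never stressed.
import time
from multiprocessing import Pool

def one_copy(_):
    # Hypothetical integer workload; each copy touches only its own data.
    data = list(range(1_000_000))
    return sum(data)

def rate_score(copies):
    start = time.perf_counter()
    with Pool(processes=copies) as pool:
        pool.map(one_copy, range(copies))
    elapsed = time.perf_counter() - start
    # Throughput: copies completed per unit time. SPEC normalizes against a
    # reference machine instead, but the idea is the same: total work done,
    # not the time taken by any single task.
    return copies / elapsed

if __name__ == "__main__":
    print(f"1 copy  : {rate_score(1):.2f} copies/s")
    print(f"8 copies: {rate_score(8):.2f} copies/s")
```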

| Subtest (SPEC CPU2006 integer) | Application type | Cavium ThunderX 2 GHz | Xeon D-1587 1.8-2.4 GHz | Xeon E5-2640 v4 2.4-2.6 GHz |
|---|---|---|---|---|
| 400.perlbench | Spam filter | 372 | 394 | 322 |
| 401.bzip2 | Compression | 166 | 225 | 216 |
| 403.gcc | Compiling | 257 | 218 | 265 |
| 429.mcf | Vehicle scheduling | 110 | 130 | 224 |
| 445.gobmk | Game AI | 411 | 337 | 269 |
| 456.hmmer | Protein seq. analyses | 198 | 299 | 281 |
| 458.sjeng | Chess | 412 | 362 | 283 |
| 462.libquantum | Quantum sim | 139 | 126 | 231 |
| 464.h264ref | Video encoding | 528 | 487 | 421 |
| 471.omnetpp | Network sim | 121 | 127 | 172 |
| 473.astar | Pathfinding | 143 | 165 | 195 |
| 483.xalancbmk | XML processing | 227 | 219 | 266 |

On average, the ThunderX delivers the throughput of a Xeon D-1581 or Xeon E5-2640. There are some noticeable differences between the subtests, though, especially once you check the scalability.

| Subtest (SPEC CPU2006 integer) | Application type | Cavium ThunderX 2 GHz (48 copies) | Xeon D-1587 1.8-2.3 GHz (32 copies) | Xeon E5-2640 v4 2.4-2.6 GHz (20 copies) |
|---|---|---|---|---|
| 400.perlbench | Spam filter | 43x | 14x | 10x |
| 401.bzip2 | Compression | 25x | 13x | 11x |
| 403.gcc | Compiling | 22x | 8x | 9x |
| 429.mcf | Vehicle scheduling | 15x | 3x | 6x |
| 445.gobmk | Game AI | 41x | 17x | 12x |
| 456.hmmer | Protein seq. analyses | 42x | 14x | 11x |
| 458.sjeng | Chess | 47x | 16x | 11x |
| 462.libquantum | Quantum sim | 8x | 2x | 4x |
| 464.h264ref | Video encoding | 42x | 13x | 10x |
| 471.omnetpp | Network sim | 17x | 6x | 7x |
| 473.astar | Pathfinding | 16x | 10x | 10x |
| 483.xalancbmk | XML processing | 27x | 7x | 7x |
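
Presumably each "Nx" figure above is simply the N-copy rate score divided by the score of a single copy (from the single-threaded page). The short Python sketch below shows that arithmetic with placeholder numbers rather than measured values.

```python
# A minimal sketch, assuming scaling = N-copy rate score / single-copy score.
# The example inputs are placeholders, not figures from the tables above.
def scaling_factor(rate_score: float, single_copy_score: float) -> float:
    """How many single-copy 'equivalents' the fully loaded chip delivers."""
    return rate_score / single_copy_score

def per_copy_efficiency(scaling: float, copies: int) -> float:
    """1.0 means every extra copy adds a full copy's worth of throughput."""
    return scaling / copies

# Hypothetical chip: rate score 400 with 48 copies, single-copy score 10.
s = scaling_factor(400, 10)            # 40.0 -> "40x"
print(s, per_copy_efficiency(s, 48))   # 40.0 0.833...
```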

Mcf is mostly memory latency bound, but run 32 copies on the Xeon D and you completely swamp its memory subsystem. The ThunderX and the Xeon E5 scale better simply because they can deliver more bandwidth... though one has to wonder how relevant this is to people who actually run mcf-like (latency-bound) workloads. It looks like a corner case.

The XML processing test is probably a lot closer to the real world: it is much easier to split XML (or JSON) processing into many parallel parts (one per request). This suits the ThunderX very well: it edges out the best Xeon D. The same is true for the video encoding test. This indicates that the ThunderX is most likely a capable Content Delivery Network (CDN) server.
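
As an illustration of that per-request parallelism, here is a minimal Python sketch that fans independent XML documents out to one worker process per core. The documents and the handle_request helper are made-up examples; the point is that because requests share nothing, throughput grows with core count much like the xalancbmk rate result above.

```python
# Per-request parallelism: every request carries its own XML document, so the
# workers never share state and scaling is limited mainly by core count.
import xml.etree.ElementTree as ET
from concurrent.futures import ProcessPoolExecutor

def handle_request(xml_text: str) -> int:
    """Parse one document and return a trivial result (element count)."""
    root = ET.fromstring(xml_text)
    return sum(1 for _ in root.iter())

if __name__ == "__main__":
    # Made-up requests standing in for incoming XML/JSON payloads.
    requests = [f"<order id='{i}'><item qty='2'/><item qty='1'/></order>"
                for i in range(10_000)]
    # One process per core; each request is an independent unit of work.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(handle_request, requests, chunksize=256))
    print(sum(results))
```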

Gcc and sjeng scale well, and as a result the ThunderX really shines in these subtests.

Comments

  • vivs26 - Wednesday, June 15, 2016 - link

    Not necessarily - (read Amdahl's law on diminishing returns). The performance actually depends on the workload. Having a million cores guarantees nothing in terms of performance unless the workload is parallelizable, which in the real world it is far less often than we think. I'm curious to see how a Xeon merged with Altera programmable fabric performs compared to ARM in a server.
  • maxxbot - Wednesday, June 22, 2016 - link

    Technically true, but that millstone gets a little smaller every generation; the die area and power needed to translate x86 into uops isn't huge and keeps shrinking.
  • jardows2 - Wednesday, June 15, 2016 - link

    Interesting. Faster in a few workloads where heavy use of multi-threading is important, but significantly slower in more single-threaded workloads. For server use, you don't always want parallelized tasks. The results are pretty consistent across the board for all the processors tested: if the ThunderX was slower, it was slower than all the Intel chips; if it was faster, it was faster than all but the highest-end Intel chips. With the price only slightly lower than the cheapest Intel chip being sold, I don't think this is going to be a Xeon competitor at all, but it will take a few niche applications where it can do better.

    With no significant energy savings, we should be looking forward to the ThunderX2 to see whether it turns this into a better alternative.
  • ddriver - Wednesday, June 15, 2016 - link

    There is hardly a server workload where you don't get better throughput by throwing more cores and servers at it. Servers are NOT about parallelized tasks, but about concurrent tasks. That's why, while desktops are still stuck at 8 cores, server chips come with 20 and more... Server workloads are usually very simple; it's just that there are a lot of them. They are so simple and take so little time that it literally makes no sense to parallelize them.
  • jardows2 - Wednesday, June 15, 2016 - link

    In the scenario you described, single-threaded performance takes on even more importance, highlighting the advantage the Xeons currently have in most server configurations.
  • niva - Wednesday, June 15, 2016 - link

    Not if the Xeon doesn't have enough cores to actually process 40+ single-threaded tasks concurrently.
  • hechacker1 - Wednesday, June 15, 2016 - link

    But kernels and VMware know how to schedule multiple threads on one core if it's not being fully utilized. Single-threaded IPC can make up for not having as many cores. See the iPhone SoCs for another example.
  • ddriver - Wednesday, June 15, 2016 - link

    Not if you have thousands of concurrent workloads and only like 8 cores. As fast as each core might be, the overhead from workload context switching will eat it up.
  • willis936 - Thursday, June 16, 2016 - link

    Yeah, if each task is not significantly longer than a context switch. Context switches are very fast, especially on processors with many sets of SMT registers per core.
  • ddriver - Thursday, June 16, 2016 - link

    If what you suggest were correct, then Intel would not be investing chip TDP in more cores but in higher clocks and better single-threaded performance. Clearly this is not the case, as they are pushing 20 cores at a fairly modest 2.4 GHz.
