Multi-Threaded Integer Performance on One Core: SPEC CPU2006

Broadly speaking, the value of SPEC CPU2006's int rate test is questionable, as it puts too much emphasis on bandwidth and way too little emphasis on data synchronization. However, it does give some indication of the total "raw" integer compute power available.

We will make an attempt to understand the differences between IBM and Intel, but to be really accurate we would need to profile the software and run dozens of tests while looking at the performance counters. That would have delayed this article far too much. So we can only make an educated guess based upon what the existing academic literature says and our experiences with both architectures.

The Intel CPU performance is the 100% baseline in each column.

SPEC CPU2006 Integer Subtest | Application Type | POWER8 vs Xeon E5-2699 v4 (Single Thread) | POWER8 vs Xeon E5-2699 v4 (Max Threads) | POWER8 vs Xeon E5-2699 v4 (Top Performance)
400.perlbench | Spam filter | N/A | N/A | N/A
401.bzip2 | Compression | 91% | 139% | 139%
403.gcc | Compiling | 111% | 185% | 185%
429.mcf | Vehicle scheduling | 121% | 167% | 167%
445.gobmk | Game AI | 90% | 156% | 156%
456.hmmer | Protein seq. analysis | 79% | 79% | 101%
458.sjeng | Chess | 69% | 117% | 117%
462.libquantum | Quantum simulation | 76% | 160% | 162%
464.h264ref | Video encoding | 80% | 120% | 131%
471.omnetpp | Network simulation | 100% | 141% | 141%
473.astar | Pathfinding | 87% | 156% | 156%
483.xalancbmk | XML processing | 70% | 116% | 116%

On (geometric) average, a single thread runs about 13% slower on the IBM POWER8 core than on an Intel Broadwell architecture core. So our suspicion that Intel is still a bit better at extracting instruction-level parallelism out of a single thread is confirmed.

Intel gains the upper hand in the applications where branch prediction plays an important role: chess (sjeng), pathfinding (astar), protein seq. analysis (hmmer), and AI (gobmk). Intel's branch misprediction penalty is lower when the correct path is still available in the µop cache (the Decode Stream Buffer), and Intel has a few clever tricks that the IBM core lacks, such as the loop stream detector.

Where the POWER8 core shines is in the benchmarks where memory latency is important and where the load units are a bottleneck, like vehicle scheduling (mcf). This is also true, to a lesser degree, for the network simulation (omnetpp). The reason might be that omnetpp puts a lot of pressure on the OoO buffers, and Intel's architecture offers more room with its unified buffers, whereas the IBM POWER8's buffers are more partitioned (see for example the issue queue). Meanwhile XML processing does a lot of pointer chasing, but quick profiling has shown that this benchmark mostly hits the L2 and, to some extent, the L3, so there is no disadvantage for Intel there. On the flip side, xalancbmk is the benchmark with the highest pressure on the ROB, and again the larger OoO buffers available to a single thread might help Intel do better.
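To make the "pointer chasing" pattern concrete, here is a minimal sketch in C (our own illustration, not part of SPEC or of our test scripts; the array size, iteration count, and timing method are arbitrary assumptions). Each load address comes from the result of the previous load, so the out-of-order engine cannot overlap the misses, and the time per step approximates the latency of whichever cache level the working set lands in:

```c
/* Sketch: a dependent-load ("pointer chasing") loop.
 * Build: gcc -O2 chase.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* 2^24 eight-byte entries = 128 MB, well beyond the L3 of either chip,
 * so most steps of the chase pay close to the full memory latency. */
#define N ((size_t)1 << 24)

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    if (next == NULL) return 1;

    /* Sattolo's shuffle turns the identity into a single big cycle,
     * so "p = next[p]" visits every entry before returning to the start. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;   /* j < i keeps the permutation cyclic */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    /* The chase: every load depends on the previous one, so the
     * out-of-order engine cannot overlap the cache misses. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("avg dependent-load latency: %.1f ns (final index %zu)\n",
           ns / (double)N, p);
    free(next);
    return 0;
}
```

mcf spends much of its time in loops of this kind once its working set spills out of the caches, which is presumably where the POWER8 memory subsystem earns its advantage.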

POWER8 also does well in GCC, which has a high percentage of branches in the instruction mix, but very few branch mispredictions. GCC compiling is latency sensitive, so POWER8's 3-cycle L1, 13-cycle L2, and fast 8 MB local L3 slice help.

Finally, the pathfinding (astar) benchmark does some intensive pointer chasing, but it misses the L1 and L2 caches much less often than xalancbmk, and it has the highest branch misprediction rate. The impact of pointer chasing and memory latency is thus minimal.

Once all threads are active, the IBM POWER8 core is able to outperform the Intel CPU by 41% (geometric mean).
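As a sanity check, both geometric means quoted above can be reproduced from the table (a minimal sketch, not the scripts used for the actual runs; the N/A perlbench row is excluded, and the 41% figure corresponds to the "Top Performance" column):

```c
/* Sketch: recompute the geometric means from the table above.
 * Build: gcc geomean.c -lm */
#include <math.h>
#include <stdio.h>

/* Geometric mean = exp of the arithmetic mean of the logarithms. */
static double geomean(const double *v, int n)
{
    double log_sum = 0.0;
    for (int i = 0; i < n; i++)
        log_sum += log(v[i]);
    return exp(log_sum / n);
}

int main(void)
{
    /* POWER8 score as a percentage of the Xeon E5-2699 v4 score,
     * taken from the table above; perlbench (N/A) is excluded. */
    double single_thread[] = { 91, 111, 121, 90, 79, 69, 76, 80, 100, 87, 70 };
    double top_perf[]      = { 139, 185, 167, 156, 101, 117, 162, 131, 141, 156, 116 };
    int n = (int)(sizeof single_thread / sizeof single_thread[0]);

    /* Prints roughly 87% (about 13% slower) and 141% (about 41% faster). */
    printf("Single thread geomean:   %.0f%%\n", geomean(single_thread, n));
    printf("Top performance geomean: %.0f%%\n", geomean(top_perf, n));
    return 0;
}
```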

Comments

  • zodiacfml - Thursday, July 21, 2016 - link

    Like a good TV series, I can't wait for the next episode.
  • aryonoco - Friday, July 22, 2016 - link

    OK, this is literally why Anandtech is the best in the tech journalism industry.

    There is nowhere else on the net that you can find a head to head comparison between POWER and Xeon, and unless you work in the tech department of a Fortune 500 company, this information has just not been available, until now.

    Johan, thank you for your work on this article. I did give you beef in your previous article about using LE Ubuntu but I concede your point. Very happy you are writing more for Anandtech these days.

    Xeons really need some competition. Whether that competition comes from POWER or ARM or Zen, I am happy to see some competition. IBM has big plans for POWER9. Hopefully this is just the start of things to come.
  • JohanAnandtech - Friday, July 22, 2016 - link

    Thanks! It is very exciting to perform benchmarks that nobody has published yet :-).

    In hindsight, I have to admit that the first article contained too few benchmarks that really mattered for POWER8. Most of our usual testing and scripting did not work, and so after a lot of tinkering, swearing and sweat I got some benchmarks working on this "exotic to me" platform. The contrast between what one would expect to see on POWER8 and me being proud of being able to somewhat "tame the beast" could not have been greater :-). In other words, there was a learning curve.
  • tipoo - Friday, July 22, 2016 - link

    I found it very interesting as well and would certainly not mind seeing more from this space, like maybe Xeon Phi and SPARC M7
  • jospoortvliet - Tuesday, July 26, 2016 - link

    Amen. But, to not ask too much, just the prospect of part 2 of the Power benchmark is already super exciting. Yes, the Internetz need more of this!
  • Daniel Egger - Friday, July 22, 2016 - link

    Not quite sure what the endianness of a system adds to the competitive factor. Maybe someone could elaborate why it is so important to run a system in LE?
  • ZeDestructor - Friday, July 22, 2016 - link

    Not much, really, with the compilers being good and all that.

    Really, it's quite clearly there just for some excellent alliteration.
  • JohanAnandtech - Friday, July 22, 2016 - link

    Basically LE reduces the barrier for an IBM server being integrated in an x86-dominated datacenter.

    see https://www.ibm.com/developerworks/community/blogs...

    Just a few reasons:

    "Numerous clients, software partners, and IBM’s own software developers have told us that porting their software to Power becomes simpler if the Linux environment on Power supports little endian mode, more closely matching the environment provided by Linux on x86. This new level of support will *** lower the barrier to entry for porting Linux on x86 software to Linux on Power **."

    "A system accelerator programmer (GPU or FPGA) who needs to share memory with applications running in the system processor must share data in an pre-determined endianness for correct application functionality."
  • Daniel Egger - Friday, July 22, 2016 - link

    While correct in theory, this hasn't been a problem for the last 20 years. People are used to using BE on PPC/POWER; the software, the drivers and the infrastructure are very mature (as a matter of fact it was my job 15 years ago to make sure they are). PPC/POWER actually has configurable endianness, so if someone wanted to go LE earlier it would have easily been possible, but only a few ever attempted that stunt; so why have the big disruption now?
  • KAlmquist - Friday, July 22, 2016 - link

    I assume that this is about selling POWER boxes to companies that currently run all x86 servers, and have a bunch of custom software that they might be willing to recompile. If the customer has to spend a bunch of time fixing endian dependencies in his software in order to get it to work on POWER, it will probably be less expensive for them to simply stick with x86.
