Closing Thoughts

Testing both the IBM POWER8 and the Intel Xeon E5 v4 with an unbiased compiler gave us answers to many of the questions we had. The bandwidth advantage of the POWER8's memory subsystem has been quantified: IBM's most affordable core can offer twice as much bandwidth as Intel's, at least if your application is not (perfectly) vectorized.

Despite the fact that the POWER8 can sustain 8 instructions per clock versus 4 to 5 for modern Intel microarchitectures, chips based on Intel's Broadwell architecture deliver the highest instructions-per-clock rate in most single threaded situations. The larger OoO buffers (available to a single thread!) and the somewhat lower branch misprediction penalty seem to be the most likely causes.

However, the difference is not large: the POWER8 CPU inside the S812LC delivers about 87% of the Xeon's single threaded performance at the same clock. That the POWER8 would excel in memory intensive workloads is not a surprise. However, the fact that the large L2 and eDRAM-based L3 caches offer very low latency (at up to 8 MB) was a surprise to us. And while it is a logical result in hindsight, we did not expect the POWER8 to come out on top when compiling with GCC.

The POWER8 microarchitecture is clearly built to run at least two threads. On average, running two threads per core gives a massive 43% performance boost, with peaks of up to 84%. This is in sharp contrast with Intel's SMT, which delivers an 18% performance boost with peaks of up to 32%. Taken further, SMT-4 on the POWER8 outright doubles performance compared to running a single thread in many of the SPEC CPU subtests.

All in all, the maximum throughput of one POWER8 core is about 43% higher than that of a similar Broadwell-based Xeon E5 v4 core. Considering that adding more cores hardly ever results in perfect scaling, a POWER8 CPU should be able to keep up with a Xeon that has 40 to 60% more cores.

To be fair, we have noticed that the Xeon E5 v4 (Broadwell) consumes less power than its official TDP specification, in notable contrast to its v3 (Haswell) predecessor. So it must be said that the power consumption of the 10-core POWER8 CPU used here is much higher. On paper it is 190W for the CPU plus 64W for the Centaur buffer chips, versus 145W for the Intel CPU. In practice, we measured 221W at idle on our S812LC, while a similarly equipped Xeon system idled at around 90-100W. So the POWER8 should be considered in situations where performance is a higher priority than power consumption, such as databases and (big) data mining. It is not suited for applications that run close to idle much of the time and experience only brief peaks of activity; in those markets, Intel has a large performance-per-watt advantage. But there are definitely opportunities for a more power hungry chip if it can deliver significantly greater performance.

Ultimately, the launch of IBM's LC servers deserves our attention: it is a monumental step forward in IBM's effort to compete with Intel in a much larger part of the market. These servers seem to be competitively priced with similar Xeon systems and can access the same little endian data as an x86 server. But can a POWER8-based system really deliver a significant performance advantage in real server applications? In the next article we will explore the S812LC and its performance in real server situations, so stay tuned.

Comments

  • HellStew - Wednesday, July 27, 2016 - link

    It depends what kind of software you are running. If you are running giant backend workloads on x86, you can seamlessly migrate that data to PPC while keeping custom front ends running on x86.
  • aryonoco - Saturday, July 23, 2016 - link

    Johan, maybe the little endian-ness makes a difference in porting proprietary software, but pretty much all open source software on Linux has supported BE POWER for a long time.

    If you get the time and the inclination Johan, it would be great if you could say do some benchmarks on BE RHEL 7 vs LE RHEL 7 on the same POWER 8 system. I think it would make for fascinating reading in itself, and would show if there are any differences when POWER operates in BE mode vs LE mode.
  • aryonoco - Saturday, July 23, 2016 - link

    Actually scrap that, seems like IBM is fully focusing on LE for Linux on POWER in future. I'm not sure there will be many BE Linux distributions officially supporting POWER9 anyway. So your choice of focusing on LE Linux on POWER is fully justified.
  • HellStew - Wednesday, July 27, 2016 - link

    Side note: Once you are running KVM, you can run any mix of BE and LE linux varieties side by side. I'm running FedoraBE, SuSE BE, Ubuntu LE, CentOS LE, and (yes a very slow copy of windows) on one of these chips
  • rootbeerrail - Saturday, July 23, 2016 - link

    If a machine is completely isolated, it doesn't matter much to the machine. I personally find BE easier to read in hex dumps because it follows the left-to-right nature of English numbers, but there are reasons to use LE for human understanding as well.

    The problem shows up the instant one tries to interchange binary data. If the endian order does not match, the data is going to get scrambled. Careful programming can work around this issue, but not everyone is a careful programmer - there's a lot of 'get something out the door' from inexperienced or lazy people. If everything is using the same conventions (not only endianness, but also the size of the binary data types, which is less of a problem now that most everything has converged to 64-bit), it's not an issue. Thus having LE on POWER makes the interchange of binary data with the x86 world easier.
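    A minimal sketch of the kind of careful programming described above, assuming a hypothetical 32-bit field being exchanged between machines: always serialize multi-byte values in one agreed-upon byte order (network/big-endian here), so the bytes mean the same thing on an LE x86 box and a BE POWER box. The helper names are illustrative, not from any real protocol.

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl()/ntohl() */

    /* Write a value into a byte buffer in a fixed (big-endian) order. */
    static void put_u32(uint8_t *buf, uint32_t value)
    {
        uint32_t be = htonl(value);          /* host order -> big-endian */
        memcpy(buf, &be, sizeof be);
    }

    /* Read it back on any host, regardless of that host's endianness. */
    static uint32_t get_u32(const uint8_t *buf)
    {
        uint32_t be;
        memcpy(&be, buf, sizeof be);
        return ntohl(be);                    /* big-endian -> host order */
    }

    int main(void)
    {
        uint8_t wire[4];
        put_u32(wire, 0x11223344u);
        printf("on the wire: %02x %02x %02x %02x -> decoded 0x%08x\n",
               wire[0], wire[1], wire[2], wire[3],
               (unsigned)get_u32(wire));
        return 0;
    }
    ```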
  • errorr - Friday, July 22, 2016 - link

    Great article! Just an FYI, the term "just", as in "just out" on the first page, has different meanings on opposite sides of the Atlantic and is usually avoided in writing for international audiences. I'm not quite sure which one is used here. The NaE reading would mean 'just out' in that it had come out right before, while the BrE reading would mean it came out right after the time period referenced in the sentence.
  • xCalvinx - Friday, July 22, 2016 - link

    awesome!!.. keep up the good work.. looking forward to Part 2!! ... actually can't wait.. hurry up lol.. :)

    double thumbsup
  • Mpat - Friday, July 22, 2016 - link

    Skylake does not have 5 decoders, it is still 4. I know that that segment of the optimization manual is written in a cryptic way, but this is what actually happened: up until Broadwell there are 4 decoders and a max bandwidth from the decoder segment of 4 uops. If the first decoder (the complex one) produces 4 uops from one x86 op, the other decoders can't work. If the first produces 3, then the second can produce 1, etc. This means that the decoders can produce one of these combinations of uops in a cycle, depending on how complex a task the first decoder has: 1/1/1/1, 2/1/1, 3/1, or 4. Skylake changes this so the max bandwidth from that segment is now 5, and the legal combinations become 1/1/1/1, 2/1/1/1, 3/1/1, and 4/1. You still can't do 1/1/1/1/1, so there are still only 4 decoders. Make sense?
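    Purely as an illustration (a toy model, not anything taken from Intel's manual), the legal combinations above fall out of a simple rule: the complex decoder emits 1-4 uops, each of the three simple decoders emits at most 1, and the total per cycle is capped at 4 (through Broadwell) or 5 (Skylake).

    ```c
    #include <stdio.h>

    /* Toy model of the decode combinations described above: one complex
     * decoder (1-4 uops) plus three simple decoders (1 uop each), with the
     * total capped by the decode bandwidth of the chip. */
    static int simple_decoders_used(int complex_uops, int max_bandwidth)
    {
        int remaining = max_bandwidth - complex_uops;
        if (remaining < 0) remaining = 0;
        return remaining > 3 ? 3 : remaining;   /* only 3 simple decoders exist */
    }

    int main(void)
    {
        for (int uops = 1; uops <= 4; uops++) {
            printf("complex emits %d uop(s): Broadwell adds %d simple, Skylake adds %d simple\n",
                   uops,
                   simple_decoders_used(uops, 4),   /* -> 1/1/1/1, 2/1/1, 3/1, 4       */
                   simple_decoders_used(uops, 5));  /* -> 1/1/1/1, 2/1/1/1, 3/1/1, 4/1 */
        }
        return 0;
    }
    ```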
  • ReaperUnreal - Friday, July 22, 2016 - link

    Why do the tests with GCC? Why not give each platform their full advantage and go with ICC on Intel and xLC on Power? The compiler can make a HUGE difference with benchmarks.
  • Michael Bay - Saturday, July 23, 2016 - link

    It's right in the text why.
