It is the widest superscalar processor on the market, one that can issue up to 10 instructions and sustain 8 per clock: IBM's POWER8. IBM's POWER CPUs have always captured the imagination of hardware enthusiasts; the POWER8 is the Tyrannosaurus Rex, the M1 Abrams of the processor world. Still, despite a flood of benchmarks and reports, it is very hard to pinpoint how it compares to the best Intel CPUs performance-wise. We admit that our own first attempt did not fully demystify the POWER8 either, because some immature LE Linux software components (OpenJDK, MySQL...) did not allow us to run our enterprise workloads.

Hence we are undertaking another attempt to understand the strengths and weaknesses of Intel's most potent challenger. And we have good reasons besides curiosity and geekiness: IBM has just recently launched the IBM S812LC, the most affordable IBM POWER based server ever. IBM advertises the S812LC as "Starting at $4,820". That is pretty amazing when you consider that this is not some basic 1U server, but a highly expandable 2U server with 32 (!) DIMM slots, 14 disk bays, 4 PCIe Gen 3 slots, and 2 redundant power supplies.

Previous "scale out" models SL812 and SL822 were competitively priced too ... until you start populating the memory slots! The required CDIMMs cost no less than 4(!) times more than RDIMMs, which makes those servers very unattractive for the price conscious buyers that need lots of memory. The S812LC does not have that problem: it makes use of cheap DDR3 RDIMMs. And when you consider that the actual street prices are about 20-25% lower, you know that IBM is in Dell territory. There is more: servers from Inventec, Inspur, and Supermicro are being developed, so even more affordable POWER8 servers are on the way. A POWER8 server is thus quite affordable now, and it looks like the trend is set.

To that end, we decided to take a more accurate measure of how the POWER8 architecture compares to the latest Xeons. In this first article we focus on characterizing the microarchitecture and the "raw" integer performance. Although the POWER8 architecture has been around for 2 years now, we could not find any independent Little Endian benchmark data that allowed us to compare POWER8 processors with Intel's Xeon processors across a broad range of applications.
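
To make "raw" integer performance a bit more concrete, here is a minimal, hypothetical C sketch (the file name, loop count, and multiply-add recurrence are arbitrary, and it is not part of our actual benchmark suite) that contrasts one long dependency chain with eight independent chains. A wide out-of-order core can overlap the independent chains, so the second loop should take far less than eight times as long.

```c
/* ilp.c - hypothetical instruction-level-parallelism sketch.
 * Build with, for example: gcc -O2 ilp.c -o ilp
 * The first loop is a single multiply-add dependency chain, so it is bound
 * by instruction latency. The second loop runs eight independent chains
 * that a wide core (POWER8, modern Xeon) can execute side by side. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define N 200000000ULL

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    volatile uint64_t sink;          /* keeps the loops from being optimized away */
    uint64_t a = 1, b = 1, c = 1, d = 1, e = 1, f = 1, g = 1, h = 1;

    double t0 = seconds();
    for (uint64_t i = 0; i < N; i++)
        a = a * 3 + i;               /* each iteration depends on the previous one */
    double serial = seconds() - t0;
    sink = a;

    a = 1;
    t0 = seconds();
    for (uint64_t i = 0; i < N; i++) {
        a = a * 3 + i; b = b * 3 + i; c = c * 3 + i; d = d * 3 + i;
        e = e * 3 + i; f = f * 3 + i; g = g * 3 + i; h = h * 3 + i;
    }
    double parallel = seconds() - t0;
    sink = a + b + c + d + e + f + g + h;
    (void)sink;

    printf("1 chain : %.2f s\n", serial);
    printf("8 chains: %.2f s for 8x the work (%.1fx the time)\n",
           parallel, parallel / serial);
    return 0;
}
```

Real workloads are of course far less regular than a toy loop like this, which is why we rely on full applications rather than synthetic kernels for the actual comparison.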

Notice our emphasis on "Little Endian". In our first review, we indeed tested on a relatively immature LE Ubuntu 14.04 for OpenPOWER. Some people felt that this was not fair, as the POWER8 would do a lot better on top of a Big Endian operating system simply because of the software maturity. But the market says otherwise: if IBM does not want to be content with fighting Oracle in an ever shrinking high-end RISC market, it needs to convince the hyperscalers and the thousands of smaller hosting companies. POWER8 servers will need to find a place inside their x86-dominated datacenters. A rich LE Linux software ecosystem is the key that opens the door to those datacenters.
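
For readers less familiar with the terminology, the small hypothetical C snippet below shows what the Little Endian versus Big Endian distinction means at the byte level. POWER8 can run in either mode; the ppc64le Linux distributions we test store data in the same byte order as x86.

```c
/* endian.c - tiny, hypothetical endianness illustration (not a benchmark).
 * On a little-endian system (x86, or POWER8 running ppc64le Linux) the least
 * significant byte of a word is stored first: 04 03 02 01.
 * On a big-endian system (traditional Linux/AIX on POWER) the same word is
 * stored as 01 02 03 04. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t word = 0x01020304;
    const unsigned char *bytes = (const unsigned char *)&word;

    printf("0x%08x in memory: %02x %02x %02x %02x\n",
           (unsigned)word, bytes[0], bytes[1], bytes[2], bytes[3]);
    printf("this machine is %s-endian\n",
           bytes[0] == 0x04 ? "little" : "big");
    return 0;
}
```

The byte order itself is rarely a performance issue; the point is that LE mode is what the x86-centric software stacks expect, which is why we insist on testing it.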

When it came to taking another crack at our testing, we found that running Ubuntu 15.10 (16.04 was not out yet when we started testing) solved a lot of the issues (OpenJDK, MySQL) that made our previous attempt at testing the POWER8 so hard and incomplete. Therefore we felt that, despite 2 years of benchmarking on POWER8, an independent LE Linux-focused article could still add value.

Inside the Beast(s)
Comments

  • Michael Bay - Sunday, July 24, 2016 - link

    Hardware does not exist for its own sake, it exists to run software. AT is entirely correct in their methodology.
  • jospoortvliet - Tuesday, July 26, 2016 - link

    I'd argue it is the other way around: GCC might leave 5-10% performance on the table in some niche cases, but it does just fine most of the time. There's a reason Intel and IBM contribute to GCC - to make sure it doesn't fall too far behind, as they know very well that most of their customers use this compiler and not their proprietary ones.

    Of course, for scientific computing and other niches it makes all the difference, and one can argue these heavy systems ARE for niche markets, but I still think it was a sane choice to go with GCC.
  • abufrejoval - Thursday, August 4, 2016 - link

    Actually, exercising 90% of all transistors on a CPU die these days is both very hard to do (next to impossible) and would only slow down the clock to avoid overstepping the TDP.

    And I seriously doubt that GCC would leave a CPU at only 10% of its computational capacity.

    Actually, from what I saw, GCC itself (while compiling) was best at exploiting the full 8-thread potential of the POWER8. And since GCC is compiled by itself, that speaks for the quality of the machine code it can produce, if the source allows it. And that speaks for the quality of the GCC source code - ergo, prove you can do better before you rant.
  • abufrejoval - Thursday, August 4, 2016 - link

    Well, this is part 1 and it describes one scenario. What you want is another scenario, and of course it's a valid one, if a very distinct one.

    Actually, distinct is the word here: you'd be using a vendor's compiler if your main job is a distinct workload, because you'd want to squeeze every bit of performance out of it.

    The problem with that, of course, is that any distinct workload makes it rather boring for the general public, because they cannot translate the benchmark to their environment.

    AT aims to satisfy the broadest meaningful audience, and Johan has done a great, great job at that.

    I'm sure he'll also write a part 4711 for you specifically, if you make it economically attractive.

    Hell, even I'd do that given the proper incentive!
  • Zan Lynx - Sunday, July 24, 2016 - link

    Using GCC as the compiler is also why (in my opinion) the Intel chips aren't using their full TDP. Large areas of Intel chips are dedicated to vector operations in SSE and AVX. If you don't issue those instructions, then half the chip isn't even being used (a small vectorization sketch follows the comments).

    Some gamers who love their overclocked Intel chips have actually complained to game engine developers who added AVX to their engines, because it ruins their overclock even if the game runs much faster. They then end up forced to clock down from 4.5 GHz to 3.7 GHz to avoid lockups or thermal throttling.
  • Kevin G - Sunday, July 24, 2016 - link

    The Xeon E5 v3's had different clock speeds for AVX code: it consumed too much power and got too hot under full load.

    This holds true on the E5 v4's, but there the AVX penalty is applied on a core-by-core basis, not across the entire chip. The result is improved performance in mixed workloads. This is a good thing, as AVX hasn't broken out much beyond the HPC market.
  • talonted - Monday, July 25, 2016 - link

    For those interested in getting a POWER8 workstation, check out Talos:

    https://www.raptorengineering.com/TALOS/prerelease...
  • 137ben - Monday, July 25, 2016 - link

    I made an account to say that this article (along with the subsequent stock-cooler comparison article) is why I really love Anandtech. A lot of the code I run and write for my research is CPU-bottlenecked, yet until the last year or so I didn't know very much about hardware. Reading Anandtech, I have learned more about the hardware I depend on than from any other website; most just repeat announcements or run meaningless, cursory synthetic benchmarks. The fact that Johan De Gelas has written such a deep dive into the inner workings of something as complex as a server CPU architecture, and done it in a way that I can understand, is remarkable. Great job Anandtech, keep it up and I'll always come back.
  • JohanAnandtech - Thursday, July 28, 2016 - link

    You made me a happy man, I achieved my goal :-)
  • alpha754293 - Wednesday, July 27, 2016 - link

    Excellent work and review as always, Johan. I would have been interested to see how the two processors perform in floating point intensive benchmarks, though...
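
To illustrate the vectorization point raised in the comments above: the loop below typically compiles to scalar code at -O2, while asking GCC for an AVX2-capable target at -O3 usually produces 256-bit vector (and FMA) instructions that actually exercise the wide SIMD units - and, as noted, may trigger the lower AVX clock speeds on Xeons. The file name and flags are hypothetical examples, not the settings used in this review.

```c
/* saxpy.c - hypothetical example of the same loop built two ways:
 *   gcc -O2 -S saxpy.c                  -> typically scalar SSE code
 *   gcc -O3 -march=haswell -S saxpy.c   -> typically 256-bit AVX2/FMA code
 * Only the second build keeps the wide vector units busy. */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```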
