Comparing Benchmarks: AT vs IBM

Before we close things out, let's spend a moment summarizing our results and comparing the performance we measured against the advantages IBM advertises for POWER8.

From a high-level perspective, the S822L is more expensive and consumes a lot more power than a comparable Xeon system.

With limited optimization and the current Ubuntu 15.04, the gap is even wider: the POWER8 barely outperforms the very efficient 120W TDP Xeons, so there is no denying that the Intel systems offer a superior performance-per-watt ratio.

However, it would be unfair to base our judgement solely on these numbers, as we have to admit this was our first real attempt to benchmark and test the POWER8 platform. It is very likely that we will manage to extract quite a bit more performance out of the system on a second attempt. The IBM POWER8 also has a big advantage in memory bandwidth, but we did not manage to port OpenFOAM, the benchmark most likely to leverage that advantage, to the POWER platform.

We are also less convinced that the POWER8 platform has a huge "raw CPU compute advantage," contrary to what IBM's own SPECjbb (85% faster) and SAP (29% faster) results seem to suggest.

IBM's own SPECjEnterprise®2010 benchmarking, for example, shows a much smaller gap.

SAP is "low IPC" software (it is hard to run many instructions in parallel in one thread) that benefits greatly from low-latency caches. The massive L3 cache (96 MB on the 12-core chip) and huge thread count probably give the IBM POWER8 the edge there; the RAM bandwidth also helps, but to a lesser degree. IBM clearly built POWER8 with this kind of software in mind. We had individual thread-count-intensive benchmarks (LZMA decompression) and L3-cache-sensitive benchmarks (ElasticSearch), but to be fair to IBM, none of our benchmarks leveraged the three strongest points (thread count, L3 cache size and memory bandwidth) all at once the way SAP does.

SPECjbb2013 has recently been discontinued as it was not reliable enough, so we tend to trust the jEnterprise test a lot more. In any case, the best POWER8 has a 17% advantage there.

Considering that the POWER8 inside that S824 has 20% more cores and a 3% higher clock speed, scaling that 17% advantage down accordingly (1.17 ÷ (1.20 × 1.03) ≈ 0.95) suggests that our 3.4 GHz 10-core CPU would probably end up slightly behind the Xeon E5-2697 v3. We found that the 10-core POWER8 is slightly faster than the Xeon E5-2695 v3, and the E5-2695 v3 is very similar to the E5-2697 v3; it simply runs at a roughly 10% lower clock speed (all-core turbo: 2.8 GHz vs. 3.1 GHz). So all in all, our benchmarks seem to be close to the official results, albeit slightly lower.

Closing Thoughts: A Mix of Xeon "E5" and "E7"

So let's sum things up. The IBM S822L is definitely not a good choice for those looking to lower their energy bills or operating in a location with limited cooling. The pricing of the CDIMMs makes it more expensive than a comparable Xeon E5-based server. However, you get something in return: the CDIMMs should offer higher reliability, and the memory subsystem is closer to that of the Xeon E7 than the E5. Also, PCIe adapters are hot-pluggable on the S822L and can be replaced without bringing the system down; on most Xeon E5 systems, only the disks, fans and PSUs are hot-pluggable.

In a number of ways, then, the S822L is more a competitor to dual Xeon E7 systems than to dual Xeon E5 systems. In fact, a dual Xeon E7 server consumes power in the 600-700W range, and against that backdrop the power usage of the S822L (700-800W) no longer seems outrageous.

The extra reliability is definitely a bonus when running real-time data analytics or virtualization. A failing memory chip may cost a lot when you are running fifty virtual machines on top of a server. Even in some HPC or batch data analytics applications, where you have to wait for hours for a result that is being computed in an enormous amount of memory, the savings from being able to survive a failing memory chip can be considerable.

One more thing: for those who need full control, the fact that every layer of the software stack is open makes the S822L very attractive. For now, the available "OpenCompute" Xeon servers that are also "open" seem to be mostly density-optimized servers, and their openness is limited on several levels. Rackspace felt that the current OpenCompute servers were not "open enough" and went for OpenPOWER servers instead. In all of those markets, the S822L is a very interesting alternative to dual Xeon E5 servers.

Ultimately, however, the real performance-per-dollar competitors to the Xeon E5 will most likely be third-party OpenPOWER servers. Those servers do not use CDIMMs but regular RDIMMs, and other components such as disks, network cards and PSUs will probably be cheaper, but potentially also slightly less reliable.

All in all, the arrival of OpenPOWER servers is much more exciting than most of us anticipated. Although the IBM POWER8 servers cannot beat the performance-per-watt ratio of the Xeon, we now have a server processor that is not only cheaper than Intel's best Xeons but can also keep up with them. Combine that with the fact that IBM has lined up POWER8+ for next year and that a whole range of server vendors is building their own POWER8-based servers, and we have a lot to look forward to!

Comments

  • hissatsu - Friday, November 6, 2015 - link

    You might want to look more closely. Though it's a bit blurry, I'm almost certain that's the 80+ Platinum logo, which has no color.
  • DanNeely - Friday, November 6, 2015 - link

    That's possible; it looks like there's something at the bottom of the logo. Google image search shows 80+ platinum as a lighter silver/gray than 80+ silver; white is only the original standard.
  • Shezal - Friday, November 6, 2015 - link

    Just look up the part number. It's a Platinum :)
  • The12pAc - Thursday, November 19, 2015 - link

    I have a S814, it's Platinum.
  • johnnycanadian - Friday, November 6, 2015 - link

    Oh yum! THIS is what I still love about AT: non-mainstream previews / reviews. REALLY looking forward to more like this. I only wish SGI still built workstation-level machines. :-(
  • mapesdhs - Tuesday, November 10, 2015 - link


    Indeed, but it'd need a hefty change in direction at SGI to get back into workstations again, so very unlikely for the foreseeable future. They certainly have the required base tech (NUMALink6, MPI offload, etc.), namely lots of sockets/cores/RAM coupled with GPUs for really heavy tasks (big data, GIS, medical, etc.), ie. a theoretical scalable, shared-memory workstation. But the market isn't interested in advanced performance solutions like this atm, and the margin on standard 2/4-socket systems isn't worthwhile, it'd be much cheaper to buy a generic Dell or HP (plus, it's only above this no. of sockets that their own unique tech comes into play). Pity, as the equivalent of a UV 30/300 workstation would be sweet (if expensive), though for virtually all of the tasks discussed in this article, shared memory tech isn't relevant anyway. The notion of connectable, scalable, shared memory workstations based on NV gfx, PCIe and newer multi-core MIPS CPUs was apparently brought up at SGI way back before the Rackable merger, but didn't go anywhere (not viable given the financial situation at the time). It's a neat concept, eg. imagine being able to connect two or more separate ordinary 2/4-socket XEON workstations together (each fitted with, say, a couple of M6000s) to form a single combined system with one OS instance and resources pool, allowing users to combine & split setups as required to match workloads, but it's a notion whose time has not yet come.

    Of course, what's missing entirely is the notion of advanced but costly custom gfx, but again there's no market for that atm either, at least not publicly. Maybe behind the scenes NV makes custom stuff the way SGI used to for relevant customers (DoD, Lockheed, etc.), but SGI's products always had some kind of commercially available equivalent from which the custom builds were derived (IRx gfx), whereas atm there's no such thing as a Quadro with 30000 cores and 100GB RAM that costs $50K and slides into more than one PCIe slot which anyone can buy if they have the moolah. :D

    Most of all though, even if the demand existed and the tech could be built, it'd never work unless SGI stopped using its pricing-is-secret reseller sales model. They should have adopted a direct sales setup long ago, order on the site, pricing configurator, etc., but that never happened, even though the lack of such an option killed a lot of sales. Less of an issue with the sort of products they sell atm, but a better sales model would be essential if they were to ever try to sell workstations again, and that'd need a huge PR/sales management clearout to be viable.

    Pity IBM couldn't pay NV to make custom gfx, that'd be interesting, but then IBM quit the workstation market as well.

    Ian.
  • mostlyharmless - Friday, November 6, 2015 - link

    "There is definitely a market for such hugely expensive and robust server systems as high end RISC machines are good for about 50.000 servers. "

    Rounding error?
  • DanNeely - Friday, November 6, 2015 - link

    50k clients would be my guess.
  • FunBunny2 - Friday, November 6, 2015 - link

    (dot) versus (comma) most likely. Euro centric versus 'Murcan centric.
  • DanNeely - Friday, November 6, 2015 - link

    If that was the case, a plain 50 would be much more appropriate.
