Comparing Benchmarks: AT vs IBM

Before we close things out, let's spend a moment summarizing our results and comparing the performance we saw with the performance advantages IBM advertises for POWER8.

From a high-level perspective, the S822L is more expensive and consumes a lot more power than a comparable Xeon system.

With limited optimization and the current Ubuntu 15.04, the performance-per-watt ratio favors Intel even more, as the POWER8 barely outperforms the very efficient 120W TDP Xeons. There is no denying that the Intel systems offer superior performance per watt.

However, it would be unfair to pass final judgement here, as we have to admit this was our first real attempt to benchmark and test the POWER8 platform. It is very likely that we will manage to extract quite a bit more performance out of the system on our second attempt. The IBM POWER8 also has a big advantage in memory bandwidth, but we did not manage to port OpenFOAM, the most likely candidate for leveraging that advantage, to the POWER platform.

We are less convinced that the POWER8 platform has a huge "raw CPU compute advantage," contrary to what, for example, IBM's SPECjbb (85% faster) and SAP (29% faster) results seem to suggest.

SAP is "low IPC" software (it is hard to run many instructions in parallel in one thread) that benefits greatly from low-latency caches. The massive L3 cache (96 MB on the 12-core version) and huge thread count are probably giving the IBM POWER8 the edge; the RAM bandwidth also helps, but to a lesser degree. IBM clearly built POWER8 with this kind of software in mind. We had individual thread-count-intensive benchmarks (LZMA decompression) and L3-cache-sensitive benchmarks (ElasticSearch), but to be fair to IBM, none of our benchmarks leveraged the three strongest points (thread count, L3 cache size, and memory bandwidth) all at once the way SAP does.

SPECjbb2013 has recently been discontinued as it was not reliable enough, so we tend to trust IBM's SPECjEnterprise®2010 results a lot more. There, the best POWER8 has a 17% advantage.

Considering that the POWER8 inside that S824 has 20% more cores and a 3% higher clockspeed, our 3.4 GHz 10-core CPU would probably be slightly behind the Xeon E5-2697 v3. We found that the 10-core POWER8 is slightly faster than the Xeon E5-2695 v3, which is very similar to the E5-2697 v3, just running at a 10% lower clockspeed (all-core turbo: 2.8 GHz vs 3.1 GHz). So all in all, our results seem to be close to the official benchmarks, albeit slightly lower.
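
As a quick sanity check, here is the back-of-the-envelope scaling behind that estimate, written out as a minimal sketch. It assumes performance scales linearly with core count and clockspeed, which is of course a simplification; the input figures are the ones quoted above.

```python
# Back-of-the-envelope scaling, assuming near-linear scaling with
# core count and clockspeed (a simplification, not a measured model).
s824_advantage = 1.17  # best POWER8 (S824) vs. Xeon E5-2697 v3 in SPECjEnterprise2010
core_ratio = 1.20      # the S824's chip has 20% more cores than our 10-core part
clock_ratio = 1.03     # ...and a 3% higher clockspeed

# Scale the S824 result down to our 3.4 GHz 10-core POWER8.
estimate = s824_advantage / (core_ratio * clock_ratio)
print(f"Estimated 10-core POWER8 vs. E5-2697 v3: {estimate:.2f}x")  # ~0.95x

# Landing roughly 5% behind the E5-2697 v3 squares with our finding that the
# 10-core POWER8 is slightly faster than the ~10% lower-clocked E5-2695 v3.
```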

Closing Thoughts: A Mix of Xeon "E5" and "E7"

So let's sum things up. The IBM S822L is definitely not a good choice for those looking to lower their energy bills or operate in a location with limited cooling. The pricing of the CDIMMs makes it more expensive than a comparable Xeon E5-based server. However, you get something in return: the CDIMMs should offer higher reliability, and they make the memory subsystem more similar to that of the E7 than the E5. Also, PCIe adapters are hot-pluggable on the S822L and can be replaced without bringing down the system; with most Xeon E5 systems, only disks, fans, and PSUs are hot-pluggable.

In a number of ways, then, the S822L is more a competitor to dual Xeon E7 systems than to dual Xeon E5 systems. In fact, a dual Xeon E7 server consumes in the 600-700W range, and next to that the power usage of the S822L (700-800W) does not seem outrageous anymore.

The extra reliability is definitely a bonus when running real-time data analytics or virtualization. A failing memory chip may cost a lot when you are running fifty virtual machines on top of a server. Even in some HPC or batch data analytics applications, where you have to wait hours for a result that is being computed in an enormous amount of memory, the cost savings of being able to survive a failing memory chip can be considerable.

One more thing: for those who need full control, the fact that every layer in the software stack is open makes the S822L very attractive. For now, the available "OpenCompute" Xeon servers that are also "open" seem to mostly be density-optimized servers, and their openness seems limited on several levels. Rackspace felt that the current OpenCompute servers are not "open enough" and went for OpenPOWER servers instead. In all those markets, the S822L is a very interesting alternative to dual Xeon E5 servers.

Ultimately, however, the real performance-per-dollar competitors to the Xeon E5 will most likely be third-party OpenPOWER servers. Those servers do not use CDIMMs but regular RDIMMs, and other components such as disks, network cards, and PSUs will probably be cheaper, though potentially also slightly less reliable.

All in all, the arrival of OpenPOWER servers is much more exciting than most of us anticipated. Although the IBM POWER8 servers cannot beat the performance-per-watt ratio of the Xeon, we now have a server processor that is not only cheaper than Intel's best Xeons but can also keep up with them. Combine that with the fact that IBM has lined up POWER8+ for next year and that a whole range of server vendors is building their own POWER8-based servers, and we have a lot to look forward to!

Comments

  • psychobriggsy - Friday, November 6, 2015 - link

    So you are complaining that your job's selection of hardware has made you earn twice as much?
  • dgingeri - Friday, November 6, 2015 - link

    No, because I don't earn twice as much. I'm not fully trained in AIX, so I have to muddle my way through dealing with the test machines we have. We don't use them for full production machines, just for testing software for our customers. (Which means I have to reinstall the OS on at least one of those machines about every month or so. That is a BIG pain in the behind due to the boot procedure. Where it takes a couple hours to reinstall Windows or Linux, it takes a full day to do it on an AIX machine.)

    I'm trying to advise people NOT to use AIX. It's an awful operating system. I'm also advising people NOT to use IBM Power-based machines, because they are extremely aggravating to work on. Overall, it costs much more to run IBM Power machines, even if they aren't running AIX, than it does to run x86 machines. The up-front cost might look competitive, but the maintenance costs are huge. Running AIX on them makes it an order of magnitude more expensive.
  • serpint - Friday, November 6, 2015 - link

    I suggest reading the NIM A-Z handbook. It shouldn't take you more than 10 minutes to deploy an AIX system fully built and installed with software. As with Linux, it also shouldn't take more than about 10 minutes to install and fully deploy a server if you have any experience scripting installs.

    The developerworks community inside IBM is possibly the best free resource you could hope for. Also the redbooks.ibm.com site.

    Compared to most *NIX flavors, AIX is UNIX for dummies.
  • agtcovert - Tuesday, November 10, 2015 - link

    If you had a NIM server set up and were using LPARs, loading a functional image of AIX should take 10 minutes flat on a 1G network.

    If you're loading AIX on a physical machine without using the virtualization, you're wasting the server.
  • agtcovert - Tuesday, November 10, 2015 - link

    I've worked on AIX platforms extensively for about the same amount of time. First, most of these purchases go through a partner, and yours must've sucked, because we got great support from our IBM partner -- free training, access to experts, that sort of thing.

    Second, I always love the complaining about the cost of the hardware, etc. If you're buying big iron Power servers, the maintenance cost should be near irrelevant. And again, your partner should take care to negotiate that into the deal for 3-5 years ensuring you have access to updates.

    The other thing no one ever talks about is *why* you buy these servers. Why do they take so long to boot? Well, for the frame itself, it's a deep POST. But then, mine were never rebooted in 4 years, and that includes firmware upgrades (done online) and a couple of interface card swaps (also done online with no service disruption). Do that on x86. So reason #1 -- RAS, at the hardware level. Seriously, how often did you need to reboot the frame?

    Reason #2 -- for large enterprises, you can do so much with relatively few cores that these servers lead to huge licensing savings in Oracle and IBM software. For us, it was over $1m a year ongoing. And no, switching to other software was not an option. We could run an Oracle RAC on 4 cores of Power 7 (at the time) versus the 32 x86 cores it was on previously. That saves a lot of $.

    The machine reviewed does not run AIX. It's Linux only. So the maintenance, etc. you mention isn't even relevant.

    There are still things that are annoying, I suppose. AIX is steeped in legacy to some degree, and certainly not as easy to manage as a Linux box. But there are a lot of guides out there for free -- it took me about a month to be fully productive. And the support costs you pay for? Well, if I ran into a wall, I just opened a PMR. IBM was always helpful.
  • nils_ - Wednesday, November 11, 2015 - link

    I'm mostly working in Linux DevOps now, but I remember dreading having to use all the "classic" Unix machines at my first "real" job 12 years ago. We ran a few IRIX and AIX boxes, which were ancient even then. Hell, the first thing I did on my work MacBook was to replace the BSD userland with GNU wherever possible.

    It's hard to find any information on them, and any learning materials are expensive and usually on dead trees. They pretty much want to sell training, consulting, etc. along with the often non-competitive hardware prices, since these companies don't actually WANT to sell hardware. They want to sell everything that surrounds it.
  • retrospooty - Friday, November 6, 2015 - link

    The problem with server chips is that it's all about platform stability. IBM (and others) dropped off the face of the Earth, and as mentioned above, Intel now has 95% of the market. This chip looks great, but will companies buy into it en masse? What if IBM decides to drop off the face of the Earth again and your platform is dead-ended? I would have to think long and hard about going with them at this point.
  • FunBunny2 - Friday, November 6, 2015 - link

    Not likely. The mainframe z machines are built using POWER blocks.
  • Kevin G - Friday, November 6, 2015 - link

    POWER and System Z are two different architectures. Case in point: POWER is a RISC design introduced in the 1990s, whereas the System Z mainframes can trace their roots to a CISC design from the 1960s (and it is still possible to run some of that 1960s code unmodified).

    They do share a handful of common parts (think the CDIMMs) to cut down on support costs.
  • plonk420 - Friday, November 6, 2015 - link

    can you run an x264 benchmark on it?? x)
