Energy and Pricing

Unfortunately, we were not able to accurately and fairly compare energy consumption at the system level between the S822L and other systems, as there were quite a few differences in the hardware configurations. For example, the IBM S822L had two SAS controllers, and we had no idea how power hungry the chip under the copper heatsink was. Still, there is no doubt that the dual-CPU configuration is by far the largest power consumer when the server is under load. In the case of the IBM system, the Centaur chips take their fair share too, but those chips are not optional. So we can only get a very rough idea of how the power consumption compares.

Xeon E5-2699 v3 / POWER8 Comparison (System)

| Scenario | 2x Xeon E5-2699 v3 | 2x IBM POWER8 3.4 GHz 10c |
|---|---|---|
| Idle | 110-120W | 360-380W |
| Running NAMD (FP) | – | – |
| Running 7-zip (Integer) | – | – |

The Haswell core was engineered for mobile use, and there is no denying that Intel's engineers are masters at saving power at low load.

The mighty POWER8 is cooled by a huge heatsink

IBM's POWER8 has pretty advanced power management: besides p-states, power gating of the cores and the associated L3 cache should be possible. However, it seems that these features were not enabled out of the box for some reason, as idle power was quite high. To be fair, we spent much more time on getting our software ported and tuned than on finding the optimal power settings. In the limited time we had with the machine, producing decent benchmark numbers was our top priority.

Also, the Centaur chips consume about 16W per chip (typical; 20W TDP), and as we had eight of them inside our S822L, those chips alone could easily be responsible for around 128W (8 × 16W).
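To put that in perspective, here is a quick back-of-the-envelope sketch, using only the figures quoted above, of how much of the S822L's high idle draw the Centaur buffers alone could account for. The per-chip wattage and idle range are from this article; the midpoint averaging is our own simplification.

```python
# Rough estimate of the Centaur memory buffers' share of the S822L's idle power.
# Figures from the article: ~16W typical per Centaur chip, 8 chips populated,
# and a measured system idle draw of 360-380W.

CENTAUR_TYPICAL_W = 16   # typical draw per Centaur chip (20W TDP)
CENTAUR_COUNT = 8        # chips populated in our S822L

centaur_total_w = CENTAUR_TYPICAL_W * CENTAUR_COUNT   # 128W

system_idle_w = (360 + 380) / 2   # midpoint of the measured idle range

share = centaur_total_w / system_idle_w

print(f"Centaur chips: ~{centaur_total_w}W, "
      f"roughly {share:.0%} of the ~{system_idle_w:.0f}W idle draw")
```

In other words, even at typical (not TDP) power, the memory buffers alone could plausibly explain about a third of the idle consumption, before the CPUs, fans, and SAS controllers are counted.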

Interestingly, the IBM POWER8 consumes more energy processing integers than floating point numbers, the exact opposite of the Xeon, which draws vastly more power when crunching AVX/FP code.


Though the cost of buying a system might be only "a drop in the bucket" in the total TCO picture of traditional IT departments running expensive ERP applications, it is an important factor for almost everybody else who buys Xeon systems. It is important to note that IBM's list prices on its website are far too high, a bad habit typical of a tier-one OEM.

Thankfully, we managed to get some "real street prices", which are between 30% (for a single server) and 50% (for larger orders) lower. To that end, we compared the price of the S822L with a discounted Dell R730 system. The list below is not complete, as we only show the cost of the most important components. The idea is to focus on the total system price and show which components contribute the most to it.

Xeon E5 v3 / POWER8 Price Comparison

| Feature | Dell R730 (Type) | Price | IBM S822L (Type) | Price |
|---|---|---|---|---|
| Chassis | R730 | N/A | S822L | N/A |
| Processor | 2x E5-2697 | $5000 | 2x POWER8 3.42 | $3000 |
| RAM | 8x 16GB | $2150 | 8x 16GB CDIMM (DDR3) | $8000 |
| PSU | 2x 1100W | $500 | 2x 1400W | $1000 |
| Disks | SATA or SSD | starting at | SAS HD/SSD | +/- $450 |
| Total system price (approx.) | | $10k | | $15k |

With more or less comparable specs, the S822L was about 50% more expensive. However, it was almost impossible to make an apples-to-apples comparison. The biggest "price issue" is the CDIMMs, which are almost four times as expensive as "normal" RDIMMs. CDIMMs do offer more, as they include an L4 cache and some extra features (such as a redundant memory chip for every nine chips). For most current Xeon E5 customers, the cost issue will be decisive; for a few, the extra redundancy and higher bandwidth will be worth it. Less important, but still significant, is the fact that IBM uses SAS disks, which increase the cost of the storage system, especially if you want lots of them.
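A quick sanity check on those numbers, using only the figures from the table above, shows how dominant the memory premium is in the overall price gap. The total-system figures are the table's rounded approximations.

```python
# Back-of-the-envelope check of the CDIMM price premium, using the table's figures.

rdimm_price = 2150    # 8x 16GB RDIMMs in the Dell R730
cdimm_price = 8000    # 8x 16GB CDIMMs in the IBM S822L

premium_ratio = cdimm_price / rdimm_price       # ~3.7x: "almost four times"
extra_memory_cost = cdimm_price - rdimm_price   # $5850

# Rough total-system prices from the table
dell_total, ibm_total = 10_000, 15_000
memory_share_of_gap = extra_memory_cost / (ibm_total - dell_total)

print(f"CDIMM premium: {premium_ratio:.1f}x, "
      f"covering {memory_share_of_gap:.0%} of the ~$5k system price gap")
```

By this rough math, the CDIMM premium more than covers the entire price difference between the two systems, which is why the memory stands out as the main cost driver.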

This cost issue is much less pronounced on most third-party POWER8 systems. Tyan's "Habanero" system, for example, integrates the Centaur chips on the motherboard, making the motherboard more expensive, but it lets you use standard registered DDR3L RDIMMs, which are much cheaper. Meanwhile, the POWER8 processor itself tends to be very reasonably priced, at around $1500, which is what Dell would charge for an Intel Xeon E5-2670 v3 (12 cores at 2.3-2.6 GHz, 120W). So while Intel's Xeons are much more power efficient than the POWER8 chips, the latter tend to be quite a bit cheaper.
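For illustration, a hypothetical price-per-core and price-per-thread comparison can be sketched from the figures quoted above (a ~$1500 10-core POWER8 versus a ~$1500 12-core E5-2670 v3). The SMT widths (8-way on POWER8, 2-way Hyper-Threading on the Xeon) are public chip specs, not figures from our testing.

```python
# Hypothetical $/core and $/thread comparison, assuming the ~$1500 street
# prices mentioned in the text and the chips' published SMT capabilities.

chips = {
    "POWER8 3.42 GHz":  {"price": 1500, "cores": 10, "threads_per_core": 8},
    "Xeon E5-2670 v3":  {"price": 1500, "cores": 12, "threads_per_core": 2},
}

for name, c in chips.items():
    per_core = c["price"] / c["cores"]
    per_thread = c["price"] / (c["cores"] * c["threads_per_core"])
    print(f"{name}: ${per_core:.0f}/core, ${per_thread:.2f}/thread")
```

The Xeon is cheaper per core, but thanks to SMT8 the POWER8 is far cheaper per hardware thread, which matters for the throughput-oriented workloads these systems target.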



Comments

  • Der2 - Friday, November 6, 2015 - link

    Life's good when you got power.
  • BlueBlazer - Friday, November 6, 2015 - link

    Aye, the power bills will skyrocket.
  • Brutalizer - Friday, November 13, 2015 - link

    It is confusing that sometimes you are benchmarking cores, and sometimes cpus. The question here is "which is the fastest cpu, x86 or POWER8" - and then you should bench cpu vs cpu. Not core vs core. If a core is faster than another core says nothing, you also need to know how many cores there are. Maybe one cpu has 2 cores, and the other has 1.000 cores. So when you tell which core is fastest, you give incomplete information so I still have to check up how many cores and then I can conclude which cpu is fastest. Or can I? There are scaling issues, just because one benchmark runs well on one core, does not mean it runs equally well when run on 18 cores. This means I can not extrapolate from one core to the entire cpu. So I still am not sure which cpu is fastest as you give me information about core performance. Next time, if you want to talk about which cpu is faster, please benchmark the entire cpu. Not core, as you are not talking about which core is faster.

    Here are 20+ world records by SPARC M7 cpu. It is typically 2-3x faster than POWER8 and Intel Xeon, all the way up to >10x faster. For instance, M7 achieves 87% higher saps than E5-2699v3.

    The big difference between POWER and SPARC vs x86, is scalability and RAS. When I say scalability, I talk about scale-up business Enterprise servers with as many as 16- or even 32-sockets, running business software such as SAP or big databases, that require one large single server. SGI UV2000 that scales to 10.000s of cores can only run scale-out HPC number crunching workloads, in effect, it is a cluster. There are no customers that have ever run SGI UV2000 using enterprise business workloads, such as SAP. There are no SAP benchmarks nor database benchmarks on SGI UV2000, because they can only be used as clusters. The UV2000 are exclusively used for number crunching HPC workloads, according to SGI. If you dont agree, I invite you to post SAP benchmarks with SGI UV2000. You wont find any. The thing is, you can not use a small cluster with 10.000 cores and replace a big 16- or 32-socket Unix server running SAP. Scale-out clusters can not run SAP, only scale-up servers can. There does not exist any scale-out clustered SAP benchmarks. All the highest SAP benchmarks are done by single large scale-up servers having 16- or 32-sockets. There are no 1.000-socket clustered servers on the SAP benchmark list.

    x86 is low end, and have for decades stopped at maximum 8-sockets (when we talk about scale-up business servers), and just recently we see 16- and 32- sockets scale-up business x86 servers on the market (HP Kraken, and SGI UV300H) but they are brand new, so performance is quite bad. It takes a couple of generations until SGI and HP have learned and refined so they can ramp up performance for scale-up servers. Also, Windows and Linux has only scaled to 8-sockets and not above, so they need a major rewrite to be able to handle 16-sockets and a few TB of RAM. AIX and Solaris has scaled to 32-sockets and above for decades, were recently rewritten to handle 10s of TB of RAM. There is no way Windows and Linux can handle that much RAM efficiently as they have only scaled to 8-sockets until now. Unix servers scale way beyond 8-sockets, and perform very well doing so. x86 does not.

    The other big difference apart from scalability is RAS. For instance, for SPARC and POWER you can hot swap everything, motherboards, cpu, RAM, etc. Just like Mainframes. x86 can not. Some SPARC cpus can replay instructions if something went wrong. x86 can not.

    For x86 you typically use scale-out clusters: many cheap 1-2 socket small x86 servers in a huge cluster just like Google or Facebook. When they crash, you just swap them out for another cheap server. For Unix you typically use them as a single scale-up server with 16- or 32-sockets or even 64-sockets (Fujitsu M10-4S) running business software such as SAP, they have the RAS so they do not crash.
  • zeeBomb - Friday, November 6, 2015 - link

    New author? Niiiice!
  • Ryan Smith - Friday, November 6, 2015 - link

    Ouch! Poor Johan.=(

    Johan is in fact the longest-serving AT editor. He's been here almost 11 years, just a bit longer than I have.
  • hans_ober - Friday, November 6, 2015 - link

    @Johan you need to post this stuff more often, people are forgetting you :)
  • JohanAnandtech - Friday, November 6, 2015 - link

    well he started with "niiiiice". Could have been much worse. Hi zeeBomb, I am Johan, 43 years old and already 17 years active as a hardware editor. ;-)
  • JanSolo242 - Friday, November 6, 2015 - link

    Reading Johan reminds me of the days of ...
  • joegee - Friday, November 6, 2015 - link

    The good old days! I remember so many of the great discussions/arguments we had. We had an Intel guy, an AMD guy, and Charlie Demerjian. Johan was there. Mike Magee would stop in. So would Chris Tom, and Kyle Bennett. It was an awesome collection of people, and the discussions were FULL of real, technical points. I always feel grateful when I think back to Ace's. It was quite a place.
  • JohanAnandtech - Saturday, November 7, 2015 - link

    And many more: Paul Demone, Paul Hsieh (K7-architect), Gabriele svelto ... Great to see that people remember. :-)
