Benchmark Configuration and Methodology

For our testing we installed 64-bit Ubuntu 15.04 Linux (kernel version 3.19.0), which allowed us to use GCC 4.9.2, a compiler with better support for the POWER8. We have tried to keep the colors in our benchmark graphs consistent: dark blue is IBM, light blue is the latest Intel Xeon generation (Haswell, E5 v3), and gray is reserved for older Intel systems.
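To illustrate how a benchmark binary's target architecture can be verified under this toolchain, the short C sketch below prints which code path it was compiled for. The compile commands in the comment are illustrative examples rather than the exact flags we used for each benchmark; _ARCH_PWR8 and __AVX2__ are GCC's predefined macros for POWER8 and Haswell/AVX2 targets.

    /*
     * build_info.c - minimal sketch: report which architecture-specific code
     * path this binary was built for. Illustrative compile commands (not the
     * exact flags used for our benchmarks):
     *
     *   POWER8 : gcc -O3 -mcpu=power8 -mtune=power8 build_info.c -o build_info
     *   Haswell: gcc -O3 -march=haswell             build_info.c -o build_info
     */
    #include <stdio.h>

    int main(void)
    {
    #if defined(_ARCH_PWR8)
        puts("Built for POWER8 (-mcpu=power8).");
    #elif defined(__AVX2__)
        puts("Built for Haswell/AVX2 (-march=haswell).");
    #else
        puts("Built for a generic target.");
    #endif
        printf("GCC %d.%d.%d\n", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
        return 0;
    }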

As a quick aside, we should point out that IBM's servers also support PowerVM and KVM virtualization; however, we decided not to use either in order to keep the complexity of the tests under control. As we explained in the introduction, porting and tuning the usual benchmarks was already quite a challenge, and virtualization makes benchmarking a lot more complex. Testing virtualized workloads was thus beyond the scope of this article.

All tests have been done with the help of Kirth and Wannes of the Sizing Servers Lab.

IBM S822L (2U Chassis)

CPU: Two IBM POWER8, 3.425 GHz, 10 cores each
RAM: 128GB (8x 16GB) IBM CDIMMs
Internal Disks: 2x 300GB 15K RPM SAS (boot)
                1x Intel DC P3700 400GB (data and benchmarks)
Motherboard: No idea
BIOS version: OPAL v3
PSU: Dual Emerson 1400W

Intel's Xeon E5 Server – "Wildcat Pass" (2U Chassis)

CPU: Two Intel Xeon E5-2699 v3 (2.3 GHz, 18 cores, 45MB L3, 145W), or
     Two Intel Xeon E5-2695 v3 (2.3 GHz, 14 cores, 35MB L3, 120W), or
     Two Intel Xeon E5-2667 v3 (3.2 GHz, 8 cores, 20MB L3, 135W), or
     Two Intel Xeon E5-2650L v3 (1.8 GHz, 12 cores, 30MB L3, 65W)
RAM: 128GB (8x 16GB) Samsung M393A2G40DB0 (RDIMM)
Internal Disks: 2x Intel SSD 710 200GB (MLC, boot)
                1x Intel DC P3700 400GB (data and benchmarks)
Motherboard: Intel S2600WTT
BIOS version: 1.01
PSU: Delta Electronics DPS-750XB A 750W (80+ Platinum)

All C-states are enabled in the BIOS of both servers.
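To double-check that the deeper C-states are actually exposed to the operating system after the BIOS setting, the kernel's cpuidle sysfs interface can be queried. The small C sketch below simply lists the idle states Linux reports for CPU 0; which states appear depends on the platform and firmware, so treat it as a quick sanity check rather than part of the benchmark suite.

    /*
     * cstates.c - minimal sketch: list the cpuidle states the Linux kernel
     * exposes for CPU 0 (standard cpuidle sysfs interface).
     *
     *   gcc -O2 cstates.c -o cstates && ./cstates
     */
    #include <stdio.h>

    int main(void)
    {
        char path[128], name[64];
        FILE *f;
        int state;

        for (state = 0; ; state++) {
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", state);
            f = fopen(path, "r");
            if (!f)
                break;                              /* no more states exposed */
            if (fgets(name, sizeof(name), f))
                printf("state%d: %s", state, name); /* name ends with '\n' */
            fclose(f);
        }
        return 0;
    }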

Other Notes

Both servers are fed by a standard European 230V (16A max.) power line. The room temperature is monitored and kept at 23°C by our Airwell CRACs.

Comments (146)

  • hissatsu - Friday, November 6, 2015 - link

    You might want to look more closely. Though it's a bit blurry, I'm almost certain that's the 80+ Platinum logo, which has no color.
  • DanNeely - Friday, November 6, 2015 - link

    That's possible; it looks like there's something at the bottom of the logo. Google image search shows 80+ platinum as a lighter silver/gray than 80+ silver; white is only the original standard.
  • Shezal - Friday, November 6, 2015 - link

    Just look up the part number. It's a Platinum :)
  • The12pAc - Thursday, November 19, 2015 - link

    I have a S814, it's Platinum.
  • johnnycanadian - Friday, November 6, 2015 - link

    Oh yum! THIS is what I still love about AT: non-mainstream previews / reviews. REALLY looking forward to more like this. I only wish SGI still built workstation-level machines. :-(
  • mapesdhs - Tuesday, November 10, 2015 - link


    Indeed, but it'd need a hefty change in direction at SGI to get back into workstations again, so very unlikely for the foreseeable future. They certainly have the required base tech (NUMALink6, MPI offload, etc.), namely lots of sockets/cores/RAM coupled with GPUs for really heavy tasks (big data, GIS, medical, etc.), i.e. a theoretical scalable, shared-memory workstation. But the market isn't interested in advanced performance solutions like this atm, and the margin on standard 2/4-socket systems isn't worthwhile; it'd be much cheaper to buy a generic Dell or HP (plus, it's only above this number of sockets that their own unique tech comes into play). Pity, as the equivalent of a UV 30/300 workstation would be sweet (if expensive), though for virtually all of the tasks discussed in this article, shared memory tech isn't relevant anyway.

    The notion of connectable, scalable, shared-memory workstations based on NV gfx, PCIe and newer multi-core MIPS CPUs was apparently brought up at SGI way back before the Rackable merger, but didn't go anywhere (not viable given the financial situation at the time). It's a neat concept, e.g. imagine being able to connect two or more separate ordinary 2/4-socket Xeon workstations together (each fitted with, say, a couple of M6000s) to form a single combined system with one OS instance and resource pool, allowing users to combine & split setups as required to match workloads, but it's a notion whose time has not yet come.

    Of course, what's missing entirely is the notion of advanced but costly custom gfx, but again there's no market for that atm either, at least not publicly. Maybe behind the scenes NV makes custom stuff the way SGI used to for relevant customers (DoD, Lockheed, etc.), but SGI's products always had some kind of commercially available equivalent from which the custom builds were derived (IRx gfx), whereas atm there's no such thing as a Quadro with 30000 cores and 100GB RAM that costs $50K and slides into more than one PCIe slot which anyone can buy if they have the moolah. :D

    Most of all though, even if the demand existed and the tech could be built, it'd never work unless SGI stopped using its pricing-is-secret reseller sales model. They should have adopted a direct sales setup long ago, order on the site, pricing configurator, etc., but that never happened, even though the lack of such an option killed a lot of sales. Less of an issue with the sort of products they sell atm, but a better sales model would be essential if they were to ever try to sell workstations again, and that'd need a huge PR/sales management clearout to be viable.

    Pity IBM couldn't pay NV to make custom gfx, that'd be interesting, but then IBM quit the workstation market as well.

    Ian.
  • mostlyharmless - Friday, November 6, 2015 - link

    "There is definitely a market for such hugely expensive and robust server systems as high end RISC machines are good for about 50.000 servers. "

    Rounding error?
  • DanNeely - Friday, November 6, 2015 - link

    50k clients would be my guess.
  • FunBunny2 - Friday, November 6, 2015 - link

    (dot) versus (comma) most likely. Euro centric versus 'Murcan centric.
  • DanNeely - Friday, November 6, 2015 - link

    If that was the case, a plain 50 would be much more appropriate.
