Taking a Closer Look At IBM's S822L

We mounted the S822L in the Xeon-dominated racks of our experimental datacenter. The build quality of both the rails and the server was immediately apparent: a locking mechanism made the server easy to mount without a screwdriver and kept it firmly in place.

The system boots via the Flexible Service Processor (FSP), which is comparable to a Baseboard Management Controller (BMC): both are components that let you manage and monitor the server through the IPMI specification. The main difference from a Xeon system is that the FSP and its associated firmware and software are the only way to control the system. There is no "BIOS" screen or BIOS setup utility; everything has to be configured and booted via the FSP software. You could say that the "BIOS" and the BMC management software have been integrated into one central firmware.
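
As a quick illustration (the IP address and password below are placeholders for whatever is configured on your FSP, and depending on that configuration you may also need a -U user argument), the FSP answers standard IPMI-over-LAN requests, so the usual ipmitool queries work much as they would against any BMC:

    # Query the FSP for the current chassis power state
    ipmitool -I lanplus -H 192.168.1.50 -P fsppassword chassis status

    # Dump the sensor data repository: temperatures, fan speeds, voltages
    ipmitool -I lanplus -H 192.168.1.50 -P fsppassword sdr list

    # List the FRU (field replaceable unit) inventory the FSP keeps
    ipmitool -I lanplus -H 192.168.1.50 -P fsppassword fru print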

To power on the S822L, you have to access the FSP with the open-source ipmitool utility. Once the server is powered up, the "petitboot" bootloader of OPAL (the OpenPOWER Abstraction Layer firmware) takes over. It scans all bootable devices (disks, network, optical, etc.) for operating systems, much like the GRUB bootloader does. From there, you can install Linux just as you would on an x86 system.
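
A minimal power-on session, again with a placeholder address and password, could look like the two commands below; the serial-over-LAN console is where the petitboot menu appears once OPAL has initialized the hardware:

    # Power the server on through the FSP
    ipmitool -I lanplus -H 192.168.1.50 -P fsppassword chassis power on

    # Attach to the serial-over-LAN console; petitboot shows up here
    # (the default ipmitool escape sequence "~." detaches the session)
    ipmitool -I lanplus -H 192.168.1.50 -P fsppassword sol activate

Petitboot itself is a small Linux environment, so from its menu you can either pick one of the detected boot entries or drop to a shell; the selected kernel is then started via kexec.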

The inside of the cover is filled with useful service information about upgrading and replacing the hardware.

Once we removed the cover, lots of expansion slots became visible.

No fewer than nine hot-plug (!) low-profile PCIe Gen3 slots are available. Four of them are x16, ready for some GPU action; the other five are x8. Only one of the PCIe slots is occupied by the standard quad-gigabit Ethernet adapter. Our system also had one Emulex Fibre Channel card installed.

Also installed were two PowerPC-based SAS RAID controllers, capable of RAID-6 and all common RAID levels, which connect to a dual backplane that offers twelve Small Form Factor (2.5-inch) drive bays. These drives can be SAS SSDs or hard disks, a reliable but rather expensive storage choice. A DVD drive was also present, which allowed us to install Linux the old-fashioned way.

At the back we find two hot-swappable PSUs, four gigabit Ethernet interfaces, two USB 2.0 ports, a dual-gigabit HMC interface (the Hardware Management Console is a hardware appliance that can manage several IBM servers), and one system port.

The server is powered by two redundant, high-quality Emerson 1400W PSUs.

146 Comments

  • usernametaken76 - Thursday, November 12, 2015 - link

    Technically this is not true. IBM had a working version of AIX running on PS/2 systems as late as the 1.3 release. Unfortunately support was withdrawn and future releases of AIX were not compiled for x86 compatible processors. One can still find a copy of this release if one knows where to look. It's completely useless to anyone but a museum or curious hobbyist, but it's out there.
  • Steven Perron - Monday, November 23, 2015 - link

    Hello Johan,

    I was reading this article, and I found it interesting. Since I am a developer for the IBM XL compiler, the comparisons between GCC and XL were particularly interesting. I tried to reproduce the results you are seeing for the LZMA benchmark. My results were similar, but not exactly the same.

    When I compared GCC 4.9.1 (I know, a slightly different version than the one you used) to XL 13.1.2 (I assume this is the version you used), I saw XL consistently ahead of GCC, even when I used -O3 for both compilers.

    I'm still interested in reproducing your results so I can see where XL can do better, so I have a couple of questions about areas that could be different.

    1) What version of the XL compiler did you use? I assumed 13.1.2, but it is worth double checking.
    2) Which version of the 7-zip software did you use? I picked up p7zip 15.09.
    3) Also, I noticed that when the POWER8 machine was running at full capacity (for me that was 192 threads on a 24-core machine), the results would fluctuate a bit. How many runs did you do for each configuration? Were the results stable?
    4) Did you try XL at the less aggressive and more stable options like "-O3" or "-O3 -qhot"?

    Thanks for your time.
  • Toyevo - Wednesday, November 25, 2015 - link

    Other than the ridiculous price of CDIMMs, the power efficiency just doesn't look healthy. For providers that lease out their hardware, like Amazon AWS, Google App Engine, Azure, Rackspace, etc., clients who pay for hardware yet fail to use their full allocation significantly help those companies' bottom line by reducing overheads. For everyone else, high utilisation is a mandatory part of the ROI equation during the hardware's life as an operating asset, so power consumption is a real cost. Even with our small cluster of 12 nodes the power efficiency is a real consideration, let alone for companies standardizing on IBM and utilising 100s or 1000s of nodes that are arguably less efficient.

    Perhaps you could devise some sort of theoretical total cost of ownership breakdown for these articles. My biggest question after all of this is: which one gets the most work done with the lowest overheads? Don't get me wrong though, I commend you and AnandTech on the detail you already provide.
  • AstroGuardian - Tuesday, December 8, 2015 - link

    It's good to have someone challenging Intel, since AMD crap their pants on a regular basis.
  • dba - Monday, July 25, 2016 - link

    Dear Johan:

    Can you extrapolate how much faster the SPARC S7 will be in your cluster benchmarking if the two on-die InfiniBand ports are activated: 5, 10, 20%?

    Thank You, dennis b.
