Taking a Closer Look At IBM's S822L

The S822L was mounted in the Xeon-dominated racks inside our experimental datacenter. The build quality of both the rails and the server was apparent: a "locking mechanism" made the server easy to mount without a screwdriver and kept it firmly in place.

The system boots via the Flexible Service Processor (FSP), which is comparable to a Baseboard Management Controller (BMC): both are components that let you manage and monitor the server thanks to the IPMI specification. The main difference from a Xeon system is that the FSP and its associated firmware and software are the only way to control the system. There is no "BIOS" screen or BIOS setup utility; everything has to be configured and booted via the FSP software. You could say that the "BIOS" and the BMC management software are now integrated into one central firmware.

To power on the S822L, you access the FSP using the open-source IPMItool. Once the server is powered up, the "petitboot" bootloader of OPAL (the OpenPOWER Abstraction Layer firmware) takes over. Similar to GRUB, it scans all bootable devices (disks, network, optical, etc.) for operating systems. From there, you can install Linux just as you would on an x86 system.
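
For reference, here is a minimal sketch of that workflow, assuming the FSP is reachable over the network; the IP address and credentials are hypothetical placeholders, and the same two ipmitool commands can of course be run directly from a shell.

```python
# Sketch: power on the S822L through its FSP and attach to the
# serial-over-LAN console, where petitboot appears.
# Requires the open-source ipmitool CLI; host and credentials are placeholders.
import subprocess

FSP_HOST = "10.0.0.50"   # hypothetical FSP address
FSP_USER = "ADMIN"       # hypothetical user
FSP_PASS = "admin"       # hypothetical password

def ipmi(*args: str) -> None:
    """Run an ipmitool command against the FSP over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", FSP_HOST, "-U", FSP_USER, "-P", FSP_PASS, *args]
    subprocess.run(cmd, check=True)

ipmi("chassis", "power", "on")   # power on the system
ipmi("sol", "activate")          # open the console; petitboot shows up here
```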

The cover itself carries a lot of interesting service information about upgrading and replacing the hardware.

Once we removed the cover, lots of expansion slots became visible.

No fewer than nine hot-plug (!) low-profile PCIe Gen 3 slots are available. Four of them are x16, ready for some GPU action; the other five are x8. Only one of the PCIe slots is used for the standard quad-gigabit Ethernet adapter. We also had one Emulex Fibre Channel card installed.

Also installed were two PowerPC-based SAS RAID controllers, capable of RAID 6 and all common RAID levels, which connect to a dual backplane offering twelve Small Form Factor (2.5-inch) drive bays. These bays accept SAS SSDs or hard disks, a reliable but rather expensive storage choice. A DVD drive was also present, which allowed us to install Linux the old-fashioned way.

At the back we find the two hot-swappable PSUs, four gigabit Ethernet interfaces, two USB 2.0 ports, a dual-gigabit HMC interface (an HMC is a hardware appliance that can manage several IBM servers), and one system port.

The server is powered by two redundant, high-quality Emerson 1400W PSUs.

146 Comments

  • hissatsu - Friday, November 6, 2015 - link

    You might want to look more closely. Though it's a bit blurry, I'm almost certain that's the 80+ Platinum logo, which has no color.
  • DanNeely - Friday, November 6, 2015 - link

    That's possible; it looks like there's something at the bottom of the logo. Google image search shows 80+ platinum as a lighter silver/gray than 80+ silver; white is only the original standard.
  • Shezal - Friday, November 6, 2015 - link

    Just look up the part number. It's a Platinum :)
  • The12pAc - Thursday, November 19, 2015 - link

    I have a S814, it's Platinum.
  • johnnycanadian - Friday, November 6, 2015 - link

    Oh yum! THIS is what I still love about AT: non-mainstream previews / reviews. REALLY looking forward to more like this. I only wish SGI still built workstation-level machines. :-(
  • mapesdhs - Tuesday, November 10, 2015 - link

    Indeed, but it'd need a hefty change in direction at SGI to get back into workstations again, so very unlikely for the foreseeable future. They certainly have the required base tech (NUMALink6, MPI offload, etc.), namely lots of sockets/cores/RAM coupled with GPUs for really heavy tasks (big data, GIS, medical, etc.), i.e. a theoretical scalable, shared-memory workstation. But the market isn't interested in advanced performance solutions like this atm, and the margin on standard 2/4-socket systems isn't worthwhile; it'd be much cheaper to buy a generic Dell or HP (plus, it's only above this no. of sockets that their own unique tech comes into play). Pity, as the equivalent of a UV 30/300 workstation would be sweet (if expensive), though for virtually all of the tasks discussed in this article, shared memory tech isn't relevant anyway.

    The notion of connectable, scalable, shared memory workstations based on NV gfx, PCIe and newer multi-core MIPS CPUs was apparently brought up at SGI way back before the Rackable merger, but didn't go anywhere (not viable given the financial situation at the time). It's a neat concept, e.g. imagine being able to connect two or more separate ordinary 2/4-socket XEON workstations together (each fitted with, say, a couple of M6000s) to form a single combined system with one OS instance and resource pool, allowing users to combine & split setups as required to match workloads, but it's a notion whose time has not yet come.

    Of course, what's missing entirely is the notion of advanced but costly custom gfx, but again there's no market for that atm either, at least not publicly. Maybe behind the scenes NV makes custom stuff the way SGI used to for relevant customers (DoD, Lockheed, etc.), but SGI's products always had some kind of commercially available equivalent from which the custom builds were derived (IRx gfx), whereas atm there's no such thing as a Quadro with 30000 cores and 100GB RAM that costs $50K and slides into more than one PCIe slot which anyone can buy if they have the moolah. :D

    Most of all though, even if the demand existed and the tech could be built, it'd never work unless SGI stopped using its pricing-is-secret reseller sales model. They should have adopted a direct sales setup long ago, order on the site, pricing configurator, etc., but that never happened, even though the lack of such an option killed a lot of sales. Less of an issue with the sort of products they sell atm, but a better sales model would be essential if they were to ever try to sell workstations again, and that'd need a huge PR/sales management clearout to be viable.

    Pity IBM couldn't pay NV to make custom gfx, that'd be interesting, but then IBM quit the workstation market as well.

    Ian.
  • mostlyharmless - Friday, November 6, 2015 - link

    "There is definitely a market for such hugely expensive and robust server systems as high end RISC machines are good for about 50.000 servers. "

    Rounding error?
  • DanNeely - Friday, November 6, 2015 - link

    50k clients would be my guess.
  • FunBunny2 - Friday, November 6, 2015 - link

    (dot) versus (comma) most likely. Euro centric versus 'Murcan centric.
  • DanNeely - Friday, November 6, 2015 - link

    If that was the case, a plain 50 would be much more appropriate.
