The L4-cache and Memory Subsystem

Each POWER8 memory controller has access to four "Custom DIMMs" or CDIMMs. Each CDIMM is in fact a "Centaur" chip plus 40 to 80 DRAM chips. The Centaur chip contains the DDR3 interfaces, the memory management logic, and a 16 MB L4-cache.

The 16 MB L4-cache uses eDRAM technology, just like the on-die L3-cache. Let us see how the CDIMMs look in reality.

Considering that 4 Gb DRAM chips were available in mid-2013, the 1600 MHz 2 Gb DRAM chips used here look a bit outdated. However, the (much) more expensive 64 GB CDIMMs do use the current 4 Gb DRAM chips. The S822L has 16 slots and can thus be fitted with up to 1 TB (16 x 64 GB) of CDIMMs.

Considering that many Xeon E5 servers are limited to 768 GB, 1 TB is more than competitive. Some Xeon E5 servers can reach 1.5 TB with 64 GB LR-DIMMs, but not every server supports this rather expensive memory technology. The CDIMMs are very easy to service: a gentle push on the two sides lets you slide them out. The black pieces of plastic between the CDIMMs are just placeholders that protect the underlying memory slots. For our testing we had CDIMMs installed in 8 of our system's 16 slots.
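As a quick sanity check, the capacity figures above work out as follows. This is a minimal sketch in Python, using only the slot count and the 64 GB CDIMM option mentioned in the text; treating 1 TB as 1024 GB is simply the usual convention.

```python
# Minimal sketch: the capacity arithmetic behind the figures above.
slots = 16                # CDIMM slots in the S822L
largest_cdimm_gb = 64     # the (expensive) 64 GB CDIMM option

max_capacity_gb = slots * largest_cdimm_gb
print(f"{slots} slots x {largest_cdimm_gb} GB = {max_capacity_gb} GB "
      f"(~{max_capacity_gb // 1024} TB)")
# -> 16 slots x 64 GB = 1024 GB (~1 TB)
```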

The Centaur chip acts as a 16 MB L4-cache to save memory accesses and thus energy, but it needs quite a bit of power (10-20 W) itself and is therefore covered by a heatsink. CDIMMs have ECC enabled (8+1 for ECC) and also carry an extra spare DRAM chip. As a result, a CDIMM carries 10 DRAM chips for every 8 chips' worth of usable capacity.
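To make that chip accounting concrete, here is a small illustrative sketch of the 8+1 ECC layout plus the spare chip; the 25% overhead figure follows from the numbers in the text, not from any IBM documentation.

```python
# Sketch of the CDIMM chip bookkeeping described above: 8 chips' worth of
# usable capacity, 1 chip for ECC (the "8+1" layout) and 1 spare chip.
data_chips = 8
ecc_chips = 1
spare_chips = 1

populated = data_chips + ecc_chips + spare_chips
overhead = (populated - data_chips) / data_chips
print(f"DRAM chips needed for 8 chips' worth of capacity: {populated}")
print(f"Overhead for ECC + spare: {overhead:.0%}")
# -> DRAM chips needed for 8 chips' worth of capacity: 10
# -> Overhead for ECC + spare: 25%
```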

This buffered approach makes the DRAM subsystem of the S822L much more similar to the Xeon E7 memory subsystem, with its "Scalable Memory Interconnect" and "Jordan Creek" memory buffer technology, than to that of the typical Xeon E5 server.

146 Comments

  • psychobriggsy - Friday, November 6, 2015 - link

    So you are complaining that your job's selection of hardware has made you earn twice as much?
  • dgingeri - Friday, November 6, 2015 - link

    No, because I don't earn twice as much. I'm not fully trained in AIX, so I have to muddle my way through dealing with the test machines we have. We don't use them for full production machines, just for testing software for our customers. (Which means I have to reinstall the OS on at least one of those machines about every month or so. That is a BIG pain in the behind due to the boot procedure. Where it takes a couple hours to reinstall Windows or Linux, it takes a full day to do it on an AIX machine.)

I'm trying to advise people NOT to use AIX. It's an awful operating system. I'm also advising people NOT to use IBM Power-based machines because they are extremely aggravating to work on. Overall, it costs much more to run IBM Power machines, even if they aren't running AIX, than it does to run x86 machines. The up-front cost might look competitive, but the maintenance costs are huge. Running AIX on them makes it an order of magnitude more expensive.
  • serpint - Friday, November 6, 2015 - link

I suggest reading the NIM A-Z handbook. It shouldn't take you more than 10 minutes to deploy an AIX system fully built and installed with software. As with Linux, it also shouldn't take more than about 10 minutes to install and fully deploy a server if you have any experience scripting installs.

    The developerworks community inside IBM is possibly the best free resource you could hope for. Also the redbooks.ibm.com site.

    Compared to most *NIX flavors, AIX is UNIX for dummies.
  • agtcovert - Tuesday, November 10, 2015 - link

If you had a NIM server set up and were using LPARs, loading a functional image of AIX should take 10 minutes flat, on a 1G network.

    If you're loading AIX on a physical machine without using the virtualization, you're wasting the server.
  • agtcovert - Tuesday, November 10, 2015 - link

    I've worked on AIX platforms extensively for about the same amount of time. First, most of these purchases go through a partner and yours must've sucked because we got great support from our IBM partner -- free training, access to experts, that sort of thing.

Second, I always love the complaining about the cost of the hardware, etc. If you're buying big iron Power servers, the maintenance cost should be near irrelevant. And again, your partner should take care to negotiate that into the deal for 3-5 years, ensuring you have access to updates.

The other thing no one ever talks about is *why* you buy these servers. Why do they take so long to boot? Well, for the frame itself, it's a deep POST. But then, mine were never rebooted in 4 years, and that includes firmware upgrades (done online) and a couple of interface card swaps (also done online with no service disruption). Do that on x86. So reason #1 -- RAS, at the hardware level. Seriously, how often did you need to reboot the frame?

Reason #2 -- for large enterprises, you can do so much with these with relatively few cores that they lead to huge licensing savings in Oracle and IBM software. For us, it was over $1m a year ongoing. And no, switching to other software was not an option. We could run an Oracle RAC on 4 cores of Power 7 (at the time) versus the 32 x86 cores it was on previously. That saves a lot of $.

    The machine reviewed does not run AIX. It's Linux only. So the maintenance, etc. you mention isn't even relevant.

There are still things that are annoying, I suppose. AIX is steeped in legacy to some degree, and certainly not as easy to manage as a Linux box. But there are a lot of guides out there for free -- it took me about a month to be fully productive. And the support costs you pay for -- well, if I ran into a wall, I just opened a PMR. IBM was always helpful.
  • nils_ - Wednesday, November 11, 2015 - link

I'm mostly working in Linux Devops now, but I remember dreading having to use all the "classic" Unix machines at my first "real" job 12 years ago. We ran a few IRIX and AIX boxes which were ancient in their own right. Hell, even the first thing I did on my work Macbook was to replace the BSD userland with GNU wherever possible.

It's hard to find any information on them, and any learning materials are expensive and usually on dead trees. They pretty much want to sell training, consulting, etc. along with the often non-competitive hardware prices, since these companies don't actually WANT to sell hardware. They want to sell everything that surrounds it.
  • retrospooty - Friday, November 6, 2015 - link

The problem with server chips is that it's about platform stability. IBM (and others) dropped off the face of the Earth and, as mentioned above, Intel now has 95% of the market. This chip looks great, but will companies buy into it en masse? What if IBM makes another choice to drop off the face of the Earth again and your platform is dead-ended? I would have to think long and hard about going with them at this point.
  • FunBunny2 - Friday, November 6, 2015 - link

Not likely. The mainframe z machines are built using POWER blocks.
  • Kevin G - Friday, November 6, 2015 - link

POWER and System Z are two different architectures. Case in point, POWER is a RISC design introduced in the '90s, whereas the System Z mainframes can trace their roots to a CISC design from the 1960s (and it is still possible to run some of that 1960s code unmodified).

    They do share a handful of common parts (think the CDIMMs) to cut down on support costs.
  • plonk420 - Friday, November 6, 2015 - link

    can you run an x264 benchmark on it?? x)
