The L4-cache and Memory Subsystem

Each POWER8 memory controller has access to four "Custom DIMMs," or CDIMMs. Each CDIMM is in fact a "Centaur" chip plus 40 to 80 DRAM chips. The Centaur chip contains the DDR3 interfaces, the memory management logic, and a 16 MB L4-cache.

The 16 MB L4-cache uses eDRAM technology, just like the on-die L3-cache. Let us see how the CDIMMs look in reality.

Considering that 4 Gb DRAM chips were available in mid-2013, the 1600 MHz 2 Gb DRAM chips used here look a bit outdated. The (much) more expensive 64 GB CDIMMs, however, use the current 4 Gb DRAM chips. The S822L has 16 slots and can thus be fitted with up to 1 TB (16 x 64 GB) of CDIMMs.
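As a quick sanity check on those capacity figures, here is a minimal sketch using only the numbers stated above (16 slots, 64 GB per top-end CDIMM):

```python
# Maximum S822L memory, per the figures in the text above.
slots = 16                 # CDIMM slots in the S822L
cdimm_capacity_gb = 64     # largest CDIMM mentioned

max_memory_gb = slots * cdimm_capacity_gb
print(f"{max_memory_gb} GB = {max_memory_gb // 1024} TB")  # 1024 GB = 1 TB
```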

Considering that many Xeon E5 servers are limited to 768 GB, 1 TB is more than competitive. Some Xeon E5 servers can reach 1.5 TB with 64 GB LR-DIMMs, but not every server supports this rather expensive memory technology. The CDIMMs are very easy to service: a gentle push on the two sides lets you slide them out. The black pieces of plastic between the CDIMMs are just placeholders that protect the underlying memory slots. For our testing, we had CDIMMs installed in 8 of our system's 16 slots.

The Centaur chip acts as a 16 MB L4-cache to save memory accesses and thus energy, but it needs quite a bit of power itself (10-20 W) and is consequently covered by a heatsink. CDIMMs have ECC enabled (8+1 for ECC) and also carry an extra spare DRAM chip. As a result, a CDIMM has 10 DRAM chips while offering the capacity of 8.
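The chip accounting above translates into a fixed raw-capacity overhead, sketched here with the numbers from the text (8 data chips, 1 ECC chip, 1 spare):

```python
# CDIMM chip accounting, per the description above:
# 8 data chips + 1 ECC chip + 1 spare = 10 chips installed,
# while only 8 chips' worth of capacity is exposed.
data_chips = 8
ecc_chips = 1
spare_chips = 1

total_chips = data_chips + ecc_chips + spare_chips
overhead = (total_chips - data_chips) / data_chips
print(f"{total_chips} chips installed, {overhead:.0%} raw-capacity overhead")
```

So one quarter of the DRAM on each CDIMM is there purely for reliability, not capacity.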

That makes the DRAM subsystem of the S822L much more similar to the E7 memory subsystem, with its "Scalable Memory Interconnect" and "Jordan Creek" memory buffer technology, than to that of the typical Xeon E5 server.


146 Comments


  • LemmingOverlord - Friday, November 6, 2015 - link

    Mate... Bite your tongue! Johan is THE man when it comes to Datacenter-class hardware. Obviously he doesn't get the same exposure as the personal technology guys, but he is definitely one of the best reviewers out there (inside and outside AT).
  • joegee - Friday, November 6, 2015 - link

    He's been doing class A work since Ace's Hardware (maybe before, I found him on Ace's though.) He is a cut above the rest.
  • nismotigerwvu - Friday, November 6, 2015 - link

    Johan,

    I think you had a typo on the first sentence of the 3rd paragraph on page 1.

    "After seeing the reader interestin POWER8 in that previous article..."

    Nice read overall and if I hadn't just had my morning cup of coffee I would have missed it too.
  • Ryan Smith - Friday, November 6, 2015 - link

    Good catch. Thanks!
  • Essence_of_War - Friday, November 6, 2015 - link

    That performance per watt, it is REALLY hard to keep up with the Xeons there!
  • III-V - Friday, November 6, 2015 - link

    IBM's L1 data cache has a 3-cycle access time, and is twice as large (64KB) as Intel's, and I think I remember it accounting for something like half the power consumption of the core.
  • Essence_of_War - Friday, November 6, 2015 - link

    Whoa, neat bit of trivia!
  • JohanAnandtech - Saturday, November 7, 2015 - link

    Interesting. Got a link/doc to back that up? I have not found such detailed architectural info.
  • Henriok - Friday, November 6, 2015 - link

    Very nice to see tests of non-x86 hardware. It's interesting to see a test of the S822L when IBM just launched two even more price competitive machines, designed and built by Wistron and Tyan, as pure OpenPOWER machines: the S812LC and S822LC. These can't run AIX, and are substantially cheaper than the IBM designed machines. They might lack some features, but they would probably fit nicely in this test. And they are sporting the single chip 12 core version of the POWER8 processor (with cores disabled).
  • DanNeely - Friday, November 6, 2015 - link

    "The server is powered by two redundant high quality Emerson 1400W PSUs."

    The sticker on the PSU is only 80+ (no color). Unless the hotswap support comes with a substantial penalty (if so why); this design looks to be well behind the state of the art. With data centers often being power/hvac limited these days, using a relatively low efficiency PSU in an otherwise very high end system seems bizarre to me.
