The L4-cache and Memory Subsystem

Each POWER8 memory controller has access to four "Custom DIMMs" or CDIMMs. Each CDIMM is in fact a "Centaur" chip plus 40 to 80 DRAM chips. The Centaur chip contains the DDR3 interfaces, the memory management logic, and a 16 MB L4-cache.

The 16 MB L4-cache uses eDRAM technology, just like the on-die L3-cache. Let us see how the CDIMMs look in reality.

Considering that 4 Gb DRAM chips were available in mid-2013, the 1600 MHz 2 Gb DRAM chips used here look a bit outdated. The (much) more expensive 64 GB CDIMMs, on the other hand, do use current 4 Gb DRAM chips. The S822L has 16 slots and can thus hold up to 1 TB (16 x 64 GB) of CDIMMs.
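
As a quick sanity check on that arithmetic, here is a minimal sketch that works out the maximum capacity from the numbers quoted above; the function name and the 8-slot example are purely illustrative.

```python
# Back-of-the-envelope check of the capacity figures above.
# The 64 GB CDIMM size and the 16 slots come straight from the text.
SLOTS = 16

def max_memory_gb(cdimm_capacity_gb, populated_slots=SLOTS):
    """Total installable memory for a given CDIMM size and slot count."""
    return cdimm_capacity_gb * populated_slots

if __name__ == "__main__":
    print(max_memory_gb(64))      # 1024 GB = 1 TB with 64 GB CDIMMs in all 16 slots
    print(max_memory_gb(64, 8))   # e.g. 512 GB when only 8 of the 16 slots are populated
```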

Considering that many Xeon E5 servers are limited to 768 GB, 1 TB is more than competitive. Some Xeon E5 servers can reach 1.5 TB with 64 GB LR-DIMMs, but not every server supports this rather expensive memory technology. Servicing the CDIMMs is very easy: a gentle push on the two sides lets you slide them out. The black pieces of plastic between the CDIMMs are just placeholders that protect the underlying memory slots. For our testing, CDIMMs were installed in 8 of our system's 16 slots.

The Centaur chip acts as a 16 MB L4-cache to save memory accesses and thus energy, but it needs quite a bit of power itself (10-20 W) and is therefore covered by a heatsink. CDIMMs have ECC enabled (8+1 for ECC) and also include an extra spare DRAM chip. As a result, a CDIMM carries 10 DRAM chips while offering the capacity of 8.
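
To make that chip-count bookkeeping explicit, here is a minimal sketch of the 8 data + 1 ECC + 1 spare arrangement described above; the constant and function names are illustrative, not IBM terminology.

```python
# Chip-count bookkeeping for the 8+1 ECC plus spare-chip arrangement.
DATA_CHIPS = 8    # chips that provide the usable capacity
ECC_CHIPS = 1     # the "+1" chip of the 8+1 ECC scheme
SPARE_CHIPS = 1   # spare chip that can stand in for a failing device

def chips_installed():
    """Physical DRAM chips a CDIMM carries per 8 chips' worth of capacity."""
    return DATA_CHIPS + ECC_CHIPS + SPARE_CHIPS

def chip_overhead():
    """Fraction of installed chips that do not add usable capacity."""
    return (chips_installed() - DATA_CHIPS) / chips_installed()

if __name__ == "__main__":
    print(chips_installed())           # 10 chips installed for the capacity of 8
    print(f"{chip_overhead():.0%}")    # 20% of the chips are ECC/spare overhead
```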

That makes the DRAM subsystem of the S822L much more similar to the Xeon E7 memory subsystem, with its "Scalable Memory Interconnect" and "Jordan Creek" memory buffers, than to that of the typical Xeon E5 server.

Comments

  • JohanAnandtech - Wednesday, November 11, 2015 - link

    I did not know you used 7500x3500; my testing was inspired by what the rest of the benchmarking community (Phoronix, ServeTheHome) was using (obviously, 1024x768 is too small for current servers).

    http://www.anandtech.com/show/9567/the-power-8-rev...
    This answers your question about threads, right?
  • JohanAnandtech - Wednesday, November 11, 2015 - link

    Oh yes, changed the link. Thanks for the feedback!
  • mapesdhs - Thursday, November 12, 2015 - link

    Most welcome! And I really should move the more complex tests to the top of the page...

    Oh, my wording about threads was not what I'd intended. What I meant was using a number of threads larger than the supported number of hardware threads. Thus, for a 12-core Power8 with 8 threads per core, try 192 or 384 threads instead of just the nominal 96 one might assume would make sense.

    Ian.
  • MB13 - Wednesday, November 11, 2015 - link

    POWER8 is full of innovation and brings change! An S812LC only costs $6,595 from IBM's external website! http://www-03.ibm.com/systems/power/hardware/s812l...

    The Power scale-out boxes will save on your running and software costs, as you can reduce your software licensing and server footprint.

    With the OpenPOWER Foundation, you now have companies such as Tyan and Wistron who also create their own POWER8 servers and sell them independently of IBM. If you have not looked at the OpenPOWER Foundation and the innovation it brings through community and collaboration, you're missing out big time!

    There is change! Don't get left behind!
  • MB13 - Wednesday, November 11, 2015 - link

    and don't forget - POWER8 runs Little Endian and supports the latest versions of RedHat, SUSE and Ubuntu!
    The OpenPOWER servers are Linux only!
  • Gasaraki88 - Wednesday, November 11, 2015 - link

    It's funny how this article is trying to "sell" me the system but I'm still not impressed. Costs more, less performance, and uses more power at idle and load than the Intel system.
  • nils_ - Thursday, November 12, 2015 - link

    What I found the most off-putting is that you have to do a lot of work to get some things running with Linux. That's a big cost factor.
  • nils_ - Thursday, November 12, 2015 - link

    Having a lot of software that isn't really well ported is probably going to remain a problem for Power8 for years to come since so few people have access to these kinds of systems and the cost is prohibitive. The great thing with x86 and ARM is that you can use it at home/work pretty easily without shelling out a lot of money. On x86 you can be sure if your software builds locally and runs locally it will also run on your server.
  • svj - Thursday, November 12, 2015 - link

    Well written articles.

    1. I submit that the headline is misleading. Intel x86 does not compete with POWER at the high end. The POWER L & LC lines of servers are comparable to x86-based servers. IBM POWER is taking the battle to Intel's home turf.
    2. The analysis leaves out the cost of software. Many organizations use commercial software that is priced per core. If POWER can do with 10 cores what Intel does with 18 cores, that means HUGE savings.
    3. OpenPOWER is a huge move. I think the market will start seeing the results soon.
  • alpha754293 - Thursday, November 12, 2015 - link

    An excellent review as always, Johan. (haha... to zeeBomb: it is my understanding that Johan probably doesn't post as often as he might otherwise like to because testing servers/enterprise computing solutions takes a LOT longer than testing/benching consumer-level systems. Some of the HPC applications that I run take hours to days for each run, so you're running those tests over and over again, and before you know it, a month has gone by (or you've run out of time with the system), or you have to purposely cut the testing short so that you can cover a variety of software.)

    It's unfortunate that IBM never ported AIX to x86 (unlike Solaris). I think that there would be more people trying to get into it if the cost of entry (even just to learn) weren't so high. I've looked at getting an old POWER4 system before for that purpose, but by now the systems are so old and slow that it's like "what's the point?" I think that IBM is literally pricing themselves into extinction (along with their entire hardware/software ecosystem). Unfortunately for many businesses, AIX POWER servers still run their mainframe/backend, which means that if you want to get paid $100k+ outta college, go learn AIX on POWER. As the current generation of sysadmins ages and retires out, companies are going to have a hard time finding qualified people, and eventually they will have to pay top dollar just to attract people into the field. (Unless they decide to move everything over to the x86/Xeon/Linux world. But for some mainframes (like financial institutions), that's DEFINITELY easier said than done.)
