Future Visions: POWER8 with NVLink

Digging a bit deeper, the shiny new S822LC is a different beast. It offers the "NVIDIA improved" POWER8: the core remains the same, but the CPU now comes with NVIDIA's NVLink technology. Four of these NVLink ports allow the S822LC to make a very fast (80 GB/s full duplex) and direct link to the latest and greatest of NVIDIA's GPUs: the Tesla P100. Ryan discussed NVLink and the 16 nm P100 in more detail a few months ago. To quote:

NVLink will allow GPUs to connect to either each other or to supporting CPUs (OpenPOWER), offering a higher bandwidth cache coherent link than what PCIe 3 offers. This link will be important for NVIDIA for a number of reasons, as their scalability and unified memory plans are built around its functionality.

Each P100 has 720 GB/s of memory bandwidth, powered by 16 GB of HBM2 stacked memory. Combine that with the fact that the P100 offers more than twice the processing power of its predecessor in half precision and double precision floating point (important for machine learning algorithms), and it is easy to understand why data transfers from the CPU to the GPU can easily become a bottleneck in some applications.
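To put those numbers in perspective, here is a minimal back-of-the-envelope sketch using the 720 GB/s HBM2 figure above and the 16 GB/s of a PCIe 3.0 x16 host link discussed below; the point is simply how lopsided local versus host bandwidth is:

```python
# Back-of-the-envelope: local HBM2 bandwidth vs. the PCIe 3.0 x16 host link.
hbm2_bw_gbs = 720    # P100 HBM2 bandwidth (GB/s), as quoted above
pcie_bw_gbs = 16     # PCIe 3.0 x16, per direction (GB/s)
dataset_gb = 16      # one full HBM2's worth of data (GB)

print(f"Local vs. host bandwidth ratio: {hbm2_bw_gbs / pcie_bw_gbs:.0f}x")      # 45x
print(f"Reading 16 GB from HBM2: {dataset_gb / hbm2_bw_gbs * 1000:.1f} ms")     # ~22 ms
print(f"Pushing 16 GB over PCIe: {dataset_gb / pcie_bw_gbs * 1000:.0f} ms")     # ~1000 ms
```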

This means that the "OpenPOWER way of working" has enabled the IBM POWER8 to be the first platform to fully leverage the best of NVIDIA's technology. It is almost certain that Intel will not add NVLink to its products, as Intel has gone a totally different route with the Xeon and Xeon Phi. NVLink offers 80 GB/s of full-duplex connectivity per GPU, provided in the form of four 20 GB/s links that can be routed between GPUs and CPUs as needed. By comparison, a P100 that plugs into an x16 PCIe 3.0 slot gets only 16 GB/s full duplex to communicate with both the CPU and the other GPUs. In the S822LC, two of each GPU's four NVLink links are routed to the CPU, so theoretically the GPU-to-CPU connection (40 GB/s) offers 2.5 times more bandwidth than PCIe. However, IBM claims that in practice the advantage is 2.8x, as NVLink is more efficient than PCIe (83% of theoretical bandwidth vs. 74%).
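The arithmetic behind those two factors is straightforward; a quick sketch, using the figures from the paragraph above (the two-links-to-the-CPU topology is what the 2.5x figure implies):

```python
# Per-GPU host bandwidth: two 20 GB/s NVLink links to the CPU vs. PCIe 3.0 x16.
# The 83% / 74% efficiency figures are IBM's claims, as quoted above.
nvlink_peak = 2 * 20    # GB/s to the CPU
pcie_peak = 16          # GB/s, PCIe 3.0 x16 per direction

nvlink_eff, pcie_eff = 0.83, 0.74

print(f"Theoretical advantage: {nvlink_peak / pcie_peak:.1f}x")                            # 2.5x
print(f"Effective advantage: {nvlink_peak * nvlink_eff / (pcie_peak * pcie_eff):.1f}x")    # 2.8x
```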

The NVLink-equipped P100 cards use the SXM2 form factor and come with a bonus: they deliver 13% more raw compute performance than the "classic" PCIe card thanks to a higher TDP (300 W vs. 250 W). By the numbers, that amounts to 5.3 TFLOPS of double precision for the SXM2 version versus 4.7 TFLOPS for the PCIe version.
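For completeness, the 13% figure follows directly from the two FP64 numbers quoted above:

```python
# SXM2 vs. PCIe P100, raw FP64 throughput (TFLOPS) as quoted above.
sxm2, pcie = 5.3, 4.7
print(f"SXM2 advantage: {(sxm2 / pcie - 1) * 100:.0f}%")   # -> 13%
```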

Comments

  • PowerOfFacts - Friday, September 16, 2016 - link

    troll
  • BOMBOVA - Friday, October 7, 2016 - link

    Rich info, good scout
  • PowerOfFacts - Friday, September 16, 2016 - link

    Sigh ....
  • PowerOfFacts - Friday, September 16, 2016 - link

    That's strange, this site says you can buy a POWER8 server for $4800. https://www.ibm.com/marketplace/cloud/big-data-inf...

    Screwed up Power (so many times)? Please explain? Compared to what... SPARC? Itanium? If you are talking about those platforms, POWER has 70% of that market share. Do you mean against "Good Enough" Intel? Absolutely, Intel is the market leader, but only in share, not in innovation. Power still delivers enterprise features for AIX and IBM i customers that Intel could only dream about. Where the future of the data center is going with Linux, well, it did take IBM a while to figure out they couldn't do it their way. Now they are committed 100% (from my perspective as a non-IBMer, while also being committed to AIX & IBM i, as there is a solid install base there), which we all see in the form of IBM & even non-IBM solutions built by OpenPOWER partners and ISV solutions using little-endian Linux. Yes, there are some workloads that require extra work to optimize, but for those already optimized or those which can be optimized, those customers can now buy a server for less money that has the potential to outperform Intel by up to 2X, in a system using innovative technology (CAPI & NVLink) that is more reliable. I don't know, IBM may be late and Power has some work to do, but I really don't think you can back up your statement that "IBM has screwed up power so many times". The latest OpenPOWER Summit was a huge success. Here is a Google interview: https://www.youtube.com/watch?v=f0qTLlvUB-s&fe...

    Oh, but you were probably just trying to be clever and take a few competitive shots.
  • CajunArson - Saturday, September 17, 2016 - link

    Yeah, that $4800 Power server is nowhere near equivalent to the "midrange" server benchmarked in this review, which costs over $11K on the same web page you cited.

    I could build an 8 or 12 core Xeon that would put the hurt on that low-end Power box for less money and continue to save money during every minute of operation.
  • JohanAnandtech - Saturday, September 17, 2016 - link

    " it will cost anywhere from 5-10X" . What do you base this on? Several SKUs of IBM are in the $1500 range. "Something like $10K for the processor". This seems to be about the high-end. The E7s are in the $4.6-7k range. Even if IBM would charge $10k for the high end CPUs, it is nowhere near being 5x more expensive. Unless I am missing something, you seem to have missed that IBM has a scale out range and is offering much more affordable OpenPOWER CPUs.
  • jesperfrimann - Wednesday, September 21, 2016 - link

    IMHO, the place where POWER servers make sense right now is for use with IBM software. So if you are using something like DB2 or WebSphere, where the real cost is the software licenses, then it's really a no-brainer. Not that your local IBM sales guy will like that you'll do a switch to a Linux@Power solution :)

    // Jesper
  • YukaKun - Thursday, September 15, 2016 - link

    For the Java tests, did you change the GC collector settings? Also, why only 24GB for the JVM? I run JBoss with 32GB across our servers. I'd use more, but they still have issues with going to higher levels.

    Cheers!
  • madwolfa - Thursday, September 15, 2016 - link

    Unless you are working with huge datasets, you want to keep your JVM heap size as low as reasonably possible... otherwise there would be a penalty on GC performance. Granted, with this sort of hardware it would be pretty minuscule, but the general rule of thumb still applies...
  • JohanAnandtech - Thursday, September 15, 2016 - link

    No changes to the GC collector settings. 24 GB per JVM: 4x 24 GB, plus 4x 3 GB for the Transaction Injectors and 2 GB for the controller = +/- 110 GB of memory. We wanted to run it inside 128 GB, as most of our DIMMs are 16 GB DDR4-2400/2133.
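    For reference, that memory budget works out as follows; a minimal sketch using only the figures from the comment above:

```python
# SPECjbb-style memory budget from the comment above: four backend JVMs at
# 24 GB heap each, four Transaction Injectors at 3 GB each, plus 2 GB for
# the controller, so the whole run stays inside 128 GB of installed DRAM.
backend_gb    = 4 * 24   # backend JVM heaps
injector_gb   = 4 * 3    # transaction injector JVMs
controller_gb = 2        # controller JVM

total_gb = backend_gb + injector_gb + controller_gb
print(f"Total JVM footprint: ~{total_gb} GB of 128 GB installed")   # ~110 GB
```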
