Memory Subsystem: Latency Measurements

There is no doubt about it: the performance of modern CPUs depends heavily on the cache subsystem, and some applications depend heavily on the DRAM subsystem too. We used LMBench to measure latency. Our favorite tool for this, Tinymembench, does not support the POWER architecture yet. That is a pity, because it is more accurate and more modern (it can test with two outstanding requests).

The numbers we looked at were "Random load latency stride=16 Bytes" (LMBench).
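For readers who want to reproduce something similar, the core technique behind this kind of measurement is a dependent pointer chase through a randomly shuffled buffer, so that every load has to wait for the previous one. Below is a minimal sketch of that idea in C; the buffer size, iteration count, and timing choices are our own illustration, not LMBench internals.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define BUF_BYTES (256UL * 1024 * 1024)  /* 256 MB: far beyond any L3          */
    #define STRIDE    16                     /* 16-byte spacing, as in the metric  */

    int main(void)
    {
        size_t slots = BUF_BYTES / STRIDE;
        char *buf = malloc(BUF_BYTES);
        size_t *order = malloc(slots * sizeof(size_t));
        if (!buf || !order)
            return 1;

        /* Random permutation of the slots (Fisher-Yates). Two rand() calls are
         * combined so that indices above RAND_MAX remain reachable. */
        for (size_t i = 0; i < slots; i++)
            order[i] = i;
        srand(1);
        for (size_t i = slots - 1; i > 0; i--) {
            size_t r = ((size_t)rand() << 31) ^ (size_t)rand();
            size_t j = r % (i + 1);
            size_t t = order[i]; order[i] = order[j]; order[j] = t;
        }

        /* Chain the slots: each slot stores the address of the next one, so every
         * load depends on the previous load and nothing can overlap. (A
         * Tinymembench-style test with two outstanding requests would interleave
         * a second, independent chain.) */
        for (size_t i = 0; i < slots; i++)
            *(char **)(buf + order[i] * STRIDE) =
                buf + order[(i + 1) % slots] * STRIDE;

        /* Chase the chain; the average time per hop approximates the random load
         * latency (including TLB misses, unless huge pages are used). Takes a few
         * seconds at DRAM latencies. */
        const size_t hops = 100UL * 1000 * 1000;
        char *p = buf + order[0] * STRIDE;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < hops; i++)
            p = *(char **)p;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per load (p=%p)\n", ns / hops, (void *)p);
        free(order);
        free(buf);
        return 0;
    }

Built with something like "gcc -O2", runs of this sketch over increasing buffer sizes trace out the same L1/L2/L3/DRAM plateaus that the table below summarizes.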

Mem Hierarchy            | IBM POWER8   | Intel Broadwell | Intel Broadwell
                         |              | Xeon E5-2640v4  | Xeon E5-2699v4
                         |              | DDR4-2133       | DDR4-2400
-------------------------|--------------|-----------------|----------------
L1 Cache (cycles)        | 3            | 4               | 4
L2 Cache (cycles)        | 13           | 12-15           | 12-15
L3 Cache 4-8 MB (cycles) | 27-28 (8 ns) | 49-50           | 50
16 MB (ns)               | 55           | 26              | 21
32-64 MB (ns)            | 55-57        | 75-92           | 80-96
Memory 96-128 MB (ns)    | 67-74        | 90-91           | 96
Memory 384-512 MB (ns)   | 89-91        | 91-93           | 95

(Note that the numbers for Intel are higher than those we reported in our Cavium ThunderX review. The reason is that we are now using the LMBench numbers rather than the Tinymembench ones.)

A 64 KB L1 cache with four read ports that can run at 4+ GHz and still maintain a 3-cycle load latency is nothing less than the pinnacle of engineering. The L2 cache excels too: at 512 KB it is twice as large as Intel's 256 KB L2, yet it offers essentially the same latency.

Once we get to the eDRAM L3 cache, our readings get a lot more confusing. The L3 cache is blistering fast as long as you only access the part that is closest to the core (8 MB). Go beyond that limit (16 MB), and you get a latency that is no less than 7 times worse. It looks like we are actually hitting the Centaur chips, because the latency stays the same at 32 and 64 MB.
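(As an aside, the cycle counts and the nanosecond figures in the table are two views of the same measurement: latency in nanoseconds is simply cycles divided by the core clock in GHz. A quick sketch, assuming a core clock of roughly 3.5 GHz purely for illustration:)

    #include <stdio.h>

    /* latency_ns = cycles / clock_GHz. The 3.5 GHz clock below is an assumption
     * for illustration, not a figure reported in this review. */
    int main(void)
    {
        const double clock_ghz = 3.5;
        printf("POWER8 L1,  3 cycles: %.2f ns\n",  3.0 / clock_ghz);
        printf("POWER8 L3, 28 cycles: %.1f ns\n", 28.0 / clock_ghz); /* ~8 ns, matching the table */
        return 0;
    }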

Intel has a much more predictable latency chart. The Xeon's L3 cache needs about 50 cycles, and once you get into DRAM, latency rises to 90-96 ns. The "transition phase" from the 26 ns L3 to the 90 ns DRAM is much smaller.

Comparatively, that "transition phase" seems relatively large on the IBM POWER8: we have to go beyond 128 MB before we get the full DRAM latency. And even then the Centaur chips seem to handle things well: the octal DDR3-1333 DRAM setup delivers the same or even slightly better latency than the DDR4-2400 memory on the Xeon.

In summary, IBM's POWER8 has an 8 MB L3 that is twice as fast as Intel's, while Intel's L3 is vastly better in the 9-32 MB zone. But once you go beyond 32 MB, the IBM memory subsystem delivers better latency, albeit at a significant power cost: those four memory buffers need about 64 W.

Comments

  • DomOfSF - Thursday, July 21, 2016 - link

    Johan de Gelas: blowing minds and educating "the rest of us" since...I dunno, a really long time ago (especially in internet years). Great job on the data, but the real good stuff is in your thoughts and analysis. Thank you!
  • close - Saturday, July 23, 2016 - link

    Over a decade...
  • JohanAnandtech - Thursday, July 28, 2016 - link

    13 years in the server business, 18 years now of reviewing hardware :-). Thx !!
  • jamyryals - Thursday, July 21, 2016 - link

    It seems to me, Intel's focus on bringing their CPU architecture design all the way down to 5W is the reason IBM is able to stand out against them. Intel is focused on creating a scalable architecture while IBM can throw the whole kitchen sink at the server market.

    Fascinating article, I really enjoyed it.
  • smilingcrow - Thursday, July 21, 2016 - link

    Intel has plenty of unique features in their server platforms which aren't in the consumer platforms so I don't think that is the issue.
  • jospoortvliet - Tuesday, July 26, 2016 - link

    The basic design of the core still is the same so there is probably at least some truth in the statement of Jamy.
  • Kevin G - Wednesday, July 27, 2016 - link

    Up until this point. Consumer SkyLake and server SkyLake are going to be two different designs. They're certainly related but server SkyLake will have 512 KB of L2 cache per core and support AVX-512 instructions.

    Server SkyLake is also going to support 3D Xpoint DIMMs, though that difference is more with the platform/chipset than the actual CPU core.
  • floobit - Thursday, July 21, 2016 - link

    Very interesting. It seems odd to me that they chose to configure it in a 2U - except for big data clusters, most of the market space I see this playing in is dominated by FC to a SAN. Is this a play in the big data cluster space, or the more traditional AIX/DB2/big iron that IBM has owned for so long?
    Some questions I'd have:
    what virtualization is possible with this architecture? presumably just the standard PowerVM? How well does that work?
    What is the impact of IO latency? Could you throw a P3700 or two in here?
  • JohanAnandtech - Thursday, July 21, 2016 - link

    2U: Besides big data storage needs, I suspect 2U is necessary for adequate cooling for the POWER8 chip.

    Virtualization: Linux KVM works well as far as I know.

    We actually tried out a P3700 in there (see: http://www.anandtech.com/show/9567/the-power-8-rev... ) and it worked very well. I asked IBM what a customer should expect when using third-party storage (probably no support, but how about warranty?) but no answer yet.
  • mystic-pokemon - Friday, July 22, 2016 - link

    Hi Johan
    2U is not necessary for cooling a POWER8 chip. We do that better with our Barreleye (1.25 OU design). Even storage-wise, Barreleye has a 15-disk storage bay, as can be seen in the links below.

    http://www.v3.co.uk/v3-uk/news/2453992/google-and-...

    Let me know if you ever want to benchmark a Barreleye. Which specific POWER8 proc are you benchmarking with? (Turismo?) I believe it does slightly better than the S812LC on many benchmarks, depending on the variant of the POWER8 proc the S812LC runs.
