Memory Subsystem: Latency Measurements

There is no doubt about it: the performance of modern CPUs depends heavily on the cache subsystem, and some applications depend heavily on the DRAM subsystem too. We used LMBench to measure latency. Our favorite tool for this, TinyMemBench, does not support the POWER architecture yet. That is a pity, because it is a lot more accurate and modern (it can test with two outstanding requests).

The numbers we looked at were "Random load latency stride=16 Bytes" (LMBench).

Mem Hierarchy               IBM POWER8      Intel Broadwell    Intel Broadwell
                                            Xeon E5-2640v4     Xeon E5-2699v4
                                            DDR4-2133          DDR4-2400
L1 Cache (cycles)           3               4                  4
L2 Cache (cycles)           13              12-15              12-15
L3 Cache 4-8 MB (cycles)    27-28 (8 ns)    49-50              50
16 MB (ns)                  55              26                 21
32-64 MB (ns)               55-57           75-92              80-96
Memory 96-128 MB (ns)       67-74           90-91              96
Memory 384-512 MB (ns)      89-91           91-93              95

(Note that the numbers for Intel are higher than what we reported in our Cavium ThunderX review. The reason is that we are now using the numbers of LMBench and not those of Tinymembench.)

A 64 KB L1 cache with 4 read ports that can run at 4+ GHz speeds and still maintain a 3 cycle load latency is nothing less than the pinnacle of engineering. The L2 cache excels too, being twice as large (512 KB) and still offering the same latency as Intel's L2.

Once we get to the eDRAM L3 cache, our readings get a lot more confusing. The L3 cache is blisteringly fast as long as you only access the part that is closest to the core (8 MB). Go beyond that limit (16 MB), and latency is no less than seven times worse. It looks like we are actually hitting the Centaur chips, because the latency stays the same at 32 and 64 MB.

Intel has a much more predictable latency chart. The Xeon's L3 cache needs about 50 cycles, and once you get into DRAM, you get a 90-96 ns latency. The "transition phase" from the 26 ns L3 to the 90 ns DRAM is much smaller.

Comparatively, that "transition phase" seems relatively large on the IBM POWER8. We have to go beyond 128 MB before we get the full DRAM latency. And even then the Centaur chip seems to handle things well: the octal DDR3-1333 DRAM system delivers the same or even slightly better latency than the DDR4-2400 memory on the Xeon.

In summary, IBM's POWER8 has an 8 MB L3 that is twice as fast, while Intel's L3 is vastly better in the 9-32 MB zone. But once you go beyond 32 MB, the IBM memory subsystem delivers better latency. At a significant power cost, we must add: those four memory buffer chips need about 64 watts.

Comments
  • jospoortvliet - Tuesday, July 26, 2016 - link

    The point Johan makes is that his goal is not to get the best benchmark scores but the most relevant real-life data. One can argue whether he succeeded; certainly the results are interesting, but there is much more to these CPUs as usual. And I do think his choice is justified: while much scientific code would be recompiled with a faster compiler (though the cost of ICC might be a problem in an educational setting), many businesses wouldn't go through that effort.

    I personally would love to see a newer Ubuntu & GCC being used, just to see what the difference is, if any. The POWER ecosystem seems to evolve fast so a newer platform and compiler could make a tangible difference.
    But, of course, if your use case calls for ICC or xlC, these benches are not perfect.
  • Eris_Floralia - Friday, July 22, 2016 - link

    Are these two processors both tested at the same frequency, or at their stock clocks?
  • tipoo - Friday, July 22, 2016 - link

    The latter, page 5

    2.92-3.5 GHz vs 2.2-3.6 GHz
  • abufrejoval - Thursday, August 4, 2016 - link

    Well since Johan really only tested one core on each CPU, it would have been nice to have him verify the actual clock speed of those cores. You'd assume that they'd be able to maintain top speed for any single core workload independent of the number of threads, but checking is better than guessing.
  • roadapathy - Friday, July 22, 2016 - link

    22nm? *yawwwwwwwwwn* Come on IBM, you can do better than that, brah.
  • Michael Bay - Saturday, July 23, 2016 - link

    Nope, 22 is the best SOI has right now. You have to remember it's nowhere near standard lithographies customer-wise.
  • tipoo - Monday, July 25, 2016 - link

    In addition to what Michael Bay (lel) said, remember that only Intel really has 14nm; when TSMC and GloFo say 14/16nm, they really mean 20nm with FinFETs.
  • feasibletrash0 - Saturday, July 23, 2016 - link

    Using a less capable compiler (GCC) to test a chip, and not using everything the chip has to offer, seems incredibly flawed to me. What are you testing, exactly?
  • aryonoco - Saturday, July 23, 2016 - link

    He's testing what actual software people actually run on these things.

    On your typical Linux host, pretty much everything is compiled with GCC. You want to get into exotic compilers? Sure, both IBM and Intel have their exotic, costly proprietary compilers; so what? Very few people outside of niche industries use them.

    You want to compare a CPU with CPU? You keep the compiler the same. That's just common sense. It's also how the scientific method works!
  • feasibletrash0 - Sunday, July 24, 2016 - link

    Right, but you're comparing, say, 10% of the silicon on that chip and saying that the remaining 90% of the transistors making up the chip doesn't matter. They do; if the software is not using them, that's fine, but then it's not an accurate comparison of the hardware, it's a comparison of the software.
