Memory Subsystem: Latency Measurements

There is no doubt about it: the performance of modern CPUs depends heavily on the cache subsystem, and some applications depend heavily on the DRAM subsystem too. We used LMBench to measure latency. Our favorite tool for the job, Tinymembench, does not support the POWER architecture yet. That is a pity, because it is a lot more accurate and more modern (it can test with two outstanding requests).

The numbers we looked at were "Random load latency stride=16 Bytes" (LMBench).
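
In this test, the benchmark walks a chain of pointers laid out in random order, so every load depends on the result of the previous one, and the average time per load approximates the latency of whatever level of the memory hierarchy the working set fits in. The sketch below is our minimal illustration of that technique, not LMBench's actual code; the working-set sizes and iteration counts are arbitrary choices.

```c
/* A minimal sketch of a pointer-chasing latency test, in the spirit of
 * LMBench's random load latency test. Illustration only: sizes and
 * iteration counts are arbitrary. Compile with: gcc -O2 chase.c */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define STRIDE 16   /* bytes between chase targets, as in the test above */

/* Average latency (ns) of 'iters' dependent loads over 'bytes' of memory. */
static double chase_ns(size_t bytes, size_t iters)
{
    size_t n = bytes / STRIDE;
    char *buf = malloc(bytes);
    size_t *ord = malloc(n * sizeof *ord);
    if (!buf || !ord)
        exit(1);

    /* Shuffle the chase order so the hardware prefetchers cannot follow
     * along (a simple swap shuffle; good enough for illustration). */
    for (size_t i = 0; i < n; i++) ord[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = ord[i]; ord[i] = ord[j]; ord[j] = t;
    }

    /* Each slot stores a pointer to the next slot in the shuffled order,
     * so every load depends on the previous one and the loop serializes
     * on load latency. */
    for (size_t i = 0; i < n; i++)
        *(void **)(buf + ord[i] * STRIDE) = buf + ord[(i + 1) % n] * STRIDE;

    void **p = (void **)(buf + ord[0] * STRIDE);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;                  /* the dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile void *sink = p;   /* keep the chase from being optimized out */
    (void)sink;
    free(ord);
    free(buf);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 +
            (double)(t1.tv_nsec - t0.tv_nsec)) / (double)iters;
}

int main(void)
{
    /* Working-set sizes chosen to bracket the cache levels in the table. */
    size_t mb[] = { 4, 8, 16, 32, 64, 128, 512 };
    for (size_t k = 0; k < sizeof mb / sizeof mb[0]; k++)
        printf("%4zu MB: %6.1f ns per load\n",
               mb[k], chase_ns(mb[k] << 20, 20u * 1000 * 1000));
    return 0;
}
```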

| Mem Hierarchy | IBM POWER8 | Intel Broadwell Xeon E5-2640v4 (DDR4-2133) | Intel Broadwell Xeon E5-2699v4 (DDR4-2400) |
|---|---|---|---|
| L1 Cache (cycles) | 3 | 4 | 4 |
| L2 Cache (cycles) | 13 | 12-15 | 12-15 |
| L3 Cache, 4-8 MB (cycles) | 27-28 (8 ns) | 49-50 | 50 |
| 16 MB (ns) | 55 | 26 | 21 |
| 32-64 MB (ns) | 55-57 | 75-92 | 80-96 |
| Memory, 96-128 MB (ns) | 67-74 | 90-91 | 96 |
| Memory, 384-512 MB (ns) | 89-91 | 91-93 | 95 |

(Note that the numbers for Intel are higher than what we reported in our Cavium ThunderX review. The reason is that we are now using the LMBench numbers rather than the Tinymembench ones.)

A 64 KB L1 cache with 4 read ports that can run at 4+ GHz and still maintain a 3-cycle load latency (a mere 0.75 ns at 4 GHz) is nothing less than the pinnacle of engineering. The L2 cache excels too: at 512 KB it is twice as large as Intel's, yet it offers the same latency.

Once we get to the eDRAM L3 cache, our readings get a lot more confusing. The L3 cache is blistering fast as long as you only access the part that is closest to the core (8 MB). Go beyond that limit (16 MB) and latency is almost 7 times worse (55 ns instead of 8 ns). It looks like we are actually hitting the Centaur chips, because the latency stays the same at 32 and 64 MB.

Intel has a much more predictable latency chart. The Xeon's L3 cache needs about 50 cycles, and once you get into DRAM, you see a 90-96 ns latency. The "transition phase" from the 26 ns L3 to the 90 ns DRAM is much smaller.

Comparatively, that transition phase seems relatively large on the IBM POWER8: we have to go beyond 128 MB before we get the full DRAM latency. And even then the Centaur chips seem to handle things well: the octal DDR3-1333 DRAM system delivers the same or even slightly better latency than the DDR4-2400 memory on the Xeon.

In summary, IBM's POWER8 has an 8 MB L3 that is twice as fast, while Intel's L3 is vastly better in the 9-32 MB zone. But once you go beyond 32 MB, the IBM memory subsystem delivers better latency, albeit at a significant power cost: those four memory buffer chips need about 64 W.

Comments
  • Michael Bay - Sunday, July 24, 2016 - link

    Hardware does not exist for its own sake, it exists to run software. AT is entirely correct in their methodology.
  • jospoortvliet - Tuesday, July 26, 2016 - link

    I'd argue it is the other way around: GCC might leave 5-10% performance on the table in some niche cases, but it does just fine most of the time. There's a reason Intel and IBM contribute to GCC - to make sure it doesn't get too far behind, as they know very well that most of their customers use it and not their proprietary compilers.

    Of course, for scientific computing and other niches it makes all the difference, and one can argue these heavy systems ARE for niche markets, but I still think it was a sane choice to go with GCC.
  • abufrejoval - Thursday, August 4, 2016 - link

    Actually exercising 90% of all transistors on a CPU die these days is both very hard to do (next to impossible) and would only slow the clock to avoid overstepping the TDP.

    And I seriously doubt that GCC will underuse a CPU at 10% of its computational capacity.

    Actually, from what I saw, GCC itself (compiling) was best at exploiting the full 8T potential of the POWER8. And since GCC is compiled with itself, that speaks to the quality of the machine code it can produce, if the source allows it. And that speaks to the quality of the GCC source code; ergo, prove you can do better before you rant.
  • abufrejoval - Thursday, August 4, 2016 - link

    Well, this is part 1 and describes one scenario. What you want is another scenario, and of course it's a valid, if very distinct, one.

    Actually, distinct is the word here: you'd use a vendor's compiler if your main job is one distinct workload, because you'd want to squeeze every bit of performance out of it.

    The problem with that, of course, is that any distinct workload makes it rather boring for the general public, because they cannot translate the benchmark to their environment.

    AT aims to satisfy the broadest meaningful audience, and Johan has done a great, great job at that.

    I'm sure he'll also write a part 4711 for you specifically, if you make it economically attractive.

    Hell, even I'd do that given the proper incentive!
  • Zan Lynx - Sunday, July 24, 2016 - link

    Using GCC as the compiler is also why (in my opinion) the Intel chips aren't using their full TDP. Large areas of Intel chips are dedicated to vector operations in SSE and AVX. If you don't issue those instructions, half the chip isn't even being used (the sketch below the comments illustrates this).

    Some gamers who love their overclocked Intel chips have actually complained to game engine developers who add AVX to the engine, because it ruins their overclock even if the game runs much faster. They're then forced to clock down from 4.5 GHz to 3.7 GHz to avoid lockups or thermal throttling.
  • Kevin G - Sunday, July 24, 2016 - link

    The Xeon E5 v3s had different clock speeds for AVX code: it consumed too much power and got too hot under full load.

    This holds true for the E5 v4s, but the AVX clock penalty is applied on a core-by-core basis rather than across the entire chip. The result is improved performance in mixed workloads, which is a good thing, as AVX hasn't broken out much beyond the HPC market.
  • talonted - Monday, July 25, 2016 - link

    For those interested in getting a POWER8 workstation, check out Talos.

    https://www.raptorengineering.com/TALOS/prerelease...
  • 137ben - Monday, July 25, 2016 - link

    I made an account to say that this article (along with the subsequent stock-cooler comparison article) is why I really love Anandtech. A lot of the code I run/write for my research is CPU-bottlenecked. Still, until the last year or so, I didn't know very much about hardware. Now, reading Anandtech, I have learned so much more about the hardware I depend on than from any other website. Most just repeat announcements or run meaningless cursory synthetic benchmarks. The fact that Johan De Gelas has written such a deep dive into the inner workings of something as complex as a server CPU architecture, and done it in a way that I can understand, is remarkable. Great job Anandtech, keep it up and I'll always come back.
  • JohanAnandtech - Thursday, July 28, 2016 - link

    You made me a happy man, I achieved my goal :-)
  • alpha754293 - Wednesday, July 27, 2016 - link

    Excellent work and review as always, Johan. I would have been interested to see how the two processors perform in floating-point-intensive benchmarks though...
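
To illustrate Zan Lynx's point above about idle vector units, here is a hypothetical sketch (our illustration using compiler intrinsics, not code from any game engine or from this review's benchmarks): the scalar loop performs one addition per iteration and leaves the 256-bit vector units idle, while the AVX version performs eight per iteration. Dense loops of the latter kind are exactly what trigger the reduced AVX clocks discussed in the comments.

```c
/* Hypothetical illustration: the same reduction written as plain scalar C
 * and with 256-bit AVX intrinsics. Compile with: gcc -O2 -mavx sum.c */
#include <immintrin.h>
#include <stddef.h>

/* Scalar: one single-precision addition per iteration; the wide
 * vector units sit idle. */
float sum_scalar(const float *a, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* AVX: eight single-precision additions per iteration. Assumes n is a
 * multiple of 8; the unaligned load means 'a' needs no special alignment. */
float sum_avx(const float *a, size_t n)
{
    __m256 acc = _mm256_setzero_ps();
    for (size_t i = 0; i < n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));

    /* Reduce the eight vector lanes to a single float. */
    float lane[8];
    _mm256_storeu_ps(lane, acc);
    return (lane[0] + lane[1] + lane[2] + lane[3]) +
           (lane[4] + lane[5] + lane[6] + lane[7]);
}
```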
