Memory Subsystem: Latency Measurements

There is no doubt about it: the performance of modern CPUs depends heavily on the cache subsystem, and some applications depend heavily on the DRAM subsystem as well. Since the ThunderX is a completely new architecture, we decided to invest some time in understanding its cache hierarchy. We used LMBench and Tinymembench to measure the latency.

The numbers we looked at were "Random load latency stride=16 Bytes" (LMBench). Tinymembench was compiled with -O2 on each server. We looked at both "single random read" and "dual random read".

LMBench offers an L1 test, while Tinymembench does not, so the L1 readings come from LMBench. LMBench consistently measured 20-30% higher latency for the L2 and L3 caches, and about 10% higher memory latency. Since Tinymembench allowed us to compare the latency with one ("1 req" in the table) or two outstanding requests ("2 req" in the table), we used the Tinymembench numbers for everything else.
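For readers who want a feel for what these tools actually measure, the sketch below shows the basic pointer-chasing technique that both LMBench and Tinymembench build on: walk a randomly shuffled cyclic chain so that every load depends on the previous one, then divide the elapsed time by the number of loads. This is our own minimal illustration under assumed parameters (buffer size, iteration count, names), not the actual benchmark code.

```c
/* Minimal pointer-chasing latency sketch (not the actual LMBench/Tinymembench code).
 * Every load depends on the previous one, so the time per iteration approximates
 * the load-to-use latency of whatever level of the hierarchy the buffer fits in. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_BYTES (64UL * 1024 * 1024)              /* well past L3: measures DRAM  */
#define N         (BUF_BYTES / sizeof(size_t))
#define ITERS     (50UL * 1000 * 1000)

int main(void)
{
    size_t *buf   = malloc(N * sizeof(size_t));
    size_t *order = malloc(N * sizeof(size_t));
    size_t i, idx;

    /* Build a random cyclic permutation so the next address is unpredictable. */
    for (i = 0; i < N; i++) order[i] = i;
    for (i = N - 1; i > 0; i--) {                   /* Fisher-Yates shuffle         */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (i = 0; i < N; i++)
        buf[order[i]] = order[(i + 1) % N];         /* each element points onward   */
    free(order);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0, idx = 0; i < ITERS; i++)
        idx = buf[idx];                             /* fully serialized loads       */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg load-to-use latency: %.1f ns (checksum %zu)\n", ns / ITERS, idx);
    free(buf);
    return 0;
}
```

Shrinking BUF_BYTES so the chain fits in the L1, L2 or L3 cache turns the same loop into a cache-latency test.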

| Mem Hierarchy | Cavium ThunderX 2.0 GHz DDR4-2133 | Intel Xeon D DDR4-2133 | Intel Broadwell Xeon E5-2640 v4 DDR4-2133 | Intel Broadwell Xeon E5-2699 v4 DDR4-2400 |
|---|---|---|---|---|
| L1 cache (cycles) | 3 | 4 | 4 | 4 |
| L2 cache, 1 / 2 req (cycles) | 40 / 80 | 12 | 12 | 12 |
| L3 cache, 1 / 2 req (cycles) | N/A | 40 / 44 | 38 / 43 | 48 / 57 |
| Memory, 1 / 2 req (ns) | 103 / 206 | 64 / 80 | 66 / 81 | 57 / 75 |

The ThunderX's shallow pipeline and relatively modest OOO capabilities are best served by a low-latency L1 cache, and Cavium does not disappoint with a 3-cycle L1. Intel's L1 needs one cycle more, but considering that the Broadwell core has massive OOO buffers, this is not a problem at all.

But then things get really interesting. The L1 cache of the ThunderX does not seem to support multiple outstanding misses: a second cache miss has to wait until the first one has been handled. Things get ugly when accessing memory: not only is the latency of accessing the DDR4-2133 much higher, but once again the second miss has to wait for the first, so a second cache miss results in twice the latency.

The Intel cores do not have this problem: a second request incurs only 20 to 30% higher latency.
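To make the "1 req" versus "2 req" distinction concrete, here is a simplified sketch of what a dual random read effectively does (our own illustration, not Tinymembench's code, reusing the chain-building idea from the earlier sketch): two independent pointer chases are interleaved, so a core that can keep multiple misses in flight overlaps them, while a blocking cache serializes them and the pair costs roughly two full latencies.

```c
#include <stddef.h>
#include <time.h>

static volatile size_t g_sink;   /* keeps the compiler from deleting the chases */

/* buf1 and buf2 are two *independent* random cyclic permutations, built the
 * same way as in the previous sketch. Because the two loads per iteration do
 * not depend on each other, a core with non-blocking caches (enough miss
 * handling buffers) can overlap the two misses, so a pair costs little more
 * than a single miss; a blocking cache pays the full latency twice. */
static double chase_two(const size_t *buf1, const size_t *buf2,
                        unsigned long iters)
{
    size_t a = 0, b = 0;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < iters; i++) {
        a = buf1[a];                 /* miss #1                        */
        b = buf2[b];                 /* miss #2, independent of #1     */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    g_sink = a ^ b;                  /* consume the results            */
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / iters;               /* ns per pair of loads           */
}
```

On the Intel cores the pair comes back only 20-30% slower than a single chase; on the ThunderX it comes back at roughly double the single-chase time, which is exactly what the table shows.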

So how bad is this? The more complex the core, the more important a non-blocking cache becomes. The 5/6-wide Intel cores need it badly: running many instructions in parallel, prefetching data, and SMT all increase the pressure on the cache system and raise the chance of several cache misses being outstanding at once.

The simpler two-way issue ThunderX core is probably less hampered by a blocking cache, but it is still a disadvantage, and it is something the Cavium engineers will need to fix if they want to build a more potent core and achieve better single-threaded performance. This also means that it is very likely that there is no hardware prefetcher present: otherwise the prefetcher would get in the way of the normal memory accesses.
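If someone wanted to double-check that prefetcher hypothesis, a simple way is to compare a sequential dependent walk against the random chase over a DRAM-sized buffer. The sketch below is our own test idea with assumed sizes and names, not part of the review's methodology.

```c
/* Compare sequential vs. random dependent walks over a DRAM-sized buffer.
 * With a working stride prefetcher the sequential walk runs at near-cache
 * speed per load; without one it still pays a full DRAM miss for every new
 * 64-byte line (one load in eight here). The random walk pays a full miss
 * on essentially every load either way. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (16UL * 1024 * 1024)       /* 128 MB of size_t: well past any L3 here */

static double walk(const size_t *buf) /* average ns per load of a chain in buf  */
{
    struct timespec t0, t1;
    size_t idx = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        idx = buf[idx];                          /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("(checksum %zu) ", idx);              /* keep the loop alive */
    return ns / N;
}

int main(void)
{
    size_t *seq   = malloc(N * sizeof(size_t));
    size_t *rnd   = malloc(N * sizeof(size_t));
    size_t *order = malloc(N * sizeof(size_t));
    size_t i;

    for (i = 0; i < N; i++)
        seq[i] = (i + 1) % N;                    /* chain walks the buffer in order */

    for (i = 0; i < N; i++) order[i] = i;        /* random cyclic permutation,      */
    for (i = N - 1; i > 0; i--) {                /* as in the earlier sketch        */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (i = 0; i < N; i++)
        rnd[order[i]] = order[(i + 1) % N];
    free(order);

    printf("sequential: %.1f ns/load\n", walk(seq));
    printf("random:     %.1f ns/load\n", walk(rnd));
    free(seq); free(rnd);
    return 0;
}
```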

And there is no doubt that the performance of applications with big datasets will suffer, as will applications that require a lot of data synchronization. To be more specific, we do not think the 48 cores will scale well when handling transactional databases (too much pressure on the L2) or fluid dynamics applications (high memory latency).

Comments

  • silverblue - Thursday, June 16, 2016 - link

    I think AMD themselves admitted that the Opteron X1100 was for testing the waters, with K12 being the first proper solution, but that was delayed to get Zen out of the door. I imagine that both products will be on sale concurrently at some point, but even with AMD's desktop-first approach for Zen, it will probably still come to the server market before K12 (both are due 2017).
  • junky77 - Thursday, June 16, 2016 - link

    Still, quite strange, no? AMD has been in the server business for years. I'm not just talking about their ARM solution; their other solutions also seem to be less interesting...
  • silverblue - Thursday, June 16, 2016 - link

    I am looking forward to both Zen and K12; there's very little chance that AMD will fail with both.
  • name99 - Wednesday, June 15, 2016 - link

    " It is the first time the Xeon D gets beaten by an ARM v8 SoC..."

    The Apple A9X in the 12" iPad Pro delivers 40GB/s on Stream...
    (That's the Stream built into Geekbench. Conceivably it's slightly different from what's being measured here, but it delivers around 25GB/s for standard desktop/laptop Intel CPUs, and for the A9 and the 9" iPad's A9X, so it seems in the same sort of ballpark.)
  • aryonoco - Thursday, June 16, 2016 - link

    Fantastic article as always Johan. Thank you so much for your very informative articles. I can only imagine how much time and effort writing this article took. It is very much appreciated.

    The first good showing by an ARMv8 server. Nearly 5 years later than expected, but they are getting there. This thing was still produced on 28nm HKMG. Give it one more year, a jump to 14nm, and a more mature software ecosystem, and I think the Xeons might finally have some competition on their hands.
  • JohanAnandtech - Thursday, June 16, 2016 - link

    Thank you, and indeed it was probably the most time consuming review ... since Calxeda. :-)
    Yes, there is potential.
  • iwod - Thursday, June 16, 2016 - link

    Even if the ThunderX is half the price of the equivalent Xeon, I would still buy the Intel Xeon instead. This isn't the smartphone market. In servers, the cost of memory, storage, networking, etc. adds up. Not only does it use a lot more power at idle, the total TCO and Perf/Watt still favour Intel.

    There is also the switching cost of software involved.
    And those who say single core / thread performance doesn't matter have absolutely no idea what they are talking about.

    As far as I can tell, the Xeon D offers a very decent value proposition even for the ARM SoC minded vendors. This will likely continue to be the case as we move to 10nm. I just don't see how ARM is going to get the 20% market share by 2020 that they described in their shareholder meetings.
  • rahvin - Thursday, June 16, 2016 - link

    If you have to switch software on your servers because you switch architectures, you are doing something wrong and are far too dependent on proprietary products. I'm being a bit facetious here, but the only reason architecture should limit you is if you are using Microsoft products or are in a highly specialized computing field. Linux should dominate your general servers.
  • kgardas - Friday, June 17, 2016 - link

    Even if you are on Linux, the stack support is still best on i386/amd64. Look at how much money IBM throws at getting somewhere with POWER8. ARM can't do that, so it falls more on the vendors, and they are doing it a bit more slowly. Anyway, AArch64 will mature in the LLVM/GCC toolchains, GNU libc, musl libc, the Linux kernel etc., but it'll take some time...
  • tuxRoller - Thursday, June 16, 2016 - link

    Aarch64 has very limited conditional execution support.
    http://infocenter.arm.com/help/index.jsp?topic=/co...
