Memory Subsystem: Latency Measurements

There is no doubt about it: the performance of modern CPUs depends heavily on the cache subsystem, and some applications depend heavily on the DRAM subsystem too. Since the ThunderX is a brand new architecture, we decided to invest some time in understanding its cache hierarchy. We used LMBench and Tinymembench to measure latency.

From LMBench we used the "Random load latency stride=16 Bytes" numbers. Tinymembench was compiled with -O2 on each server; from it we looked at both "single random read" and "dual random read".

LMBench offers a test of the L1 cache, while Tinymembench does not, so the L1 readings were measured with LMBench. LMBench consistently measured 20-30% higher latency for the L2 and L3 caches, and 10% higher readings for memory latency. Since Tinymembench allowed us to compare latency with either one outstanding request ("1 req" in the table) or two ("2 req" in the table), we used the Tinymembench numbers for those levels.
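For readers who want to reproduce the idea, here is a minimal pointer-chasing sketch in the spirit of LMBench's lat_mem_rd and Tinymembench's "single random read" test. It is not the benchmarks' actual code; the 64 MB buffer, iteration count, and use of rand() are our own illustrative choices. Because every load depends on the previous one, the average time per iteration is the full load-to-use latency of wherever the working set lands in the hierarchy.

```c
/* Minimal pointer-chasing latency sketch (illustrative only; not the
 * LMBench or Tinymembench source). Compile with -O2, as we did for
 * Tinymembench. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ELEMS (64 * 1024 * 1024 / sizeof(void *))  /* 64 MB: larger than any cache here */

int main(void)
{
    void **buf = malloc(ELEMS * sizeof(void *));
    size_t *idx = malloc(ELEMS * sizeof(size_t));

    /* Build a random cyclic permutation so that neither the compiler nor
     * the hardware can predict the next address. */
    for (size_t i = 0; i < ELEMS; i++)
        idx[i] = i;
    srand(1);
    for (size_t i = ELEMS - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
        size_t j = rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < ELEMS; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % ELEMS]];

    /* Chase the chain: every load depends on the previous one, so the
     * average time per iteration is the full load-to-use latency. */
    const long iters = 200 * 1000 * 1000;
    void **p = &buf[idx[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long n = 0; n < iters; n++)
        p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* Print the final pointer so the compiler cannot remove the loop. */
    printf("%.1f ns per dependent load (%p)\n", ns / iters, (void *)p);

    free(idx);
    free(buf);
    return 0;
}
```

Shrinking the buffer so it fits in the L1, L2, or L3 cache yields the per-level latencies reported in the table below.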

| Mem Hierarchy | Cavium ThunderX 2.0 (DDR4-2133) | Intel Xeon D (DDR4-2133) | Intel Broadwell Xeon E5-2640v4 (DDR4-2133) | Intel Broadwell Xeon E5-2699v4 (DDR4-2400) |
|---|---|---|---|---|
| L1-cache (cycles) | 3 | 4 | 4 | 4 |
| L2-cache 1 / 2 req (cycles) | 40 / 80 | 12 | 12 | 12 |
| L3-cache 1 / 2 req (cycles) | N/A | 40 / 44 | 38 / 43 | 48 / 57 |
| Memory 1 / 2 req (ns) | 103 / 206 | 64 / 80 | 66 / 81 | 57 / 75 |

The ThunderX's shallow pipeline and relatively modest OOO capabilities are best served by a low-latency L1 cache, and Cavium does not disappoint with a 3-cycle L1. Intel's L1 needs one cycle more, but considering that the Broadwell core has massive OOO buffers, this is not a problem at all.

But then things get really interesting. The L1 cache of the ThunderX does not seem to support multiple outstanding misses: a second cache miss has to wait until the first one has been handled. Things get ugly when accessing memory: not only is the latency of accessing DDR4-2133 much higher, but once again the second miss has to wait for the first. So a second cache miss results in twice the latency.

The Intel cores do not have this problem: a second request incurs only 20 to 30% higher latency.
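To make the "1 req" versus "2 req" distinction concrete, below is a sketch (again illustrative, not Tinymembench's code) of a dual random read: two independent pointer chains walked in the same loop. It assumes chain_a and chain_b are the heads of two separately shuffled chains built as in the earlier sketch. If the core can keep two misses in flight, the time per iteration (one read from each chain) is only modestly higher than the single-chain figure; if the second miss has to wait for the first, it roughly doubles, which is the pattern the ThunderX shows above.

```c
/* Dual random read sketch (illustrative; not Tinymembench's code).
 * 'a0' and 'b0' are assumed to be the heads of two independently
 * shuffled pointer chains, built as in the earlier sketch. */
#include <stdio.h>
#include <time.h>

static double dual_chase_ns_per_iter(void **a0, void **b0, long iters)
{
    struct timespec t0, t1;
    void **a = a0, **b = b0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long n = 0; n < iters; n++) {
        a = (void **)*a;   /* chain A: depends only on the previous A load */
        b = (void **)*b;   /* chain B: independent of A, so a non-blocking
                              cache can issue this miss while A is pending */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Print the final pointers so the compiler cannot remove the loop. */
    fprintf(stderr, "%p %p\n", (void *)a, (void *)b);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 +
            (t1.tv_nsec - t0.tv_nsec)) / (double)iters;
}
```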

So how bad is this? The more complex the core, the more important a non-blocking cache becomes. The 5/6-wide Intel cores need it badly: running many instructions in parallel, prefetching data, and SMT all increase the pressure on the cache system and raise the odds of several cache misses being outstanding at once.

The simpler two-way-issue ThunderX core is probably less hampered by a blocking cache, but it is still a disadvantage, and it is something the Cavium engineers will need to fix if they want to build a more potent core and achieve better single-threaded performance. It also makes it very likely that there is no hardware prefetcher present: otherwise the prefetcher's accesses would get in the way of normal memory accesses.
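The prefetcher question can in principle be probed with the same pointer-chasing machinery; the sketch below is our own illustration, not a test from this review. The idea is to lay out one chain sequentially (fixed stride) and one randomly, both in a buffer far larger than all caches. A stream prefetcher recognizes the sequential line-by-line access pattern and fetches ahead, so the sequential chase stays close to cache latency while the random one sits at DRAM latency; without a prefetcher both end up near DRAM latency.

```c
/* Rough prefetcher check (our assumption, not the article's test):
 * build a sequential chain over 'bytes' bytes of 'buf', one element per
 * cache line, and time it with the same chase loop as the random chain. */
#include <stddef.h>

#define STRIDE 64  /* one cache line; illustrative */

static void build_sequential_chain(void **buf, size_t bytes)
{
    size_t step  = STRIDE / sizeof(void *);   /* elements per cache line */
    size_t total = bytes / sizeof(void *);

    for (size_t i = 0; i < total; i += step) {
        size_t next = i + step;
        if (next >= total)
            next = 0;                         /* wrap to close the cycle */
        buf[i] = &buf[next];
    }
}
```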

And there is no doubt that the performance of applications with big datasets will suffer. The same is true for applications that require a lot of data synchronization. To be more specific, we do not think the 48 cores will scale well when handling transactional databases (too much pressure on the L2) or fluid dynamics applications (high memory latency).

Comments

  • JohanAnandtech - Wednesday, June 15, 2016 - link

    Good suggestion. I have been using an IPMI client to manage several other servers, like the IBM servers. However, such a GUI client is still a bit more user-friendly; IPMI commands can get complicated if you don't use them regularly. The thing is that HP's and Intel's BMC GUIs are a lot easier to use and more reliable.
  • fanofanand - Wednesday, June 15, 2016 - link

    I think you may have an inaccurate figure of 141 at idle (in the graph) for the Thunder. "makes us suspect that the chip is consuming between 40 and 50W at idle, as measured at the wall"
  • JohanAnandtech - Wednesday, June 15, 2016 - link

    If you look at the "peak vs idle" column, you see 82W. At peak, we assume that a 120W TDP chip will probably need about 130W. 130W - 82W (both measured at the wall) = roughly 50W for the SoC alone at idle, measured at the wall, so anywhere between 40 and 50W in reality. My calculation is a "guesstimate", but it is clear that the Cavium chip needs much more at idle than the Intel chips (10-15W).
  • djayjp - Wednesday, June 15, 2016 - link

    Many spelling/grammar issues here. It impacts readability. Please read before posting.
  • djayjp - Wednesday, June 15, 2016 - link

    That is to say in the article.
  • mariush - Wednesday, June 15, 2016 - link

    These guys are already working on the ThunderX2 (54 cores, 3 GHz, 14nm, ARMv8) and they already have functional chips: https://www.youtube.com/watch?v=ei9uVskwPNE
  • Meteor2 - Thursday, June 16, 2016 - link

    It's always jam tomorrow, isn't it? Intel is working on new chips too, you know.
  • beginner99 - Wednesday, June 15, 2016 - link

    It loses very clearly in performance/watt to the Xeon D. In this segment that means the lower price doesn't matter, and the fact that it has a process disadvantage doesn't matter either. What counts is the end result. And I doubt it would cost $800 if made on 14/16nm. I mean, why would anyone buying this take the risk? It's a safer bet to go with Intel, also due to more flexible use (single- and multi-threaded). The latency issue is mentioned but downplayed.
  • blaktron - Wednesday, June 15, 2016 - link

    So downplayed. Anandtech desperately wants ARM servers, but it's a solution looking for a problem. Big web front ends running on bare metal are such a small percentage of the server market that developing for it seems stupid. Xeon D was already in development for SANs; they just repurposed it for Docker and nginx.
  • Senti - Wednesday, June 15, 2016 - link

    Very nice article. I especially liked the emphasis on how the test numbers relate to real-world workloads, and on what was problematic during the testing.

    It would be great to see the same style of desktop CPU review (Zen?) from you, instead of the mix of reprinted marketing hype and silly benchmark number dumps that has plagued this site for quite some time now.

    Some annoying typos here and there, like "It is clear that the ThunderX is a match for high frequency trading", but nothing really bad.
