Memory Subsystem: Latency Measurements

There is no doubt about it: the performance of modern CPUs depends heavily on the cache subsystem, and some applications depend heavily on the DRAM subsystem as well. Since the ThunderX is a completely new architecture, we decided to invest some time in understanding its cache system. We used LMBench and Tinymembench to measure the latency.

From LMBench we used the "Random load latency stride=16 Bytes" numbers. Tinymembench was compiled with -O2 on each server, and we looked at both the "single random read" and the "dual random read" tests.

LMBench offers an L1 test while Tinymembench does not, so the L1 readings are measured with LMBench. For the other levels, LMBench consistently measured 20-30% higher latency for the L2 and L3 caches and 10% higher memory latency. Since Tinymembench let us compare the latency with one outstanding request ("1 req" in the table) against the latency with two outstanding requests ("2 req" in the table), we used the Tinymembench numbers for everything beyond the L1.
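
To make the "1 req" versus "2 req" distinction concrete, the sketch below shows the pointer-chasing technique this class of benchmark is built on: walk a random cycle through a buffer far larger than the caches so that every load depends on the previous one (one outstanding request), then walk two independent chains at once to see whether the cache can overlap two outstanding misses. This is an illustrative reconstruction, not the actual LMBench or Tinymembench code; the buffer size, RNG, timing method, and the per-pair normalization of the dual result are our own assumptions.

/*
 * Minimal sketch of the pointer-chasing idea behind this class of
 * latency benchmark -- not LMBench's or Tinymembench's actual code.
 * Buffer size, RNG, timing, and normalization are illustrative choices.
 * Build with: gcc -O2 -o latsketch latsketch.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (64UL * 1024 * 1024 / sizeof(size_t)) /* 64 MB, larger than any cache here */
#define ITERS (16UL * 1024 * 1024)

static double now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e9 + ts.tv_nsec;
}

int main(void)
{
    size_t *chain = malloc(N * sizeof(size_t));
    if (!chain)
        return 1;

    /* Build one big random cycle (Sattolo's shuffle): chain[i] is the index
     * visited after i, so every load depends on the result of the previous one. */
    for (size_t i = 0; i < N; i++)
        chain[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;           /* assumes RAND_MAX >= N */
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    /* "1 req" / single random read: one dependent chain, so a new miss can
     * only start once the previous load has returned. */
    size_t p = 0;
    double t0 = now_ns();
    for (size_t i = 0; i < ITERS; i++)
        p = chain[p];
    double single_ns = (now_ns() - t0) / ITERS;

    /* "2 req" / dual random read: two independent chains started far apart.
     * A non-blocking cache can overlap the two misses; a blocking cache
     * serializes them and the time per pair roughly doubles. */
    size_t p1 = 0, p2 = N / 2;
    t0 = now_ns();
    for (size_t i = 0; i < ITERS; i++) {
        p1 = chain[p1];
        p2 = chain[p2];
    }
    double dual_ns = (now_ns() - t0) / ITERS;    /* time per pair of loads */

    /* Print the walkers so the compiler cannot discard the loops. */
    printf("single: %.1f ns  dual (per pair): %.1f ns  (%zu)\n",
           single_ns, dual_ns, p ^ p1 ^ p2);
    free(chain);
    return 0;
}

On a core with a non-blocking cache the dual figure should stay close to the single one; on a core that serializes misses it should roughly double. That is exactly the pattern in the table below.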

Mem Hierarchy                 Cavium ThunderX 2.0   Intel Xeon D   Xeon E5-2640v4 (Broadwell)   Xeon E5-2699v4 (Broadwell)
                              DDR4-2133             DDR4-2133      DDR4-2133                    DDR4-2400
L1 cache (cycles)             3                     4              4                            4
L2 cache, 1 / 2 req (cycles)  40 / 80               12             12                           12
L3 cache, 1 / 2 req (cycles)  N/A                   40 / 44        38 / 43                      48 / 57
Memory, 1 / 2 req (ns)        103 / 206             64 / 80        66 / 81                      57 / 75

The ThunderX's shallow pipeline and relatively modest OOO capabilities are best served by a low-latency L1 cache, and Cavium does not disappoint with a 3-cycle L1. Intel's L1 needs one cycle more, but considering that the Broadwell core has massive OOO buffers, this is not a problem at all.

But then things get really interesting. The L1 cache of the ThunderX does not seem to support multiple outstanding misses: a second cache miss has to wait until the first one has been handled. Things get ugly when accessing memory: not only is the latency of a DDR4-2133 access much higher, but the second miss again has to wait for the first one. So a second cache miss results in twice as much latency.

The Intel cores do not have this problem: a second request adds only 20 to 30% to the latency.

So how bad is this? The more complex the core, the more important a non-blocking cache becomes. The 5/6-wide Intel cores need it badly, as running many instructions in parallel, prefetching data, and SMT all increase the pressure on the cache system and raise the chance of multiple cache misses being outstanding at once.

The simpler two-way-issue ThunderX core is probably less hampered by a blocking cache, but it is still a disadvantage, and it is something the Cavium engineers will need to fix if they want to build a more potent core and achieve better single-threaded performance. It also makes it very likely that there is no hardware prefetcher present: otherwise the prefetcher would get in the way of normal memory accesses.
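
To illustrate what a prefetcher, hardware or software, is supposed to buy you, here is a small hypothetical sketch using GCC/Clang's __builtin_prefetch on a gather pattern that a stride-based hardware prefetcher could not predict; the function, arrays, and prefetch distance are made up for the example. The point is simply that a fetch for a future iteration is started while useful work continues. On a cache that cannot track multiple outstanding misses, such prefetches would gain very little, which is consistent with the observation above.

/*
 * Hypothetical illustration of software prefetching with GCC/Clang's
 * __builtin_prefetch on a gather pattern that a stride-based hardware
 * prefetcher could not predict. Function, arrays, and the prefetch
 * distance are made up for the example.
 */
#include <stddef.h>

double gather_sum(const double *values, const size_t *index, size_t n)
{
    const size_t dist = 16;      /* how many elements ahead to prefetch */
    double sum = 0.0;

    for (size_t i = 0; i < n; i++) {
        if (i + dist < n)        /* start a future miss early...         */
            __builtin_prefetch(&values[index[i + dist]], 0 /* read */, 0);
        sum += values[index[i]]; /* ...while doing useful work right now */
    }
    return sum;
}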

And there is no doubt that the performance of applications with big datasets will suffer, as will applications that require a lot of data synchronization. To be more specific, we do not think the 48 cores will scale well when handling transactional databases (too much pressure on the L2) or fluid dynamics applications (high memory latency).

Comments

  • Daniel Egger - Wednesday, June 15, 2016 - link

    I could hardly disagree more about the remote management of SuperMicro vs. HP. Remote management of HP is *the horror*, I've never seen worse and I've seen a lot. It's clunky, it requires a license to be useful (others do too, but SuperMicro has no such nonsense), the BMC tends to crash a lot (which is very annoying for a remote management solution), boot is even slower than on all other systems I know due to the way they integrate the BIOS and remote management, and it also uses Java unless you have Windows machines around to use the .NET version.

    For the remote management alone I would choose SuperMicro over most other vendors any day.
  • JohanAnandtech - Thursday, June 16, 2016 - link

    I found the .Net client of HP much less sluggish, and I have seen no crashing at all. I guess there is no optimal remote management client, but I really like the "boot into firmware" option that Intel implemented.
  • rahvin - Thursday, June 16, 2016 - link

    Not only that, but Supermicro actually releases updates for their BMCs. I had the same shocked reaction to the HP claim. Started to wonder if I was the only one who thought Supermicro was light years ahead in usability.

    I should note that Supermicro's awful Java tool works on Linux as well as Windows. Though it refuses to run if your Java isn't the newest version available.
  • pencea - Wednesday, June 15, 2016 - link

    All these articles and yet still no review of the GTX 1080, while other major sites have already posted their reviews of both the 1070 and 1080. Guru3D already has two custom 1080 reviews and a custom 1070 review up.
  • Ryan Smith - Wednesday, June 15, 2016 - link

    It'll be done when it's done.
  • pencea - Wednesday, June 15, 2016 - link

    Unacceptably late for something that should've been posted weeks ago.
  • Meteor2 - Thursday, June 16, 2016 - link

    Will anyone read it though? Your ad impressions are going to suffer.
  • Ryan Smith - Thursday, June 16, 2016 - link

    Maybe. Maybe not. But it's my own fault regardless. All I can do is get it done as soon as I reasonably can, and hope it's something you guys find useful.
  • name99 - Thursday, June 16, 2016 - link

    Give it a freaking rest. No-one is impressed by your constant whining about this.
  • pencea - Thursday, June 16, 2016 - link

    Not looking to impress anyone. As a long-time viewer of this site, I'm simply disappointed that a reputable site like this is constantly late with GPU reviews.
