Memory Subsystem: Latency Measurements

There is no doubt about it: the performance of modern CPUs depends heavily on the cache subsystem, and some applications depend heavily on the DRAM subsystem too. Since the ThunderX is a completely new architecture, we decided to invest some time in understanding its cache hierarchy. We used LMBench and Tinymembench to measure latency.

From LMBench we looked at the "Random load latency stride=16 Bytes" numbers. Tinymembench was compiled with -O2 on each server, and there we looked at both the "single random read" and the "dual random read" results.

LMBench offers an L1 test while Tinymembench does not, so the L1 readings were measured with LMBench. However, LMBench consistently measured 20-30% higher latency for the L2 and L3 caches, and about 10% higher readings for memory latency. Since Tinymembench also let us compare the latency with one outstanding request ("1 req" in the table) to that with two outstanding requests ("2 req" in the table), we used the Tinymembench numbers for everything beyond the L1.
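For readers curious about the mechanics behind these numbers, below is a minimal C sketch of the dependent pointer chase that latency benchmarks of this type are built around. It is our own illustration, not the LMBench or Tinymembench source; the 16 MB buffer size and iteration count are arbitrary assumptions, and in practice you would vary the buffer size to walk through the L1, L2, L3, and DRAM regions.

```c
/* Minimal pointer-chase latency sketch -- an illustration of the idea behind
 * these benchmarks, not the actual LMBench or Tinymembench code.
 * Every load depends on the previous one, so only one request is outstanding
 * at a time and the average time per iteration approximates the load-to-use
 * latency of whichever level of the hierarchy the buffer fits in. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ENTRIES ((16 * 1024 * 1024) / sizeof(void *)) /* ~16 MB working set (assumption) */
#define ITERS   (50 * 1000 * 1000L)

int main(void)
{
    void **buf = malloc(ENTRIES * sizeof(void *));
    size_t *idx = malloc(ENTRIES * sizeof(size_t));
    size_t i;

    /* Build one random cycle through the buffer so that spatial locality
     * and any hardware prefetcher cannot hide the miss latency. */
    for (i = 0; i < ENTRIES; i++)
        idx[i] = i;
    for (i = ENTRIES - 1; i > 0; i--) {            /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (i = 0; i < ENTRIES; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % ENTRIES]];

    struct timespec t0, t1;
    void **p = buf;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long n = 0; n < ITERS; n++)
        p = (void **)*p;                           /* serial chain: one miss at a time */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    /* Printing p keeps the compiler from optimizing the chase loop away. */
    printf("avg load-to-use latency: %.1f ns (%p)\n", ns / ITERS, (void *)p);
    free(idx);
    free(buf);
    return 0;
}
```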

Mem Hierarchy                 Cavium ThunderX 2.0   Intel Xeon D   Intel Xeon E5-2640v4   Intel Xeon E5-2699v4
                              DDR4-2133             DDR4-2133      (Broadwell) DDR4-2133  (Broadwell) DDR4-2400
L1-cache (cycles)             3                     4              4                      4
L2-cache, 1 / 2 req (cycles)  40/80                 12             12                     12
L3-cache, 1 / 2 req (cycles)  N/A                   40/44          38/43                  48/57
Memory, 1 / 2 req (ns)        103/206               64/80          66/81                  57/75

The ThunderX's shallow pipeline and relatively modest OOO capabilities are best served by a low latency L1-cache, and Cavium does not disappoint with a 3-cycle L1. Intel's L1 needs one cycle more, but considering that the Broadwell core has massive OOO buffers, this is not a problem at all.

But then things get really interesting. The L1-cache of the ThunderX does not seem to support multiple outstanding misses: a second cache miss has to wait until the first one has been handled. Things get ugly when accessing memory: not only is the latency of accessing DDR4-2133 much higher, but once again the second miss has to wait for the first. As a result, a second cache miss costs twice the latency.

The Intel cores do not have this problem: a second request incurs only 20 to 30% higher latency.
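The gap between the "1 req" and "2 req" columns comes down to memory-level parallelism, and it can be illustrated by chasing two independent pointer chains in one loop. The sketch below is our own illustration (it assumes two chains built exactly like the single chain in the earlier sketch), not Tinymembench's dual random read code: on a core with non-blocking caches the two misses per iteration overlap and the time per pair stays close to the single-chain latency, whereas a blocking cache serializes them and the time roughly doubles, which is exactly the 103/206 ns pattern the ThunderX shows.

```c
/* Sketch of the "two outstanding requests" case. chain_a and chain_b are
 * assumed to be two independent random cyclic pointer chains, built the
 * same way as the single chain in the earlier sketch. */
#include <stdio.h>
#include <time.h>

static double chase_two(void **chain_a, void **chain_b, long iters)
{
    struct timespec t0, t1;
    void **a = chain_a, **b = chain_b;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long n = 0; n < iters; n++) {
        a = (void **)*a;   /* miss #1 */
        b = (void **)*b;   /* miss #2: independent of miss #1, so a cache that
                              supports multiple outstanding misses can service
                              both in parallel; a blocking cache handles them
                              one after the other */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    /* Using a and b here keeps the compiler from deleting the loop. */
    fprintf(stderr, "%p %p\n", (void *)a, (void *)b);
    return ns / iters;     /* time per pair of loads */
}
```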

So how bad is this? The more complex the core, the more important a non-blocking cache becomes. The 5/6-wide Intel cores need it badly: running many instructions in parallel, prefetching data, and SMT all increase the pressure on the cache system and raise the chance of having multiple cache misses outstanding at once.

The simpler two-way-issue ThunderX core is probably less hampered by a blocking cache, but it is still a disadvantage, and something the Cavium engineers will need to fix if they want to build a more potent core and achieve better single-threaded performance. It also makes it very likely that there is no hardware prefetcher present: otherwise the prefetcher would get in the way of the normal memory accesses.
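One way to test the no-prefetcher hypothesis (our own suggested experiment; nothing Cavium has confirmed) is to touch exactly one new cache line per iteration, first in address order and then in a randomly permuted order. The number of line fills is identical in both runs, so a large gap between them indicates that something, most likely a hardware prefetcher, is recognizing the linear pattern. The sketch below assumes a 64-byte cache line.

```c
/* Sketch: same number of cache-line misses, two different access orders.
 * Call once with order[] = 0,1,2,... and once with the same indices shuffled;
 * time both calls. If the sequential run is much faster, the linear pattern
 * is being prefetched. Assumes a 64-byte cache line. */
#include <stdint.h>
#include <stddef.h>

#define LINE 64

static uint64_t touch_lines(const uint8_t *buf, const size_t *order, size_t nlines)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < nlines; i++)
        sum += buf[order[i] * LINE];   /* one new cache line per iteration */
    return sum;                        /* returning sum keeps the loads alive */
}
```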

And there is no doubt that the performance of applications with big datasets will suffer. The same is true for applications that require a lot of data synchronization. To be more specific, we do not think the 48 cores will scale well when handling transactional databases (too much pressure on the L2) or fluid dynamics applications (high memory latency).

Comments

  • willis936 - Thursday, June 16, 2016 - link

    Are you sure that there are more cores at lower clocks to keep voltage lower? Power consumption is proportional to v^2*f.
  • ddriver - Friday, June 17, 2016 - link

    Say what? Go back, read my previous post again, and if you are going to respond, make sure it is legible.
  • willis936 - Friday, June 17, 2016 - link

    Alright well if you don't understand why many slower cores are more power efficient even if there was a 0 cycle penalty on context switching then you aren't worth having this discussion with.
  • blaktron - Wednesday, June 15, 2016 - link

    48 cores of server processing on 16mb of l2 and 4 channels of RAM? What is this thing designed for? Will be like running single channel celerons as server processors, so decent hypervisor hosts are out, and so is any database work more complex than dynamic web pages.
  • Haravikk - Wednesday, June 15, 2016 - link

    Facebook is specifically mentioned as being interested in this, so dynamic web-pages is definitely a valid use-case here. HHVM for example is pretty light on memory usage (so is PHP7 now), especially in high demand cases where you're really only running a single set of scripts, probably cached in a compiled form, plus both scale really well across as many cores as you can throw at them.

    Things like nginx and MariaDB will be the same, so they're absolutely intended use-cases for this kind of chip, and I think it should be very good at it.
  • blaktron - Wednesday, June 15, 2016 - link

    With no L3 and slow RAM access I'm not sure where you think the scripts will cache. Assuming you ran them on bare metal (horrifying waste of compute) there would be enough, but if you had docker instances or quick spin vms doing your work (as 99% of web servers are) then each instance will only get the tiniest slice of cache to work with. It would be like running your servers, as I said, on a bank of celerons. Except celerons have L3 and don't carry 12 cores per memory channel.
  • spaceship9876 - Wednesday, June 15, 2016 - link

    Hopefully someone will release a server chip using 64 cortex A73 cpu cores, i'm pretty sure the cortex a73 will be more power efficient than xeon d. Xeon d beats cortex a57 in power efficiency but i'm pretty sure that cortex a72 will be similar and cortex a73 will beat it.
  • Flunk - Wednesday, June 15, 2016 - link

    ARM with ambition?

    I've heard that before, nothing came of it.
  • CajunArson - Wednesday, June 15, 2016 - link

    Interesting article. This does appear to be the first semi-credible part from an ARM server vendor.

    Having said that, the energy efficiency table at the end should put to rest any misconceived notions that ARM is somehow magically energy efficient while X86 isn't.

    Considering that Xeon E5-2690 v3 is a 4.5 year old Sandy Bridge part made on a 32 nm process and it still has better performance-per-watt than the best ARM server parts available in 2016, it's pretty obvious that Intel has done an excellent job with power efficiency.
  • kgardas - Wednesday, June 15, 2016 - link

    2 CajunArson: (1) you can't compare energy efficiency of CPUs made on different nodes. 28nm versus 14nm? This is apples to oranges. (2) Xeon E5-2690 *v3* is Haswell and not Sandy Bridge, and it's definitely not 4.5 years old.
