Memory Subsystem: Bandwidth

Measuring the full bandwidth potential of a system with John McCalpin's STREAM benchmark is getting increasingly difficult on the latest CPUs, as core and memory channel counts have continued to grow. As the results below show, it is not easy to measure bandwidth: the results vary wildly depending on the settings you choose.

Memory: STREAM Bandwidth

System | Compiler & OS settings | Result
Cavium ThunderX2 | GCC 7.2 binary, -O2 -mcmodel=large -fopenmp -DVERBOSE -fno-PIC, OMP_PROC_BIND=spread | 241 GB/s
Cavium ThunderX2 | GCC 7.2 binary, -Ofast -fopenmp -static, OMP_PROC_BIND=spread | 157 GB/s
Cavium ThunderX2 | GCC 7.2 binary, OMP_PROC_BIND not configured | 118 GB/s
Intel Xeon 8176 | ICC binary, -fast -qopenmp -parallel, KMP_AFFINITY=verbose,scatter | 183 GB/s
Intel Xeon 8176 | GCC binary, -Ofast -fopenmp -static, OMP_PROC_BIND=spread | 151 GB/s
Intel Xeon 8176 | GCC binary, -Ofast -fopenmp -static, OMP_PROC_BIND not configured | 150 GB/s

Theoretically, the ThunderX2 has 33% more bandwidth available than an Intel Xeon, as the SoC has eight memory channels compared to Intel's six. However, these high bandwidth numbers can only be achieved under very specific conditions and require quite a bit of tuning to avoid hitting remote memory. In particular, we have to ensure that threads do not migrate from one socket to the other.

We first tried to achieve the best results on both architectures. In the case of Intel, the ICC compiler always produced better results thanks to some low-level optimizations inside the STREAM loops. In the case of Cavium, we followed Cavium's own instructions. Strictly speaking, these results are therefore not directly comparable, but they should give you an idea of what kind of bandwidth these CPUs can achieve at their respective peaks. To be fair to Intel, with ideal settings (AVX-512) you should be able to achieve 200 GB/s.
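To make these tuning knobs concrete, below is a minimal STREAM-style triad sketch in C with OpenMP (our own illustration, not the official benchmark). It shows where the compiler flags, first-touch NUMA allocation, and OMP_PROC_BIND thread placement from the table above come into play; the array size and thread count are arbitrary choices for the sketch.

```c
/* Minimal STREAM-style triad sketch (not the official benchmark).
 * Compile e.g.:  gcc -Ofast -fopenmp -static triad.c -o triad
 * Run with explicit thread placement, e.g.:
 *   OMP_PROC_BIND=spread OMP_NUM_THREADS=64 ./triad
 * so threads stay on their socket and stream from local DRAM. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (80 * 1000 * 1000)   /* large enough to overflow all caches */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double scalar = 3.0;

    /* First-touch initialization: each thread touches the pages it will
     * later stream through, so they are allocated on its local NUMA node. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        c[i] = a[i] + scalar * b[i];          /* triad: two reads, one write */
    double t1 = omp_get_wtime();

    double bytes = 3.0 * N * sizeof(double);  /* bytes moved per iteration sweep */
    printf("Triad: %.1f GB/s\n", bytes / (t1 - t0) / 1e9);

    free(a); free(b); free(c);
    return 0;
}
```

Running the same binary with and without OMP_PROC_BIND set is enough to reproduce the kind of spread seen in the table: without pinning, threads migrate and end up reading from the remote socket's memory.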

Nevertheless, it is clear that the ThunderX2 system can deliver between 15% and 28% more bandwidth to its CPU cores. This works out to 235 GB/s for the dual-socket system, or about 120 GB/s per socket, which in turn is about three times more than what the original ThunderX was capable of.

Memory Subsystem: Latency

While bandwidth measurements are only relevant to a small part of the server market, almost every application is heavily impacted by the latency of the memory subsystem. To that end, we used LMBench to measure cache and memory latency. The numbers we looked at are the "Random load latency stride=16 Bytes" results. Note that we express the L3-cache and DRAM latency in nanoseconds, since we do not have accurate L3-cache clockspeed values.

Memory: LMBench Latency

Mem Hierarchy | Cavium ThunderX (DDR4-2133) | Cavium ThunderX2 (DDR4-2666) | Intel Skylake 8176 (DDR4-2666)
L1 cache (cycles) | 3 | 4 | 4
L2 cache (cycles) | 40/80 (*) | 8-9 | 12
L3 cache, 4-8 MB (ns) | N/A | 27-30 ns | 24-29 ns
Memory, 384-512 MB (ns) | 103/206 (*) | 156-157 ns | 89-91 ns

The L2 cache of the ThunderX2 is accessed with very little latency, and with a single thread running, its L3 cache is competitive with Intel's complex L3 cache. Once we hit DRAM, however, Intel offers significantly lower latency.
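As a point of reference for how such random-load latency numbers are obtained, the sketch below (our own illustration, not LMBench code) chases a randomized pointer chain through a DRAM-sized buffer. Each load depends on the previous one, so the time per iteration approximates the load-to-use latency of whichever level of the hierarchy the buffer fits in; the 64 MB buffer size is an arbitrary choice for the sketch.

```c
/* Pointer-chase latency sketch (illustration only, not LMBench). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t n = 8 * 1024 * 1024;        /* 64 MB of pointers: DRAM-sized */
    void **buf = malloc(n * sizeof(void *));

    /* Build a random cyclic permutation so the hardware prefetchers
     * cannot predict the next address. */
    size_t *idx = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {     /* Fisher-Yates shuffle */
        size_t j = rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % n]];

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    void **p = (void **)buf[0];
    for (size_t i = 0; i < n; i++)           /* serial chain of dependent loads */
        p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per load (p=%p)\n", ns / n, (void *)p);

    free(idx); free(buf);
    return 0;
}
```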

Memory Subsystem: TinyMemBench

To get a deeper understanding of the respective architectures, we also ran the open source TinyMemBench benchmark. The source code was compiled with GCC 7.2 and the optimization level was set to "-O3". The benchmark's testing strategy is described rather well in its manual:

Average time is measured for random memory accesses in the buffers of different sizes. The larger the buffer, the more significant the relative contributions of TLB, L1/L2 cache misses, and DRAM accesses become. All the numbers represent extra time, which needs to be added to L1 cache latency (4 cycles).

We tested with single and dual random read (no huge pages), as we wanted to see how the memory system coped with multiple read requests. 

One of the major weaknesses of the original ThunderX was that it did not support multiple outstanding misses. Memory-level parallelism is an important feature for any high-performance modern CPU core: it allows the core to overlap several cache misses instead of letting each one stall the wide back end. A non-blocking cache is thus a key feature for wide cores.

The ThunderX2 does not suffer from that problem at all, thanks to its non-blocking cache. Just like the Skylake core in the Xeon 8176, a second read causes the overall latency to increase by only 15-30%, not 100%. According to TinyMemBench, the Skylake core nevertheless has tangibly better latencies. The data point at 512 KB is easy to explain: the Skylake core is still fetching from its fast L2, while the ThunderX2 core has to access its L3. But the numbers at 1 and 2 MB indicate that Intel's prefetchers offer a serious advantage, as the measured latency sits between that of the L2 and the L3 cache. Around 8 to 16 MB the latency numbers are close, but once we go beyond the L3 (64 MB), Intel's Skylake offers lower memory latencies.
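The "dual random read" scenario can be illustrated with a small variation on the pointer-chase idea above: walking two independent chains in the same loop. On a core with a non-blocking cache and enough memory-level parallelism, the two misses per iteration overlap, so time per iteration rises far less than 2x compared to a single chain. This is our own sketch, not TinyMemBench's code; buffer sizes are arbitrary.

```c
/* Dual pointer-chase sketch (illustration only): two independent chains. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void **make_chain(size_t n)            /* random cyclic chain of n pointers */
{
    void **buf = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {      /* Fisher-Yates shuffle */
        size_t j = rand() % (i + 1), t = idx[i];
        idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % n]];
    free(idx);
    return buf;
}

int main(void)
{
    const size_t n = 8 * 1024 * 1024;         /* 64 MB per chain: DRAM-sized */
    void **c1 = make_chain(n), **c2 = make_chain(n);
    void **p1 = (void **)c1[0], **p2 = (void **)c2[0];

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < n; i++) {
        p1 = (void **)*p1;                    /* the two loads are independent,  */
        p2 = (void **)*p2;                    /* so the core can issue both misses */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per iteration (%p %p)\n", ns / n, (void *)p1, (void *)p2);
    free(c1); free(c2);
    return 0;
}
```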

Comments

  • JohanAnandtech - Thursday, May 24, 2018 - link

    I have been troubleshooting a Java problem for the last 3 weeks now - for some reason our specific EPYC test system has some serious performance issues after we upgraded to kernel 4.13. This might be a hardware/firmware... issue. I don't know. I just know that the current tests are not accurate.
  • junky77 - Thursday, May 24, 2018 - link

    What? A 2.5GHz ARM core is around 60-70% of a 3.8GHz Skylake core?? At 3.8GHz, the ARM core would probably be at least as fast?
  • Wilco1 - Thursday, May 24, 2018 - link

    Probably around 90% since performance doesn't scale linearly with frequency. Note these are throughput parts so won't clock that high. However a 7nm version might well reach 3GHz.
  • AJ_NEWMAN - Thursday, May 24, 2018 - link

    If Cavium's tweaked 16nm part hits 3GHz - it wouldn't be unreasonable to aim for 4GHz for a 7nm part.

    With 2.3 times as many transistors available - it will be interesting to see what else they beef up?

    Higher IPC? 64 cores? 16 memory controllers? CCIX - or perhaps they will compete with Fujitsu and add some supercomputer-centric hardware?

    AJ
  • meta.x.gdb - Thursday, May 31, 2018 - link

    Wonder why the VASP code limped along on ThunderX2 while OpenFOAM saw such gains. I'm pretty familiar with both codes. VASP is mostly doing density functional theory, which is FFT-heavy...
  • Meteor2 - Tuesday, June 26, 2018 - link

    All I want to say (all I can say) is that Anandtech has some of the best writers and commenters in this field. Fantastic article, and fantastic discussion.
  • paldU - Saturday, July 7, 2018 - link

    A typo in Page 2. "it terms of performance per dollar" should be " in terms of performance per dollar".
