Memory Subsystem: Bandwidth

Measuring the full bandwidth potential of a system with John McCalpin's STREAM bandwidth benchmark is getting increasingly difficult on the latest CPUs, as core and memory channel counts have continued to grow. As you can see from the results below, it is not easy to measure bandwidth: the results vary wildly depending on the compiler, the optimization flags, and the thread affinity settings you choose.

Memory: STREAM Bandwidth

System            | Compiler & OS settings                                                               | Result
Cavium ThunderX2  | GCC 7.2 binary, -O2 -mcmodel=large -fopenmp -DVERBOSE -fno-PIC, OMP_PROC_BIND=spread | 241 GB/s
Cavium ThunderX2  | GCC 7.2 binary, -Ofast -fopenmp -static, OMP_PROC_BIND=spread                        | 157 GB/s
Cavium ThunderX2  | GCC 7.2 binary, OMP_PROC_BIND not configured                                         | 118 GB/s
Intel Xeon 8176   | ICC binary, -fast -qopenmp -parallel, KMP_AFFINITY=verbose,scatter                   | 183 GB/s
Intel Xeon 8176   | GCC binary, -Ofast -fopenmp -static, OMP_PROC_BIND=spread                            | 151 GB/s
Intel Xeon 8176   | GCC binary, -Ofast -fopenmp -static, OMP_PROC_BIND not configured                    | 150 GB/s

Theoretically, the ThunderX2 has 33% more bandwidth available than an Intel Xeon, as the SoC has eight DDR4-2666 memory channels compared to Intel's six (roughly 171 GB/s versus 128 GB/s of theoretical peak per socket). These high bandwidth numbers can only be achieved under very specific conditions and require quite a bit of tuning to avoid reaching out to remote memory. In particular, we have to ensure that threads don't migrate from one socket to the other.
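
To make that tuning more concrete, below is a minimal STREAM-style triad sketch (an illustrative stand-in, not McCalpin's actual source; the array size, thread count, and build command in the comments are assumptions that simply mirror the flags in the table above). The kernel itself is trivially parallel, so the measured number is dominated by where the OS places the threads and their memory pages, which is exactly what OMP_PROC_BIND and first-touch initialization control.

    /* triad.c - minimal STREAM-style triad sketch (illustrative, not the official
     * STREAM source). Hypothetical build and run, mirroring the table above:
     *   gcc -Ofast -fopenmp -static triad.c -o triad
     *   OMP_NUM_THREADS=64 OMP_PROC_BIND=spread ./triad
     */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 80000000L     /* ~640 MB per array: far larger than any cache */

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double *c = malloc(N * sizeof(double));
        const double scalar = 3.0;

        /* First-touch initialization: each thread touches the pages it will later
         * stream through, so they are allocated on that thread's local NUMA node. */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.5; }

        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + scalar * c[i];
        double t1 = omp_get_wtime();

        /* The triad touches three 8-byte doubles per element (two reads, one write). */
        double gbytes = 3.0 * N * sizeof(double) / 1e9;
        printf("Triad: %.1f GB/s\n", gbytes / (t1 - t0));

        free(a); free(b); free(c);
        return 0;
    }

If OMP_PROC_BIND is left unset, threads can be packed onto a single socket or migrate mid-run, so a large share of the loads has to cross the socket interconnect to remote memory; that is consistent with the untuned runs above dropping to 118-150 GB/s.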

We first tried to achieve the best results on both architectures. In the case of Intel, the ICC compiler always produced better results thanks to some low-level optimizations inside the STREAM loops. In the case of Cavium, we followed Cavium's own instructions. So strictly speaking these results are not directly comparable, but they should give you an idea of what kind of bandwidth these CPUs can achieve at their respective peaks. To be fair to Intel, with ideal settings (AVX-512) you should be able to achieve 200 GB/s.

Nevertheless, it is clear that the ThunderX2 system can deliver between 15% and 28% more bandwidth to its CPU cores. This works out to 235 GB/sec, or about 120 GB/sec per socket, which in turn is about three times more than what the original ThunderX was capable of.

Memory Subsystem: Latency

While bandwidth measurements are only relevant to a small part of the server market, almost every application is heavily impacted by the latency of the memory subsystem. To that end, we used LMBench to measure cache and memory latency. The numbers we looked at were "Random load latency stride=16 Bytes". Note that we're expressing the L3-cache and DRAM latency in nanoseconds since we don't have accurate L3-cache clockspeed values.
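
For readers who have not used this kind of test, the idea behind a random load-latency measurement can be sketched in a few lines of C (an illustrative toy, not LMBench's lat_mem_rd source; the buffer sizes and iteration count are arbitrary assumptions). A buffer is turned into a randomly shuffled pointer chain, and because every load depends on the previous one, the time per iteration approximates the load-to-use latency of whichever level of the hierarchy the buffer fits in.

    /* chase.c - minimal random pointer-chase sketch (illustrative only, not the
     * LMBench lat_mem_rd source). Hypothetical build: gcc -O2 chase.c -o chase */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(int argc, char **argv)
    {
        /* Buffer size in KB, e.g. 32 (L1), 256 (L2), 8192 (L3), 262144 (DRAM). */
        size_t kb = (argc > 1) ? strtoul(argv[1], NULL, 10) : 262144;
        size_t n  = kb * 1024 / sizeof(void *);
        void **buf   = malloc(n * sizeof(void *));
        size_t *perm = malloc(n * sizeof(size_t));

        /* Shuffle the indices, then link the buffer into one random cyclic chain. */
        for (size_t i = 0; i < n; i++) perm[i] = i;
        srand(1);
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand() % (i + 1);
            size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }
        for (size_t i = 0; i < n; i++)
            buf[perm[i]] = &buf[perm[(i + 1) % n]];

        /* Walk the chain: every load depends on the previous one, so there is no
         * memory-level parallelism and the loop time is pure load-to-use latency. */
        const long iters = 100000000;
        void **p = (void **)buf[0];
        double t0 = now_sec();
        for (long i = 0; i < iters; i++) p = (void **)*p;
        double t1 = now_sec();

        printf("%zu KB: %.1f ns per load (%p)\n", kb, (t1 - t0) / iters * 1e9, (void *)p);
        free(buf); free(perm);
        return 0;
    }

LMBench's lat_mem_rd automates this kind of measurement over a sweep of buffer sizes and strides; the sketch simply shows why the measured time jumps each time the buffer spills out of a cache level.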

Memory: LMBench Latency

Mem Hierarchy          | Cavium ThunderX DDR4-2133 | Cavium ThunderX2 DDR4-2666 | Intel Skylake 8176 DDR4-2666
L1-cache (cycles)      | 3                         | 4                          | 4
L2-cache (cycles)      | 40/80 (*)                 | 8-9                        | 12
L3-cache 4-8 MB (ns)   | N/A                       | 27-30 ns                   | 24-29 ns
Memory 384-512 MB (ns) | 103/206 (*)               | 156-157 ns                 | 89-91 ns

The L2-cache of the ThunderX2 is accessed with very little latency, and with a single thread running, the L3-cache is competitive with Intel's complex L3 cache. Once we hit DRAM, however, Intel offers significantly lower latency.

Memory Subsystem: TinyMemBench

To get a deeper understanding of the respective architectures, we also ran the open source TinyMemBench benchmark. The source code was compiled with GCC 7.2 and the optimization level was set to "-O3". The benchmark's testing strategy is described rather well in its manual:

Average time is measured for random memory accesses in the buffers of different sizes. The larger the buffer, the more significant the relative contributions of TLB, L1/L2 cache misses, and DRAM accesses become. All the numbers represent extra time, which needs to be added to L1 cache latency (4 cycles).

We tested with single and dual random read (no huge pages), as we wanted to see how the memory system coped with multiple read requests. 
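
The difference between the single and dual random read tests can be pictured with a small extension of the pointer-chase idea (again an illustrative sketch, not TinyMemBench's code; sizes and iteration counts are arbitrary assumptions): two independent chains are walked in the same loop, so a core that can keep multiple cache misses in flight can overlap the two loads, while a core limited to one outstanding miss roughly doubles its time per iteration.

    /* dual_chase.c - single vs. dual random pointer chase (illustrative, not the
     * TinyMemBench source). Hypothetical build: gcc -O2 dual_chase.c -o dual_chase */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    /* Link an array of n pointers into one randomly shuffled cyclic chain. */
    static void build_chain(void **buf, size_t n, unsigned seed)
    {
        size_t *perm = malloc(n * sizeof(size_t));
        for (size_t i = 0; i < n; i++) perm[i] = i;
        srand(seed);
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand() % (i + 1);
            size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }
        for (size_t i = 0; i < n; i++)
            buf[perm[i]] = &buf[perm[(i + 1) % n]];
        free(perm);
    }

    int main(void)
    {
        const size_t n = (256u << 20) / sizeof(void *);  /* 256 MB chains: DRAM-sized */
        const long iters = 50000000;
        void **a = malloc(n * sizeof(void *));
        void **b = malloc(n * sizeof(void *));
        build_chain(a, n, 1);
        build_chain(b, n, 2);

        /* Single chase: one dependent chain, one outstanding miss at a time. */
        void **pa = (void **)a[0];
        double t0 = now_sec();
        for (long i = 0; i < iters; i++) pa = (void **)*pa;
        double t1 = now_sec();

        /* Dual chase: two independent chains, so a non-blocking cache can keep
         * two misses in flight and the time per iteration barely increases. */
        pa = (void **)a[0];
        void **pb = (void **)b[0];
        double t2 = now_sec();
        for (long i = 0; i < iters; i++) { pa = (void **)*pa; pb = (void **)*pb; }
        double t3 = now_sec();

        printf("single: %.1f ns/iter, dual: %.1f ns/iter (%p %p)\n",
               (t1 - t0) / iters * 1e9, (t3 - t2) / iters * 1e9, (void *)pa, (void *)pb);
        free(a); free(b);
        return 0;
    }

TinyMemBench reports this pair of numbers for each buffer size, which is what the single versus dual read comparison below refers to.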

One of the major weaknesses of the original ThunderX was that it did not support multiple outstanding misses. Memory-level parallelism is an important feature for any high-performance modern CPU core: it lets the core keep several cache misses in flight instead of letting each miss starve the wide back end. A non-blocking cache is thus a key feature for wide cores.

The ThunderX2 does not suffer from that problem at all, thanks to its non-blocking cache. Just like the Skylake core in the Xeon 8176, a second read causes the overall latency to increase by only 15-30%, and not 100%. According to TinyMemBench, the Skylake core has tangibly better latencies. The datapoint at 512 KB is of course easy to explain: the Skylake core is still fetching from its fast L2, while the ThunderX2 core has to access its L3. But the numbers at 1 and 2 MB indicate that Intel's prefetchers offer a serious advantage, as the latency stays close to an average of the L2 and the L3 cache. Around 8 to 16 MB, the latency numbers are close, but once we go beyond the L3 (64 MB), Intel's Skylake offers lower memory latencies.

Comments

  • DrizztVD - Wednesday, May 23, 2018 - link

    It amazes me how the one big advantage ARM could have is power efficiency, yet there are no power efficiency numbers in this review. It's like someone just isn't thinking about what can best showcase the ARM advantage and testing it.
  • boeush - Thursday, May 24, 2018 - link

    You must have missed this bit:

    "So as is typically the case for early test systems, we are not able to do any accurate power comparisons.

    In fact, Cavium claims that the actual systems from HP, Gigabyte and others will be far more power efficient."

    This was an early (and apparently quite buggy, especially from the power management standpoint) test system. It's not representative of final production systems in these respects, so doing what you request on it would only put a very crude lower bound on efficiency, at best.

    That's why the final section of the write-up has a title ending in ": so far"... (obviously, there will be more to come if/when real production-quality systems are available for benchmarking/analysis.)
  • ZolaIII - Thursday, May 24, 2018 - link

    It's broken currently on the MB. If you want to see real power/performance metrics for a SoC made on comparable lithography to Intel's 14 nm (aka TSMC 10nm) & with optimised software, read this:
    https://blog.cloudflare.com/neon-is-the-new-black/
  • drwho9437 - Wednesday, May 23, 2018 - link

    Thanks Johan, I've been reading since Ace's. I can't believe it has been almost 20 years. Even though I don't work in this market I still read everything you write.
  • JohanAnandtech - Friday, May 25, 2018 - link

    It was indeed almost 20 years ago that I published my first article about the K6-2 vs Pentium MMX. And Anand's star was about to rise with the launch of the K6-3 :-).
  • Spatz - Wednesday, May 30, 2018 - link

    Wow. Ace's Hardware... that used to be my go-to for hardware reviews back in the day. I can't believe you're still at it! This article was great. Keep up the good work.
  • beginner99 - Thursday, May 24, 2018 - link

    So it for sure is an option. However, I do not get the focus on price. The CPU cost is a small fraction of the total server cost, and a tiny one if infrastructure cost (network, HVAC, ...) is included. Add to that the software and data running on that server, and if your CPU is 5% faster at the same power, it costing $5000 more might be totally worth it.
  • Apple Worshipper - Thursday, May 24, 2018 - link

    Errmm... does ARM feature SMT now?
  • Ryan Smith - Thursday, May 24, 2018 - link

    Not in Arm's own cores. But in Cavium's ThunderX2, yes.
  • sgeocla - Thursday, May 24, 2018 - link

    What's up with the EPYC comparison missing from almost all benchmarks?
    EPYC has been out for a while, and the only benchmarks are from almost a year ago?
