Memory Subsystem: Bandwidth

As we have reported before, measuring the full bandwidth potential with John McCalpin's Stream bandwidth benchmark has become a matter of extreme tuning, requiring a very deep understanding of the platform. 

With our previous binaries, neither the first- nor the second-generation EPYC could get past 200-210 GB/s, giving the impression of a "bandwidth wall" despite the fact that we now had eight channels of DDR4-3200. So instead we used the results that Intel's and AMD's best binaries produce, using AVX-512 (Intel) and AVX2 (AMD).

The results are expressed in gigabytes per second.

[Graph: Stream Triad (GB/s)]
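For readers unfamiliar with the benchmark: the Triad kernel itself is trivial, and all of the "extreme tuning" goes into compiler flags, thread pinning, page placement, and the choice of vector instructions. Below is a minimal, untuned sketch of the kernel in C; the array size and accounting are illustrative, not our benchmark configuration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 80000000L   /* ~1.9 GB across three arrays: far larger than any L3 cache */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double scalar = 3.0;

    /* Touch everything first so the pages are mapped before we start timing. */
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* STREAM Triad: two loads and one store per element. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;

    /* STREAM accounting: three arrays of N doubles cross the memory bus. */
    printf("Triad: %.1f GB/s\n", 3.0 * N * sizeof(double) / secs / 1e9);

    free(a); free(b); free(c);
    return 0;
}
```

Built with something like "gcc -O3 -march=native -fopenmp", this is the naive starting point. The official STREAM benchmark adds the Copy, Scale, and Add kernels, repeats each kernel many times, and validates the results; the vendor binaries typically layer AVX-512/AVX2 code paths, non-temporal stores, and careful NUMA placement on top of that.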

AMD can reach even higher numbers with the "NUMA nodes per socket" (NPS) setting at 4. With four nodes per socket, AMD reports up to 353 GB/s. In NPS4 mode, each CCX accesses only the memory controllers on the central I/O die that offer the lowest latency.
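The reason NPS4 helps is locality: a bandwidth-bound thread that allocates its buffers on the NUMA node it runs on never has to reach across the I/O die to a distant memory controller. As a rough sketch of what NUMA-aware allocation looks like in application code, here is a minimal example using libnuma; the numa_* calls are standard libnuma APIs, while the node number and buffer size are just placeholders.

```c
#include <numa.h>              /* link with -lnuma */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    /* With NPS4, a dual-socket Rome server exposes eight NUMA nodes. */
    int node = 0;                                  /* illustrative: use node 0 */
    size_t bytes = 512UL * 1024 * 1024;

    numa_run_on_node(node);                        /* keep this thread on the node's cores */
    double *buf = numa_alloc_onnode(bytes, node);  /* and its memory on the same node */
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    /* First touch: fault the pages in so they are physically backed on 'node'. */
    for (size_t i = 0; i < bytes / sizeof(double); i++)
        buf[i] = 0.0;

    /* ... run the bandwidth-bound kernel on buf here ... */

    numa_free(buf, bytes);
    return 0;
}
```

The same effect can usually be had without touching the code by launching one benchmark instance per node under numactl --cpunodebind=N --membind=N, which is how NUMA-aware STREAM runs are typically set up.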

Those numbers only matter to the small niche of carefully AVX2/AVX-512-optimized HPC applications. AMD claims a 45% advantage over the best (28-core) Intel SKUs, and we have every reason to believe them, but again, this is only relevant to that niche.

For the rest of the enterprise world (probably 95+%), memory latency has a much larger impact than peak bandwidth.
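The intuition is that latency-bound code is a chain of dependent loads: each access has to complete before the next address is even known, so extra channels and wider vectors cannot hide anything. A minimal pointer-chase sketch of that access pattern (sizes and iteration count are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ELEMS (64UL * 1024 * 1024)   /* 512 MB of indices: far beyond the caches */
#define ITERS (20L * 1000 * 1000)

int main(void)
{
    size_t *chain = malloc(ELEMS * sizeof(size_t));
    if (chain == NULL) return 1;

    /* Build one long cycle through the array. A real latency test would
       randomize the order to defeat the hardware prefetchers. */
    for (size_t i = 0; i < ELEMS; i++)
        chain[i] = (i + 4099) % ELEMS;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* Every load depends on the previous one, so the loop advances at
       roughly one memory access latency per iteration. */
    size_t idx = 0;
    for (long n = 0; n < ITERS; n++)
        idx = chain[idx];

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / (double)ITERS;

    /* Print idx so the compiler cannot optimize the chase away. */
    printf("~%.1f ns per dependent load (idx=%zu)\n", ns, idx);

    free(chain);
    return 0;
}
```

Most enterprise workloads behave far more like this loop than like Triad, which is why the memory latency results matter more for them than the bandwidth graph above.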

Comments

  • schujj07 - Friday, August 9, 2019 - link

    The problem is Microsoft went to the Oracle model of licensing for Server 2016/19. That means that you have to license EVERY CPU core it can be run on. Even if you create a VM with only 8 cores, those 8 cores won't always be running on the same cores of the CPU. That is where Rome hurts the pockets of people. You would pay $10k/instance of Server Standard on a single dual 64 core host or $65k/host for Server DataCenter on a dual 64 core host.
  • browned - Saturday, August 10, 2019 - link

    We are currently a small MS shop, VMWare with 8 sockets licensed, Windows Datacenter License. 4 Hosts, 2 x 8 core due to Windows Licensing limits. But we are running 120+ majority Windows systems on those hosts.

    I see our future with 4 x 16 core systems, unless our CPU requirements grow, in which case we could look at 3 x 48 or 2 x 64 core or 4 x 24 core and buy another lot of datacenter licenses. Because we already have 64 cores licensed the uplift to 96 or 128 is not something we would worry about.

    We would also get a benefit from only using 2, 3, or 4 of our 8 VMware socket licenses. We could then implement a better DR system, or use those licenses at another site that currently uses ROBO licenses.
  • jgraham11 - Thursday, August 8, 2019 - link

    So how does it work with hyperthreaded CPUs? And what if the server owner decides not to run Intel Hyper-Threading because it is so prone to CPU exploits (most 10+ years old)? Does Google still pay for those cores??
  • ianisiam - Thursday, August 8, 2019 - link

    You only pay for physical cores, not logical.
  • twotwotwo - Thursday, August 8, 2019 - link

    Sort of a fun thing there is that in the past you've had to buy more cores than you need sometimes: lower-end parts that had enough CPU oomph may not support all the RAM or I/O you want, or maybe some feature you wanted was absent or disabled. These seem to let you load up on RAM and I/O at even 8C or 16C (min. 1P or 2P configs).

    Of course, some CPU-bound apps can't take advantage of that, but in the right situation being able to build as lopsided a machine as you want might even help out the folks who pay by the core.
  • azfacea - Wednesday, August 7, 2019 - link

    F
  • NikosD - Wednesday, August 7, 2019 - link

    Ok guys... The AnandTech team had "bad luck and timing issues" and couldn't offer a true and decent review of the Greatest x86 CPU of all time, so for a proper review of EPYC Rome coming from the most objective and capable site for servers, take a look here:
    https://www.servethehome.com/amd-epyc-7002-series-...
  • anactoraaron - Thursday, August 8, 2019 - link

    F
  • phoenix_rizzen - Saturday, August 10, 2019 - link

    Review article for new CPU devolves into Windows vs Linux pissing match, completely obscuring any interesting discussion about said hardware. We really haven't reached peak stupid on the internet yet. :(
  • The Benjamins - Wednesday, August 7, 2019 - link

    Can we get a C20 benchmark for the lulz?
