Topology, Memory Subsystem & Latency

Users already familiar with AMD’s newest Zen3 microarchitecture from our coverage of the new Ryzen 5000 consumer parts will remember that one big change in the way AMD configures its CPU topology is how the new CCX (Core Complex) is integrated within a CCD (Core Chiplet Die).

Previous generation Zen2 and prior designs of the Zen microarchitecture consisted of CCXs of four CPU cores, with a set amount of L3 cache shared between those four cores – 8MB in the original Zen iterations and 16MB in Zen2 variants. In Zen3, AMD has redesigned its L3 implementation to now support up to 8 physical cores in one core complex, effectively doubling both the L3 and the core count per CCX. The CCD in turn, however, now houses only one CCX instead of two, which effectively means that the total L3 and core count per core chiplet doesn’t change between the generations.

The benefit of the new arrangement is that it enlarges the L3 cache hierarchy from the view of a single CPU core, allowing each core to possibly make use of the full 32MB of cache. Furthermore, because the CCX is now 8 cores instead of 4, inter-core access latencies can effectively be reduced when workloads are resident on cores within that CCX. Previously, even if two cores were on the same physical die but on different CCXs, communications between the two had to take place via external routing through the IOD (I/O Die) of the CPU package, incurring large access penalties. AMD here also makes an interesting argument for larger virtual machines which are bound to a single CCD across 8 cores, something which was previously not possible without performance hits due to spanning a VM across multiple CCXs.
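One quick way to see this reorganisation on a live Linux system is to look at the kernel’s cache topology information in sysfs. The following is a minimal sketch, assuming the usual layout where cache index3 corresponds to the L3; on a Zen3 part each shared list should span 8 physical cores (16 SMT threads), versus 4 cores on Zen2:

    #include <stdio.h>

    int main(void)
    {
        char path[128], line[256];
        /* Walk the CPUs and print which logical CPUs share each core's L3 slice. */
        for (int cpu = 0; ; cpu++) {
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list", cpu);
            FILE *f = fopen(path, "r");
            if (!f)
                break;                  /* no more CPUs (or no L3 info exposed) */
            if (fgets(line, sizeof(line), f))
                printf("cpu%-3d shares its L3 with: %s", cpu, line);
            fclose(f);
        }
        return 0;
    }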

We can test out the physical topology of the CPUs by running our inter-core synchronisation latency test. As a reminder, our inter-core bounce test consists of an initial main thread which allocates the synchronisation cache line on the core that the executable is spawned on – we try to fix this to the first NUMA node / CPU group of the first socket. This thread in turn spawns two ping-pong threads which bounce around the shared cache line, and we change the affinity of the threads across the system to test out the various core-to-core latencies. Because we use a common shared cache line – which is usually how real software works – we’re essentially testing core-to-cacheline-to-core latency, an important distinction to make for some systems which have different cache line placement and cache coherency algorithms.
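To make the mechanics concrete, below is a minimal sketch of how such a cache-line bounce test can be structured. This is not our actual benchmark code – it assumes Linux, GCC and glibc’s affinity extensions, and the core numbers 0, 4 and 12 are arbitrary placeholders – but it illustrates the principle: the main thread allocates and first-touches the shared line, and two pinned threads then take turns flipping it while the round-trip time is measured.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 1000000u

    /* Shared synchronisation cache line, allocated and first-touched by the main thread. */
    static _Alignas(64) atomic_uint flag;

    static void pin_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    /* Thread A waits for even values and writes the next odd one... */
    static void *pinger(void *arg)
    {
        pin_to_cpu((int)(intptr_t)arg);
        for (unsigned i = 0; i < ITERS; i++) {
            while (atomic_load_explicit(&flag, memory_order_acquire) != 2 * i)
                ;                               /* spin until it is our turn */
            atomic_store_explicit(&flag, 2 * i + 1, memory_order_release);
        }
        return NULL;
    }

    /* ...and thread B waits for odd values and writes the next even one. */
    static void *ponger(void *arg)
    {
        pin_to_cpu((int)(intptr_t)arg);
        for (unsigned i = 0; i < ITERS; i++) {
            while (atomic_load_explicit(&flag, memory_order_acquire) != 2 * i + 1)
                ;
            atomic_store_explicit(&flag, 2 * i + 2, memory_order_release);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        struct timespec t0, t1;

        pin_to_cpu(0);            /* fix the allocating thread to the first core */
        atomic_store(&flag, 0);   /* first touch of the shared cache line */

        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_create(&a, NULL, pinger, (void *)(intptr_t)4);   /* hypothetical core pair: 4 and 12 */
        pthread_create(&b, NULL, ponger, (void *)(intptr_t)12);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        /* Each iteration is one full round trip, i.e. two core-to-cacheline-to-core hops. */
        printf("%.1f ns per round trip\n", ns / ITERS);
        return 0;
    }

Sweeping the two ping-pong threads across every core pair in the system yields the matrices shown below.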


2-Socket AMD EPYC 7763 (Milan)

Right off the bat, what’s immediately visible in the new EPYC 7763 64-core Milan-based part is that the CCXs have grown in size, now spanning 8 cores, with correspondingly lower latency results within that cache hierarchy. The result set here is limited to just the physical cores, as otherwise the logical SMT siblings would have resulted in a 256 x 256 matrix for the 2-socket system.


2-Socket AMD EPYC 7742 (Rome)

If we contrast the new Milan design to the Rome-based EPYC 7742, we’re seeing quite a few differences between the two generations. First of all, within the CCX/CCD, access latencies aren’t as uniform anymore. Previously we saw latencies at 22-23ns, whereas the new part now varies from 19 to 31ns, which is due to AMD’s doubling of the L3, which comes with a new internal topology between the 8 cache slices. It should be noted that the test ran at around 3400MHz on both generations of parts.

AMD’s server IO die is divided into quadrants, each featuring two memory controllers and two connections to CCDs. Access latencies between two CCDs in a quadrant are lower than between two CCDs in different quadrants, and we can also see this in the results. The core-to-core latency within a quadrant has improved this generation from a worst-case 112ns to 99ns, a 13ns improvement. Access to remote quadrants has been reduced from up to 142ns to up to 114ns, a roughly 20% improvement, which is considerable.

What’s really interesting is that inter-socket latencies have also seen very notable reductions. Whereas on the Rome part these went up to 272ns, the new Milan part reduces this down to 202ns, again a large 25% improvement between generations.

 
2-Socket Ampere Altra Q80-33 / 2-Socket Intel Xeon Platinum 8280

Compared to competitors which use monolithic silicon designs, AMD lags behind in terms of core-to-core latencies within a single socket, although latencies within a CCX/CCD are better. Inter-socket latencies are superior to newcomers such as Ampere’s Altra, however they lag behind Intel’s seemingly superior cache coherency protocol, particularly in scenarios where two cores of the same socket access a remote socket’s cache line: Intel is able to locally copy/mirror it with native performance, something AMD is currently only able to do at a CCX/CCD level.

Memory Latency

Memory latency on Milan is interesting as AMD is now employing a new IOD design. Well – it’s not a completely new design compared to the previous generation Rome IOD, but it is a redesign with new features, slightly more transistors, and a new chip tape-out.

The biggest practical change in the new chip is that the internal Infinity Fabric clock is now coupled with the memory controller clocks / DRAM MEMCLK, at up to DDR4-3200. The much-improved inter-core latencies discussed above are likely a direct result of this change, but it should also have a larger impact on the memory latency of the new Milan chips.

Starting off in NPS4 mode, in which a socket is divided into four NUMA nodes representing the four physical quadrants of the IOD, each NUMA node and quadrant, with its local 2 CCDs and 16 cores, only has access to its local 2 memory channels.

Latencies here see a reduction from 118ns down to 105ns, which is an 11% improvement. The 13ns improvement should be a direct result of the new Milan part no longer having to cross asynchronous clock bridges between the Infinity Fabric and the memory controllers.
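For reference, figures like these are typically obtained with a dependent-load pointer chase, where each load’s address comes from the previous load so that the full memory latency is exposed. The following is a minimal sketch of the principle, not our actual test tool – the 64MB buffer size is an illustrative assumption, and to reproduce NPS4-style local accesses the process would be bound to a single node, e.g. via numactl --cpunodebind=0 --membind=0:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ELEMS (64UL * 1024 * 1024 / sizeof(void *))   /* 64MB working set, well past the 32MB L3 */
    #define ITERS (16UL * 1024 * 1024)

    int main(void)
    {
        void **buf = malloc(ELEMS * sizeof(void *));
        size_t *idx = malloc(ELEMS * sizeof(size_t));
        if (!buf || !idx)
            return 1;

        /* Build a random cyclic permutation so hardware prefetchers can't follow the chain. */
        for (size_t i = 0; i < ELEMS; i++)
            idx[i] = i;
        srand(1);
        for (size_t i = ELEMS - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }
        for (size_t i = 0; i < ELEMS; i++)
            buf[idx[i]] = &buf[idx[(i + 1) % ELEMS]];
        free(idx);

        /* Chase the chain: every load depends on the result of the previous one. */
        void **p = &buf[0];
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < ITERS; i++)
            p = (void **)*p;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per dependent load (%p)\n", ns / ITERS, (void *)p);  /* print p to keep the loop live */
        free(buf);
        return 0;
    }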

The latencies in the lower cache hierarchies within the new L3 and Zen3 cores look pretty much identical to what we’ve measured on the consumer Ryzen 5000 parts – with the only difference of course being that the EPYC CPUs run at lower frequencies.

What I’d say is interesting is how the prefetchers behave compared to Rome. With Zen3 this generation, AMD has changed things up quite a bit with newer generation prefetchers, including a new region-based prefetcher. On the Rome parts we saw that the more advanced prefetchers enabled/disabled themselves depending on access pattern depth. On Milan, this doesn’t seem to be the case, at least for some of our patterns.

Although the best latencies are to be had in NPS4 mode, sometimes you want just a single large NUMA node with full access to the total memory of a single socket (AMD also offers NPS0 mode with interleaving memory across two sockets, but that’s beyond our scope here). Here, a single CPU will interleave memory accesses across all four quadrants and eight memory channels, but due to the physical design of the chip this naturally means the data has to travel longer distances, and thus latencies are expected to be worse than in NPS4 mode.

Surprisingly, the new 7763 Milan part is showing extremely large improvements in NPS1 mode – reducing latencies from 133ns down to 112ns, a 21ns reduction, representing a 15.8% improvement over the 7742 Rome part. Although naturally still not quite matching monolithic chip access latencies as seen from a Xeon or Altra system, it’s significantly better this generation.

Memory Bandwidth

For memory bandwidth, AMD had advertised a 3-5% improvement in STREAM-Triad compared to previous generation 7002 series CPUs, so let’s put that to the test.

We’re opting for a regularly compiled STREAM binary using GCC 10.2, and are avoiding the use of explicit non-temporal memory accesses as I do not consider them to be real-world for how most workloads behave. I had explained this rationale in our review of the Ampere Altra a few months ago.

Also, instead of looking at a single test result figure, I charted the bandwidth curves of the system when scaling from 1 thread/core to the full 128 threads of the system.
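For reference, the heart of the benchmark is the Triad kernel sketched below. This is a simplified sketch of the standard STREAM code rather than our exact binary – the array size is an illustrative assumption – but it shows the access pattern being measured; compiled regularly, e.g. with gcc -O3 -fopenmp, the stores go through the cache hierarchy rather than using non-temporal writes.

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 80000000L   /* ~640MB per array, far beyond any cache (size chosen for illustration) */

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double *c = malloc(N * sizeof(double));
        double scalar = 3.0;
        if (!a || !b || !c)
            return 1;

        /* Parallel first-touch initialisation so pages land on the threads' own NUMA nodes. */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) {
            a[i] = 1.0; b[i] = 2.0; c[i] = 0.5;
        }

        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + scalar * c[i];    /* Triad: two reads and one write per element */
        double t1 = omp_get_wtime();

        /* Three arrays of 8-byte doubles are moved per element. */
        printf("Triad: %.1f GB/s\n", 3.0 * 8.0 * N / (t1 - t0) / 1e9);
        free(a); free(b); free(c);
        return 0;
    }

The thread count can then be varied via OMP_NUM_THREADS, with OMP_PLACES="cores" and OMP_PROC_BIND="spread" controlling placement, to produce the scaling curves.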

In terms of peak memory bandwidth, we’re seeing that the new Milan-based chip is indeed around 8% faster than its Rome-based predecessor, slightly above AMD’s marketed projections. AMD hadn’t clarified whether those were NPS1 or NPS4 figures, but we prefer using STREAM to measure bandwidth within a single memory node – scaling up to multiple nodes should normally be expected to simply be a multiple of a single node’s figure.

What’s really interesting in the behaviour of the Milan system is the bandwidth scaling at lower thread/core counts. Here we’re seeing the new 7763 take the lead by considerable margins compared to the competition, and the Milan chip is able to actually reach its peak memory bandwidth with only 8 cores, whereas the Rome-based parts required 16 cores to reach this point. The cause of these excellent results is likely the much-improved load/store capabilities of the new Zen3 cores.

What’s odd in the bandwidth curve is the zig-zag pattern, with bandwidth sometimes regressing below earlier achieved peak figures. We’re placing threads with OMP_PLACES="cores" and OMP_PROC_BIND="spread", however this might still not be optimal in terms of spreading the load symmetrically across all CCDs in the system.

Comments

  • mkbosmans - Tuesday, March 23, 2021 - link

    Even if you have a nice two-tiered approach implemented in your software, let's say MPI for the distributed memory parallelization on top of OpenMP for the shared memory parallelization, it often turns out to be faster to limit the shared memory threads to a single socket or NUMA domain. So in the case of a 2P EPYC configured as NPS4 you would have 8 MPI ranks per compute node.

    But of course there's plenty of software that has parallelization implemented using MPI only, so you would need a separate process for each core. This is often for legacy reasons, with software that was originally targeting only a couple of cores. But with the MPI 3.0 shared memory extension, this can even today be a valid approach to well-performing hybrid (shared/distributed memory) code.
  • mode_13h - Tuesday, March 23, 2021 - link

    Nice explanation. Thanks for following up!
  • Andrei Frumusanu - Saturday, March 20, 2021 - link

    This is vastly incorrect and misleading.

    The fact that I'm using a cache line spawned on a third main thread which does nothing with it is irrelevant to the real-world comparison because from the hardware perspective the CPU doesn't know which thread owns it - in the test the hardware just sees two cores using that cache line, the third main thread becomes completely irrelevant in the discussion.

    The thing that is guaranteed with the main starter thread allocating the synchronisation cache line is that it remains static across the measurements. One doesn't actually have control over where this cache line ends up within the coherent domain of the whole CPU; it's going to end up in a specific L3 cache slice depending on the CPU's address hash positioning. The method here simply keeps that positioning always the same.

    There is no such thing as core-core latency because cores do not snoop each other directly, they go over the coherency domain which is the L3 or the interconnect. It's always core-to-cacheline-to-core, as anything else doesn't even exist from the hardware perspective.
  • mkbosmans - Saturday, March 20, 2021 - link

    The original thread may have nothing to do with it, but the NUMA domain where the cache line was originally allocated certainly does. How would you otherwise explain the difference between the first quadrant for socket 1 to socket 1 communication and the fourth quadrant for socket 2 to socket 2 communication?

    Your explanation about address hashing to determine the L3 cache slice makes sense when talking about fixing the initial thread within an L3 domain, but not why you want that L3 domain fixed to the first one in the system, regardless of the placement of the two threads doing the ping-ponging.

    And about core-core latency, you are of course right, that is sloppy wording on my part. What I meant to convey is that the roundtrip latency of core-cacheline-core and back is more relevant (at least for HPC applications) when the cacheline is local to one of the cores and not remote, possibly even on another socket than the two threads.
  • Andrei Frumusanu - Saturday, March 20, 2021 - link

    I don't get your point - don't look at the intra-remote socket figures then if that doesn't interest you - these systems are still able to work in a single NUMA node across both sockets, so it's still pretty valid in terms of how things work.

    I'm not fixing it to a given L3 in the system (except for that socket); binding a thread doesn't tell the hardware to somehow stick that cacheline there forever, software has zero say in that. As you see in the results, it's able to move around between the different L3s and CCXs. Intel moves (or mirrors) it around between sockets and NUMA domains, so your premise there also isn't correct in that case; AMD currently can't, probably because they don't have a way to decide most recent ownership between two remote CCXs.

    People may want to just look at the local socket numbers if they prioritise that; the test method here merely exposes further, more complicated scenarios which I find interesting, as they showcase fundamental cache coherency differences between the platforms.
  • mkbosmans - Tuesday, March 23, 2021 - link

    For a quick overview of how cores are related to each other (with an allocation local to one of the cores), I like this way of visualizing it more:
    http://bosmans.ch/share/naples-core-latency.png
    Here you can for example clearly see how the four dies of the two sockets are connected pairwise.

    The plots from the article are interesting in that they show the vast difference between the cc protocols of AMD and Intel. And the numbers from the Naples plot I've linked can be mostly gotten from the more elaborate plots from the article, although it is not entirely clear to me how to exactly extend the data to form my style of plots. That's why I prefer to measure the data I'm interested in directly and plot that.
  • imaskar - Monday, March 29, 2021 - link

    Looking at the shares sinking, this pricing was a miss...
  • mode_13h - Tuesday, March 30, 2021 - link

    Prices are a lot easier to lower than to raise. And as long as they can sell all their production allocation, the price won't have been too high.
  • Zone98 - Friday, April 23, 2021 - link

    Great work! However I'm not getting why in the c2c matrix cores 62 and 74 wouldn't have a ~90ns latency as in the NW socket. Could you clarify how the test works?
  • node55 - Tuesday, April 27, 2021 - link

    Why are the cpus not consistent?

    Why do you switch between 7713 and 7763 on Milan and 7662 and 7742 on Rome?

    Why do you not have results for all the server CPUs? This confuses the comparison of e.g. 7662 vs 7713. (My current buying decision )
