Silicon, Glue, & NUMA Too

In the Ryzen family, AMD designed an 8-core silicon die known as a Zeppelin die. It consists of two core complexes (CCXes) of four cores each, with each CCX having access to 8 MB of L3 cache. Each Zeppelin die has access to two DRAM channels, and is fixed with 16 PCIe lanes for add-in cards. With Threadripper, AMD has doubled up on the silicon.

If you were to delid a Threadripper CPU, you would actually find four silicon dies, similar to an EPYC processor, making Threadripper a multi-chip module (MCM) design. Two of these are reinforcing spacers: blank silicon with no purpose other than to help distribute the weight of the cooler and assist in cooling. The other two dies (placed in opposite corners for thermal performance and routing) are essentially the same Zeppelin dies as in Ryzen, each containing eight cores and each with access to two memory channels. They communicate through Infinity Fabric, which AMD lists as 102 GB/s of die-to-die bandwidth (bidirectional, full duplex), with 78 ns to reach near memory (DRAM connected to the same die) and 133 ns to reach far memory (DRAM connected to the other die). We confirmed those figures with DDR4-2400 memory, and measured 65 ns and 108 ns respectively with DDR4-3200.


Despite this AMD slide showing two silicon dies, there are four pieces of silicon in the package. Only two of the dies are active, so AMD has 'simplified' the diagram.
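The near/far latency gap is easy to reproduce with a simple pointer chase. Below is a minimal sketch of such a probe (not the tool used for the figures above), built on Linux's libnuma: it pins the measuring thread to die 0, then chases dependent loads through a buffer placed first on node 0 and then on node 1. The buffer size and iteration count are arbitrary illustrative choices, and the platform needs to be in NUMA mode (covered later on this page) for both nodes to be visible.

```c
#define _GNU_SOURCE
#include <numa.h>       /* link with -lnuma */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ENTRIES (64UL * 1024 * 1024 / sizeof(size_t))  /* 64 MB: larger than both L3 caches */
#define STEPS   (20UL * 1000 * 1000)

static volatile size_t sink;   /* stops the compiler deleting the chase loop */

/* Average load-to-use latency for a buffer allocated on 'node'. */
static double chase_ns(int node)
{
    size_t *buf = numa_alloc_onnode(ENTRIES * sizeof(size_t), node);
    if (!buf) { perror("numa_alloc_onnode"); exit(1); }

    /* Sattolo's algorithm: one long cycle, so the prefetchers cannot follow it. */
    for (size_t i = 0; i < ENTRIES; i++) buf[i] = i;
    for (size_t i = ENTRIES - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }

    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < STEPS; s++) idx = buf[idx];   /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = idx;

    numa_free(buf, ENTRIES * sizeof(size_t));
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / STEPS;
}

int main(void)
{
    if (numa_available() < 0 || numa_max_node() < 1) {
        fprintf(stderr, "needs libnuma and two visible NUMA nodes\n");
        return 1;
    }
    numa_run_on_node(0);   /* keep this thread on die 0's cores */
    printf("near DRAM (node 0): %.0f ns per load\n", chase_ns(0));
    printf("far  DRAM (node 1): %.0f ns per load\n", chase_ns(1));
    return 0;
}
```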

By comparison, EPYC lists die-to-die bandwidth as 42.6 GB/s at DDR4-2667. This is because EPYC runs fabric links to three dies internally and one die externally (to the matching die in the next socket), which uses up all of the links available. The dies in Threadripper only have to communicate with one other die, so there is more flexibility in how the links are used. To that end, we're under the impression that Threadripper is using two of these links at 10.4 GT/s, based on the following arithmetic:

  • Die to Die for EPYC is quoted as 42.6 GB/s at DDR4-2667
  • Die to Die for Threadripper is quoted as 102.2 GB/s at DDR4-3200
  • 42.6 GB/s * 2 links * 3200/2667 = 102.2 GB/s
  • 42.6 GB/s * 3 links * 3200/2667 at 8.0 GT/s = 115.8 GB/s (too high)
  • 42.6 GB/s * 3 links * 3200/2667 at 6.4 GT/s = 92.6 GB/s (too low)
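As a quick sanity check on the link-count arithmetic, the sketch below simply scales the quoted per-link EPYC figure by memory clock and number of links, assuming bandwidth scales linearly with both (our assumption, not AMD's):

```c
#include <stdio.h>

int main(void)
{
    const double epyc_per_link = 42.6;             /* GB/s per link, quoted at DDR4-2667 */
    const double clock_scale   = 3200.0 / 2667.0;  /* Threadripper quoted at DDR4-3200 */

    for (int links = 1; links <= 3; links++)
        printf("%d link(s): %5.1f GB/s\n", links, epyc_per_link * links * clock_scale);
    return 0;
}
```

Only the two-link case lands on AMD's quoted 102.2 GB/s, which is what points us toward two full-speed links.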

This configuration is essentially what the industry calls NUMA: non-uniform memory access. Left as is, it means that code cannot rely on a consistent (and low) latency between requesting something from DRAM and receiving it. This can be an issue for high-performance code, which is why some software is designed to be NUMA-aware: it can intelligently pin the memory it needs to the closest DRAM controller, sacrificing some potential bandwidth in order to prioritize latency.
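As an illustration of what 'pinning memory to the closest DRAM controller' looks like in practice, here is a minimal Linux/libnuma sketch; the node number and buffer size are placeholders, and a real NUMA-aware application would typically do this per worker thread rather than for the whole process.

```c
#include <numa.h>       /* link with -lnuma */
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "kernel exposes no NUMA interface\n");
        return 1;
    }

    const int    node  = 0;            /* the die this worker should live on */
    const size_t bytes = 256UL << 20;  /* 256 MB working set (illustrative) */

    numa_run_on_node(node);            /* restrict this thread to node 0's cores */
    numa_set_preferred(node);          /* ...and fault new pages from node 0's DRAM */

    char *buf = numa_alloc_onnode(bytes, node);   /* place the working set explicitly */
    if (!buf) return 1;
    memset(buf, 1, bytes);             /* touch the pages so they are actually committed */

    /* ... latency-sensitive work on buf: every access stays on the near controller ... */

    numa_free(buf, bytes);
    return 0;
}
```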

NUMA is nothing new in the x86 space. Once CPUs began shipping with on-die memory controllers rather than relying on an off-die memory controller in the northbridge, NUMA became an inherent part of multi-socket systems. AMD was the leader here from the start, beating Intel to on-die memory controllers for x86 CPUs by years, so AMD has been working with NUMA for years; similarly, NUMA has been the state of affairs for Intel's multi-socket server systems for almost a decade.

What's new with Threadripper, however, is that NUMA has never been a consumer concern. MCM consumer CPUs have been few and far between; we'd have to go all the way back to the Core 2 Quad family to find a consumer CPU with cores on multiple dies, a design that predates Intel's on-die memory controllers. So with Threadripper, this is the very first time that consumers – even high-end consumers – have been exposed to NUMA.

But more importantly, consumer software has been similarly unexposed to NUMA, so almost no software is able to take its idiosyncrasies into account. The good news is that while NUMA changes the rules of the game a bit, it doesn't break software. NUMA-aware OSes do the heavy lifting here, helping unaware software by keeping threads and their memory together on the same NUMA node in order to preserve classic performance characteristics. The downside is that, much like an overprotective parent, the OS is going to discourage unaware software from using other NUMA nodes. Or in the case of Threadripper, discouraging applications from using the other die and its 8 cores.
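That 'overprotective parent' behaviour can be observed from inside a NUMA-unaware process with a couple of queries: on Linux, the thread below (and, under the default first-touch policy, the memory it allocates) will generally be kept on whichever node the scheduler first placed it. A small sketch using libnuma:

```c
#define _GNU_SOURCE
#include <numa.h>       /* link with -lnuma */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) return 1;

    int cpu  = sched_getcpu();          /* logical CPU the scheduler picked for us */
    int node = numa_node_of_cpu(cpu);   /* which die that CPU belongs to */
    printf("running on CPU %d (NUMA node %d of %d)\n",
           cpu, node, numa_num_configured_nodes());
    return 0;
}
```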


At a hardware level, Threadripper is natively two NUMA nodes

In an ideal world, all software would be NUMA-aware, eliminating any concerns over the matter. From a practical perspective, however, software is slow to change, and it seems unlikely that NUMA-style CPUs are going to become common in the future. Furthermore, NUMA can be tricky to program for, especially for workloads/algorithms that inherently struggle with "far" cores and memory. So the quirks of NUMA are never going to completely go away, and instead AMD has taken it upon themselves to manage the matter.

AMD has implemented BIOS switches and software switches in order to better support and control the NUMA-ness of Threadripper. By default, Threadripper actually hides its NUMA architecture: AMD instead runs Threadripper in a UMA configuration, a uniform memory access mode in which allocations are spread across both dies' DRAM and the return latency is variable (roughly ~100 ns on average, between the 78 ns near and 133 ns far figures), with the focus on high peak bandwidth. Presenting the CPU to the OS as a monolithic, single-domain design maximizes memory bandwidth, and all applications (NUMA-aware and not) see all 16 cores as part of the same CPU. So for applications that are not NUMA-aware – and consequently would have been discouraged by the OS in NUMA mode – this maximizes the number of cores/threads and the amount of memory bandwidth they can use.
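Incidentally, a NUMA-aware application running on a system left in NUMA mode can get a similar bandwidth-first behaviour for a single allocation by asking the kernel to interleave its pages across both dies, much as the UMA setting does machine-wide. A minimal libnuma sketch (buffer size illustrative):

```c
#include <numa.h>       /* link with -lnuma */
#include <string.h>

int main(void)
{
    if (numa_available() < 0) return 1;

    const size_t bytes = 1UL << 30;                /* 1 GB (illustrative) */
    void *buf = numa_alloc_interleaved(bytes);     /* pages round-robin across all nodes */
    if (!buf) return 1;

    memset(buf, 0, bytes);   /* first touch commits the pages, alternating controllers */

    /* ... streaming, bandwidth-bound work over buf: all four DRAM channels in play ... */

    numa_free(buf, bytes);
    return 0;
}
```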


All 32 threads are exposed as part of a single monolithic CPU

The drawback to UMA mode is that because it hides how Threadripper really works, it doesn't allow the OS and applications to make fully informed decisions for themselves, and consequently they may not make the best decisions. Latency-sensitive, NUMA-unaware applications that fare poorly with high core and memory latencies can struggle here if they end up using cores and memory attached to different dies. This is why AMD also allows Threadripper to be configured for NUMA mode, exposing its full design to the OS and resulting in separate NUMA domains for the two dies. This informs the OS to keep applications pinned to one die when possible, as previously discussed; this mode is vital for some software and some games, and we've tested it in this review.
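Which mode a system is actually running in is straightforward to check from software, because the only difference visible to the OS is how many memory nodes it reports. A small libnuma sketch (on Windows the equivalent query would be something like GetNumaHighestNodeNumber()):

```c
#include <numa.h>       /* link with -lnuma */
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        printf("kernel reports no NUMA topology at all\n");
        return 0;
    }
    int nodes = numa_num_configured_nodes();
    printf("%d memory node(s) visible: %s\n", nodes,
           nodes > 1 ? "NUMA mode (two domains, one per die)"
                     : "UMA mode (one monolithic domain)");
    return 0;
}
```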

Overall, using a multi-die design has positives and negatives. The negatives are variable memory latency, variable core-to-core latency, and duplication of on-die units that would not need to be repeated on a monolithic design. As a result, AMD uses over 400 mm² of silicon to achieve this, which can increase costs at the manufacturing level. On the other hand, the positives are in silicon design and overall yields: AMD can design a single piece of silicon and repeat it, rather than designing several different floor plans and multiplying up the design costs, and the (largely) fixed number of wafer defects is spread out over many more, smaller dies.
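As a rough illustration of the yield half of that argument, the simple single-parameter Poisson yield model (yield ≈ e^(−D·A)) is enough; note that the defect density below is an assumed placeholder, not an AMD figure, and the die areas are approximations.

```c
#include <math.h>       /* link with -lm */
#include <stdio.h>

int main(void)
{
    const double defects_per_cm2 = 0.2;   /* assumed defect density, for illustration only */
    const double big_die_cm2     = 4.0;   /* one hypothetical ~400 mm^2 monolithic die */
    const double small_die_cm2   = 2.0;   /* one ~200 mm^2 Zeppelin-class die */

    double y_big   = exp(-defects_per_cm2 * big_die_cm2);
    double y_small = exp(-defects_per_cm2 * small_die_cm2);

    printf("~400 mm^2 monolithic die yield: %4.0f%%\n", 100 * y_big);    /* ~45% */
    printf("~200 mm^2 small die yield:      %4.0f%%\n", 100 * y_small);  /* ~67% */

    /* Because small dies are tested before packaging, a defect scraps
       ~200 mm^2 of silicon rather than ~400 mm^2, and only known-good
       dies are paired into a Threadripper package. */
    return 0;
}
```

The same logic is what lets AMD reuse one floor plan across the product stack instead of designing a separate large die.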

By contrast, Intel uses a single monolithic die for its Skylake-X processors: the LCC die for parts with up to 10 cores, and the HCC die for the 12-core through 18-core parts. These use a rectangular grid of cores (3x4 and 5x4 respectively), with two of the grid segments reserved for the memory controllers. To communicate between cores, Intel uses a networking mesh, which determines which direction the data needs to travel (up, down, left, right, or accepted into the core). We covered Intel's Modular Decoupled Crossbar (MoDe-X) methodology in our Skylake-X review, but the underlying concept is consistency. This mesh nominally runs at 2.4 GHz. Prior to Skylake-X, Intel implemented a ring topology, in which data had to travel around the ring of cores to get to where it needed to go.

With reference to glue, or glue logic, we're referring to the fabric of each processor: for AMD that's the Infinity Fabric, which carries traffic within each silicon die and out to the other die, and for Intel it's the internal MoDe-X mesh. Elmer's never looked so complicated.


347 Comments


  • nitin213 - Thursday, August 10, 2017 - link

    Thanks for your reply. Hopefully the test suite can be expanded as Intel's CPUs probably also move to higher core count and IO ranges in future.
    and i completely understand the frustration trying to get a 3rd party to change their defaults. Cheers
  • deathBOB - Thursday, August 10, 2017 - link

    It's clear to me . . . Ian is playing both sides and making out like a bandit! /s
  • FreckledTrout - Thursday, August 10, 2017 - link

    Ian can we get an updated comments section so we can +/- people and after x number of minuses they won't show by default. I'm saying this because some of these comments (the one in this chain included) are not meaningful responses. The comments section is by far the weakest link on AnandTech.

    Nice review btw.
  • mapesdhs - Thursday, August 10, 2017 - link

    toms has that, indeed it's kinda handy for blanking out the trolls. Whether it's any useful indicator of "valid" opinion though, well, that kinda varies. :D (there's nowt to stop the trolls from voting everything under the sun, though one option would be to auto-suspend someone's ability to vote if their own posts get hidden from down voting too often, a hands-off way of slapping the trolls)

    Given the choice, I'd much rather just be able to *edit* what I've posted than up/down-vote what others have written. I still smile recalling a guy who posted a followup to apologise for the typos in his o.p., but the followup had typos as well, after which he posted aaaaagh. :D

    Ian.
  • Johan Steyn - Thursday, August 10, 2017 - link

    Ian thanks for at least responding, I appreciate it. Please compare your review to sites like PCPer and many others. They have no problem to also point out the weak points of TR, yet clearly understand for what TR was mostly designed and focus properly on it and even though they did not test the 64 PCI lanes as an example, mention that they are planning a follow-up to do it, since it is an important point. You do mention these as well, but could have said more than just mention it by the way.

    Look at your review, most of it is about games. Are you serious?

    I have to give you credit to at least mention the problems with Sysmark.

    Let me give you an example of slanted journalism. When you do the rendering benchmarks, where AMD is known to shine, you only mention at each benchmark what they do etc, and fail to mention that AMD clearly beats Intel, even though other sites focus more on these benchmarks. In the one benchmark where Intel gets a decent score, you take time to mention that:

    "Though it's interesting just how close the 10-core Core i9-7900X gets in the CPU (C++) test despite a significant core count disadvantage, likely due to a combination of higher IPC and clockspeeds."

    Not in one of the rendering benchmarks do you give credit to AMD, yet you found it fitting to end the section of with:

    "Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."

    Not slanted journalism? At least you mention "2x the cost," but for most this will not defer them in buying the monopoly.

    After focussing so much time on game performance, I am not sure you understand TR at all. AMD still has a long way to go in many areas. Why? Because corrupt Intel basically drove them to bankruptcy, but that is a discussion for another day. I lived through those days and experienced it myself.

    Maybe I missed it, but where did you discuss the issue of memory speed? You mention in the beginning of memory overclock. Did you test the system running at 3200 or 2666? It is important to note. If you ran at 2666, then you are missing a very important point. Ryzen is known to gain a huge amount with memory speed. You should not regard 3200 as an overclock, since that is what that memory is made for, even if 2666 is standard spec. Most other sites I checked, used it like that. If you did use 3200, don't you think you should mention it?

    Why is it that your review ends up meh about TR and leaves you rather wanting an i9 in almost all respects, yet most of the other sites give admiration where deserved, even though they have criticism as well. Ian I see that you clearly are disappointed with TR, which is OK, maybe you just like playing games and that is why you are so.

    It was clear how much you admire Intel in your previous article. You say that I gave no examples of slanted journalism, maybe you should read my post again. "Most Powerful, Most scalable." It is well known that people don't read the fine print. This was intentional. If not, you are a very unlucky guys for having so many unintended mishaps. Then I truly need to say I am sorry.

    For once, please be a bit excited that there is some competition against the monopoly of Intel, or maybe you are also deluded that they became so without any underhanded ways.

    By the way, sorry that I called you Anand. I actually wanted to type Anandtech, but left it like it. This site still carries his name and he should still take responsibility. After I posted, I realised I should have just checked the author, so sorry about that.
  • vanilla_gorilla - Thursday, August 10, 2017 - link

    "Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."

    How do you not understand that is a dig at Intel? He's saying you have to pay twice as much for only a 6.7% improvement.
  • smilingcrow - Thursday, August 10, 2017 - link

    The memory speed approach taken was clearly explained in the test and was stated as being consistent with how they always test.
    I don't take issue with testing at stock speeds at launch day as running memory out of spec for the system can be evaluated in depth later on.
  • Johan Steyn - Friday, August 11, 2017 - link

    That is just rubbish. Threadripper has no problem with 3200 memory and other sites have no problem running it at that speed. 3200 memory is designed to run at 3200, why run it at 2666? There is just no excuse except being paid by Intel.

    Maybe then you can accuse other sites of being unscientific?
  • fanofanand - Tuesday, August 15, 2017 - link

    Anandtech always tests at JEDEC, regardless of the brand.
  • Manch - Friday, August 11, 2017 - link

    ""Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."

    Not slanted journalism? At least you mention "2x the cost," but for most this will not defer them in buying the monopoly."

    You call Intel the monopoly and call him out for not wording the sentence to dissuade people from buying Intel. Who has the bias here? If he was actively promoting Intel over AMD or vice versa, you'd be OK with the latter, but to do neither. He's an Intel shill? Come on. That's unfair. HOW should he have wrote it so it would satisfy you?

    FYI Anand is gone. He's NOT responsible for anything at Anandtech. Are you going to hold Wozniak's feet to the fire for the lack of ports on a Mac too?
