Core to Core to Core: Design Trade Offs

AMD’s approach to these big processors is to take a small repeating unit, such as the 4-core complex or 8-core silicon die (which has two complexes on it), and put several on a package to get the required number of cores and threads. The upside of this is that there are a lot of replicated units, such as memory channels and PCIe lanes. The downside is how cores and memory have to talk to each other.

In a standard monolithic (single silicon) design, each core sits on an internal interconnect to the memory controller and can hop out to main memory with low latency. That latency is usually low and fairly uniform across cores, and the routing mechanism (a ring or a mesh) determines how bandwidth, latency, and scalability are balanced, with the final performance usually being a trade-off between the three.

In a multiple silicon design, where each die has local access to some of the memory but has to make a jump to reach the rest, we end up with a non-uniform memory architecture, known in the business as a NUMA design. Performance can be limited by this uneven memory latency, and software has to be ‘NUMA-aware’ in order to minimize latency and maximize bandwidth. The extra hops between silicon and memory controllers also burn some power.
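
As an illustration of what being ‘NUMA-aware’ involves, below is a minimal sketch in C using Linux’s libnuma (link with -lnuma): the idea is simply to keep a thread and the buffer it works on attached to the same node. The node number and buffer size are arbitrary examples for this sketch, not values tied to Threadripper.

    /* Minimal NUMA-aware allocation sketch (Linux, libnuma). */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not supported on this system\n");
            return 1;
        }

        printf("NUMA nodes visible to this process: %d\n", numa_max_node() + 1);

        /* Run on node 0 and allocate the buffer from node 0's local memory,
           so the working set stays at most one hop away. */
        numa_run_on_node(0);
        size_t len = 64 * 1024 * 1024;
        char *buf = numa_alloc_onnode(len, 0);
        if (!buf) {
            perror("numa_alloc_onnode");
            return 1;
        }
        memset(buf, 0, len);  /* touch the pages so they are actually placed */

        numa_free(buf, len);
        return 0;
    }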

We saw this before with the first generation Threadripper: having two active silicon dies on the package meant that there was a hop if the data required was in the memory attached to the other silicon. With the second generation Threadripper, it gets a lot more complex.

On the left is the 1950X/2950X design, with two active silicon dies. Each die has direct access to 32 PCIe lanes and two memory channels, which when combined gives 60/64 PCIe lanes and four memory channels. Cores accessing the memory/PCIe connected to their own die are faster than going off-die.

For the 2990WX and 2970WX, the two ‘inactive’ dies are now enabled, but they do not bring extra memory or PCIe with them. For these cores there is no ‘local’ memory or connectivity: every access to main memory requires an extra hop. There are also extra die-to-die interconnects using AMD’s Infinity Fabric (IF), which consume power.
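
To see how those memory-less dies appear to software, here is a hedged sketch in C using the Windows NUMA API to list how much memory each node reports; on a 2990WX-style layout one would expect two nodes to report local memory and two to report essentially none. This is illustrative only, not output from the review’s test system.

    /* Sketch: enumerate NUMA nodes and report available memory per node (Windows). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        ULONG highest = 0;
        if (!GetNumaHighestNodeNumber(&highest)) {
            fprintf(stderr, "GetNumaHighestNodeNumber failed: %lu\n", GetLastError());
            return 1;
        }

        for (ULONG node = 0; node <= highest; node++) {
            ULONGLONG bytes = 0;
            if (GetNumaAvailableMemoryNodeEx((USHORT)node, &bytes)) {
                printf("Node %lu: %llu MB available\n", node, bytes / (1024 * 1024));
            }
        }
        return 0;
    }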

The reason that these extra cores do not have direct access is down to the platform: the TR4 platform for the Threadripper processors is set at quad-channel memory and 60 PCIe lanes. If the other two dies had their memory and PCIe enabled, it would require new motherboards and memory arrangements.

Users might ask: can we not change it so that each silicon die has one memory channel and one set of 16 PCIe lanes? The answer is that yes, this change could occur. However, the platform is somewhat locked in how the pins and traces are managed on the socket and motherboards: the firmware expects two memory channels per die, and for electrical and power reasons the current motherboards on the market are not set up in this way. This is going to be an important point when we get into the performance in the review, so keep it in mind.

It is worth noting that this new second generation of Threadripper and AMD’s server platform, EPYC, are cousins. They are both built from the same package layout and socket, but EPYC has all the memory channels (eight) and all the PCIe lanes (128) enabled:

Where Threadripper 2 falls down by having some cores without direct access to memory, EPYC has direct memory available everywhere. This has the downside of requiring more power, but it offers a more homogeneous core-to-core traffic layout.

Going back to Threadripper 2, it is important to understand how the chip is going to be loaded. We confirmed this with AMD: for the most part, the scheduler will load up the cores that are directly attached to memory before using the others. Each core has a priority weighting based on performance, thermals, and power. The cores closest to memory get a higher priority, but as those fill up, nearby cores get demoted due to thermal inefficiencies. So while the CPU will likely fill up the cores close to memory first, it will not be a simple case of filling all of those cores before any others: the system may get to 12-14 cores loaded before going out to the two new pieces of silicon.
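
For software that prefers not to rely on the scheduler’s weighting, threads can be pinned to the processors of a memory-attached node explicitly. The sketch below uses the Windows group-affinity API; node 0 is an arbitrary example here and may not be a memory-attached node on every system.

    /* Sketch: restrict the current thread to the processors of one NUMA node (Windows). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        GROUP_AFFINITY affinity = {0};

        /* Ask the OS which processors belong to NUMA node 0. */
        if (!GetNumaNodeProcessorMaskEx(0, &affinity)) {
            fprintf(stderr, "GetNumaNodeProcessorMaskEx failed: %lu\n", GetLastError());
            return 1;
        }

        /* Restrict this thread to those processors. */
        if (!SetThreadGroupAffinity(GetCurrentThread(), &affinity, NULL)) {
            fprintf(stderr, "SetThreadGroupAffinity failed: %lu\n", GetLastError());
            return 1;
        }

        printf("Thread pinned to group %u, mask 0x%llx\n",
               affinity.Group, (unsigned long long)affinity.Mask);
        return 0;
    }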

Comments

  • just4U - Monday, August 13, 2018 - link

    Ian, were you testing this with the CM Wraith Cooler? If not, is it something you plan to review?
  • Ian Cutress - Monday, August 13, 2018 - link

    Most of the testing data is with the Liqtech 240 liquid cooler, rated at 500W. I do have data taken with the Wraith Ripper, and I'll be putting some of that data out when this is wrapped up.
  • IGTrading - Monday, August 13, 2018 - link

    To be honest, with the top-of-the-line 32-core model, it is interesting to identify as many positive use cases as possible, to see if the set of applications that truly benefit from the added cores will persuade power users to purchase it.

    Like you've said, it is a niche of a niche, and seeing it be X% faster or Y% slower is not as interesting as seeing what it can actually do when used efficiently, and whether that makes a compelling argument for power users.
  • PixyMisa - Tuesday, August 14, 2018 - link

    Phoronix found that a few tests ran much faster on Linux - for 7zip compression in particular, 140% faster (as in, 2.4x). Some of these benchmarks could improve a lot with some tweaking to the Windows scheduler.
  • phoenix_rizzen - Wednesday, August 15, 2018 - link

    It'd be interesting to redo these tests on a monthly basis after Windows/BIOS updates are done, to see how performance changes over time as the Windows side of things is tweaked to support the new NUMA setup for TR2.

    At the very least, a follow-up benchmark run in 6 months would be nice.
  • Kevin G - Monday, August 13, 2018 - link

    Chiplets!

    The power consumption figures are interesting, but TR does have to manage one thing that the high-end desktop chips from Intel don't: off-die traffic. The amount of power needed to move data off-die is significantly higher than moving it around on-die. Even in that context, TR's energy consumption for just the fabric seems high. When only a few threads are loaded, they should be on the dies with the memory controllers, leaving the other two dies idle. It doesn't appear that the fabric is powering down even while those remote dies are powered down. Any means of watching cores enter/exit sleep states in real time?

    It'd also be fun to see, with Windows Server, what happens when all the cores on a die are unplugged from the system. Considering that AMD puts the home agent on the memory controller of each die, even without cores or memory attached, chances are that the home agent is still alive and consuming power. It'd be interesting to see on Skylake-SP as well whether the home agents on the grid eventually power themselves down when there is nothing directly connected to them. It'd be worth comparing to the power consumption when a core is disabled in BIOS/EFI.

    I also feel that this would be a good introduction for what is coming down the road with server chips and may reach the high-end consumer products: chiplets. This would permit the removal of the off-die Infinity Fabric links for something that is effectively on-die throughout the cluster of dies. That alone will save AMD several watts. The other thing about chiplets is that it would greatly simplify Threadripper: only two memory controller chiplets would need to be in the package vs. four as we have now. That should save AMD lots of power. (And for those reading this comment, yes, Intel has chiplet plans as well.) The other thing AMD could do is address how their cache coherency protocols work. AMD has hinted at some caching changes for Zen 2 but lacks specifics.
  • gagegfg - Monday, August 13, 2018 - link

    The 16 additional cores of the 2990WX do not seem to make much of a difference compared to the 2950X.
  • Ian Cutress - Monday, August 13, 2018 - link

    https://www.anandtech.com/bench/product/2133?vs=21...
  • Chaitanya - Monday, August 13, 2018 - link

    Built for scientific workloads.
  • woozle341 - Monday, August 13, 2018 - link

    Do you think the lack of AVX-512 is an issue? I might build a workstation soon for data processing with R and Python for some Fortran models and post-processing. Skylake-X looks pretty good with its quad memory channels despite its high price.
