Conclusions: Not All Cores Are Made Equal

Designing a processor is often a finely tuned craft. To get performance, the architect needs to balance compute with memory throughput, and at all times have sufficient data in place to feed the beast. If the beast is left idle, it sits there consuming power while not doing any work. Getting the right combination of resources is a complex task, and the reason why top CPU companies hire thousands of engineers to get it right. As long as the top of the design is in place, the rest should follow.

Sometimes, more esoteric products fall out of the stack. The new generation of AMD Ryzen Threadripper processors is just that – a little esoteric. The direct replacements for the previous generation parts, replacing like for like but with better latency and higher frequency, are a known quantity at this point, and we get the expected uplift. It is just that the extra enabled silicon in the 2990WX, without direct access to memory, throws a spanner in the works.

2950X (left) and 2990WX (right)

When all of the cores are directly connected to memory, as in the 2950X, the cores are considered equal enough that distributing a workload is a fairly easy task. With the new processors, we have the situation on the right, where only some cores are directly attached to memory and others are not. For one of those cores to reach main memory requires an extra hop, which adds latency, and when all the cores are requesting access at once, this causes congestion.
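As a rough illustration of why that extra hop hurts, here is a toy model of average memory latency across the cores. The latency figures are illustrative assumptions, not AMD's numbers:

```python
# Toy model of average memory latency on a bi-modal core layout.
# local_ns and hop_ns are assumed illustrative values, not measurements.

def average_latency(total_cores, direct_cores, local_ns=80.0, hop_ns=60.0):
    """Average access latency when direct_cores reach DRAM locally and
    the rest pay an extra fabric hop (hop_ns) on every access."""
    remote_cores = total_cores - direct_cores
    total = direct_cores * local_ns + remote_cores * (local_ns + hop_ns)
    return total / total_cores

# 2950X-style: all 16 cores directly attached to memory.
print(average_latency(16, 16))   # 80.0 ns

# 2990WX-style: only 16 of 32 cores have direct memory access.
print(average_latency(32, 16))   # 110.0 ns
```

Even this simple averaging shows the bi-modal layout raising mean latency before any congestion effects are counted.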

To take full advantage of this setup, the workload has to be memory light. In workloads such as particle movement, ray tracing, scene rendering, and decompression, letting all 32 cores off the leash means we set new records in these benchmarks.

In true Janus style, for other workloads that historically scale with cores, such as physics, transcoding, and compression, the bi-modal core design caused significant performance regressions. Ultimately, there seems to be almost no middle ground here – either the workload scales well, or it sits towards the back of our high-end testing pack.

Part of the problem relates to how power is distributed in these big core designs. As shown on page four, the more chiplets in play, or the bigger the mesh, the more power gets diverted from the cores to the internal networking, such as the uncore or Infinity Fabric. Comparing the one IF link in the 2950X to the six links in the 2990WX, we saw the IF consuming 60-73% of the total chip power at small workloads, and 25-40% at high loads.

In essence, at full load, a chip like the 2990WX is only using 60% of its power budget for CPU frequency. In our EPYC 7601, because of the additional memory links, the cores were only consuming 50% of the power budget at load. Rest assured, once AMD and Intel have finished fighting over cores, the next target on their list will be this interconnect.
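Putting rough numbers on that split helps. The percentages follow the article's figures; the package wattages below are assumptions purely for illustration:

```python
# Back-of-the-envelope split of package power between cores and the
# interconnect (Infinity Fabric). Package wattages are assumed figures.

def core_power_budget(package_w, fabric_fraction):
    """Watts left for the cores once the fabric takes its share."""
    return package_w * (1.0 - fabric_fraction)

# 2990WX-style at full load: fabric takes ~40%, leaving the cores ~60%.
print(core_power_budget(250.0, 0.40))  # 150.0 W for the cores

# EPYC 7601-style: extra memory links push the fabric share higher,
# leaving the cores only ~50% of the budget.
print(core_power_budget(180.0, 0.50))  # 90.0 W for the cores
```

The design choice is stark: every watt the fabric burns is a watt unavailable for core frequency, which is why the interconnect is the obvious next optimisation target.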

But the knock-on effect of not using all the power for the cores, as well as having a bi-modal arrangement of cores, is that some workloads will not scale, or in some cases will regress.

The Big Cheese: AMD’s 32-Core Behemoth

There is no doubting that when the AMD Ryzen Threadripper 2990WX gets a chance to stretch its legs, it will do so with gusto. We were able to overclock the system to 4.0 GHz on all cores simply by changing the BIOS settings, although AMD also supports features like Precision Boost Overdrive in Windows to get more out of the chip. That being said, power consumption with half of the cores at 4.0 GHz pushes up to 260W, leaving a fully loaded CPU nudging 450-500W and spiking at over 600W. Users will need to make sure that their motherboard and power supply are up to the task.

This is the point where I mention whether we would recommend AMD's new launches. The 2950X slots right into where the 1950X used to be, and at a lower price point, and we are very comfortable with that. However, the 2950X already sits as a niche proposition for high performance – the 2990WX takes that ball and runs with it, making it a niche of a niche. To be honest, it doesn't offer enough cases where performance excels as one would expect – it makes perfect sense for a narrow set of workloads where it toasts the competition. It even outperforms almost all of the other processors in our compile test. However, there is one processor that did beat it: the 2950X.

For most users, the 2950X is enough. For the select few, the 2990WX will be out of this world.

171 Comments

  • plonk420 - Tuesday, August 14, 2018 - link

    worse for efficiency?

    https://techreport.com/r.x/2018_08_13_AMD_s_Ryzen_...
  • Railgun - Monday, August 13, 2018 - link

    How can you tell? The article isn’t even finished.
  • mapesdhs - Monday, August 13, 2018 - link

    People will argue a lot here about performance per watt and suchlike, but in the real world the cost of the software and the annual license renewal is often far more than the base hw cost, resulting in a long term TCO that dwarfs any differences in some CPU cost. I'm referring here to the kind of user that would find the 32c option relevant.

    Also missing from the article is the notion of being able to run multiple medium scale tasks on the same system, eg. 3 or 4 tasks each of which is using 8 to 10 cores. This is quite common practice. An article can only test so much though, at this level of hw the number of different parameters to consider can be very large.

    Most people on tech forums of this kind will default to tasks like 3D rendering and video conversion when thinking about compute loads that can use a lot of cores, but those are very different to QCD, FEA and dozens of other tasks in research and data crunching. Some will match the arch AMD is using, others won't; some could be tweaked to run better, others will be fine with 6 to 10 cores and just run 4 instances testing different things. It varies.

    Talking to an admin at COSMOS years ago, I was told that even coders with seemingly unlimited cores to play with found it quite hard to scale relevant code beyond about 512 cores, so instead for the sort of work they were doing, the centre would run multiple simulations at the same time, which on the hw platform in question worked very nicely indeed (1856 cores of the SandyBridge-EP era, 14.5TB of globally shared memory, used primarily for research in cosmology, astrophysics and particle physics; squish it all into a laptop and I'm sure Sheldon would be happy. :D) That was back in 2012, but the same concepts apply today.

    For TR2, the tricky part is getting the OS to play nice, along with the BIOS, and optimised sw. It'll be interesting to see how 2990WX performance evolves over time as BIOS updates come out and AMD gets feedback on how best to exploit the design, new optimisations from sw vendors (activate TR2 mode!) and so on.

    SGI dealt with a lot of these same issues when evolving its Origin design 20 years ago. For some tasks it absolutely obliterated the competition (eg. weather modelling and QCD), while for others in an unoptimised state it was terrible (animation rendering, not something that needs shared memory, but ILM wrote custom sw to reuse bits of a frame already calculated for future frame, the data able to fly between CPUs very fast, increasing throughput by 80% and making the 32-CPU systems very competitive, but in the long run it was easier to brute force on x86 and save the coder salary costs).

    There are so many different tasks in the professional space, the variety is vast. It's too easy to think cores are all that matter, but sometimes having oodles of RAM is more important, or massive I/O (defense imaging, medical and GIS are good examples).

    I'm just delighted to see this kind of tech finally filter down to the prosumer/consumer, but alas much of the nuance will be lost, and sadly some will undoubtedly buy based on the marketing, as opposed to the golden rule of any tech at this level: ignore the published benchmarks, the only test that actually matters is your specific intended task and data, so try and test it with that before making a purchasing decision.

    Ian.
  • AbRASiON - Monday, August 13, 2018 - link

    Really? I can't tell if posts like these are facetious or kidding or what?

    I want AMD to compete so badly long term for all of us, but Intel have such immense resources, such huge infrastructure, they have ties to so many big business for high end server solutions. They have the bottom end of the low power market sealed up.

    Even if their 10nm is delayed another 3 years, AMD will only just begin to start to really make a genuine long term dent in Intel.

    I'd love to see us at a 50/50 situation here, heck I'd be happy with a 25/75 situation. As it stands, Intel isn't finished, not even close.
  • imaheadcase - Monday, August 13, 2018 - link

    Are you looking at same benchmarks as everyone else? I mean AMD ass was handed to it in Encoding tests and even went neck to neck against some 6c intel products. If AMD got one of these out every 6 months with better improvements sure, but they never do.
  • imaheadcase - Monday, August 13, 2018 - link

    Especially when you consider they are using double the core count to get the numbers they do have, its not very efficient way to get better performance.
  • crotach - Tuesday, August 14, 2018 - link

    It's happened before. AMD trashes Intel. Intel takes it on the chin. AMD leads for 1-2 years and celebrates. Then Intel releases a new platform and AMD plays catch-up for 10 years and tries hard not to go bankrupt.

    I dearly hope they've learned a lesson the last time, but I have my doubts. I will support them and my next machine will be AMD, which makes perfect sense, but I won't be investing heavily in the platform, so no X399 for me.
  • boozed - Tuesday, August 14, 2018 - link

    We're talking about CPUs that cost more than most complete PCs. Willy-waving aside, they are irrelevant to the market.
  • Ian Cutress - Monday, August 13, 2018 - link

    Hey everyone, sorry for leaving a few pages blank right now. Jet lag hit me hard over the weekend from Flash Memory Summit. Will be filling in the blanks and the analysis throughout today.

    But here's what there is to look forward to:

    - Our new test suite
    - Analysis of Overclocking Results at 4G
    - Direct Comparison to EPYC
    - Me being an idiot and leaving the plastic cover on my cooler, but it completed a set of benchmarks. I pick through the data to see if it was as bad as I expected

    The benchmark data should now be in Bench, under the CPU 2019 section, as our new suite will go into next year as well.

    Thoughts and commentary welcome!
  • Tamz_msc - Monday, August 13, 2018 - link

    Are the numbers for the LuxMark C++ test correct? Seems they've been swapped (2990WX and 2950X).
