Conclusion

There are many attributes by which a processor can be judged. Absolute performance is the one that grabs the headlines – either by way of single-core performance or the performance of the chip as a whole, most often measured on an arbitrary benchmark. Aside from raw by-the-numbers performance, we also look at popular workloads where the CPU plays a critical role. Gaming, on the other hand, is an area where the CPU is still important but often secondary, so we end up analyzing the relationship between the CPU and the GPU. Beyond performance, we have cost to consider, and whether a product is priced appropriately for the market it aims at.

For performance, we saw perhaps what was expected: AMD's Ryzen 9 3950X knocks down walls that only a couple of years ago seemed impenetrable. Compared to its immediate rivals, the Ryzen 9 3950X smashes through several of the tests published here, such as Photoscan, Blender, Handbrake, and 7-Zip, while Cinebench R20 and SPEC in our benchmark database also show some strong numbers.

When set against Intel's high-end Skylake Refresh desktop platform (comparing equal core counts, rather than price), however, it can't keep up when AVX-512 is in play or when memory bandwidth is the limiting factor. At some level, having this many cores requires extra memory bandwidth, and dual channel isn't going to cut it. This is one of the downsides of moving 'mainstream' up to a HEDT price point, even if it does come with more cores.

There are some other minor points to note. If we compare single-threaded performance, despite AMD's Zen 2 having a general IPC advantage, the Core i9-9900KS still runs at 5.0 GHz for sustained single-threaded work, a clock 7-15% higher than the Ryzen 9 3950X's, and as a result it pulls ahead in a number of ST tests as well as in low-resolution (CPU-bound) gaming. At higher resolutions, most of the CPUs in our tests perform within a whisker of one another.
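As a rough sanity check on that gap, single-threaded performance scales approximately with IPC multiplied by frequency. Below is a minimal back-of-the-envelope sketch in Python; the ~6% Zen 2 IPC advantage is an illustrative assumption (the real figure varies by workload), while the clocks are the sustained figures discussed in this review:

    # Back-of-the-envelope: ST performance ~ IPC x frequency.
    # The IPC figure is an illustrative assumption, not a measurement.
    f_9900ks = 5.0             # GHz, sustained single-core turbo
    f_3950x = 4.45             # GHz, sustained single-core in our testing
    zen2_ipc_advantage = 1.06  # assumed ~6% IPC edge for Zen 2
    relative = (f_9900ks / f_3950x) / zen2_ipc_advantage
    print(f"9900KS ~{(relative - 1) * 100:.0f}% ahead in clock-bound ST work")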

We did test the Ryzen 9 3950X with both the normal Windows 'High Performance' (HP) power plan and the 'Ryzen High Performance' (RHP) power plan that comes with the chipset drivers. Overall, the RHP showed higher frequencies and 2-3% more performance than the HP, for about +4W on peak power consumption. We're now in a quandary as to whether to continue with the RHP: it's technically not out-of-the-box performance, but we have to install chipset drivers with every test anyway, and the RHP is part of the package. We recommend users enable the RHP either way, though it would be helpful if Microsoft recognized a Ryzen 3000 series processor and automatically offered or enabled the RHP in the power settings.
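Power plans can also be switched from the command line. The sketch below shells out to Windows' powercfg utility to find and activate the chipset-driver plan; the assumption that the plan's name contains 'Ryzen' should be checked against the output of powercfg /list on your own system:

    import re
    import subprocess

    # Minimal sketch: find and activate the 'Ryzen High Performance'
    # power plan installed by AMD's chipset drivers. The name match is
    # an assumption; verify with `powercfg /list` first.
    plans = subprocess.run(["powercfg", "/list"],
                           capture_output=True, text=True).stdout
    for line in plans.splitlines():
        match = re.search(r"GUID:\s*([0-9a-f-]+)\s+\((.*Ryzen.*)\)",
                          line, re.IGNORECASE)
        if match:
            subprocess.run(["powercfg", "/setactive", match.group(1)])
            print(f"Activated power plan: {match.group(2)}")
            break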

Another key point in our testing was power. Using our power consumption test, which reads the processor's internal power metrics, we saw a considerable inflection in overall CPU power consumption when more than 10 cores were loaded. At this level, the two chiplets are over half occupied, so the chip starts to balance power across cores and reduce frequency. Normally we would expect power to equalize at that peak level across all the cores; instead, something else is happening on the chip that results in a lower overall power level. Current density on the Zen 2 cores and chiplets may be a factor here: we are only talking about moving from 10W per core to 12W per core, which isn't a considerable jump, but it might require just enough of a voltage jump to become a concern at some level. We technically saw this on the Ryzen 9 3900X as well, with the 10-core power consumption being a few watts higher than the 12-core result, but as that chip only has 12 cores, it wasn't much of a concern.
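To put that per-core jump in perspective, dynamic power scales roughly with the square of voltage times frequency, so a 20% power increase at a fixed clock implies roughly a 10% voltage increase. A minimal sketch of the arithmetic, holding frequency and switching capacitance fixed purely for illustration:

    # Rough arithmetic: dynamic power P ~ C * V^2 * f. Holding C and f
    # fixed (an illustrative simplification), the voltage implied by
    # moving from 10 W to 12 W per core is:
    p_low, p_high = 10.0, 12.0         # W per core, figures from above
    v_ratio = (p_high / p_low) ** 0.5  # V2/V1 at equal frequency
    print(f"~{(v_ratio - 1) * 100:.0f}% higher core voltage")  # ~10%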

The Ryzen 3000 family also marks a turning point regarding AMD's use of official TDP ratings. We're seeing power consumption values higher than the TDP on the box, equal instead to AMD's PPT rating. With Ryzen 1000 and Ryzen 2000 there was good parity between TDP and power consumption, but that is no longer the case. Note that this isn't as bad as the difference between Intel's TDP-on-the-box and the actual power consumption of its turbo modes, and AMD likely sees this market-wide exploitation of the gap between TDP and turbo as a potential avenue for performance. Ideally we should be getting several numbers on the box to explain this (both TDP and PPT), but as of yet I've not seen any desire to do so. As we move into 280W 3rd Gen Threadripper processors, we'll be inspecting those to see if anything changes there as well.
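For the 3950X specifically, the arithmetic works out as below, assuming the commonly cited 1.35x ratio between PPT and TDP for standard socket AM4 parts:

    # Socket AM4 power limits: PPT is commonly documented as 1.35x TDP
    # for standard parts; 105 W is the 3950X's boxed TDP.
    tdp_w = 105
    ppt_w = round(tdp_w * 1.35)  # 142 W package power tracking limit
    print(f"TDP {tdp_w} W -> PPT {ppt_w} W (+{ppt_w - tdp_w} W headroom)")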

The High-Level Holistic View: Ryzen Wins

To distill all of our data into a more meaningful conclusion, we took the geometric mean of a cross-section of our benchmarks, covering a range of ST and MT tests, and plotted a number of chips as a function of price.

A geometric mean is different from a standard arithmetic average. For example, if two benchmarks score 100 and 500 on one chip but 250 and 350 on the other, both chips have the same arithmetic average of 300. Looking at the data, however, the second chip is 2.5x faster in the first test while only 30% slower in the second, so it is arguably the better chip overall: the arithmetic average treats the two as equal, but the geometric mean recognizes the 2.5x gain versus the 30% drop and adjusts the result accordingly.
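A minimal sketch of that arithmetic, using the hypothetical scores above:

    import math

    def arithmetic_mean(scores):
        return sum(scores) / len(scores)

    def geometric_mean(scores):
        # nth root of the product of n scores
        return math.prod(scores) ** (1 / len(scores))

    chip_a = [100, 500]  # lopsided: one weak result, one huge result
    chip_b = [250, 350]  # balanced results
    print(arithmetic_mean(chip_a), arithmetic_mean(chip_b))  # 300.0 vs 300.0
    print(geometric_mean(chip_a), geometric_mean(chip_b))    # ~223.6 vs ~295.8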

We get this graph:

In terms of absolute performance across our benchmark range, the Ryzen 9 3950X has the lead. This metric also puts the 3900X above the 9900KS: despite the latter's 5.0 GHz all-core turbo on 8 cores, moving to 12 and 16 cores at almost the same performance per core gives more of an advantage in our test suite's MT-heavy workloads. As for the more expensive HEDT chips, Intel's 16-core and 18-core parts, even strong AVX-512 performance is not enough to offset deficits elsewhere.

For users who want the best without moving to the high-end desktop platforms, the Ryzen 9 3950X has the best overall score out of all the chips we've ever tested.

If you thought the 3900X didn't have enough power, the 3950X is your answer.

Looking forward, we're going to have an interesting time with the upcoming HEDT chips from both companies, as users who need HEDT will want something more than what AM4 has to offer. AMD is set to launch its 3rd Generation Threadripper platform on the 25th, and Intel's Cascade Lake-X CPUs are also due out later this month. Stay tuned for those reviews.

Comments

  • Drazick - Sunday, November 17, 2019 - link

    The DDR Technology is orthogonal.
    I want Quad and the latest memory available.
  • guyr - Friday, December 20, 2019 - link

    Anything is possible, of course. 5 years ago, who would have predicted 16 cores in a consumer-oriented CPU? However, neither Intel nor AMD has made any moves beyond 2 memory channels in the consumer space. The demand is simply not there to justify the increase in complexity and price. In the professional space, more channels are easily justified and the target market doesn't hesitate to pay the higher prices. So, it's all driven by what the market will bear.
  • alufan - Saturday, November 16, 2019 - link

    Weird: intel launches its chip a couple of weeks ago and it stayed upfront as the main story for over a week. AMD launches what is in effect the best CPU ever tested by this site and it lasts a few days before being pushed aside for another intel article. I'm sure the intention of the reporters is to be fair and unbiased, however I can see how the commercial motives of the site are being manipulated; looks like intel's up to its old tricks again. The Threadripper article lasted even less time, but no chips have been tested (or at least released) yet, which I guess makes sense.
  • penev91 - Sunday, November 17, 2019 - link

    Just ignore everything Intel/AMD related on Anandtech. There's been an obvious bias for years.
  • Atom2 - Saturday, November 16, 2019 - link

    There has never been a situation as big as this one, where the bench software was benchmarked more than the hardware. Comprehensive overview of historic software development? Whatever the reason, it seems that keeping AVX512 back for only a select few CPUs was an unfortunate decision by Intel, which only contributed to the situation. Yes, you know, if you compile your code with a compiler from 1998 and ignore all the guidelines on how to write fast code... voila. For some reason, nobody tries to run 20-year-old CPU code on a GPU though.
  • chrkv - Monday, November 18, 2019 - link

    Second page "On the Ryzen High Performance power plan, our sustained single core frequency dropped to 4450 MHz" - I believe just "the High Performance" should be here.
    Page 4 "Despite 5.0 GHz all-core turbo being on the 9900K" - should be "9900KS".
  • Irata - Tuesday, November 19, 2019 - link

    Quick question: are any of your benchmarks affected by the MATLAB issue (Ryzen chips are crippled because a poor code path is chosen due to a vendor ID check for "GenuineIntel")?
  • twotwotwo - Tuesday, November 19, 2019 - link

    Intel's had these consumer-platform-based "entry-level Xeons" (once E3, now E) for a while. Despite some obvious limits, and that there are other low-end server options, enough folks want 'em to seed an ecosystem of rackmount and blade servers from Supermicro, Dell, etc.

    Anyway, the "pro" (ECC/management enabled) variant of Ryzen seems like a great fit for that. 16 cores and 24 PCIe 4 lanes are probably more useful for little servers than for most desktop users. It's also more balanced than the 8/16C EPYCs; it's cool they have 128 lanes and tons of memory channels, but it takes very specific applications to use them all with that few cores (caching?). Ideally the lesser I/O and lower TDPs also help make denser/cheaper boxes, and the consumer-ish clocks pay off for some things.

    The biggest argument against is that the entry-level server market is probably shrinking anyway as users rent tiny slices of huge boxes from cloud providers instead. It also probably doesn't have the best margins. So maybe you could release a competitive product there and still not make all that much off it.
  • halfflat - Thursday, November 21, 2019 - link

    Very curious about the AVX512 vs AVX2 results for 3dPM. It's really unusual to see even a 2x performance increase going from AVX2 to AVX512 on the same architecture, given that running AVX512 instructions will lower the clock.

    The non-AVX versions, I'm presuming, are utilizing SSE2.

    The i9-9900K gets a factor of 2 increase going from SSE2 to AVX2, which is pretty much what one would expect with twice as many fp operations per instruction. But the i9-7960X performance with AVX512 is *ten times* improved over SSE2, when the vector is only four times as wide and the cores will be running at a lower clock speed.

    Is there some particular AVX512-only operation that is determining this huge performance gap? Some further analysis of these results would be very interesting.
  • AIV - Wednesday, November 27, 2019 - link

    Somebody posted that it's caused by 64-bit integer multiplies, which are supported in AVX512 but not in AVX2, and thus fall back to scalar operations.
