Conclusion

There are many attributes by which a processor can be judged. Absolute performance is the one that grabs the headlines, whether through single-core performance or the performance of the chip as a whole, most often measured against an arbitrary benchmark. Aside from raw by-the-numbers performance, we also look at popular workloads where the CPU plays a critical role. In gaming, the CPU is still important but often secondary, so we end up analyzing the relationship between the CPU and the GPU. Beyond performance, there is cost to consider, and whether a product is priced appropriately for the market it aims at.

For performance, what we expected is largely what we observed: AMD's Ryzen 9 3950X knocks down walls that only a couple of years ago seemed impenetrable. Compared against its immediate rivals, the Ryzen 9 3950X smashes through several of the tests published here, such as Photoscan, Blender, Handbrake, and 7-Zip, while CineBench R20 and SPEC in our benchmark database also show some strong numbers.

When it goes up against Intel's high-end Skylake Refresh desktop platform (comparing equal core counts, rather than price), it can't compete when AVX-512 is in play or when memory bandwidth is the limiting factor. At some level, having this many cores requires extra memory bandwidth, and dual channel isn't going to cut it. This is one of the downsides of moving 'mainstream' up to a HEDT price point, even if it does come with more cores.

There are some other minor points to note. Comparing single-threaded performance, despite AMD's Zen 2 having a general IPC advantage, the Core i9-9900KS still runs at 5.0 GHz for sustained single-threaded work, which is 7-15% higher than the Ryzen 9 3950X. As a result, it pulls ahead in a number of ST tests as well as in low-resolution (CPU-bound) gaming. At higher gaming resolutions, most of the CPUs in our test perform within a fraction of each other.

We tested the Ryzen 9 3950X with both the normal Windows 'High Performance' (HP) power plan and the 'Ryzen High Performance' (RHP) power plan that comes with the chipset drivers. Overall, the RHP showed higher frequencies and 2-3% more performance than the HP, for about +4 W of peak power consumption. This leaves us in a quandary as to whether to continue with the RHP: it's technically not out-of-the-box performance, but we have to install chipset drivers with every test anyway, and the RHP is part of that package. We recommend users enable the RHP either way, though it would be helpful if Microsoft recognized a Ryzen 3000 series processor and automatically offered or enabled the RHP in the power settings.

Another key point in our testing was power. Using our power consumption test, which reads the processor's internal power metrics, we saw a considerable inflection in overall CPU power consumption when more than 10 cores were loaded. At that point, the two chiplets are each over half occupied, and the chip starts balancing power across cores and reducing frequency. Normally we would expect power to equalize at the peak level across all cores, but something else is happening on this chip that settles it at a lower overall power level. Current density on the Zen 2 cores and chiplets may be becoming a factor: we are only talking about moving from 10 W per core to 12 W per core, which isn't a considerable jump, but the accompanying voltage increase might be enough to become a concern at some level. We technically saw this on the Ryzen 9 3900X as well, with its 10-core power consumption being a few watts higher than its 12-core result, but as that chip only has 12 cores, it wasn't much of a concern.

The Ryzen 3000 family also marks a turning point in AMD's use of official TDP ratings. We're seeing power consumption values higher than the TDP on the box, instead matching AMD's PPT rating. With Ryzen 1000 and Ryzen 2000 there was good parity between TDP and power consumption, but that is no longer the case. To be clear, this isn't as bad as the gap between Intel's TDP-on-the-box and the actual power consumption of its turbo modes, and AMD likely sees this market exploitation of TDP vs. turbo as a potential avenue for performance. Ideally we should be getting several numbers on the box explaining this (both TDP and PPT), but as of yet I've seen no desire to do so. As we move into 280 W 3rd Gen Threadripper processors, we'll be inspecting those to see if anything changes there as well.
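For reference, the relationship AMD has described for socket AM4 is that PPT (Package Power Tracking, the actual socket power ceiling) is roughly 1.35x the advertised TDP. A minimal sketch of that arithmetic, using the multiplier and TDP ratings from AMD's public specifications (the helper name is our own):

```python
# AMD's stated socket AM4 relationship: PPT is ~1.35x the advertised TDP.
# The multiplier and SKU TDPs below come from AMD's public specs;
# this is an illustration, not measured data from this review.
AM4_PPT_RATIO = 1.35

def ppt_from_tdp(tdp_watts: float) -> float:
    """Return the approximate PPT limit for a given AM4 TDP rating."""
    return tdp_watts * AM4_PPT_RATIO

for name, tdp in [("Ryzen 9 3950X", 105), ("Ryzen 7 3700X", 65)]:
    print(f"{name}: TDP {tdp} W -> PPT ~{ppt_from_tdp(tdp):.0f} W")
```

This is why a 105 W 3950X can legitimately draw around 142 W at the socket without exceeding its limits, and why a single TDP number on the box undersells what the cooler actually has to handle.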

The High-Level Holistic View: Ryzen Wins

To distill all of our data into a more meaningful conclusion, we took the geometric mean of a cross-section of our benchmarks, covering a range of ST and MT tests, and plotted a number of chips as a function of price.

A geometric mean is different from a standard (arithmetic) average. For example, if two benchmarks score 100 and 500 on one chip, but 250 and 350 on another, both chips have the same standard average of 300. Looking at the data, though, the second chip is 2.5x higher in the first test and only 30% lower in the second, so it might well be the better chip overall. Where a standard average rates the two chips as equal, the geometric mean recognizes that 2.5x gain against the 30% drop and adjusts the value accordingly.
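The example above can be checked in a few lines. This is a minimal sketch using the same illustrative scores (100/500 vs. 250/350); the function names are our own:

```python
from math import prod

def arithmetic_mean(scores):
    return sum(scores) / len(scores)

def geometric_mean(scores):
    # nth root of the product of n scores
    return prod(scores) ** (1 / len(scores))

chip_a = [100, 500]  # wildly uneven results
chip_b = [250, 350]  # 2.5x higher in test 1, 30% lower in test 2

# Both chips tie on the arithmetic mean (300 each)...
print(arithmetic_mean(chip_a), arithmetic_mean(chip_b))
# ...but the geometric mean rewards chip B's balance:
print(geometric_mean(chip_a))  # ~223.6
print(geometric_mean(chip_b))  # ~295.8
```

Because the geometric mean multiplies rather than adds, one outlier score can't dominate the result, which is why it's the standard way to aggregate benchmark suites.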

We get this graph:

In terms of absolute performance across our benchmark range, the Ryzen 9 3950X has the lead. This metric also puts the 3900X above the 9900KS: despite the latter's 5.0 GHz all-core speed on eight cores, moving to 12 and 16 cores at almost the same per-core performance gives more of an advantage in our test suite's MT-heavy workloads. Moving up to the more expensive HEDT chips, the 16-core and 18-core parts from Intel, even strong AVX-512 performance isn't enough to offset the deficits elsewhere.

For users who want the best without going to the high-end desktop, the Ryzen 9 3950X has the best overall score out of all the chips we’ve ever tested.

If you thought the 3900X didn't have enough power, the 3950X is your answer.

Looking forward, we’re going to have an interesting time with the upcoming HEDT chips from both companies, as users who need HEDT will need something more than what AM4 has to offer. AMD is set to launch its 3rd Generation Threadripper platform on the 25th, and Intel's Cascade Lake-X CPUs are also due out later this month. Stay tuned for those reviews.


206 Comments


  • zmatt - Thursday, November 14, 2019 - link

    Maybe they should stop using such an old and buggy game as a benchmark then, since it's trivial to hit more than acceptable frames with modern hardware.
  • imaheadcase - Thursday, November 14, 2019 - link

    Um, they are using 1080p, it's not GPU bound by any stretch of the imagination on the games they tested. lol
  • Spunjji - Thursday, November 14, 2019 - link

    Boost frequency isn't guaranteed, so it's not false advertising any more than Intel's TDP.

    If the games you're playing are a few years old and your GPU is sufficient to make the CPU the limit, you'll likely see no real-world benefit from anything above the Ryzen 3600X / Core i5 9600. Discussion of high-end CPUs rapidly becomes moot, even for that niche-within-a-niche.
  • shaolin81 - Thursday, November 14, 2019 - link

    Problem isn't the chip, but the fact they tested it with the High Performance profile, which nearly never parks idle cores, and therefore there's nearly no chance to hit max Turbo on a single core. If they tried the Balanced profile, they would reach 4.7 or 4.75 more easily.
    I'm wondering how Anandtech doesn't know this.
  • GlossGhost - Tuesday, November 19, 2019 - link

    Quite a lot of people don't know this. Maybe we, who know, are stupid instead?
  • Targon - Thursday, November 14, 2019 - link

    One thing to consider is motherboard and the VRMs on the motherboard, power supply, and even RAM. There was an issue causing slightly lower CPU performance prior to AGESA 1.0.0.3ABBA, and 1.0.0.4 is supposed to help a bit as well.

    You also don't take into account that the IPC of the new Ryzen chips is actually a bit better than what you see with the i9-9900k and ks. It may not be enough to offset the clock speed difference, but it does come into play.

    Now, the other thing that will come into play is the security problems that keep showing up. Security mitigations are put into place, and Intel chips get another slowdown. In another year or two, Ryzen third generation vs. the 9900ks might actually be completely even with zero advantage to Intel. You may not be worried about security, but if an OS update puts in the security patches, your performance WILL drop.

    We only have another 7 months until the next generation Ryzen chips come out with another 8 percent IPC improvement. Will Intel still be on 14nm, with a 10900k as its high end and no IPC improvements?
  • zmatt - Thursday, November 14, 2019 - link

    What's the highest refresh rate you can get on gaming monitors today? 144hz right? When both Intel and AMD's best are exceeding 144fps in these benchmarks is there a real world difference between the two? I don't think so. You can only be as fast as your refresh rate ultimately.
  • brantron - Thursday, November 14, 2019 - link

    There are 240 Hz monitors with FreeSync and/or G-Sync, so you could (hopefully) run games somewhere between there and 144 Hz without stuttering.

    I've only seen comparisons to 60 Hz, though, and that's really going to depend on the game.
  • Winnetou - Thursday, November 14, 2019 - link

    The highest refresh rate is 300Hz now, though it's only one monitor so far. There are multiple 240Hz ones though.

    Even so, your point is actually worthless here. Having a really high framerate does help with responsiveness and input lag. That's why top CS players still play at 720p low, at 600 FPS. For casuals like us it may be worthless, but for them it isn't. And maybe even for us it won't be, just too small of a difference to be quantifiable.
  • RSAUser - Friday, November 15, 2019 - link

    The frame rate that high is to avoid lows.
