Overclocking: 4.0 GHz for 500W

Who said that a 250W processor should not be overclocked? AMD prides itself on being a processor manufacturer that offers every consumer processor as a multiplier-unlocked part, as well as using a soldered thermal interface material to assist with thermal dissipation. This 2990WX has an X in the name, so let the overclocking begin!

Actually, confession time: we did not have much time for overclocking by any stretch. This processor has a 3.0 GHz base frequency and a 4.2 GHz turbo frequency, and in an air-conditioned room using the 500W Enermax Liqtech cooler, when running all cores under POV-Ray, we observed each core running around 3150 MHz, which is barely above the base frequency. The first thing I did was set the all-core turbo to 4.2 GHz, the same as the single-core turbo frequency. That was a bust.

However, the next stage of my overclocking escapades surprised me. I set the CPU to a 40x multiplier in the BIOS, for 4.0 GHz on all the cores, all the time. I did not adjust the voltage; it was kept on auto, leaving the ASUS motherboard to figure it out. Lo and behold, it performed flawlessly through our testing suite at 4.0 GHz. I was shocked.

All I did for this overclock was turn a setting from ‘auto’ to ‘40’, and it breezed through almost every test I threw at it. I say almost every test – our Prime95 power testing failed. But our POV-Ray power testing, which draws more power on this chip, worked, and every benchmark in the suite worked. Thermals were high (in the 70s °C), but the cooler could take it, and with good reason too.

At full load in our POV-Ray test, the processor was listed as consuming 500W; at one point we saw 511W. The cooler is rated for 500W. This was split between 440W for the cores (or 13.8W per core) and 63W for the non-core (IF, IO, IMC), which equates to only about 12.5% of the total power consumption. It answers the question from our Infinity Fabric power page: if you want the interconnect to be less of the overall power draw, overclock!
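The power split above can be sanity-checked with some quick arithmetic. A minimal sketch (the 440W and 63W figures are those reported above; the per-core and percentage values are derived here for illustration):

```python
# Back-of-the-envelope check of the reported power split at 4.0 GHz.
CORE_POWER_W = 440.0     # total power attributed to the cores (reported)
NONCORE_POWER_W = 63.0   # IF, IO, IMC ("non-core") power (reported)
NUM_CORES = 32           # Threadripper 2990WX core count

per_core_w = CORE_POWER_W / NUM_CORES
package_w = CORE_POWER_W + NONCORE_POWER_W
noncore_share = NONCORE_POWER_W / package_w

print(f"Per-core power: {per_core_w:.2f} W")     # ~13.75 W, matching the ~13.8 W quoted
print(f"Package power:  {package_w:.0f} W")      # ~503 W, right at the cooler's 500 W rating
print(f"Non-core share: {noncore_share:.1%}")    # ~12.5% of the total draw
```

The sum of the two components (~503W) lines up with the ~500-511W package figures observed, which is a useful cross-check that the per-rail readings are plausible.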

We also tried 4.1 GHz, and that seemed to work as well, although we did not get a full benchmark run out of it before having to pack the system up. As stated above, 4.2 GHz was a no-go, even when increasing the voltage. With tweaking (and the right cooling), it could be possible. For anyone wanting to push here, chilled water might be the way to go.

Performance at 4.0 GHz

So if the stock all-core frequency was around 3150 MHz, an overclock to 4000 MHz all-core should give up to a ~27% performance increase, right? Here are some of the key tests from our suite.

[Benchmark graphs: AppTimer: GIMP 2.10.4; Blender 2.79b bmw27_cpu Benchmark; POV-Ray 3.7.1 Benchmark; WinRAR 5.60b3; PCMark10 Extended Score; Agisoft Photoscan 1.3.3, Complex Test]

Overclocking the 2990WX is a mixed bag: it does really well in some tests, but still sits behind the 2950X in others due to the bi-modal nature of the cores. In the tests where it already wins, it pushes out a lot more: Blender is up 19% in throughput, POV-Ray is up 19%, and 3DPM is up 19%. In the other tests, it either catches back up to the 2950X (Photoscan) or still lags behind (app loading, WinRAR).
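As a rough sketch, the gap between the ideal frequency-scaling gain and the ~19% gains seen in the best cases can be put in numbers (the 3150 MHz, 4000 MHz, and 19% figures come from the text; "scaling efficiency" is an illustrative derived metric, not something we formally measure):

```python
# How much of the ideal frequency uplift showed up as real throughput?
STOCK_MHZ = 3150.0   # observed all-core frequency under POV-Ray at stock
OC_MHZ = 4000.0      # 40x multiplier overclock

expected_gain = OC_MHZ / STOCK_MHZ - 1.0   # gain if performance scaled linearly with clocks
observed_gain = 0.19                       # uplift reported in Blender, POV-Ray, 3DPM

scaling_efficiency = observed_gain / expected_gain
print(f"Expected gain:      {expected_gain:.1%}")       # ~27.0%
print(f"Observed gain:      {observed_gain:.0%}")
print(f"Scaling efficiency: {scaling_efficiency:.0%}")  # ~70% of the ideal uplift
```

Getting roughly 70% of the ideal clock-for-clock gain in throughput-bound tests is respectable; the shortfall likely comes from memory and Infinity Fabric bandwidth not scaling with the core overclock.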

Overclocking is not the cure-all for the performance issues on the 2990WX, but it certainly does help.


  • plonk420 - Tuesday, August 14, 2018 - link

    worse for efficiency?

    https://techreport.com/r.x/2018_08_13_AMD_s_Ryzen_...
  • Railgun - Monday, August 13, 2018 - link

    How can you tell? The article isn’t even finished.
  • mapesdhs - Monday, August 13, 2018 - link

    People will argue a lot here about performance per watt and suchlike, but in the real world the cost of the software and the annual license renewal is often far more than the base hw cost, resulting in a long-term TCO that dwarfs any difference in CPU cost. I'm referring here to the kind of user that would find the 32c option relevant.

    Also missing from the article is the notion of being able to run multiple medium scale tasks on the same system, eg. 3 or 4 tasks each of which is using 8 to 10 cores. This is quite common practice. An article can only test so much though, at this level of hw the number of different parameters to consider can be very large.

    Most people on tech forums of this kind will default to tasks like 3D rendering and video conversion when thinking about compute loads that can use a lot of cores, but those are very different to QCD, FEA and dozens of other tasks in research and data crunching. Some will match the arch AMD is using, others won't; some could be tweaked to run better, others will be fine with 6 to 10 cores and just run 4 instances testing different things. It varies.

    Talking to an admin at COSMOS years ago, I was told that even coders with seemingly unlimited cores to play with found it quite hard to scale relevant code beyond about 512 cores, so instead for the sort of work they were doing, the centre would run multiple simulations at the same time, which on the hw platform in question worked very nicely indeed (1856 cores of the SandyBridge-EP era, 14.5TB of globally shared memory, used primarily for research in cosmology, astrophysics and particle physics; squish it all into a laptop and I'm sure Sheldon would be happy. :D) That was back in 2012, but the same concepts apply today.

    For TR2, the tricky part is getting the OS to play nice, along with the BIOS, and optimised sw. It'll be interesting to see how 2990WX performance evolves over time as BIOS updates come out and AMD gets feedback on how best to exploit the design, new optimisations from sw vendors (activate TR2 mode!) and so on.

    SGI dealt with a lot of these same issues when evolving its Origin design 20 years ago. For some tasks it absolutely obliterated the competition (eg. weather modelling and QCD), while for others in an unoptimised state it was terrible (animation rendering, not something that needs shared memory). ILM wrote custom sw to reuse bits of a frame already calculated for future frames, the data able to fly between CPUs very fast, increasing throughput by 80% and making the 32-CPU systems very competitive, but in the long run it was easier to brute force on x86 and save the coder salary costs.

    There are so many different tasks in the professional space, the variety is vast. It's too easy to think cores are all that matter, but sometimes having oodles of RAM is more important, or massive I/O (defense imaging, medical and GIS are good examples).

    I'm just delighted to see this kind of tech finally filter down to the prosumer/consumer, but alas much of the nuance will be lost, and sadly some will undoubtedly buy based on the marketing, as opposed to the golden rule of any tech at this level: ignore the published benchmarks, the only test that actually matters is your specific intended task and data, so try to test it with that before making a purchasing decision.

    Ian.
  • AbRASiON - Monday, August 13, 2018 - link

    Really? I can't tell if posts like these are facetious or kidding or what?

    I want AMD to compete so badly long term for all of us, but Intel have such immense resources, such huge infrastructure, they have ties to so many big business for high end server solutions. They have the bottom end of the low power market sealed up.

    Even if their 10nm is delayed another 3 years, AMD will only just begin to start to really make a genuine long term dent in Intel.

    I'd love to see us at a 50/50 situation here, heck I'd be happy with a 25/75 situation. As it stands, Intel isn't finished, not even close.
  • imaheadcase - Monday, August 13, 2018 - link

    Are you looking at the same benchmarks as everyone else? I mean, AMD's ass was handed to it in the encoding tests, and it even went neck and neck against some 6c Intel products. If AMD got one of these out every 6 months with better improvements, sure, but they never do.
  • imaheadcase - Monday, August 13, 2018 - link

    Especially when you consider they are using double the core count to get the numbers they do have; it's not a very efficient way to get better performance.
  • crotach - Tuesday, August 14, 2018 - link

    It's happened before. AMD trashes Intel. Intel takes it on the chin. AMD leads for 1-2 years and celebrates. Then Intel releases a new platform and AMD plays catch-up for 10 years and tries hard not to go bankrupt.

    I dearly hope they've learned a lesson the last time, but I have my doubts. I will support them and my next machine will be AMD, which makes perfect sense, but I won't be investing heavily in the platform, so no X399 for me.
  • boozed - Tuesday, August 14, 2018 - link

    We're talking about CPUs that cost more than most complete PCs. Willy-waving aside, they are irrelevant to the market.
  • Ian Cutress - Monday, August 13, 2018 - link

    Hey everyone, sorry for leaving a few pages blank right now. Jet lag hit me hard over the weekend from Flash Memory Summit. Will be filling in the blanks and the analysis throughout today.

    But here's what there is to look forward to:

    - Our new test suite
    - Analysis of Overclocking Results at 4G
    - Direct Comparison to EPYC
    - Me being an idiot and leaving the plastic cover on my cooler, but it completed a set of benchmarks. I pick through the data to see if it was as bad as I expected

    The benchmark data should now be in Bench, under the CPU 2019 section, as our new suite will go into next year as well.

    Thoughts and commentary welcome!
  • Tamz_msc - Monday, August 13, 2018 - link

    Are the numbers for the LuxMark C++ test correct? Seems they've been swapped (2990WX and 2950X).
