Power Consumption and Distribution

With Threadripper weighing in at a TDP of 180W, it comes across as a big jump from previous AMD platforms that hover at 45-95W, or Intel platforms that sit below 95W for mainstream and up to 140W for the high-end desktop. Let us not forget that AMD actually released a 220W TDP processor in the form of the FX-9590 running at 5 GHz, a part initially sold for 12+ months purely to OEMs and system integrators in order to ensure that users had sufficient cooling. Eventually it was released as a consumer product, bundled with a beefy double-width liquid cooler and a pair of fans. AMD sampled us a CPU, but not before I went and spent £300 on one myself and reviewed it.

Nonetheless, a 180W TDP is not a new concept for AMD. For this review I’ve been using the liquid cooler AMD shipped with our FX-9590 sample, because it was designed to handle at least 220W. (AMD also sampled a 3x120 Thermaltake cooler with Threadripper, but that would have taken a lot longer to organise on the test bed.)

For our power testing, we run Prime95 for at least 60 seconds and then use software to poll the integrated power counters on the chip. Depending on the CPU, we can get data for the full chip, per core, DRAM, uncore or integrated graphics, provided our tool is up to date and the registers for this data are known. While this way of reading power consumption can be a smidge inaccurate compared to more invasive methods, it is quick and scriptable, and it is this data that governs if and when a CPU is hitting power limits and needs to adjust fan speeds/frequencies to compensate.
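For anyone wanting to poll something similar at home, below is a minimal sketch of the idea using Linux's powercap sysfs interface, which exposes a cumulative package energy counter on RAPL-capable systems. The intel-rapl:0 path, the one-second poll and the 60-sample window are assumptions for illustration; which domains are actually exposed varies by CPU and kernel, and this is not the exact tool we use to generate the review data.

    import time

    # Assumed path: Linux's powercap interface exposes a cumulative energy
    # counter (in microjoules) for the package domain on RAPL-capable CPUs.
    ENERGY_COUNTER = "/sys/class/powercap/intel-rapl:0/energy_uj"

    def read_energy_uj(path=ENERGY_COUNTER):
        # Read the cumulative package energy counter in microjoules.
        with open(path) as f:
            return int(f.read().strip())

    def average_package_power(interval_s=1.0, samples=60):
        # Poll the counter once per interval and return the average in watts.
        readings = []
        last = read_energy_uj()
        for _ in range(samples):
            time.sleep(interval_s)
            now = read_energy_uj()
            # Energy delta (J) over the interval (s) gives average power (W).
            readings.append((now - last) / 1e6 / interval_s)
            last = now
        return sum(readings) / len(readings)

    if __name__ == "__main__":
        print("Average package power: {:.1f} W".format(average_package_power()))

Working from energy deltas rather than instantaneous readings is what makes this style of polling robust to short spikes between samples.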

To start, let us take the full package power consumption for Threadripper.

Power: Total Package (Idle)

On the whole, Threadripper is a hungry chip even at idle. Most of the power here is being consumed by the memory controller and the PCIe bus keeping the GPU ticking over with a static display. The fact that the 1950X running DDR4-3200 memory is pulling another 13W+ at the package shows how much of an impact the memory controller has on total power consumption. For all the chips, we’re recording sub-2W power draw from the cores.

When we load up the package with a single thread, it fires up the uncore/mesh as well as the memory, and puts the system into its peak turbo state. Depending on the way the CPU is designed, this could fire up a single core or a bank of cores, and although only one core in that bank will be doing the work, the power draw can still be noticeable.

Power: Total Package (1T)

The results show all the Threadripper CPUs again hitting around the same mark, well above the Ryzen CPUs, and matching the 10C/8C parts from Broadwell-E and Haswell-E respectively. The 1950X running DDR4-3200 is still pulling an additional 13W, but interestingly the Skylake-X parts have jumped in power consumption to around this level. It would appear that the MoDe-X based mesh interconnect used in Skylake-X can also draw substantial power.

The next test is running the CPU with a full complement of threads for the design of the chip. This usually puts maximum strain on all the cores, the interconnect and the memory controller.

Power: Total Package (Full Load)

All the Threadripper CPUs hit around 177W, just under the 180W TDP, while the Skylake-X CPUs move to their 140W TDP. The 1950X in Game Mode seems to draw a little less power, which might be due to how the DRAM is being run in a NUMA environment.

One of the other graphs we have for some of the chips is the ‘cores-only’ power draw. At full load, we get an interesting plot:

Power: Cores Only (Full Load)

The key element of this graph is the 1950X running at DDR4-3200. Because the faster DRAM requires the memory controller to draw more power, it leaves less of the power budget for the CPU cores, potentially resulting in a lower turbo core frequency. So while the faster memory might deliver better results in memory-limited scenarios, the core frequency might end up lower, giving worse performance overall. It’s an interesting thought, so we plotted the per-core power for the 1950X at DDR4-2400 and DDR4-3200.

In this graph, the vertical axis shows the core on which the power measurement is taken, while moving from left to right we load up the cores, two threads at a time.
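A sweep like this can be scripted by pinning pairs of worker threads to successive cores and sampling the power counters between steps. The sketch below is a hypothetical illustration of that approach rather than our internal harness: the assumption that logical CPUs 2n and 2n+1 are the SMT siblings of physical core n, and the 60-second settle time, both need checking against the real system topology.

    import multiprocessing as mp
    import os
    import time

    def spin(cpu_pair):
        # Busy-loop worker pinned to a pair of logical CPUs (one SMT core).
        os.sched_setaffinity(0, cpu_pair)
        while True:
            pass

    def load_cores_incrementally(total_cores=16, settle_s=60):
        # Add two threads (one core's worth) at a time, pausing between steps
        # so the power counters can be sampled while the load is stable.
        workers = []
        try:
            for core in range(total_cores):
                # Assumed numbering: check /sys/devices/system/cpu/cpu*/topology
                # to confirm which logical CPUs share a physical core.
                pair = {2 * core, 2 * core + 1}
                for _ in range(2):
                    p = mp.Process(target=spin, args=(pair,), daemon=True)
                    p.start()
                    workers.append(p)
                time.sleep(settle_s)  # per-core power sampling would go here
        finally:
            for p in workers:
                p.terminate()

    if __name__ == "__main__":
        load_cores_incrementally()

The power sampling itself would slot in where the sleep sits, using the same counter-polling idea shown earlier.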

Initially we see that with two threads loaded onto one core, that single core is drawing 20.77W. This quickly moves down through 19W, 17W and 16W to 11W by the time half of the chip is loaded. At this point, with 8 cores loaded, the cores on their own are drawing 89W – if we add in the DRAM controllers, this would certainly be more than a Ryzen CPU. However, as we move past 10 cores loaded, something odd happens: the total power consumption of the cores drops from 120W to 116W and then to 102W when 24 threads are in play, which shows up as the second silicon die drawing less power per core. It then ramps up again, with the full chip giving each core about 8.2W.

Moving onto the DDR4-3200 graph shows a similar scenario:

At first, the single core gets a sizeable 21W, although as we load up the cores, by the time we hit 4 cores/8 threads the sub-15W per core at DDR4-3200 is being eclipsed by the 16W per core at DDR4-2400. Moving through, we see a small wobble at 24-26 threads again, with the final tally putting only 114W onto the cores, 20W less than at DDR4-2400.
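To put that tradeoff in back-of-the-envelope terms, the snippet below splits the full-load package figure between cores and uncore for the two memory speeds. The 177W, 134W and 13W inputs are taken from the measurements on this page, but the split itself is a simplification for illustration, not how the processor's firmware actually allocates its power budget.

    # Rough full-load power-budget split, using figures from this page.
    PACKAGE_FULL_LOAD_W = 177.0     # measured full-load package power
    CORES_DDR4_2400_W = 134.0       # measured cores-only power at DDR4-2400
    EXTRA_IMC_DDR4_3200_W = 13.0    # approximate extra draw with faster DRAM

    uncore_2400 = PACKAGE_FULL_LOAD_W - CORES_DDR4_2400_W
    uncore_3200 = uncore_2400 + EXTRA_IMC_DDR4_3200_W

    for label, uncore in (("DDR4-2400", uncore_2400), ("DDR4-3200", uncore_3200)):
        core_budget = PACKAGE_FULL_LOAD_W - uncore
        print("{}: ~{:.0f} W for the cores, ~{:.1f} W per core over 16 cores".format(
            label, core_budget, core_budget / 16))

This simple split leaves roughly 121W for the cores at DDR4-3200, in the same ballpark as the 114W we actually measured, which is why per-core frequency (and hence per-core power) has to come down when the memory controller takes a bigger slice.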

Some of the data for Game Mode did not come through properly, so we can’t draw many conclusions from what we have, although an interesting point should be made. In Game Mode, when a system requires a low number of threads, say anywhere from 2-8, those threads have to run on separate cores, and therefore across more CCXes, because SMT is disabled. In Creator Mode, the same threads would group onto 1-4 cores over one CCX and consume less power. At DDR4-2400, this means 65W in Creator Mode for 8 threads (4 cores) compared to 89W in Game Mode for 8 cores active.
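One way to experiment with the placement side of this yourself is to pin the same fixed-thread-count workload either to a single CCX or across the dies and watch the package power. The sketch below is a hypothetical example rather than a tool used for this review; the logical CPU sets are assumptions and should be checked against /sys/devices/system/cpu/cpu*/topology on the actual system.

    import os
    import subprocess
    import sys

    # Hypothetical logical CPU layouts for a 1950X in Game Mode (SMT off);
    # adjust these sets to match the real core/CCX mapping on your system.
    ONE_CCX_CPUS = {0, 1, 2, 3}    # keep the work on a single CCX
    SPREAD_CPUS = set(range(8))    # spread the threads across CCXes/dies

    def run_pinned(cmd, cpus):
        # Run a command restricted to the given logical CPUs (Linux only).
        def _pin():
            os.sched_setaffinity(0, cpus)  # applied in the child before exec
        return subprocess.run(cmd, preexec_fn=_pin)

    if __name__ == "__main__":
        # Usage: python pin_compare.py <workload> [args...]
        # Run once with each affinity set while logging package power.
        run_pinned(sys.argv[1:], ONE_CCX_CPUS)

Running an eight-thread workload with each set while logging package power shows how much thread placement alone contributes, separate from the SMT on/off difference between the two modes.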

Comments

  • Ian Cutress - Thursday, August 10, 2017 - link

    Anand hasn't worked at the website for a few years now. The author (me) is clearly stated at the top.

    Just think about what you're saying. If I was in Intel's pocket, we wouldn't be being sampled by AMD, period. If they were having major beef with how we were reporting, I'd either be blacklisted or consistently on a call every time there's been an AMD product launch (and there's been a fair few this year).

    I've always let the results do the talking, and steered clear of hype generated by others online. We've gone in-depth into how things are done the way they are, and the positives and negatives of each method (rather than just ignoring the why). We've run the tests, been honest about our results, and considered the market for the product being reviewed. My background is scientific, and the scientific method is applied rigorously and thoroughly to the product and the target market. If I see bullshit, I point it out, and have done many times in the past.

    I'm not exactly sure what your problem is - you state that the review is 'slanted journalism', but fail to give examples. We've posted ALL of the review data that we have, and we have a benchmark database for anyone that wants to go through all the data at any time. That benchmark database is continually being updated with new CPUs and new tests. Feel free to draw your own conclusions if you don't agree with what is written.

    Just note that a couple of weeks ago I was being called a shill for AMD. A couple of weeks before that, a shill for Intel. A couple before that... Nonetheless both companies still keep us on their sampling lists, on their PR lists, they ask us questions, they answer our questions. Editorial is a mile away from anything ad related and the people I deal with at both companies are not the ones dealing with our ad teams anyway. I wouldn't have it any other way.
  • MajGenRelativity - Thursday, August 10, 2017 - link

    I personally always enjoy reading your reviews Ian. Even though they don't always reach the conclusions I hoped they would reach before reading, you have the evidence and benchmarks to back it up. Keep up the good work!
  • Diji1 - Thursday, August 10, 2017 - link

    Agreed!
  • Zstream - Thursday, August 10, 2017 - link

    For me, it isn't about "scientific benchmarking", it's about what benchmarks are used and what story is being told. I think I, along with many others, would never buy a Threadripper to open a single .pdf. I could be wrong, but I don't think that's the target audience Intel or AMD is aiming for.

    I mean, why not forgo the .pdf and other benchmarks that are really useless for this product and add multi-threaded use cases? For instance, why not test how many VMs can be run and how much I/O they sustain, or launch a couple of VMs, run a SQL DB benchmark, and game at the same time?

    It could just be me, but I'm not going to buy a 7900x or 1950x for opening up .pdf files, or test SunSpider/Kraken lol. Hopefully we didn't include those benchmarks to tell a story, as mentioned above.

    We're going to be compiling, 3D rendering with multiple GPUs, running multiple VMs, all while multi-tasking with other apps.

    My 2 cents.
  • DanNeely - Thursday, August 10, 2017 - link

    Single threaded use cases aren't why people buy really wide CPUs. But performing badly in them, since they represent a lot of ordinary basic usage, can be a reason not to buy one. Also running the same benches on all products allows for them all to be compared readily vs having to hunt for benches covering the specific pair you're interested in.

    VM type benchmarks are more Johan's area since that's a traditional server workload. OTOH there's a decent amount of overlap with developer workloads there too so adding it now that we've got a compile test might not be a bad idea. On the gripping hand, any new benchmarks need to be fully automated so Ian can push an easy button to collect data while he works on analysis of results. Also the value of any new benchmark needs to be weighed against how much it slows the entire benching run down, and how much time rerunning it on a large number of existing platforms will take to generate a comparison set.
  • iwod - Thursday, August 10, 2017 - link

    It really depends on use case. 20% slower on PDF opening? I don't care, because that time has reached diminishing returns and Intel needs to be MUCH faster for this to be a UX problem.

    But I think at $999 Intel has a strong case for its i9. Factoring in the motherboard, though, AMD is still cheaper. Not sure if that is mentioned in the article.

    Also note Intel is on their third iteration of 14nm, against a new 14nm process from GloFo for AMD.

    I am very excited for 7nm Zen 2 coming next year. I hope all the software and compiler as well as optimisation has time to catch up for Zen.
  • Zstream - Thursday, August 10, 2017 - link

    I won't get into an argument, but I and many of my friends on the developer side of the house have been waiting for this review, and it doesn't provide me with any useful information. I understand it might be Johan's wheelhouse, but come on... opening a damn .pdf file, and testing SunSpider/Kraken/gaming benchmarks? That won't provide anyone interested in either CPU any validation of their purchase. I'm not trying to be salty, I just want some more damn details vs. trying to put both vendors in a good light.
  • Ian Cutress - Thursday, August 10, 2017 - link

    Rather than have 20 different tests for each set of different CPUs with very minimal overlap, we have one giant suite that has all the tests for every CPU in a single script. So 80 test points, rather than 4x20. The idea is that there are benchmarks for everyone, so you can ignore the ones that don't matter, rather than expect 100% of the benchmarks to matter (e.g. if you care about five tests, does it matter to you if they are published alongside 75 other tests, or do they have to be the only five tests in the review?). It's not a case of trying to put both vendors in a good light, it's a case of this being a universal test suite.
  • Zstream - Thursday, August 10, 2017 - link

    Well, show me a database benchmark, virtual machine benchmark, 3dmax benchmark, blender benchmark and I'll shutty ;)

    It's hard for me to look at this review outside of a gamers perspective, which I'm not. Sorry, just the way I see it. I'll wait for more pro-consumer benchmarks?
  • Johan Steyn - Thursday, August 10, 2017 - link

    This is exactly my point as well. Why on earth so much focus on single threaded tests and games, since we all knew from way back that TR was not going to be a winner here? Where are all the other benches you mention? Oh no, this will make Intel look bad!!!!!
