Power Consumption and Distribution

With Threadripper weighing in at a TDP of 180W, it comes across as a big jump from previous AMD platforms that hover at 45-95W, or Intel platforms that are sub-95W for mainstream and up to 140W for the high-end desktop. Let us not forget that AMD actually released a 220W TDP processor in the form of the FX-9590 running at 5 GHz, which was initially sold for 12+ months purely to OEMs and system integrators in order to ensure that users had sufficient cooling. Eventually it was released as a consumer product, bundled with a beefy double-width liquid cooler and a pair of fans. AMD did sample us a CPU, though not before I went and spent £300 on one myself to review it.

Nonetheless, a 180W TDP is not a new concept for AMD. For this review I’ve been using the liquid cooler AMD shipped with our FX-9590 sample, because it was designed to handle at least 220W. (AMD also sampled a 3x120 Thermaltake cooler with Threadripper, but it would have taken a lot longer to organise on the test bed.)

For our power testing, we run Prime95 for at least 60 seconds and then use software to poll the integrated power counters on the chip. Depending on the CPU, we can get data for the full chip, per core, DRAM, uncore or integrated graphics – this relies on our tool being up to date and the relevant registers being known. While this way of reading power consumption can be a smidge inaccurate compared to more invasive methods, it is quick and scriptable, and it is this data that governs if and when a CPU hits its power limits and needs to adjust frequencies (and fan speeds) to compensate.
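As a rough illustration of how such a poller works, below is a minimal Python sketch that derives average package power from an energy counter. It assumes a Linux test bed that exposes a RAPL-style counter through the powercap sysfs interface; the path is illustrative and will differ (or be absent) depending on CPU vendor, kernel and drivers, and our actual tool reads the vendor-specific registers directly.

import time

# Illustrative path to a package-level energy counter, in microjoules.
# On other systems this may live elsewhere or require raw MSR access.
ENERGY_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path=ENERGY_PATH):
    with open(path) as f:
        return int(f.read())

def average_power_watts(interval_s=1.0, path=ENERGY_PATH):
    # Poll the counter twice and convert joules per second into watts.
    # A real tool would also handle the counter wrapping around.
    e0, t0 = read_energy_uj(path), time.time()
    time.sleep(interval_s)
    e1, t1 = read_energy_uj(path), time.time()
    return (e1 - e0) / 1e6 / (t1 - t0)

if __name__ == "__main__":
    # Run the workload (e.g. Prime95) separately, then sample once a second.
    while True:
        print(f"package power: {average_power_watts():6.1f} W")

A script along these lines can be run per data point, discarding the first few samples while the workload ramps up.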

To start, let us take the full package power consumption for Threadripper.

Power: Total Package (Idle)

On the whole, Threadripper is a hungry chip even at idle. Most of the power here is being consumed by the memory controller and the PCIe bus keeping the GPU ticking over with a static display. The fact that the 1950X running DDR4-3200 memory pulls another 13W+ shows how much of an impact the memory controller has on total power consumption. For all the chips, we’re recording sub-2W power draw from the cores.

When we load up the package with a single thread, it fires up the uncore/mesh as well as the memory, and puts the system into its peak turbo state. Depending on how the CPU is designed, this could fire up a single core or a whole bank of cores, so even though only one core in that bank is doing the work, the power draw can still be noticeable.

Power: Total Package (1T)

The results show all the Threadripper CPUs again hitting around the same mark, well above the Ryzen CPUs, and matching the 10C/8C parts from Broadwell-E and Haswell-E respectively. The 1950X running DDR4-3200 is still pulling an additional +13W, but interestingly the Skylake-X cores have jumped in power consumption to around this level. It would appear that the MoDe-X interconnect used in Skylake-X can also draw substantial power.

The next test runs the CPU with a full complement of threads for the design of the chip. This usually puts maximum strain on all the cores, the interconnect and the memory controller.

Power: Total Package (Full Load)

All the Threadripper CPUs hit around 177W, just under the 180W TDP, while the Skylake-X CPUs move to their 140W TDP. The 1950X in Game Mode seems to draw a little less power, which might be due to how the DRAM is being run in a NUMA environment.

One of the other graphs we have for some of the chips is the ‘cores-only’ power draw. At full load, we get an interesting plot:

Power: Cores Only (Full Load)

The key element to this graph is the 1950X running at DDR4-3200. Because the faster DRAM requires the memory controller to draw more power, it leaves less power budget for the CPU cores, potentially resulting in a lower turbo core frequency. So while the faster memory might give better performance in memory-limited scenarios, the lower core frequency could end up hurting performance overall. It’s an interesting thought, so we plotted the per-core power for the 1950X at DDR4-2400 and DDR4-3200.

In this graph, the core number on the vertical axis is where the power measurement is taken, while moving from left to right we load up the cores, two threads at a time.
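For illustration, a minimal sketch of that sweep might look like the following. It assumes a Linux test bed, the same illustrative energy-counter path as above, and a topology where logical CPUs N and N+16 are SMT siblings of the same physical core; a real run would read the per-core counters rather than the package total.

import multiprocessing as mp
import os
import time

ENERGY_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"  # illustrative
PHYSICAL_CORES = 16          # 1950X: 16 cores / 32 threads

def busy_loop(cpu):
    os.sched_setaffinity(0, {cpu})   # pin this worker to one logical CPU
    while True:
        pass                         # spin to keep the core fully loaded

def package_power(interval=1.0):
    # Average package power over the interval, from the energy counter.
    with open(ENERGY_PATH) as f:
        e0 = int(f.read())
    time.sleep(interval)
    with open(ENERGY_PATH) as f:
        e1 = int(f.read())
    return (e1 - e0) / 1e6 / interval

if __name__ == "__main__":
    workers = []
    for core in range(PHYSICAL_CORES):
        # Assumed topology: logical CPUs (core, core + 16) are SMT siblings.
        for cpu in (core, core + PHYSICAL_CORES):
            p = mp.Process(target=busy_loop, args=(cpu,), daemon=True)
            p.start()
            workers.append(p)
        time.sleep(5)                # let turbo states and voltages settle
        print(f"{(core + 1) * 2:2d} threads: {package_power():6.1f} W")
    for p in workers:
        p.terminate()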

Initially we see that with two threads loaded onto one core, that single core draws 20.77W. This quickly falls through 19W, 17W and 16W to around 11W per core by the time half of the chip is loaded. At this point, with 8 cores loaded, the cores on their own are drawing 89W – add in the DRAM controllers and this would certainly be more than a Ryzen CPU. However, as we move past 10 cores loaded, something odd happens – the total power consumption of the cores drops from 120W to 116W and then 102W when 24 threads are in play, with the second silicon die drawing less power per core. It then ramps up again, with the full chip giving each core about 8.2W.

Moving onto the DDR4-3200 graph shows a similar scenario:

At first the single loaded core gets a big 21W, although as we load up the cores, by the time we hit 4 cores/8 threads the sub-15W per core at DDR4-3200 has already dropped below the 16W per core at DDR4-2400. Moving through, we see a small wobble at 24-26 threads again, with the final tally putting only 114W onto the cores, 20W less than at DDR4-2400.

Some of the data for Game Mode did not come through properly, so we can’t draw many conclusions from what we have, although one interesting point should be made. In Game Mode, when a workload requires a low number of threads, say anywhere from 2-8, those threads have to run on separate cores across different CCXes because SMT is disabled. In Creator Mode, the same threads would group onto 1-4 cores within one CCX and consume less power. At DDR4-2400, this means 65W in Creator Mode for 8 threads (4 cores active) compared to 89W in Game Mode for 8 threads (8 cores active), as the short sketch below illustrates.
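To make the placement difference concrete, here is a toy Python illustration. It assumes the common Linux-style enumeration where logical CPUs 0-15 are the sixteen physical cores and 16-31 are their SMT siblings; the numbering and functions are purely hypothetical and are not how Ryzen Master or the OS scheduler actually assigns threads.

PHYSICAL_CORES = 16   # 1950X

def creator_mode_cpus(n_threads):
    # SMT enabled: threads can pack two per core, so eight threads
    # fit onto four physical cores (one CCX).
    cpus = []
    for core in range(PHYSICAL_CORES):
        cpus += [core, core + PHYSICAL_CORES]   # a core and its SMT sibling
    return cpus[:n_threads]

def game_mode_cpus(n_threads):
    # SMT disabled: every thread needs its own physical core, so eight
    # threads wake up eight cores spread across the CCXes.
    return list(range(n_threads))

print(creator_mode_cpus(8))   # [0, 16, 1, 17, 2, 18, 3, 19] -> 4 cores active
print(game_mode_cpus(8))      # [0, 1, 2, 3, 4, 5, 6, 7]     -> 8 cores active

The difference in how many cores (and dies) have to wake up is what shows up as the 65W versus 89W gap above.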

Comments

  • drajitshnew - Thursday, August 10, 2017 - link

    You have written that "This socket is identical (but not interchangeable) to the SP3 socket used for EPYC,".
    Please clarify.
    I was under the impression that if you drop an EPYC into a Threadripper board, it would disable 4 memory channels & 64 PCIe lanes, as those are simply not wired up.
  • Deshi! - Friday, August 11, 2017 - link

    No, AMD have stated that won't work. It's probably not hardware incompatible, but they probably put microcode on the CPUs so that if it doesn't detect it's a Ryzen CPU it doesn't work. There might also be differences in how the dies are wired up on the fabric, since it's 2 dies instead of 4. Remember, Threadripper has only 2 physical dies that are active; on EPYC all processors are 4 dies, with cores on each die disabled right down to the 8-core part (2 enabled on each physical die).
  • Deshi! - Friday, August 11, 2017 - link

    Wish there was an edit function... but to add to that, if you pop in an EPYC processor, it might go looking for those extra lanes and memory buses that don't exist on Threadripper boards, hence causing it not to function.
  • pinellaspete - Thursday, August 10, 2017 - link

    This is the second article where you've tried to start an acronym called SHED (Super High End Desktop) in referring to AMD Threadripper systems. You also say that Intel systems are HEDT (High End Desktop) when in reality both AMD and Intel are HEDT. It is just that Intel has kept the core count low on consumer systems for so long that you think anything over a 10-core system is unusual.

    AMD is actually producing a HEDT CPU for $1000 and not inflating the price of a HEDT CPU and bleeding their customers like Intel was doing with the $1750 i7-6950X. HEDT CPUs should cost about $1000, and performance should increase with every generation for the same price, instead of the price being relentlessly jacked up as Intel has done.

    HEDT should be increasing in performance every generation, and you prove yourself to be Intel-biased when something finally comes along that beats Intel's butt. Just because it beats Intel, you want to put it into a different category so it doesn't look like Intel fares as badly. If we start a new category of computers called SHED, what comes next in a few years? SDHED? Super Duper High End Desktop?
  • Deshi! - Friday, August 11, 2017 - link

    There's a good reason for that. Intel is not just inflating the cost because they want to. It literally costs them much more to produce their chips because of the monolithic die approach vs AMD's modular approach. AMD's yields are much better than Intel's at the higher core counts. Intel will not be able to match AMD's prices and still make significant profit unless they also adopt the same approach.
  • fanofanand - Tuesday, August 15, 2017 - link

    "HEDT CPUs should cost about $1000 "

    That's not how free markets work. Companies will price any given product at their maximum profit. If they can sell 10 @ $2000 or 100 at $1000 and it costs them $500 to produce, they would make $15,000 selling 10 and $50,000 selling 100 of them. Intel isn't filled with idiots, they priced their chips at whatever they thought would bring the maximum profits. The best way for the consumer to protest prices that we believe are higher than the "right" price is to not buy them. The companies will be forced to reduce their prices to find the market equilibrium. Stop complaining about Intel's gouging, vote with your wallet and buy AMD. Or don't, it's up to you.
  • Stiggy930 - Thursday, August 10, 2017 - link

    Honestly, the review is somewhat disappointing. For a pro-sumer product, there is no MySQL/PostgreSQL benchmark. No compilation test under Linux environment. Really?
  • name99 - Friday, August 11, 2017 - link

    "In an ideal world, all software would be NUMA-aware, eliminating any concerns over the matter."

    Why? This is an idiotic statement, like saying that in an ideal world all software would be aware of cache topology. In an actual ideal world, the OS would handle page or task migration between NUMA nodes transparently enough that almost no app would even notice NUMA, and even in an non-ideal world, how much does it actually matter?
    Given the way the tech world tends to work ("OMG, by using DRAM that's overclocked by 300MHz you can increase your Cinebench score by .5% !!! This is the most important fact in the history of the universe!!!") my suspicion, until proven otherwise, is that the amount of software for which this actually matters is pretty much negligible and it's not worth worrying about.
  • cheshirster - Friday, August 11, 2017 - link

    AnandTech's power and compiling tests are completely out of line with other reviewers' results.
    Still hiding poor Skylake-X gaming results.
    Most of the tests are completely outside the target workloads of a 16-core CPU.
    DDR4-2400 memory used for tests.
    Absolutely zero perf/watt and price/perf analysis.

    Intel bias is through the roof here.
    Looks like I'm done with AnandTech.
  • Hurr Durr - Friday, August 11, 2017 - link

    Here's your pity comment.
