TDP and Power Draw: No Real Surprises

The nature of reporting processor power consumption has become, in part, a dystopian nightmare. Historically the peak power consumption of a processor, as purchased, is given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that TDP value still signifies the peak power consumption. For the processors we test at AnandTech, whether desktop, notebook, or enterprise, this is not always the case.

Modern high-performance processors implement a feature called Turbo. This allows a processor, usually for a limited time, to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard-coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power values 2.5x above the rated TDP.

AMD and Intel have different definitions for TDP, but broadly speaking they are applied in the same way. The difference comes down to turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10000-12000 word articles in their own right, and we have a few articles worth reading on the topic.

In simple terms, processor manufacturers only ever guarantee two values, which are tied together: when all cores are running at base frequency, the processor should be running at or below the TDP rating. All turbo modes and power modes above that are not covered by warranty.
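As a loose illustration of how a turbo budget can behave, the sketch below models an Intel-style PL1/PL2 scheme with an exponentially weighted moving average of package power. The averaging form, the time constant, and every wattage figure here are assumptions for illustration, not any vendor's documented algorithm:

```python
# Hypothetical sketch of an Intel-style turbo power budget (PL1/PL2/tau).
# The EWMA form and all constants are illustrative assumptions, not spec.
def allowed_power(ewma, pl1, pl2):
    """Package power cap for this instant: PL2 while the running
    average of past draw is under PL1, otherwise fall back to PL1."""
    return pl2 if ewma < pl1 else pl1

def run(load_watts, pl1=125.0, pl2=250.0, tau=28.0, dt=1.0):
    """Simulate per-interval power: the chip draws min(demand, cap),
    and the EWMA of past draw decides whether turbo budget remains."""
    ewma, trace = 0.0, []
    alpha = dt / tau
    for demand in load_watts:
        cap = allowed_power(ewma, pl1, pl2)
        drawn = min(demand, cap)
        ewma = (1 - alpha) * ewma + alpha * drawn
        trace.append(drawn)
    return trace

trace = run([300.0] * 120)  # two minutes of sustained 300 W demand
```

With these assumed numbers, the simulated chip draws the 250 W PL2 cap for roughly the first twenty seconds, then settles at the 125 W PL1 once the running average catches up.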

For AMD’s new Ryzen 5000 processors, most have a 105 W TDP with a Package Power Tracking (PPT) setting of 142 W. For these processors, our testing shows peak power consumption matching that value. For the sole 65 W processor, the PPT value is 88 W, yet we are seeing only 76 W, showing some of the efficiencies of the Ryzen 5 5600X.
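As a quick sanity check, these PPT values line up with the 1.35x-of-TDP figure commonly reported for socket AM4; the multiplier below is an assumption based on that public reporting, not an official AMD specification:

```python
# PPT (Package Power Tracking) on AM4 is commonly reported as 1.35x TDP.
# The 1.35 factor is an assumption from public reporting, not AMD docs.
def ppt_from_tdp(tdp_watts, factor=1.35):
    """Estimate the socket power limit from the rated TDP."""
    return round(tdp_watts * factor)

print(ppt_from_tdp(105))  # 105 W TDP parts -> 142 W PPT
print(ppt_from_tdp(65))   # 65 W TDP (Ryzen 5 5600X) -> 88 W PPT
```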

If we look directly at the Ryzen 9 5950X for chip-wide power consumption over per-core loading, we get the following graph. Here we are reporting two of the values we have access to on the chip, which the chip estimates as part of its turbo detection and action algorithms: total package power (for the whole chip), and the power used solely by the sum of the cores, which includes the L3 cache. The difference between the two covers the IO die as well as any chiplet-to-chiplet communications, PCIe, CPU-to-chipset, and DRAM controller consumption.
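Put another way, the non-core number is simply the difference between the two telemetry readings the chip exposes; a trivial sketch (the function name and sample values are hypothetical, not AMD's API):

```python
# Derive non-core power (IO die, off-chiplet links, DRAM controller)
# from the two telemetry values. Names and samples are hypothetical.
def non_core_power(package_w, cores_w):
    """Package power minus the sum-of-cores (incl. L3) power."""
    return package_w - cores_w

samples = [(120.0, 104.0), (135.0, 114.0)]  # (package W, cores W)
print([non_core_power(p, c) for p, c in samples])  # [16.0, 21.0]
```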

There are two significant features of this graph.

First is the hump, with a slow decrease in total package power consumption after 8-10 core loading. We saw this when we first tested the previous generation 3950X, and it is indicative of how the processor increases current density as it loads up the cores; as a result there is a balance between the frequency it can offer, delivering the power, and applying the voltage in a consistent way. We also see the difference between the two values increase slightly, as more data is transferred over those off-chiplet communications. We see this effect on the 5900X as well, perhaps indicating it is a feature of the dual-chiplet design; we are not seeing it on the 5800X or 5600X.

The second feature is an odd dip in power moving from 4 to 5 cores loaded. Looking into the data, the frequency of the active cores drops from 4725 MHz to 4675 MHz, which is not a big drop; however, the voltage decreases from 1.38 V to 1.31 V, a more sizeable drop than the other voltage readouts show as we scale the core-to-core loading. There is also a bigger increase in non-core power, up from 16 W to 21 W, which perhaps diverts power from the cores, reducing the voltage.

This might be an odd quirk of our specific chip or our power test, or it might be motherboard or BIOS specific (or a combination of several factors). We may go back in the future on other boards to see if this is consistent.
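To first order, dynamic CPU power scales as P = C·V²·f, so the voltage drop does most of the work in that dip. A back-of-the-envelope check using the figures above (the model ignores leakage and is a simplification):

```python
# First-order dynamic power model, P = C * V^2 * f. Leakage is ignored,
# a simplification; the operating points are the 4-core and 5-core
# figures reported in the text (1.38 V / 4725 MHz vs 1.31 V / 4675 MHz).
def rel_dynamic_power(v, f, v0, f0):
    """Per-core dynamic power relative to the (v0, f0) operating point."""
    return (v / v0) ** 2 * (f / f0)

ratio = rel_dynamic_power(1.31, 4675, 1.38, 4725)
print(f"{ratio:.3f}")  # ~0.892: an ~11% per-core drop
```

The roughly 1% frequency drop contributes little; the squared voltage term accounts for most of the reduction.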

When we dive into per-core power loading, we get the following:

The big chip’s power distribution seems to go up at that 3-4 core loading before coming back down again. But as we load up the second chiplet, moving from 8 to 9 core loading, it is worth noting that the second chiplet reports lower core power, despite showing the same core frequency. AMD is able to supply the two chiplets with different amounts of voltage and power, and we might be seeing this play out in real time.

Perhaps most important is the single-core power consumption of 20.6 W when we are at 5050 MHz. Going back to our previous generation data, on Zen 2 we only saw a peak of 18.3 W, with a slightly higher reported voltage (1.45 V for Zen 2 vs 1.42 V for Zen 3). This means that, from the perspective of our two chips, Zen 3 cores scale better in frequency: even though the power increases as expected, the voltage simultaneously decreases. (Note that silicon variability can also account for some of this.)
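Using the same first-order P = C·V²·f reasoning, we can normalize the two single-core figures by voltage squared to compare how much effective switching work each generation sustains. This is a rough, leakage-ignoring approximation of our own, not an AMD metric:

```python
# Normalize single-core power by V^2 (first-order P = C * V^2 * f model,
# leakage ignored) to compare effective switched capacitance-frequency.
def cf_effective(power_w, volts):
    """Effective C*f term implied by a power/voltage operating point."""
    return power_w / volts ** 2

zen3 = cf_effective(20.6, 1.42)  # our Zen 3 sample at 5050 MHz
zen2 = cf_effective(18.3, 1.45)  # our previous-generation Zen 2 sample
print(f"{zen3 / zen2:.2f}")  # ~1.17x the switching work, at lower voltage
```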

Moving down the stack, the 12-core Ryzen 9 5900X doesn’t show any surprises: we see the same drop-off as we load up the cores, this time as we go beyond eight cores. As this processor uses two chiplets, each with six cores, that second set of six cores seems to consume lower power per core as we add in additional load.

Some users might be scratching their heads: why is the second chiplet in both of these chips using less power, and therefore being more efficient? Wouldn’t it be better to use that chiplet as the first chiplet for lower power consumption at low loads? I suspect the answer here is nuanced: the first chiplet likely has cores with a higher leakage profile, which can arguably hit higher frequencies at the expense of power.

Moving down to a single chiplet but with the full power budget, there are some power savings from not having the communications of a second chiplet. However, at 8-core load the 5800X is showing 4450 MHz, while the Ryzen 9 processors are showing 4475 MHz and 4500 MHz, indicating that there is still some product differentiation going on with this sort of performance. With this chip we still saw 140 W peak power consumption; however, it wasn’t on this benchmark (our peak numbers can come from any of the benchmarks we monitor, not just our power-loading benchmark set).

At the 65 W level of the 5600X, as mentioned before, the all-core frequency is 4450 MHz, which is actually 50 MHz behind the 5800X. However, this chip is very consistent, still giving +50 MHz on its peak turbo compared to the on-box number. It also carries this turbo through to at least 3-core loading, and doesn’t lose much at 5-core loading. Users looking for something low-power and consistent could be swayed by this chip.

For some specific real-world tests, we’re going to focus solely on the Ryzen 9 5950X. First up is our image-model construction workload, using our Agisoft Photoscan benchmark. This test has a number of different areas that involve single-threaded, multi-threaded, or memory-limited algorithms.

Most of this test sits around the 130 W mark, as the workload has a variable thread count. There are a couple of momentary spikes above 140 W, however everything is well within expected parameters.

The second test is from y-Cruncher, which is our AVX2/AVX512 workload. This also has some memory requirements, which can lead to periodic cycling on systems with lower memory bandwidth per core.

Our y-Cruncher test often shows one of two patterns: either a flat line for power-limited processors, or this zig-zag as the test loads up and also uses a good portion of memory transfers for the calculation. Usually it is the latter that shows we are getting the most out of the processor, and that is what we get here.

Compared to other processors, for peak power, we report the highest loaded value observed from any of our benchmark tests.

(0-0) Peak Power

Due to AMD’s PPT implementation, we get very consistent peak power results between multiple generations of AMD processors. Because OEMs play around with Intel’s turbo implementation, essentially allowing unlimited peak turbo power, we see fully loaded Intel values well above 200 W. While Intel stays on its most optimized 14nm process and AMD leverages TSMC’s leading 7nm, along with multiple generations of DTCO, AMD will keep that efficiency lead.

339 Comments

  • jakky567 - Tuesday, November 24, 2020 - link

    Total system, I think the 5950x should be more popular. That being said, the 5900x is still great.
  • mdriftmeyer - Monday, November 9, 2020 - link

    I spend $100 or more per week on extra necessities from Costco. Your price hike concerns are laughable.
  • bananaforscale - Monday, November 9, 2020 - link

    5900X has good binning and the cheapest price per core. For productivity 3900X has *nothing* on 5900X for the 10% price difference and 5950X is disproportionately more expensive. Zen and Zen+ are not an option if you want high IPC, 3300X basically doesn't exist... I'll give you that 3600 makes more sense to most people than 5600X, it's not that much faster.
  • Kangal - Wednesday, November 11, 2020 - link

    "Price per Core".... yeah, that's a pointless metric.
    What you need to focus on is "Price per Performance", and this should be divided into two segments: Gaming Performance, Productivity Performance. You shouldn't be running productivity tools whilst gaming for plenty of reasons (game crashes, tool errors, attention span, etc etc). The best use case for a "mixed/hybrid" would be Twitch Gaming, that's still a niche case.... but that's where the 5800X and 5900X makes sense.

    Now, I don't know what productivity programs you would use, nor would I know which games you would play, or if you plan on becoming a twitcher. So for your personal needs, you would have to figure that out yourself. Things like memory configurations and storage can have big impacts on productivity. Whereas for Gaming the biggest factor is which GPU you use.

    What I'm grasping at is the differences should/will decrease for most real-world scenarios, as there is something known as GPU scaling and being limited or having bottlenecks. For instance, RTX 2070-Super owners would target 1440p, and not 1080p. Or RTX 3090 owners would target 4K, and not for 1440p. And GTX 1650 owners would target 1080p, they wouldn't strive for 4K or 1440p.

    For instance, if you combine a 5600X with a Ultra-1440p-card, and compare the performance to a 3600X, the differences will diminish significantly. And at Ultra/4K both would be entirely GPU limited, so no difference. So if you compare a 5800X to a 3900X, the 3900X would come cheaper/same price but offer notably better productivity performance. And when it comes to gaming they would be equal/very similar when you're (most likely) GPU limited. That scenario applies to most consumers. However, there are outliers or niche people, who want to use a RTX 3090 to run CS GO at 1080p-Low Settings so they can get the maximum frames possible. This article alludes to what I have mentioned. But for more details, I would recommend people watch HardwareUnboxed video from YouTube, and see Steve's tests and hear his conclusions.

    Whereas here is my recommendation for the smart buyer, do not buy the 5600X or 5800X or 5900X. Wait a couple months and buy then. For Pure Gaming, get the r5-5600 which should have similar gaming performance but come in at around USD $220. For Productivity, get the r7-5700 which should have similar performance to the 5800X but come in at around USD $360. For the absolute best performance, buy the r9-5950x now don't wait. And what about Twitch Streamers? Well, if you're serious then build one Gaming PC, and a second Streaming PC, as this would allow your game to run fast, and your stream to flow fluidly.... IF YOU HAVE A GOOD INTERNET CONNECTION (Latency, Upload, Download).
  • lwatcdr - Monday, November 9, 2020 - link

    "You can get the 3700 for much cheaper than the 5800X. Or for the same price you can get the 3900X instead."
    And if you want both gaming and productivity? They get the 5800X or 5900X. So AMD has something for every segment which is great.
  • TheinsanegamerN - Thursday, November 12, 2020 - link

    The 5900x is margin of error from the 5950x in games, still shows a small uptick in gaming compared to 5800/5600x, offers far better performance then 5600/5800x in productivity tasks, and is noticeably cheaper then the 5950x.

    How on earth is that a non buy?

    The rest may be better value for money, but by that metric a $2 pentium D 945 is still far better value for money depending on the task. The 5000 series consistently outperforms the 3000 series, offring 20% better performance for 10% better cash.
  • Kishoreshack - Saturday, November 14, 2020 - link

    AMD has the best products to offer
    Soo you expect them to sell it at a cheaper rate than intel ?
  • Threska - Monday, November 16, 2020 - link

    AMD has a good product RANGE, which means something for everyone AND all monies go to AMD regardless of consumer choice.
  • Ninjawithagun - Friday, November 20, 2020 - link

    The price hike is mainly to cover ongoing R&D for the next-gen Ryzen Zen 4 CPUs due out in 2022. The race between Intel and AMD must go on!
  • jakky567 - Monday, November 23, 2020 - link

    I disagree about the 5900x being a no buy.

    I feel like it goes 5950x for absolute performance. 5900x for high tier performance on a budget. And then the 3000 series for people on a budget, except the 3950x.

    The 5900x has all the l3 cache.
