Power Consumption

TDP or not the TDP, That is The Question

Notice: When we initially posted this page, we ran the numbers with an ASRock Z370 board. We have since discovered that the voltage applied by that board was far higher than normal expectations, so we re-ran the numbers using the MSI MPG Z390 Gaming Edge AC motherboard, which does not have this issue.

As shown above, Intel has given each of these processors a Thermal Design Power of 95 Watts. As mainstream core counts have grown over the last two years, this magic value has been at the center of arguments from a number of irate users.

By Intel’s own definitions, the TDP is an indicator of the cooling performance required for a processor to maintain its base frequency. In this case, if a user can only cool 95W, they can expect to realistically get only 3.6 GHz on a shiny new Core i9-9900K. That magic TDP value does not take into account any turbo values, even if the all-core turbo (such as 4.7 GHz in this case) is way above that 95W rating.

In order to make sense of this, Intel uses a series of variables called Power Limits: PL1, PL2, and PL3.

That slide is a bit dense, so we should focus on the graph on the right. This is a graph of power against time.

Here we have four horizontal lines from bottom to top: cooling limit (PL1), sustained power delivery (PL2), battery limit (PL3), and power delivery limit.

The bottom line, the cooling limit, is effectively the TDP value. Here the power (and frequency) is limited by the cooling at hand: it is the level that the cooling can sustain indefinitely, so for the most part TDP = PL1. This is our ‘95W’ value.

The PL2 value, or sustained power delivery, is what amounts to the turbo. This is the maximum power that the processor can take until we start to hit thermal issues. When a chip goes into a turbo mode, sometimes briefly, this is the limit that applies. The value of PL2 can be set by the system manufacturer; however, Intel has its own recommended PL2 values.

In this case, for the new 9th Generation Core processors, Intel has set the PL2 value to 210W. This is essentially the power required to hit the peak turbo on all cores, such as 4.7 GHz on the eight-core Core i9-9900K. So users can completely forget the 95W TDP when it comes to cooling. If a user wants those peak frequencies, it’s time to invest in something capable and serious.
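To make the interplay between PL1, PL2, and the turbo budget more concrete, below is a minimal sketch of the commonly described behaviour: the chip can draw up to PL2 while a moving average of its power stays under PL1, with a time window (often called Tau) controlling how long that budget lasts. This is an illustration only, not Intel's actual firmware algorithm, and the Tau value and the 170W demand figure are assumptions for the example.

```python
# Toy model of Intel-style power limiting (PL1/PL2 with a turbo window Tau).
# This illustrates the commonly described behaviour, not Intel's actual
# firmware algorithm; the PL1/PL2/Tau values here are example assumptions.

PL1 = 95.0     # sustained limit, watts ("TDP")
PL2 = 210.0    # short-term turbo limit, watts
TAU = 28.0     # turbo time window, seconds (board-dependent)
DT  = 1.0      # simulation step, seconds

def simulate(demand_watts, seconds=120):
    """Return per-second package power for a constant heavy load."""
    avg = 0.0          # exponentially weighted moving average of power
    history = []
    for _ in range(int(seconds / DT)):
        if avg < PL1:
            power = min(demand_watts, PL2)   # budget left: turbo up to PL2
        else:
            power = min(demand_watts, PL1)   # budget spent: clamp to PL1
        # EWMA update: recent seconds weigh more, older ones decay over ~TAU
        alpha = DT / TAU
        avg = (1 - alpha) * avg + alpha * power
        history.append(power)
    return history

if __name__ == "__main__":
    trace = simulate(demand_watts=170.0)     # e.g. an all-core render load
    print("first 5 s :", trace[:5])          # runs at the full 170 W
    print("last 5 s  :", trace[-5:])         # settles at the 95 W PL1 limit
```

With these numbers the simulated chip turbos at 170W for the first twenty seconds or so and then settles at its 95W PL1, which is exactly why the ‘95W’ label is misleading for sizing a cooler around peak performance.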

Luckily, we can confirm all this in our power testing.

For our testing, we use POV-Ray as our load generator and then take the register values for CPU power. This software method, for most platforms, includes the power split between the cores, the DRAM, and the package power. Most users cite this method as not being fully accurate; however, compared to full-system testing it provides a good number without the losses in between, and it forms the basis of the power values used inside the processor for its various functions.
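Our exact tooling is not spelled out here, but the same counters can be sampled in software. Below is a minimal sketch, assuming a Linux system that exposes the RAPL powercap interface (the /sys path is the usual package domain and may require root): it reads the cumulative energy counter twice and converts the delta to watts.

```python
# Rough equivalent of software package-power measurement via Intel RAPL.
# Assumes Linux with the powercap driver loaded; the path and sampling
# interval are assumptions, and counter wrap-around is ignored for brevity.
import time

RAPL_PKG = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

def read_energy_uj(path=RAPL_PKG):
    with open(path) as f:
        return int(f.read())

def package_power_watts(interval_s=1.0):
    """Sample the cumulative energy counter twice and convert to watts."""
    e0 = read_energy_uj()
    time.sleep(interval_s)
    e1 = read_energy_uj()
    return (e1 - e0) / 1e6 / interval_s   # microjoules -> joules -> watts

if __name__ == "__main__":
    # Run this while a load such as POV-Ray is active to see full-load power.
    for _ in range(5):
        print(f"package power: {package_power_watts():.1f} W")
```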

Starting with the easy one, maximum CPU power draw.

Power (Package), Full Load

Focusing on the new Intel CPUs we have tested, both of them go beyond the TDP value, but do not hit PL2. At this level, the CPU is running all cores and threads at the all-core turbo frequency. Both 168.48W for the i9-9900K and 124.27W for the i7-9700K are well above that ‘TDP’ rating noted above.

Should users be interested, in our testing at 4C/4T and 3.0 GHz, the Core i9-9900K drew only 23W. Doubling the cores and adding another 50%+ to the frequency causes over a 7x increase in power consumption. When Intel starts pushing those frequencies, it needs a lot of juice.
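That jump is plausible from first principles: dynamic power scales roughly with active core count, frequency, and the square of voltage, and the last gigahertz only arrives with a sizeable voltage bump. A back-of-the-envelope sketch, where the two voltages are purely illustrative guesses rather than measured values:

```python
# Back-of-the-envelope check on the ~7x power jump, using P ~ N * f * V^2.
# The two voltages below are illustrative guesses, not measured values.

def relative_power(cores, freq_ghz, volts):
    return cores * freq_ghz * volts ** 2

low  = relative_power(cores=4, freq_ghz=3.0, volts=0.80)   # 4C/4T @ 3.0 GHz
high = relative_power(cores=8, freq_ghz=4.7, volts=1.25)   # 8C/16T @ 4.7 GHz

print(f"estimated scaling: {high / low:.1f}x")   # ~7.6x with these guesses
```

With those guesses the estimate lands around 7.6x, in the same ballpark as the measured jump from 23W to 168W (about 7.3x).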

If we break out the 9900K into how much power is consumed as we load up the threads, the results look very linear.

This is as we load two threads onto one core at a time; the processor steadily adds power to the cores as threads are assigned.
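A rough way to reproduce that kind of ramp is sketched below: it pins two busy worker processes per physical core, one core at a time, and samples the same RAPL package counter as in the earlier sketch between steps. The Python busy loop stands in for POV-Ray, and the sibling-pair CPU numbering is an assumption that will need adjusting per system.

```python
# Sketch: load two worker processes per physical core, one core at a time,
# and sample package power between steps. Which logical CPU IDs form a
# hyperthread pair differs between systems (check /sys/devices/system/cpu/
# cpu*/topology/thread_siblings_list), so the PAIRS list is an assumption.
import multiprocessing as mp
import os
import time

PAIRS = [(0, 8), (1, 9), (2, 10), (3, 11),
         (4, 12), (5, 13), (6, 14), (7, 15)]
RAPL_PKG = "/sys/class/powercap/intel-rapl:0/energy_uj"

def spin(cpu_id):
    os.sched_setaffinity(0, {cpu_id})   # pin this worker to one logical CPU
    while True:
        pass                            # synthetic 100% load

def package_power(interval_s=2.0):
    with open(RAPL_PKG) as f:
        e0 = int(f.read())
    time.sleep(interval_s)
    with open(RAPL_PKG) as f:
        e1 = int(f.read())
    return (e1 - e0) / 1e6 / interval_s

if __name__ == "__main__":
    workers = []
    for n, pair in enumerate(PAIRS, start=1):
        for cpu in pair:                          # add both siblings of a core
            p = mp.Process(target=spin, args=(cpu,), daemon=True)
            p.start()
            workers.append(p)
        time.sleep(2)                             # let frequencies settle
        print(f"{n} core(s) loaded: {package_power():.1f} W")
    for p in workers:
        p.terminate()
```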

Comparing to the other two ‘95W’ processors, we can see that the Core i9-9900K pushes more power as more cores are loaded. Despite Intel officially giving all three the same TDP at 95W, and the same PL2 at 210W, there are clear differences due to the fixed turbo tables defined for each processor.
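Those per-core-count turbo ratios can be inspected directly. A minimal sketch, assuming a Linux machine with the msr kernel module loaded (modprobe msr) and root access: on these client parts the model-specific register at 0x1AD packs one maximum turbo ratio byte per active-core count, and multiplying by the 100 MHz bus clock gives the frequency.

```python
# Read the per-active-core turbo ratio table (MSR_TURBO_RATIO_LIMIT, 0x1AD).
# Assumes Linux, the 'msr' kernel module loaded, and root access. Each byte
# holds the maximum turbo ratio for 1..8 active cores; the bus clock is
# taken as 100 MHz, which holds for these client parts.
import struct

MSR_TURBO_RATIO_LIMIT = 0x1AD
BUS_CLOCK_MHZ = 100

def read_msr(register, cpu=0):
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(register)
        return struct.unpack("<Q", f.read(8))[0]

if __name__ == "__main__":
    value = read_msr(MSR_TURBO_RATIO_LIMIT)
    for active_cores in range(1, 9):
        ratio = (value >> (8 * (active_cores - 1))) & 0xFF
        ghz = ratio * BUS_CLOCK_MHZ / 1000
        print(f"{active_cores} active core(s): {ghz:.1f} GHz")
```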

So is TDP Pointless? Yes, But There is a Solution

If you believe that TDP is the peak power draw of the processor under default scenarios, then yes, TDP is pointless, and technically it has been for generations. However, under the miasma of a decade of quad-core processors, most parts didn't even reach the TDP rating under full load; it wasn't until we started getting higher core count parts, at the same or higher frequencies, that it started becoming an issue.

But fear not, there is a solution. Or at least I want to offer one to both Intel and AMD, to see if they will take me up on the offer. The solution here is to offer two TDP ratings: a TDP and a TDP-Peak. In Intel lingo, this is PL1 and PL2, but basically the TDP-Peak takes into account the ‘all-core’ turbo. It doesn't have to be covered under warranty (because as of right now, turbo is not), but it should be an indication of the cooling that a user needs to purchase if they want the best performance. Otherwise it's a case of fumbling in the dark.

Comments

  • The Original Ralph - Saturday, October 20, 2018 - link

    sorry, B&H's availability date should be JAN 1, 2100
  • eastcoast_pete - Saturday, October 20, 2018 - link

    JAN 1, 2100? Intel's manufacturing problems must be a lot more serious than we knew (:
    I wonder if the 9900K will be supported by "Windows 21" when they finally ship?
  • cubebomb - Saturday, October 20, 2018 - link

    you guys need to stop posting 1080p benchmarks for games already. come on now.
  • gammaray - Sunday, October 21, 2018 - link

    I agree, 1440p and higher, especially with the top CPUs
  • mapesdhs - Sunday, October 21, 2018 - link

    They would of course respond that they have to show 1080p in order to reveal CPU differences, even if the frame rates are so high that most people wouldn't care anyway. I suppose those who do game at 1080p on high refresh monitors would say they care about the data, but then the foundation of the RTX launch is a new pressure to move away from high refresh rates, something the aforementioned group of gamers physically cannot do.
  • piroroadkill - Monday, October 22, 2018 - link

    They need to show a meaningful difference between CPUs. Setting a higher resolution makes the tests worthless, as you'll just be GPU bottlenecked.
  • eva02langley - Monday, October 22, 2018 - link

    They are important since they bring in perspective CPU bottleneck, however it is widely overpreached.

    1080p, 1440p and 2160p at max settings... enough said. Without multiple resolutions benchmarks, it is impossible to get a clear picture of the real performances to expect from a potential system.

    However, basically, a value rating system is now MANDATORY. It doesn't make any sense that the 9900k received 90% + score on Toms and WCCF. They offer abysmal value for gamers, so it is not "The Best Gaming CPU", however it is the "strongest"
  • DominionSeraph - Monday, October 22, 2018 - link

    It's $110 over the i7. If you're looking at a $2500 i7 rig, going to $2610 with an i9 is a 4% increase in price. Looks to me like it generally wins by over 4%. That's a really good value for a content creator since it stomps the i7 by over 20%.
  • Chestertonian - Wednesday, February 27, 2019 - link

    No kidding. Why are there barely any 1440p benchmarks, but there are tons of 8k benchmarks? I don't get it.
  • avatar-ds - Sunday, October 21, 2018 - link

    Something's fishy with the 8086k consistently underperforming the 8700k in many (most?) gaming tests by more than a margin of error where differences are significant enough. Undermines credibility of the whole thing.
