Power Consumption

TDP or not the TDP, That is the Question

Notice: When we initially posted this page, we ran our numbers with an ASRock Z370 board. We have since discovered that the voltage applied by that board was abnormally high, well beyond normal expectations. We have re-run the numbers using the MSI MPG Z390 Gaming Edge AC motherboard, which does not have this issue.

As shown above, Intel has given each of these processors a Thermal Design Power of 95 Watts. This magic value, as mainstream processors have grown in core count over the last two years, has been at the center of a number of heated debates among users.

By Intel’s own definitions, the TDP is an indicator of the cooling performance required for a processor to maintain its base frequency. In this case, if a user can only cool 95W, they can expect to realistically get only 3.6 GHz on a shiny new Core i9-9900K. That magic TDP value does not take into account any turbo values, even if the all-core turbo (such as 4.7 GHz in this case) is way above that 95W rating.

In order to make sense of this, Intel uses a series of variables called Power Levels: PL1, PL2, and PL3.

That slide is a bit dense, so we should focus on the graph on the right. This is a graph of power against time.

Here we have four horizontal lines from bottom to top: cooling limit (PL1), sustained power delivery (PL2), battery limit (PL3), and power delivery limit.

The bottom line, the cooling limit, is effectively the TDP value. Here the power (and frequency) is limited by the cooling at hand. It is the lowest sustainable frequency for the cooling, so for the most part TDP = PL1.  This is our ‘95W’ value.

The PL2 value, or sustained power delivery, is what amounts to the turbo. This is the maximum sustainable power that the processor can take until we start to hit thermal issues. When a chip goes into a turbo mode, sometimes briefly, this is the part that is relied upon. The value of PL2 can be set by the system manufacturer, although Intel has its own recommended PL2 values.

In this case, for the new 9th Generation Core processors, Intel has set the PL2 value to 210W. This is essentially the power required to hit the peak turbo on all cores, such as 4.7 GHz on the eight-core Core i9-9900K. So users can completely forget the 95W TDP when it comes to cooling. If a user wants those peak frequencies, it’s time to invest in something capable and serious.
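The interplay between PL1 and PL2 can be sketched in a few lines. Intel's documented scheme tracks an exponentially weighted moving average of package power: the chip may draw up to PL2 while that average stays under PL1, after which it is clamped to PL1. The 95W/210W figures are from the text; the time constant (Tau) here is an assumed illustrative value, not an official one.

```python
# Sketch of a PL1/PL2 turbo governor. PL1/PL2 are the 95 W / 210 W figures
# from the text; TAU is an assumed time constant, not an official figure.

PL1, PL2, TAU, DT = 95.0, 210.0, 28.0, 1.0  # watts, watts, seconds, step size

def step(ewma_power, demanded_power):
    """Return (granted_power, new_ewma) for one DT-second interval."""
    # While the moving average stays under PL1, the chip may draw up to PL2.
    cap = PL2 if ewma_power < PL1 else PL1
    granted = min(demanded_power, cap)
    # Update the exponentially weighted moving average of package power.
    alpha = DT / TAU
    new_ewma = ewma_power + alpha * (granted - ewma_power)
    return granted, new_ewma

# Heavy all-core load demanding 168 W: runs at full draw until the
# average catches up, then settles at the 95 W cooling limit.
ewma = 0.0
for t in range(120):
    granted, ewma = step(ewma, 168.0)
```

Under this model a '95W' cooler eventually forces the chip back to 95W no matter how high PL2 is, which is exactly why the all-core turbo power matters for cooler selection.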

Luckily, we can confirm all this in our power testing.

For our testing, we use POV-Ray as our load generator, then take the register values for CPU power. This software method, for most platforms, includes the power split between the cores, the DRAM, and the package power. Some cite this method as not being fully accurate; however, compared to full-system testing it provides a good number without power delivery losses, and it forms the basis of the power values used inside the processor for its various functions.
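The same class of measurement is available to anyone: the CPU exposes running energy counters, which on Linux are surfaced through the powercap sysfs interface. A minimal sketch, assuming an Intel system where `/sys/class/powercap/intel-rapl:0` exists and is readable (availability depends on kernel and permissions):

```python
# Sketch: derive average package watts from two reads of the RAPL
# energy counter (microjoules), the same counters our register method uses.

import time

RAPL = "/sys/class/powercap/intel-rapl:0"  # package power domain

def power_from_samples(e0_uj, e1_uj, dt_s, max_range_uj):
    """Average watts between two energy readings, handling one counter wrap."""
    delta = e1_uj - e0_uj
    if delta < 0:                 # counter wrapped past its maximum
        delta += max_range_uj
    return delta / dt_s / 1e6     # microjoules per second -> watts

def read_package_power(interval=1.0):
    with open(f"{RAPL}/max_energy_range_uj") as f:
        max_range = int(f.read())
    with open(f"{RAPL}/energy_uj") as f:
        e0 = int(f.read())
    time.sleep(interval)
    with open(f"{RAPL}/energy_uj") as f:
        e1 = int(f.read())
    return power_from_samples(e0, e1, interval, max_range)
```

Run a heavy load in another terminal and `read_package_power()` will report numbers in the same ballpark as the charts below.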

Starting with the easy one, maximum CPU power draw.

Power (Package), Full Load

Focusing on the new Intel CPUs we have tested, both of them go beyond the TDP value, but do not hit PL2. At this level, the CPU is running all cores and threads at the all-core turbo frequency. Both 168.48W for the i9-9900K and 124.27W for the i7-9700K are far above that '95W' TDP rating noted above.

Should users be interested, in our testing at 4C/4T and 3.0 GHz, the Core i9-9900K drew only 23W. Doubling the cores and adding another 50%+ to the frequency causes an almost 7x increase in power consumption. When Intel starts pushing those frequencies, it needs a lot of juice.
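That ~7x jump is roughly what the classic dynamic-power model predicts, since power scales with core count, frequency, and the square of voltage. The voltages below are assumed illustrative values, not measured ones:

```python
# Back-of-envelope check on the ~7x jump from 23 W to 168 W, using
# P ~ n_cores * f * V^2. Voltages are assumed, purely illustrative.

def scale(cores_ratio, f_ratio, v_ratio):
    """Relative dynamic power between two operating points."""
    return cores_ratio * f_ratio * v_ratio ** 2

# 4C @ 3.0 GHz (assume ~0.90 V) -> 8C @ 4.7 GHz (assume ~1.25 V)
factor = scale(8 / 4, 4.7 / 3.0, 1.25 / 0.90)
# ~6x from the dynamic term alone; leakage rising with voltage and
# temperature plausibly accounts for the remainder of the observed ~7x.
```

The voltage-squared term is why the last few hundred MHz are so disproportionately expensive.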

If we break out the 9900K into how much power is consumed as we load up the threads, the results look very linear.

This is as we load two threads onto one core at a time. The processor slowly adds power to the cores when threads are assigned.

Comparing to the other two ‘95W’ processors, we can see that the Core i9-9900K pushes more power as more cores are loaded. Despite Intel officially giving all three the same TDP at 95W, and the same PL2 at 210W, there are clear differences due to the fixed turbo tables embedded in each BIOS.
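Those turbo tables are conceptually just a map from the number of active cores to a maximum frequency bin. The 9900K figures below are the widely reported defaults and should be treated as illustrative rather than authoritative; the 5.0 GHz peak and 4.7 GHz all-core values match the numbers quoted above.

```python
# Sketch of a per-SKU turbo table, as baked into the BIOS:
# active core count -> maximum turbo frequency (GHz).
# Core i9-9900K values as widely reported; treat as illustrative.

TURBO_GHZ = {1: 5.0, 2: 5.0, 3: 4.8, 4: 4.8, 5: 4.8, 6: 4.8, 7: 4.7, 8: 4.7}

def max_turbo(active_cores):
    """Highest frequency permitted with this many cores loaded."""
    return TURBO_GHZ[active_cores]
```

Three chips can share identical PL1/PL2 values yet draw very different power simply because their tables grant different bins at each load level.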

So is TDP Pointless? Yes, But There is a Solution

If you believe that TDP is the peak power draw of the processor under default scenarios, then yes, TDP is pointless, and technically it has been for generations. However, under the miasma of a decade of quad-core processors, most parts didn't reach the TDP rating even under full load – it wasn't until we started getting higher core count parts, at the same or higher frequencies, that it started becoming an issue.

But fear not, there is a solution. Or at least I want to offer one to both Intel and AMD, to see if they will take me up on the offer. The solution here is to offer two TDP ratings: a TDP and a TDP-Peak. In Intel lingo, this is PL1 and PL2, but basically the TDP-Peak takes into account the ‘all-core’ turbo. It doesn’t have to be covered under warranty (because as of right now, turbo is not), but it should be an indication for the nature of the cooling that a user needs to purchase if they want the best performance. Otherwise it’s a case of fumbling in the dark.
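With two published numbers, cooler selection collapses to a single comparison. A trivial sketch of the idea (the function and parameter names are mine, purely illustrative, not any vendor's scheme):

```python
# Sketch of the two-rating proposal: a TDP (PL1-style, base frequency)
# and a TDP-Peak (PL2-style, all-core turbo). Names are illustrative only.

def cooler_ok(cooler_watts, tdp, tdp_peak, want_full_turbo):
    """Can this cooler sustain the chip at the performance the user wants?"""
    required = tdp_peak if want_full_turbo else tdp
    return cooler_watts >= required

# A 95W-class cooler is fine for base frequency on a '95W' part,
# but a 210W TDP-Peak demands something far more serious for full turbo.
```

No more fumbling in the dark: the buyer picks the rating that matches the performance they expect.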

274 Comments

  • eastcoast_pete - Sunday, October 21, 2018 - link

    Yes; unfortunately, that's a major exception, and annoying to somebody like me who'd actually recommend AMD otherwise. I really hope that AMD improves its AVX/AVX2 implementation and makes it truly 256 bit wide. If I remember correctly, the lag of Ryzen chips in 256 bit AVX vs. Intel is due to AMD using a 2 x 128 bit implementation (workaround, really), which is just nowhere near as fast as real 256 bit AVX. So, I hope that AMD gives their next Ryzen generation full 256 bit AVX, not the 2 x 128 bit workaround.
  • mapesdhs - Sunday, October 21, 2018 - link

    It's actually worse than that with pro apps. Even if AMD hugely improved their AVX, it won't help as much as it could so long as apps like Premiere remain so poorly coded. AE even has plugins that are still single-threaded from more than a decade ago. There are also several CAD apps that only use a single core. I once sold a 5GHz 2700K system to an engineering company for use with Majix; it absolutely blew the socks off their far more expensive Xeon system (another largely single-threaded app, though not entirely IIRC).

    Makes me wonder what they're teaching sw engineering students these days; parallel coding and design concepts (hw and sw) were a large part of the comp sci stuff I did 25 years ago. Has it fallen out of favour because there aren't skilled lecturers to teach it? Or students don't like tackling the hard stuff? Bit of both? Some of it was certainly difficult to grasp at first, but even back then there was a lot of emphasis on multi-threaded systems, or systems that consisted of multiple separate functional units governed by some kind of management engine (not unlike a modern game I suppose), with the coding emphasis at the time on derivatives of C++. It's bizarre that after so long, Premiere in particular is still so inefficient, ditto AE. One wonders if companies like Adobe simply rely on improving hw trends to provide customers with performance gains, instead of improving the code, though this would fly in the face of their claim a couple of years ago that they would spend a whole year focusing on improving performance, since that's what users wanted more than anything else (I remember the survey results being discussed on creativcow).
  • eastcoast_pete - Sunday, October 21, 2018 - link

    Fully agree! Part of the problem is that re-coding single-thread routines that could really benefit from parallel/multi-thread execution costs the Adobes of this world money, especially if one wants it done right. However, I believe that the biggest reason why so many programs, in full or in part, are solidly stuck in the last century is that their customers simply don't know what they are missing out on. Once volume licensees start asking their software supplier's sales engineers (i.e. sales people) "Yes, nice new interface. But, does this version now fully support multithreaded execution, and, if not, why not?", Adobe and others will give this the priority it should have had all along.
  • repoman27 - Friday, October 19, 2018 - link

    USB Type-C ports don't necessarily require a re-timer or re-driver (especially if they’re only using Gen 1 5 Gbit/s signaling), but they do require a USB Type-C Port Controller.

    The function of that chip is rather different though. Its job is to utilize the CC pins to perform device attach / detach detection, plug orientation detection, establish the initial power and data roles, and advertise available USB Type-C current levels. The port controller also generally includes a high-speed mux to steer the SuperSpeed signals to whichever pins are being used depending on the plug orientation. Referring to a USB Type-C Port Controller as a re-driver is both inaccurate and confusing to readers.
  • willis936 - Friday, October 19, 2018 - link

    Holy damn that's a lot of juice. 220W? That's 60 watts more than a 14x3GHz core IVB E5.

    They had better top charts with that kind of power draw. I have serious reservations about believing two DDR4 memory channels is enough to feed 8x5GHz cores. I would be interested in a study of memory scaling on this chip specifically, since it's the corner case for the question "Is two memory channels enough in 2018?".
  • DominionSeraph - Friday, October 19, 2018 - link

    This chip would be faster in everything than a 14 core IVB E5, while being over 50% faster in single-threaded tasks.
    Also, Intel is VERY generous with voltage in turbo. Note the 9700K at stock takes 156W in Blender for a time of 305, but when they dialed it in at 1.025V at 4.6GHz it took 87W for an improved time of 301, and they don't hit the stock wattage until they've hit 5.2GHz. When they get the 9900K scores up I expect that 220W number to be cut nearly in half by a proper voltage setting.
  • 3dGfx - Friday, October 19, 2018 - link

    How can you claim 9900k is the best when you never tested the HEDT parts in gaming? Making such claims really makes anandtech look bad. I hope you fix this oversight so skyX can be compared properly to 9900K and the skyX refresh parts!!! -- There was supposed to be a part2 to the i9-7980XE review and it never happened, so gaming benchmarks were never done, and i9-7940X and i9-7920X weren't tested either. HEDT is a gaming platform since it has no ECC support and isn't marketed as a workstation platform. Curious that intel says the 8-core part is now "the best" and you just go along with that without testing their flagship HEDT in games.
  • DannyH246 - Friday, October 19, 2018 - link

    If you want an unbiased review go here...

    https://www.extremetech.com/computing/279165-intel...

    Anandtech is a joke. Has been for years. Everyone knows it.
  • TEAMSWITCHER - Friday, October 19, 2018 - link

    Thanks... but no thanks. Why did you even come here? Just to post this? WEAK!
  • Arbie - Friday, October 19, 2018 - link

    What a stupid remark. And BTW Extremetech's conclusion is practically the same as AT's. The bias here is yours.
