Power Consumption

The nature of reporting processor power consumption has become, in part, a dystopian nightmare. Historically, the peak power consumption of a processor, as purchased, has been given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that TDP value still signifies the peak power consumption. For the processors we test at AnandTech, whether desktop, notebook, or enterprise, this is not always the case.

Modern high-performance processors implement a feature called Turbo. This allows a processor, usually for a limited time, to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power values 2.5x above the rated TDP.
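To make the budget mechanics a little more concrete, below is a minimal Python sketch of how a PL1/PL2/Tau limiter behaves. This is not Intel’s firmware algorithm, and the numbers are illustrative rather than taken from any specific CPU, but the shape is the same: the package bursts at PL2 until a moving average of power catches up with PL1, after which it is clamped to the sustained limit.

```python
# Minimal sketch (not Intel's actual firmware) of how PL1/PL2/Tau interact.
# PL1 is enforced on an exponentially weighted moving average (EWMA) of
# package power; PL2 caps the short-term draw. All values are illustrative.

PL1 = 125.0   # sustained power limit in watts (the "TDP")
PL2 = 250.0   # short-term turbo power limit in watts
TAU = 56.0    # time constant of the moving average in seconds
DT  = 0.1     # simulation time step in seconds

def allowed_power(ewma_power: float) -> float:
    """Return the package power the limiter permits this step."""
    if ewma_power < PL1:
        return PL2      # budget remains: burst up to PL2
    return PL1          # budget exhausted: clamp to the sustained limit

ewma = 0.0              # running average of package power
alpha = DT / TAU        # smoothing factor for the EWMA
for step in range(int(120 / DT)):          # simulate two minutes at full load
    power = allowed_power(ewma)
    ewma += alpha * (power - ewma)         # update the running average
    if step % int(10 / DT) == 0:
        print(f"t={step * DT:5.1f}s  power={power:6.1f} W  ewma={ewma:6.1f} W")
```

With these example numbers the simulated package sits at 250 W for roughly 40 seconds before the average reaches PL1 and the limiter drops it back to 125 W, which is the sustained-versus-burst behavior described above.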

AMD and Intel have different definitions for TDP, but broadly speaking they are applied in the same way. The difference comes down to turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we’ve got a few articles worth reading on the topic.

In simple terms, processor manufacturers only ever guarantee two values, which are tied together: when all cores are running at base frequency, the processor should be running at or below the TDP rating. All turbo modes and power modes above that are not covered by warranty. Intel kind of screwed this up with the Tiger Lake launch in September 2020 by refusing to define a single TDP rating for its new processors, instead going for a range. Obfuscation like this is a frustrating endeavor for press and end-users alike.

However, for our tests in this review, we measure the power consumption of the processor in a variety of different scenarios. These include full workflows, real-world image-model construction, and others as appropriate. These tests are intended as comparative data points between processors. We also note the peak power recorded in any of our tests.
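For readers who want to do this kind of logging themselves, the on-die energy counters the processor exposes can be polled from software. Below is a minimal sketch, assuming a Linux system with the intel-rapl powercap interface; the sysfs path and sampling interval are illustrative, and our own test harness uses its own tooling rather than this exact script.

```python
# Minimal sketch of polling CPU package power on Linux via the intel-rapl
# powercap interface. The sysfs path and the sampling interval are
# illustrative; adjust for the package/domain of interest.
import time

ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0
INTERVAL = 1.0                                               # seconds

def read_energy_uj() -> int:
    """Read the cumulative package energy counter in microjoules."""
    with open(ENERGY_FILE) as f:
        return int(f.read())

last = read_energy_uj()
while True:
    time.sleep(INTERVAL)
    now = read_energy_uj()
    delta_uj = now - last          # the counter wraps eventually; not handled here
    last = now
    print(f"package power: {delta_uj / 1e6 / INTERVAL:6.1f} W")
```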

Here I’m plotting the 10900K against the 10850K as we load up the threads with AIDA’s stress test. Peak values are being reported.

On the first page, I stated that one of the metrics for where those quality lines are drawn, aside from frequency, is the power and voltage response. Moving the needle for binning by 100 MHz is relatively easy, but power is a more difficult beast to control. Our tests show that for any fully-threaded workload, despite running at a lower frequency than the 10900K, our 10850K actually uses more power. At the extreme, this is +15-20 W more, or up to 2 W per core, showcasing just how strict the metrics on the 10900K had to be (and perhaps why Intel has had difficulty manufacturing enough of them). However, one could argue that it was Intel’s decision to draw the line that aggressively.
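As a quick sanity check on that per-core figure, the arithmetic is simply the extra package power spread across the core count; the rounded numbers below are the figures quoted above.

```python
# Back-of-the-envelope check of the per-core delta quoted above,
# using the rounded package-level figures from our measurements.
CORES = 10
extra_package_w = 20.0                      # 10850K drawing ~20 W more at the extreme
extra_per_core_w = extra_package_w / CORES
print(f"~{extra_per_core_w:.1f} W extra per core")   # ~2.0 W per core
```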

In more lightly threaded workloads, the 10850K actually seems to use less power, which might indicate that current density is the prime factor in the binning here.

For a real-world workload, we’re using our Agisoft Photoscan benchmark. This test has a number of different areas that involve single-threaded, multi-threaded, or memory-limited algorithms.

At first glance, it looks as if the Core i9-10850K consumes more power at any loading, but it is worth noting the power levels in the 80-100% region of the test, when we dip below 50 W. This is where we’re likely using only one or two threads, and the power of the Core i9-10900K is proportionally much higher here, likely because of its 5300 MHz setting.

These results caused me to look more closely at the underlying data. In terms of power per core, when testing POV-Ray at full load the difference is about a watt per core or just under. What surprised me more was the frequency response, as well as the temperature as the cores are loaded.

Starting with the 10900K:

In the initial loading, we get 5300 MHz and temperatures up into the 85-90ºC bracket. It’s worth noting that at these temperatures the CPU shouldn’t be in Thermal Velocity Boost, which should have a hard ceiling of 70ºC, but most modern motherboards will ignore that ‘Intel recommendation’. Also, when we look at watts per core, the 10900K draws 26 W on a single core just to reach 5300 MHz, so it is no wonder it drops down to 15-19 W per core very quickly.

The processor runs down to 5000 MHz at three cores loaded, sitting at 81ºC. As we go beyond three cores, the frequency dips only slightly, while the temperature of the whole package increases steadily until it reaches a quite toasty 98ºC. This is even with our 2 kg copper cooler, indicating that at this point the limitation is thermal transfer inside the silicon itself rather than radiating heat away through the cooler.

When we do the same comparison for the Core i9-10850K, however, the results are a bit more alarming.

This graph comes in two phases.

The first phase is the light loading: because we’re not grasping for 5300 MHz, the temperature doesn’t go into the 90ºC bracket as it does on the 10900K. The frequency profile is a bit more stair-shaped than the 10900K’s, but as we ramp up the cores, even at a lower frequency, the power and the thermals increase. At full loading, with the same cooler and the same benchmarks in the same board, we’re seeing reports of 102ºC all-package temperature. The cooler is warm, but not excessively so, again showcasing that this is more an issue of thermal migration inside the silicon than of cooling capacity.

To a certain degree, silicon is already designed with thermal migration in mind. It’s what we call ‘dark’ silicon: essentially silicon that is disabled or not doing anything, which acts as a thermal (or power/electrical) barrier between different parts of the CPU. Modern processors already have copious amounts of dark silicon, and as we move to denser process node technologies, they will require even more. The knock-on effect of this is die size, which could also affect yields for a given defect density.

Despite these thermals, none of our benchmarks (either gaming or high-performance compute) seemed to be out of line with expectations; if anything, the 10850K outperforms what we expected. The only gripe is going to be cooling, as we used an open test bed and arguably the best air cooler on the market; users building into a case will need something similarly substantial, probably of the liquid cooling variety.

Comments

  • Deicidium369 - Monday, January 4, 2021

    And TSMC is really killing it on the fabrication front with the inability to ship anything in meaningful numbers, due to an extremely fragile supply chain. Other than Apple, everything else is still on some variation of TSMC's 10nm-class process, which they call "7nm".
  • sadick - Monday, January 4, 2021

    You are right, but Intel desktop CPUs have been manufactured on the 14nm process since 2014!!! Ok, it's 14++++ now, but what an evolution, I'm very impressed ;-)

    I'm not an AMD fanboy, I'm actually using an i7-9700K!
  • regsEx - Thursday, January 7, 2021

    At least they are much cheaper. The 10-core 10850K costs the same as the 6-core 5600X.
  • Impostors - Monday, January 4, 2021

    So is Apple? Lmfao, you thought they were making the chips? TSMC isn't behind on production; they are the production for literally everyone, from PC to mobile.
  • name99 - Monday, January 4, 2021

    "you could argue that was the right call given the state of the market"

    Only if you drank your own koolaid about the end of Moore's Law...

    Remember a book called _Only the Paranoid Survive_? About how in High Tech there are *constant* upsets and changes, nothing ever stays the same?
    Hmm, if only someone at Intel had read that book and thought "Gee, this seems to describe an industry very much like the one in which we operate"...
  • 0ldman79 - Saturday, January 9, 2021

    Playing it safe would have been fine if they had a product to release afterwards.

    The thing is, they didn't. They got so cocky they screwed up their fabs, reaching too far while the physics was only getting tougher to overcome.

    TSMC made 7nm work; whether or not it hit their target density and speed goals, it works. Intel had a goal, and rather than backing off as needed to release a product, they kept fighting to hit an ego check-mark. When 10nm didn't work they should have backed off the density and tried again in order to release a product. Ultimately that's what they had to do, but they did it three years too late.
  • WaltC - Monday, January 4, 2021

    The M1 has very little in software and hardware compatibility to recommend it, however. Those are the #1 reasons people buy computer systems; raw performance is merely icing on the cake. AMD blows the M1, and Intel CPUs, away, imo. As it sits today, the M1 is not competitive with AMD (or even Intel, actually) in terms of multithreaded performance in desktop and enterprise-level offerings. I very much doubt Apple will be going there, but we shall see. The M1 as it sits is a good beginner's start; let's see where it goes from there.
  • Great_Scott - Monday, January 4, 2021

    The techie rant from the early 2000's is coming to pass, finally.

    So many programs are either mobile or browser-based that the M1 is going to get a pass on compatibility.

    Apple got lucky on the timing, in other words.
  • name99 - Monday, January 4, 2021

    Geniuses (and genius companies) make their own timing...

    Seems kinda bizarre to consider the rise of mobile computing as an exogenous factor when discussing Apple!
  • Calin - Tuesday, January 5, 2021

    Just read an article about Flash no longer being supported... and it was instead replaced by HTML5 and the like...
    Guess that genius companies really are lucky indeed ;)
