Power Consumption

The nature of reporting processor power consumption has become, in part, a dystopian nightmare. Historically, the peak power consumption of a processor, as purchased, has been given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that TDP value still signifies the peak power consumption. For the processors we test at AnandTech, whether desktop, notebook, or enterprise, this is not always the case.

Modern high performance processors implement a feature called Turbo. This allows, usually for a limited time, a processor to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power values 2.5x above the rated TDP.
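As a rough illustration of how PL1 and PL2 interact: Intel's scheme tracks an exponentially weighted moving average of power draw, allowing the chip up to PL2 while that average stays at or below PL1, and reining it back to PL1 once the turbo budget (governed by a time constant, tau) is spent. The sketch below is a minimal simulation with invented PL1/PL2/tau values, not the behavior of any specific processor.

```python
# Toy model of Intel-style turbo power limiting. PL1, PL2, and tau
# are illustrative values, not those of any shipping CPU.

def simulate_turbo(draw_watts, pl1=125.0, pl2=250.0, tau=56.0, dt=1.0):
    """Return the power delivered each step: capped at PL2 while the
    exponentially weighted moving average (EWMA) of delivered power is
    at or below PL1, and capped at PL1 once that budget is exhausted."""
    avg = 0.0
    alpha = dt / tau  # EWMA smoothing factor for a dt-second step
    delivered = []
    for want in draw_watts:
        cap = pl2 if avg <= pl1 else pl1
        power = min(want, cap)
        avg += alpha * (power - avg)
        delivered.append(power)
    return delivered

# A sustained 250 W request: turbo holds for a while, then throttles.
trace = simulate_turbo([250.0] * 300)
print(trace[0], trace[-1])  # first value is PL2-limited, last is PL1-limited
```

Even this toy model reproduces the headline behavior: a burst well above TDP, followed by a fall back to the rated value once the average catches up.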

AMD and Intel have different definitions for TDP, but broadly speaking they are applied the same way. The difference comes down to turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we’ve got a few articles worth reading on the topic.

In simple terms, processor manufacturers only ever guarantee two values, which are tied together: when all cores are running at base frequency, the processor should be running at or below the TDP rating. All turbo modes and power modes above that are not covered by warranty. Intel kind of screwed this up with the Tiger Lake launch in September 2020 by refusing to define a single TDP rating for its new processors, instead going for a range. Obfuscation like this is a frustrating endeavor for press and end-users alike.

However, for our tests in this review, we measure the power consumption of the processor in a variety of different scenarios. These include full workflows, real-world image-model construction, and others as appropriate. These tests are done as comparative models. We also note the peak power recorded in any of our tests.

Here I’m plotting the 10900K against the 10850K as we load the threads with AIDA’s stress test, with peak values reported.

On the front page, I stated that one of the metrics by which those quality binning lines are drawn, aside from frequency, is power and voltage response. Moving the needle on binning by 100 MHz is relatively easy, but binning for power is a more difficult beast to control. Our tests show that for any fully threaded workload, despite running at a lower frequency than the 10900K, our 10850K actually uses more power. At the extreme, this is 15-20 W more, or up to 2 W per core, showcasing just how strict the metrics on the 10900K had to be (and perhaps why Intel has had difficulty manufacturing enough of them). However, one could argue that it was Intel’s decision to draw the line that aggressively.

In more lightly threaded workloads, the 10850K actually seems to use less power, which might indicate that current density is the prime factor in this binning.

For a real workload, we’re using our Agisoft Photoscan benchmark. This test has a number of different areas that involve single thread, multi-thread, or memory limited algorithms.

At first glance, it looks as if the Core i9-10850K consumes more power at any loading, but it is worth noting the power levels in the 80-100% region of the test, when we dip below 50 W. This is when we’re likely using 1 or 2 threads, and the power of the Core i9-10900K is much higher as a percentage here, likely because of the 5300 MHz setting.

These results caused me to look more closely at the underlying data. In terms of power per core, when testing POV-Ray at full load, the difference is about a watt per core or just under. What surprised me more was the frequency response, as well as the per-core loading temperature.

Starting with the 10900K:

In the initial loading, we get 5300 MHz and temperatures up into the 85-90ºC bracket. It’s worth noting that at these temperatures the CPU shouldn’t be in Thermal Velocity Boost, which should have a hard ceiling of 70ºC, but most modern motherboards will ignore that ‘Intel recommendation’. Also, when we look at watts per core, on the 10900K we’re looking at 26 W on a single core, just to get 5300 MHz, so no wonder it drops down to 15-19W per core very quickly.
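That 26 W single-core figure is a reminder that dynamic power scales roughly with capacitance × voltage² × frequency, so chasing the last few hundred MHz carries a disproportionate voltage (and therefore power) cost. The numbers below are invented purely to show the shape of that curve, not measured values for the 10900K.

```python
# Illustrative dynamic-power model: P ~ C_eff * V^2 * f. The
# voltage/frequency pairs are made up to show the trend, not measured.

def dynamic_power(c_eff, volts, freq_ghz):
    return c_eff * volts ** 2 * freq_ghz

C_EFF = 3.0  # arbitrary effective-capacitance constant (a.u.)
points = [(4.5, 1.05), (5.0, 1.20), (5.3, 1.40)]  # (GHz, assumed volts)

for f, v in points:
    p = dynamic_power(C_EFF, v, f)
    print(f"{f} GHz @ {v:.2f} V -> {p:.1f} (arbitrary units)")
```

Even in this toy model, the step from 4.5 GHz to 5.3 GHz (an 18% frequency increase) roughly doubles the per-core power, which is consistent with the single-core number collapsing to 15-19 W as the frequency backs off.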

The processor runs down to 5000 MHz with three cores loaded, sitting at 81ºC. As we go beyond three cores, the frequency dips only slightly, while the temperature of the whole package climbs steadily to a quite toasty 98ºC. This is even with our 2 kg copper cooler, indicating that at this point the limit is thermal transfer inside the silicon itself rather than radiating heat away from the cooler.

When we do the same comparison for the Core i9-10850K however, the results are a bit more alarming.

This graph comes in two phases.

The first phase is the light loading; because we’re not grasping for 5300 MHz, the temperature doesn’t climb into the 90ºC range at light loading the way the 10900K does. The frequency profile is a bit more stair-shaped than the 10900K’s, but as we ramp up the cores, even at a lower frequency, the power and the thermals increase. At full loading, with the same cooler and the same benchmarks in the same board, we’re seeing reports of a 102ºC all-package temperature. The cooler is warm, but not excessively so, again showcasing that this is more an issue of thermal migration inside the silicon than of cooling capacity.

To a certain degree, silicon is already designed with thermal migration in mind. This is what we call ‘dark’ silicon: essentially silicon that is disabled or not in use, which acts as a thermal (or power/electrical) barrier between different parts of the CPU. Modern processors already have copious amounts of dark silicon, and as we move to denser process node technologies, they will require even more. The knock-on effect of this is a larger die size, which can also affect yields for a given defect density.
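The die-size/yield relationship that last sentence alludes to can be sketched with the simplest (Poisson) yield model, where the fraction of defect-free dies is exp(−area × defect density). The area and defect-density figures below are illustrative, not numbers for any Intel process.

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-area_cm2 * defects_per_cm2)

# Growing a hypothetical 2.0 cm^2 die by 10% (e.g. extra dark silicon)
# at an assumed 0.2 defects/cm^2 measurably reduces yield.
base = poisson_yield(2.0, 0.2)
bigger = poisson_yield(2.2, 0.2)
print(f"yield: {base:.1%} -> {bigger:.1%}")
```

The model is crude (real fabs use clustered-defect variants), but it captures why every extra square millimetre of dark silicon has a cost at the wafer level.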

Despite these thermals, none of our benchmarks (either gaming or high-performance compute) seemed to be out of line based on expectations – if anything the 10850K outperforms what we expected. The only gripe is going to be cooling, as we used an open test bed and arguably the best air cooler on the market, and users building into a case will need something similarly substantial, probably of the liquid cooling variety.

126 Comments

  • 1_rick - Monday, January 4, 2021 - link

    Because you've got the people who will spend any amount of money to get 5 fps more in their games, so they can smugly tell everyone that they've got the best.
  • lopri - Monday, January 4, 2021 - link

    I see Ryzens beating this thing by sizeable margins in games.
  • zodiacfml - Monday, January 4, 2021 - link

    Ryzen 5000 series is significantly faster than Intel's i9-10900K in all games, though I haven't seen them compared with overclocks. The Intel is good at rendering/encode, but I'd rather buy old Xeons with Chinese motherboards for those loads
  • V3ctorPT - Monday, January 4, 2021 - link

    In gaming the real star is the 5600X... awesome performance for its price, for a 65W(!) CPU...
  • lmcd - Monday, January 4, 2021 - link

    It's basically an 80W CPU though lol
  • Crazyeyeskillah - Monday, January 4, 2021 - link

    my 5600x is 10-20c hotter than my 3600 clock for clock on the same exact rig and watercooler.
  • JessNarmo - Monday, January 4, 2021 - link

    I was considering the 10850K as an upgrade option when I saw it for $400. It's undeniably a significantly better deal than the 10900K at $530.

    But ultimately decided that it's just not good enough for an upgrade because it still doesn't support PCIE 4 so if I upgrade I would have to upgrade again very shortly.

    Would have to wait for 5900x availability or maybe intel will come up with something better.
  • edzieba - Monday, January 4, 2021 - link

    The same argument can be made for the 5900x and PCIe 5 (or DDR 5). There will always be a new protocol, or new interface, or etc on the horizon.
  • JessNarmo - Monday, January 4, 2021 - link

    Disagree. Right now I have the same Skylake cores running at 5 GHz and the same PCIe 3, the same everything, and it's still fine except I have fewer cores.

    With the 5900X I'll get better single-thread and multi-thread performance, as well as PCIe 4, which is really important for future GPUs and upcoming upgrades, unlike PCIe 5, which isn't important at all at this point in time.
  • MDD1963 - Monday, January 4, 2021 - link

    PCI-e 4.0 was going to be 'critical' for GPUs to get best performance from a 3080/3090...; instead, it was/is still a non-player. Maybe that will change for next gen. Maybe not.
