Intel’s New Adaptive Boost Technology for Core i9-K/KF

Taken from our news item

To say that Intel’s turbo levels are complicated to understand is somewhat of an understatement. Trying to teach the difference between the turbo levels to those new to measuring processor performance is an art form in and of itself. But here’s our handy guide, taken from our article on the subject.

Adaptive Boost Technology is now the fifth frequency metric Intel uses on its high-end enthusiast-grade processors, and another element in Intel’s ever more complex ‘Turbo’ family of features. Here’s the list, in case we forget one:

Intel Frequency Levels

- Base Frequency: The frequency at which the processor is guaranteed to run under warranty conditions, with a power consumption no higher than the TDP rating of the processor.
- Turbo Boost 2.0 (TB2): When in a turbo mode, this is the defined frequency the cores will run at. TB2 varies with how many cores are in use.
- Turbo Boost Max 3.0 (TBM3), the 'favored core' turbo: When in a turbo mode, the best cores on the processor (usually one or two) get extra frequency when they are the only cores in use.
- Thermal Velocity Boost (TVB): When in a turbo mode, if the peak temperature detected on the processor is below a given value (70ºC on desktops), the whole processor gets a frequency boost of +100 MHz. This follows the TB2 frequency tables depending on core loading.
- Adaptive Boost Technology (ABT), the 'floating turbo': When in a turbo mode with three or more cores active, the processor will attempt to provide the best frequency within the power budget, regardless of the TB2 frequency table. The ceiling for this frequency is the two-core TB2 value. ABT overrides TVB when three or more cores are active.
*Turbo mode is limited by the turbo power level (PL2) and timing (Tau) of the system. Intel offers recommended guidelines for these values, but they can be overridden (and are routinely ignored) by motherboard manufacturers. Most gaming motherboards implement an effective ‘infinite’ turbo mode, in which the peak power observed will be the PL2 value. It is worth noting that the 70ºC requirement for TVB is also often ignored, and TVB is applied regardless of temperature.
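To make the PL1/PL2/Tau interaction concrete, here is a minimal Python sketch that treats the power budget as a simple ‘energy bucket’. Real silicon tracks an exponentially weighted moving average of power rather than a bucket, and all of the numbers below (a 125 W PL1, a 251 W PL2, a 56 s Tau, loosely modeled on Rocket Lake’s recommended defaults) are illustrative assumptions, not a statement of Intel’s algorithm.

```python
# Simplified 'energy bucket' model of Intel's turbo power limits (PL1/PL2/Tau).
# Hardware actually uses an exponentially weighted moving average, but the
# effect is similar: power above PL1 is only granted while budget remains.
PL1_W = 125.0   # sustained power limit (the TDP rating); illustrative
PL2_W = 251.0   # short-duration turbo power limit; illustrative
TAU_S = 56.0    # nominal turbo window, in seconds; illustrative

def granted_power(requested_w, dt=1.0):
    """Yield the power granted each step for a trace of requested draws."""
    budget_j = PL1_W * TAU_S                  # headroom above PL1, in joules
    for req in requested_w:
        cap = PL2_W if budget_j > 0 else PL1_W
        grant = min(req, cap)
        # Drawing above PL1 drains the budget; idling below PL1 refills it.
        budget_j = min(budget_j + (PL1_W - grant) * dt, PL1_W * TAU_S)
        yield grant

# A sustained 220 W request is granted for a while, then clamped to PL1.
trace = list(granted_power([220.0] * 120))
print(trace[0], trace[-1])   # 220.0 at the start, 125.0 once budget is spent
```

A motherboard’s ‘infinite’ turbo mode corresponds to making that budget effectively bottomless, so the cap never drops below PL2.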

Intel provided a slide trying to describe the new ABT; however, the diagram is a bit of a mess and doesn’t explain it that well. Here’s the handy AnandTech version.

First up is the Core i7-11700K that AnandTech has already reviewed. This processor has TB2 and TBM3, but not TVB or ABT.

The official specifications show that when one to four cores are loaded in turbo mode, the processor will boost to 4.9 GHz. If only one or two cores are loaded, the OS will shift those threads onto the favored cores and Turbo Boost Max 3.0 will kick in for 5.0 GHz. Loads on more than four cores follow the TB2 frequency table.
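As a quick sketch, that behaviour maps to a small lookup. Only the frequencies stated above are filled in; the five-to-eight core TB2 values came from a chart not reproduced here, so they are left as None rather than guessed at.

```python
# Turbo lookup for the Core i7-11700K as described above (GHz).
# 5-8 core TB2 values are from a chart not reproduced here, hence None.
TB2 = {1: 4.9, 2: 4.9, 3: 4.9, 4: 4.9, 5: None, 6: None, 7: None, 8: None}
TBM3 = 5.0  # favored-core turbo, applies at one or two active cores

def i7_11700k_turbo(active_cores, on_favored_core=False):
    if on_favored_core and active_cores <= 2:
        return TBM3
    return TB2[active_cores]

print(i7_11700k_turbo(2, on_favored_core=True))  # 5.0
print(i7_11700k_turbo(4))                        # 4.9
```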

On the Core i9-11900, the non-overclocking version, we also get Thermal Velocity Boost, which adds another +100 MHz to the maximum turbo on every core, but only if the processor is below 70ºC.

We can see here that the first two cores get both TBM3 (favored core) and TVB, which gives those two cores a bigger jump. In this case, if all eight cores are loaded, the turbo is 4.6 GHz, unless the CPU is under 70ºC, in which case we get an all-core turbo of 4.7 GHz.
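In code, the all-core case reduces to a temperature check. This is just a restatement of the numbers above, with frequencies in MHz to keep the arithmetic exact:

```python
# All-core turbo on the Core i9-11900: TB2 gives 4.6 GHz with all eight
# cores loaded, and TVB adds +100 MHz while the package is below 70C.
def i9_11900_all_core_turbo_mhz(temp_c):
    tb2_all_core = 4600
    tvb_bonus = 100 if temp_c < 70 else 0
    return tb2_all_core + tvb_bonus

assert i9_11900_all_core_turbo_mhz(65) == 4700
assert i9_11900_all_core_turbo_mhz(75) == 4600
```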

Now move up to the Core i9-11900K or Core i9-11900KF, the only two processors with the new floating turbo / Adaptive Boost Technology. Everything beyond two-core loading changes, and TVB no longer applies.

Here we see what looks like a 5.1 GHz all-core turbo, from three cores to eight cores loaded. This is +300 MHz above TVB when all eight cores are loaded. But the reason I’m calling this a floating turbo is that it is opportunistic.

What this means is that, if all eight cores are loaded, TB2 dictates a frequency of 4.7 GHz. If there is power budget and thermal budget available, the processor will attempt 4.8 GHz. If there is still more power and thermal budget available, it will go to 4.9 GHz, then 5.0 GHz, then 5.1 GHz. The frequency will float as long as it has enough of those budgets to play with, increasing and decreasing as necessary. This matters because different instructions cause different amounts of power draw.
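Conceptually, that floating behaviour is a feedback loop: step up one 100 MHz bin while there is power and thermal headroom, step back down when a limit is hit, and never fall below the TB2 floor or rise above the two-core ceiling. The sketch below uses the i9-11900K’s 4.7 GHz floor and 5.1 GHz ceiling from the text; the power and thermal limits are placeholder assumptions, and Intel’s actual governor lives in firmware.

```python
# Conceptual ABT control loop, in 100 MHz bins. Thresholds are assumptions.
TB2_FLOOR_MHZ = 4700     # eight-core TB2 frequency (the guaranteed floor)
ABT_CEILING_MHZ = 5100   # two-core TB2 frequency (the ABT ceiling)

def abt_next_freq(freq_mhz, pkg_watts, temp_c,
                  power_limit_w=251.0, temp_limit_c=100.0):
    """One iteration of the floating turbo: float up, back off, or hold."""
    headroom = pkg_watts < power_limit_w and temp_c < temp_limit_c
    if headroom and freq_mhz < ABT_CEILING_MHZ:
        return freq_mhz + 100               # budget available: float up a bin
    if not headroom and freq_mhz > TB2_FLOOR_MHZ:
        return freq_mhz - 100               # over budget: back off a bin
    return freq_mhz                         # hold at the current bin

# Heavy AVX2 work draws more watts per bin, so the loop settles at a lower
# frequency than it would under a light integer load: the turbo 'floats'.
```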

If this sounds familiar, you are not wrong. AMD does the same thing with Precision Boost 2, introduced in April 2018 with Zen+. AMD applies its floating turbo to all of its processors; Intel is currently limiting floating turbo to the Core i9-K and Core i9-KF in 11th Gen Core ‘Rocket Lake’.

One of the things that we noticed with AMD, however, is that this floating turbo does increase power draw, especially with AVX/AVX2 workloads. Intel is likely to see similar increases in power draw. What might be a small saving grace here is that Intel’s frequency jumps are still limited to full 100 MHz steps, whereas AMD can adjust on 25 MHz boundaries. This means that Intel has to manage larger steps, and it will likely only cross a step boundary if it knows the new frequency can be maintained for a fixed amount of time. It will be interesting to see if Intel gives the user the ability to change those entry/exit points for Adaptive Boost Technology.
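The granularity difference is just arithmetic: both vendors snap a target frequency to their step size, but a 100 MHz bin makes every transition a bigger commitment than a 25 MHz one. A trivial illustration:

```python
def snap_down(target_mhz, step_mhz):
    """Snap a target frequency down to the vendor's step size."""
    return (target_mhz // step_mhz) * step_mhz

print(snap_down(4875, 100))  # Intel-style 100 MHz steps -> 4800
print(snap_down(4875, 25))   # AMD-style 25 MHz steps    -> 4875
```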

Some users will already be familiar with Multi-Core Enhancement / Multi-Core Turbo. This is a feature that some motherboard vendors offer, and often enable by default, which lets a processor reach an all-core turbo equal to its single-core turbo. That is somewhat similar to ABT, but MCE is more of a fixed frequency, whereas ABT is a floating turbo design. That being said, some motherboard vendors might still ship Multi-Core Enhancement as part of their design anyway, bypassing ABT.
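The contrast with ABT boils down to one line: Multi-Core Enhancement pins every core count to the single-core turbo unconditionally, while ABT only floats up when the budget allows. A hypothetical sketch (the 5.3 GHz figure is a made-up single-core turbo, not a spec):

```python
# Multi-Core Enhancement, conceptually: a fixed all-core turbo equal to the
# single-core turbo, regardless of load, power draw, or temperature.
def mce_turbo_mhz(active_cores, single_core_turbo_mhz=5300):  # hypothetical
    return single_core_turbo_mhz

# ABT instead floats between the TB2 floor and the two-core ceiling based
# on live power/thermal headroom (see the control-loop sketch above).
```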

Overall, it’s a performance plus. It makes sense for users who can also manage the thermals. AMD caught a tailwind with the feature when it moved to TSMC’s 7nm process. I have a feeling that Intel will have to shift to a new manufacturing node to get the best out of ABT; at that point we might see the feature on more mainstream CPUs, as well as enabled as standard.

Comments

  • mitox0815 - Tuesday, April 13, 2021

    Discounting the possibility of great design ideas just because past attempts failed to varying degrees is a bit premature, methinks. But it does seem odd that it's constantly P6-esque design philosophies - VERY broadly speaking here - that take the prize in the end when it comes to x86.
  • blppt - Tuesday, March 30, 2021

    Even Jim Keller, the genius who designed the original x64 AMD chip, AND bailed out AMD with the excellent Zen, didn't last very long at Intel.

    Might be an indicator of how messed up things are there.
  • BushLin - Tuesday, March 30, 2021

    It's still possible that a yet-to-be-released, Jim Keller-designed Intel CPU finally delivers a meaningful performance uplift in the next few years... I wouldn't bet on it, but it isn't impossible either.
  • philehidiot - Tuesday, March 30, 2021

    Indeed, it's a generation out. It's called "Intel Dynamic Breakfast Response". It goes "ding" when your bacon is ready for turning, rather than BSOD.
  • Hifihedgehog - Tuesday, March 30, 2021

    Raja Koduri is a terrible human being and has wasted money on party buses and booze while “managing” his side of the house at Intel. I think Jim Keller knew the corporation was a big pander fest of bureaucracy and was smart to leave when he did. The chiplet idea he brought to the table, while not an innovation since AMD was already first to market with it, will help them stay in the game, which wouldn’t have happened if he hadn’t contributed it.
  • Oxford Guy - Saturday, April 3, 2021

    Oh? Firstly, I doubt he was the exec at AMD who invented the FX 9000 series scam. Secondly, AMD didn’t beat Nvidia on performance per watt, but the Fury X coming with an AIO was a great improvement in performance per decibel, an important metric that is frequently undervalued by the tech press.

    What he deserves the most credit for, though, is making GPUs that made miners happy. The fact is that AMD is a corporation, not a charity. And not only is it happy to sell its entire stock to miners, it is pleased to compete against PC gamers by propping up the console scam.
  • mitox0815 - Tuesday, April 13, 2021

    The first to the x86 market, yes. Chiplets - or modules, whatever you want to call them - are MUCH older than that. Just as AMD64 wasn't the stroke of genius it's made out to be by AMD diehards...they just repeated the trick Intel pulled off with its jump to 32-bit on the 386. Not even multicore was AMD's invention...I think both multicore CPUs and chiplet designs were done by IBM before.

    The same goes for Intel though, really. Or Microsoft. Or Apple. Or most other big players. Adopting ideas and pushing them with your market weight seems to be much more of a success story than actually innovating on your own...innovation pressure is always on the underdogs, after all.
  • KAlmquist - Wednesday, April 7, 2021

    The tick-tock model was designed to limit the impact of failures. For example, Broadwell was delayed because Intel couldn't get 14nm working, but that didn't matter too much because Broadwell was the same architecture as Haswell, just on a smaller node. By the time the Skylake design was completed, Intel had fixed the issues with 14nm and Skylake was released on schedule.

    What happened next indicates that people at Intel were still following the tick-tock model but had not internalized the reasoning that led Intel to adopt the tick-tock model in the first place. When Intel missed its target for 14nm, that meant it was likely that 10nm would be delayed as well. Intel did nothing. When the target date for 10nm came and went, Intel did nothing. When the target date for Sunny Cove arrived and it couldn't be released because the 10nm process wasn't there, Intel did nothing. Four years later, Intel has finally ported it to 14nm.

    If Intel had been following the philosophy behind tick-tock, they would have released Rocket Lake in 2017 or 2018 to compete with Zen 1. They would have designed a new architecture prior to the release of Zen 3. The only reason they'd be trying to pit a Sunny Cove variant against Zen 3 would be if their effort to design a new architecture failed.
  • Khenglish - Tuesday, March 30, 2021

    I've said it before, but I'll say it again: software render Crysis by setting the benchmark to use the GPU, but with the GPU driver disabled in Device Manager. This will cause Crysis to use the built-in Windows 10 software renderer, which is much newer and should be more optimized than the Crysis software renderer. It may even use AVX and AVX2, which Crysis is certainly too old for.
  • Adonisds - Tuesday, March 30, 2021

    Great! Keep doing those Dolphin emulator tests. I wish there were even more emulator tests.
