Defining Turbo, Intel Style

Since 2008, mainstream multi-core x86 processors have come to market with the notion of ‘turbo’. Turbo allows the processor, where possible and depending on the design rules, to increase its frequency beyond the number listed on the box. There are tradeoffs: Turbo may only apply to a limited number of cores, and it brings increased power consumption and decreased efficiency. But ultimately the original goal of Turbo was to offer increased throughput within specifications, and only for limited time periods. With Turbo, users could extract more performance within the physical limits of the silicon as sold.

In the beginning, Turbo was basic. When an operating system requested peak performance from a processor, the processor would increase its frequency and voltage along a curve, within its power, current, and thermal limits, or until it hit some other limitation, such as a predefined Turbo frequency look-up table. As Turbo has become more sophisticated, other elements of the design have come into play: sustained power, peak power, core count, loaded core count, instruction set, and a system designer’s ability to allow for increased power draw. One laudable goal here was to give component manufacturers the ability to differentiate their products with better power delivery and tweaked firmware, offering higher performance.

For the last 10 years, we have lived with Intel’s definition of Turbo (or Turbo Boost 2.0, technically) as the de facto understanding of what Turbo is meant to mean. Under this scheme, a processor has a sustained power level, a peak power level, and a power budget, and assuming budget is available, the processor will go to a Turbo frequency based on what instructions are being run and how many cores are active. That Turbo frequency is governed by a Turbo table.

The Turbo We All Understand: Intel Turbo

So, for example: I have a hypothetical processor with a sustained power level (PL1) of 100W. The peak power level (PL2) is 150W*. The turbo window (Tau) is 20 seconds, which gives a budget of 1000 joules of energy (20*(150-100)); that budget is replenished whenever the processor draws less than PL1. This quad-core CPU has a base frequency of 3.0 GHz, but offers a single-core turbo of 4.0 GHz, and a 2-core to 4-core turbo of 3.5 GHz.

So tabulated, our hypothetical processor gets these values:

Sustained Power Level    PL1 / TDP        100 W
Peak Power Level         PL2              150 W
Turbo Window*            Tau              20 s
Total Power Budget*      (150-100) * 20   1000 J

*Turbo Window (and Total Power Budget) is typically defined for a given workload complexity, where 100% is a total power virus. Normally this value is around 95%.

*Intel provides ‘suggested’ PL2 values and ‘suggested’ Tau values to motherboard manufacturers. But ultimately these can be changed by the manufacturers – Intel allows its partners to adjust these values without breaking warranty. Intel believes that its manufacturing partners can differentiate their systems with power delivery and other features to allow a fully configurable value of PL2 and Tau. Intel sometimes works with its partners to find the best values. But the takeaway message about PL2 and Tau is that they are system dependent. You can read more about this in our interview with Intel’s Guy Therien.
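
To put the arithmetic in one place, here is a minimal sketch in Python of how the turbo budget for this hypothetical processor is derived. The variable names are my own, and this is the simplified model described in this article, not Intel's actual firmware logic.

```python
# Hypothetical processor from the example above (not a real SKU)
PL1_W = 100.0   # sustained power level (TDP), watts
PL2_W = 150.0   # peak power level, watts
TAU_S = 20.0    # turbo window, seconds

# Running at PL2 for Tau seconds, while only being "credited" at the
# PL1 rate, leaves (PL2 - PL1) * Tau joules of energy to spend on turbo.
turbo_budget_j = (PL2_W - PL1_W) * TAU_S
print(turbo_budget_j)  # 1000.0 joules
```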

Now please note that a workload, even a single-threaded workload, can be ‘light’ or it can be ‘heavy’. If I created a piece of software that was a never-ending while(true) loop with no operations, then the workload would be ‘light’ on the core, not stressing all the parts of the core. A heavy workload might involve trigonometric functions, or some level of instruction-level parallelism that causes more of the core to be active at the same time. A ‘heavy’ workload therefore draws more power, even though it is still contained within a single thread.

If I run a light workload that requires a single thread, the processor will start it at 4.0 GHz. If the power of that single thread is below 100W, then I use none of my budget, as it is refilled immediately. If I then switch to a heavy workload, and the core now consumes 110W, then my 1000 joules of turbo budget decreases by 10 joules every second (the 10W drawn above PL1). In effect, I would get 100 seconds of turbo on this workload, and when the budget is depleted, the sustained power level (PL1) kicks in and reduces the frequency to ensure that the consumption of the chip stays at 100W. My budget of energy for turbo would not increase, because the 100 joules per second credited at the PL1 rate is immediately consumed by the heavy workload. This frequency may not be the 3.0 GHz base frequency – it depends on the voltage/power characteristics of the individual chip. That 3.0 GHz base value is the value that Intel guarantees on its hardware – so every sample of this hypothetical processor will run at a minimum of 3.0 GHz at 100W on a sustained workload.
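
To make that accounting concrete, here is a small sketch of the same simplified (unweighted) budget model: the budget drains at whatever rate the chip exceeds PL1. The function name and structure are mine, purely for illustration.

```python
def seconds_of_turbo(power_draw_w, pl1_w=100.0, budget_j=1000.0):
    """How long the turbo budget lasts under a steady workload.

    Simplified model: the budget drains at (power_draw - PL1) joules per
    second whenever the chip draws more than PL1, and never drains when
    the draw is at or below PL1.
    """
    excess_w = power_draw_w - pl1_w
    if excess_w <= 0:
        return float("inf")      # light workload: budget never depletes
    return budget_j / excess_w

print(seconds_of_turbo(110.0))   # 100.0 s, the heavy single-thread case
print(seconds_of_turbo(150.0))   # 20.0 s, a workload pinned at PL2
```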

To clarify, Intel does not guarantee any of the turbo speeds listed on the specification sheet.

Now with a multi-threaded workload, the same thing occurs, but you are more likely to hit the peak power level (PL2) of 150W, and the 1000 joules of budget will disappear within the 20 seconds listed in the firmware. If the chip, with a 4-core heavy workload, hits the 150W value, the frequency will be decreased to maintain 150W – so as a result we may end up with less than the ‘3.5 GHz’ four-core turbo that was listed on the box, despite being in turbo.

So when a workload is what we call ‘bursty’, with periods of heavy and light work, the turbo budget may be refilled quicker than it is used during the light periods, allowing for more turbo when the workload gets heavy again. This matters when benchmarking software back-to-back: the first run will always have the full turbo budget, but if subsequent runs do not allow the budget to refill, they may get less turbo.

As stated, the turbo power level (PL2) and power budget time (Tau) are configurable by the motherboard manufacturer. We see that on enterprise motherboards, companies often stick to Intel’s recommended settings, but on consumer overclocking motherboards, the turbo power might be 2x-5x higher, and the power budget time might be essentially infinite, allowing turbo to remain active indefinitely. The manufacturer can do this if it can guarantee that the power delivery to the processor, and the thermal solution, are suitable.

(It should be noted that Intel actually uses a weighted algorithm for its budget calculations, rather than the simplistic view I’ve given here. That means that the data from 2 seconds ago is weighted more heavily than the data from 10 seconds ago when determining how much power budget is left. However, when the power budget time is essentially infinite, as most consumer motherboards are set today, it doesn’t particularly matter either way, given that the CPUs will turbo all the time.)
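
Intel does not publish the exact weighting, but one common way to model ‘recent data counts for more’ is an exponentially weighted moving average of package power, which the firmware would compare against PL1. The sketch below is only an illustration of that idea under my own assumed decay constant; it is not Intel’s documented algorithm.

```python
def ewma_power(samples_w, tau_s=20.0, dt_s=1.0):
    """Exponentially weighted moving average of power samples (watts).

    Newer samples are weighted more than older ones; tau_s controls how
    quickly old data fades. Firmware implementing this kind of scheme
    would throttle once the average exceeds PL1 (illustrative only).
    """
    alpha = dt_s / tau_s              # weight given to the newest sample
    avg = samples_w[0]
    for p in samples_w[1:]:
        avg = alpha * p + (1.0 - alpha) * avg
    return avg

# 30 seconds near idle at 60 W, then 10 seconds of a 150 W burst:
trace = [60.0] * 30 + [150.0] * 10
print(round(ewma_power(trace), 1))    # ~96 W, still close to PL1
```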

Ultimately, Intel uses what are called ‘Turbo Tables’ to govern the peak frequency for any given number of cores that are loaded. These tables assume that the processor is under the PL2 value, and that there is turbo budget available. For example, here are Intel’s turbo tables for Intel’s 8th Generation Coffee Lake desktop CPUs.

So Intel provides the sustained power level (PL1, or TDP), the base frequency (3.70 GHz for the Core i7-8700K), and a range of turbo frequencies based on the core loading, assuming the PL2 set by the motherboard manufacturer isn’t hit and power budget is available.
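
In software terms, a turbo table is just a lookup from the number of active cores to a frequency cap, applied only while power budget remains. Below is a minimal sketch using the hypothetical quad-core chip from earlier in this article; these are not real Intel turbo table values.

```python
# Hypothetical quad-core from the earlier example: 4.0 GHz single-core
# turbo, 3.5 GHz for 2-4 cores, 3.0 GHz base. Not real Intel firmware data.
TURBO_TABLE_GHZ = {1: 4.0, 2: 3.5, 3: 3.5, 4: 3.5}
BASE_GHZ = 3.0

def frequency_cap(active_cores, budget_available):
    """Peak frequency allowed for a given number of loaded cores."""
    if not budget_available:
        # Guaranteed floor at PL1; the actual clock at the power limit
        # depends on the individual chip's voltage/power characteristics.
        return BASE_GHZ
    return TURBO_TABLE_GHZ.get(active_cores, BASE_GHZ)

print(frequency_cap(1, True))    # 4.0
print(frequency_cap(4, True))    # 3.5
print(frequency_cap(4, False))   # 3.0
```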

The Effect of Intel’s Turbo Regime, and Intel’s Binning

At the time, Intel did a good job in conveying its turbo strategy to the press. It helped that staying on quad-core processors for several generations meant that the turbo power consumption of those quad-core chips was actually lower than the sustained power value, and so we had a false sense of security that turbo could go on forever. With the benefit of hindsight, the nuances relating to turbo power limits and power budgets were obfuscated, and people ultimately didn’t care on the desktop – all the turbo, all the time, was an easy concept to understand.

One other key detail that perhaps went under the radar is how Intel was able to apply its turbo frequencies to the CPU.

For any given CPU, any core within that design could hit the top turbo frequency. This allowed threads to be loaded onto whatever core was available, without the need to micromanage thread placement for the best performance. If Intel stated that the single-core turbo frequency was 4.6 GHz, then any core could go up to 4.6 GHz, even if some individual cores could go beyond that.

For example, here’s a theoretical six-core Core i5-9600K, with a 3.7 GHz base frequency, and a 4.6 GHz turbo frequency. The higher numbers represent theoretical maximums of each core at the turbo voltage.

This is actually a strategy related to how Intel segments its CPUs after manufacturing, a process called binning. If a processor has the right power/thermal characteristics to reach a given frequency at a given power, then it can be labelled as the most appropriate CPU for retail and sold as such. Because Intel aimed for a homogeneous monolithic design, every core in the design was tested such that it performed equally (or almost equally) with every other core. Invariably some cores will perform better than others if tweaked to the limits, but this approach helped Intel to spread workloads around so as not to create thermal hotspots on the processor, and also to level out any wear and tear that might be caused over the lifetime of the product. It also meant that in a hypervisor, every virtual machine could experience the same peak frequencies, regardless of the cores it used.

With binning, Intel (or any other company) is selecting a set of voltages and frequencies at which a processor is guaranteed to operate. From the manufacturing data, Intel (or others) can see the predicted lifespan of a given piece of silicon across a range of frequencies and voltages, and the ones that hit the right marks (based on internal requirements) determine which CPU that piece of silicon ends up as. For example, if a piece of silicon does hit 9900K voltages and frequencies, but the lifespan rating of that piece of silicon is only two years, Intel might knock it down to a 9700K, which gives a predicted lifespan of fifteen years. It’s that sort of thing that determines how high a chip can perform. Obviously chips that can achieve high targets can also be reclassified as slower parts based on inventory levels or demand.

This is how the general public, the enthusiasts, and even the journalists and reviewers covering the market have viewed Turbo for a long time. It’s a well-known part of the desktop space and to a large extent is easy to understand. If someone said ‘Turbo’ frequency, everyone agreed on the same basic principles and no explanation was needed. We all assumed that when Turbo was mentioned, this is what they meant, and this is what it would mean for eternity.

Now insert AMD, March 2017, with its new Zen core microarchitecture. Everyone assumed Turbo would work in exactly the same way. It does not.

Comments

  • StrangerGuy - Wednesday, September 18, 2019 - link

    Pretty much the only people left OCing CPUs are epeen wavers with more money than sense.

    "$300 mobo and 360mm AIO for that Intel 8 core <10% OC at >200W...Look at all that *free* performance! Amirite?"
  • Korguz - Wednesday, September 18, 2019 - link

    " Pretty much the only people left OCing CPUs are epeen wavers with more money than sense" that a pretty bold statement. i know quite a few people who overclock their cpus, because intel charged too much for the higher end ones, so they had to get a lower tier chip. with zen, thats not the case as much any more as they have switched over to amd, because by this time, they would of had to get a new mobo any way, because intels upgrade paths are only 2, maybe 3 years if they are lucky.
    dont " need " and AIO, as there are some pretty good air coolers out there, and some, dont like the idea of water, or liquid in their comps :-)
  • Xyler94 - Thursday, September 19, 2019 - link

    Sorry, here's where I'll have to disagree with you.

    You'll never overclock an i3 to i5/i7 levels. If my choices were between an overclockable i3 with a Z-series board, or a locked i5 with an H-series board, I'd choose the i5 in a heartbeat, as that's just generally better. Overclocking will never make up for the lack of physical cores.

    So I agree: mostly these days overclocking is reserved for A: people with e-peens, and B: people who genuinely need 5GHz on a single core... which are fewer than those who can utilize the multi-threaded horsepower of Ryzen... so yeah.
  • evernessince - Tuesday, September 17, 2019 - link

    Does it deliver well? I see plenty of people on the Intel reddit not hitting advertised turbo speeds. That's considering they are using $50+ CPU coolers as well.

    "Pretty impressive to see a server cpu with 20% lower ST performance only because the
    low power process utilized is unable to deliver a clock speed near 4Ghz, absurd thing considering
    that Intel 14nm LP gives 4GHz at 1V without struggles."

    What CPU are you talking about? Even AMD's 64-core monster has damn near the same IPC as the Intel Xeon 8280 (thanks to Zen 2 IPC improvements), and that CPU has LESS THAN HALF THE CORES and only consumes 20W more. The Intel CPU also costs almost twice as much. Only a moron brings up single-threaded performance in a server chip conversation anyway; it's one of the least important metrics for server chips. AMD's new EPYC chip crushes Intel in core count, TCO, power consumption, and security. Everything that is important to servers.
  • yankeeDDL - Wednesday, September 18, 2019 - link

    You do realize that the clock speed does not depend only on the process, right? Your comment sounds like that of a disgruntled Intel fanboy trying to put AMD in a bad light. For 25MHz.
  • Spunjji - Monday, September 23, 2019 - link

    Absolute codswallop. AMD are getting 100-300MHz more on their peak clock speeds for Zen 2 with this first-gen 7nm process tech than they were seeing with Zen+ on 12nm (and nearly 400MHz more than Zen on 14nm), so nothing about that implies that 7nm is slower than 14nm. Intel's architecture and process tech are not remotely comparable to AMD's, and we don't know what the primary limiting factor on Zen 2 clock speeds is.

    Not sure why you're claiming lower ST performance on the server parts either - Rome is better in every single regard than its predecessors, and it's better pound-for-pound than anything Intel will be able to offer in the next 12-18 months.
  • PeachNCream - Tuesday, September 17, 2019 - link

    I see a tempest in a teapot on the stove of a person who is busy splitting hairs at the kitchen table. It would be more interesting to calculate how much energy and time was expended on the issue to see if the performance uplift from the fix will offset the global societal cost of all the clamoring this generated. For that, I suppose you'd have to know how many Ryzen chips are actually doing something productive as opposed to crunching at video games.

    The idea of buying cheaper hardware for non-work needs sticks here. Less investment means less worry about maximizing your return on your super-happy-fun-box and less heartburn over a little bit of clockspeed on a component that plays second fiddle to GPU performance when it comes to entertainment anyway.
  • psychobriggsy - Tuesday, September 17, 2019 - link

    Ultimately in the end it isn't MHz that counts, it is observed performance in the software that you care about. That's why we read reviews, and why the review industry is so large.

    If performance X was good enough, does it matter if it was achieved at 4.5GHz or 4.425GHz? Not really. But if the CPU manufacturer is using it as a primary competitive comparison metric (rather than a comparative metric with their other SKUs) then it has to be considered, like in this article.

    It is sad that MHz is still a major metric in the industry, although now Intel IPC is similar to AMD IPC, it is actually kinda relevant.

    What I'd like is better CPU power draw measurements versus what the manufacturer says. Because TDP advertising seems to be even more fraught with lies/marketing than MHz marketing! Obviously most users don't care about 10 or 20% extra power draw at a CPU level, as at a system level it will be a tiny change, but it's when it is 100% that it matters.

    IMO, I'd like TDPs to be reported at all-core sustained turbo, not base clocks. Sure, have a typical TDP measurement as well – the more information the better – but don't hide your 200W CPU under a 95W TDP.
  • TechnicallyLogic - Tuesday, September 17, 2019 - link

    Personally, I feel that AMD should have 2 numbers for the max frequency of the CPU: "Boost Clock" and "Burst Clock". Assuming that you have adequate cooling and power delivery, the boost clock would be sustainable indefinitely on a single core, while the burst clock would be the peak frequency that a single core on the CPU can reach, even if it's just for a few ms.
  • fatweeb - Tuesday, September 17, 2019 - link

    I could see them eventually going in this direction considering Navi already has three clocks: Base, Gaming, and Boost. The first two would be guaranteed, the last not so much.
