GPU Boost 2.0: Temperature Based Boosting

With the Kepler family NVIDIA introduced their GPU Boost functionality. Present on the desktop GTX 660 and above, boost allows NVIDIA’s GPUs to turbo up to frequencies above their base clock so long as there is sufficient power headroom to operate at those higher clockspeeds and the voltages they require. Boost, like turbo and other implementations, is essentially a form of performance min-maxing, allowing GPUs to offer higher clockspeeds for lighter workloads while still staying within their absolute TDP limits.

The first iteration of GPU Boost was based almost entirely around power considerations. With the exception of an automatic 1 bin (13MHz) step down at high temperatures to compensate for increased power consumption, whether GPU Boost could boost and by how much depended on how much power headroom was available. So long as there was headroom, GPU Boost could boost up to its maximum boost bin and voltage.
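To make that behavior concrete, below is a minimal Python sketch of a purely power-headroom-driven boost decision in the spirit of GPU Boost 1. Only the 13MHz bin size and the single-bin step down at high temperatures come from the description above; the base clock is the GTX 680's, while the bin ceiling, temperature threshold, and per-bin power scaling are illustrative assumptions rather than NVIDIA's actual tables or firmware logic.

```python
# Illustrative sketch (not NVIDIA's actual algorithm) of a power-driven
# boost decision like GPU Boost 1.0. Thresholds and the power model are
# placeholder values for illustration only.

BIN_MHZ = 13            # one boost bin, per NVIDIA's stated granularity
BASE_CLOCK_MHZ = 1006   # GTX 680 base clock
MAX_BOOST_BINS = 8      # hypothetical ceiling for the highest boost bin

def gpu_boost_1(estimated_power_w, power_target_w, gpu_temp_c):
    """Return a boost clock based almost entirely on power headroom."""
    clock = BASE_CLOCK_MHZ
    bins = 0
    # Keep stepping up one bin at a time while there is power headroom.
    while bins < MAX_BOOST_BINS and estimated_power_w < power_target_w:
        bins += 1
        clock += BIN_MHZ
        estimated_power_w *= 1.02   # toy model: ~2% more power per bin
    # The only temperature input: step down one bin when the GPU runs hot,
    # to compensate for the extra power drawn at high temperatures.
    if gpu_temp_c >= 80:            # threshold is illustrative
        clock -= BIN_MHZ
    return clock

print(gpu_boost_1(estimated_power_w=150, power_target_w=170, gpu_temp_c=65))
```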

For Titan, GPU Boost has undergone a small but important change that has significant ramifications for how GPU Boost works and how much it boosts by. That change is that with GPU Boost 2, NVIDIA has essentially moved on from a power-based boost system to a temperature-based boost system. Or perhaps more precisely, a system that is predominantly temperature based but is also capable of taking power into account.

When it came to GPU Boost 1, its greatest weakness, as explained by NVIDIA, is that it essentially made conservative assumptions about temperatures and the interplay between high temperatures and high voltages in order to keep from seriously impacting silicon longevity. The end result was that NVIDIA was picking boost bin voltages based on worst case temperatures, which meant those conservative assumptions about temperatures translated into conservative voltages.

So how does a temperature based system fix this? By better mapping the relationship between voltage, temperature, and reliability, NVIDIA can allow for higher voltages – and hence higher clockspeeds – because it can finely control which boost bin is hit based on temperature. As temperatures start ramping up, NVIDIA can ramp down the boost bins until an equilibrium is reached.
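As a rough illustration of that feedback loop, the Python sketch below steps the boost bin up or down against a temperature target until the GPU settles around an equilibrium. The 80C target, 12-bin ceiling, and toy thermal model are assumptions for illustration only, not NVIDIA's real controller; only the 13MHz bin size and Titan's 837MHz base clock come from this article.

```python
# Assumed sketch of a temperature-driven boost loop in the spirit of
# GPU Boost 2.0, not NVIDIA's actual control algorithm.

BIN_MHZ = 13            # boost bin granularity, per NVIDIA
BASE_CLOCK_MHZ = 837    # Titan base clock
TEMP_TARGET_C = 80      # assumed temperature target
MAX_BINS = 12           # assumed highest boost bin

def adjust_bins(current_bins, gpu_temp_c):
    """One control step: drop a bin when over target, add one when under."""
    if gpu_temp_c > TEMP_TARGET_C and current_bins > 0:
        return current_bins - 1         # too hot: ramp the boost bin down
    if gpu_temp_c < TEMP_TARGET_C and current_bins < MAX_BINS:
        return current_bins + 1         # headroom left: ramp back up
    return current_bins                 # at equilibrium

# Toy run: under a steady load the loop settles around whichever bin
# holds the GPU at its temperature target.
bins = MAX_BINS
for _ in range(20):
    temp_c = 60 + 1.8 * bins            # toy thermal model, illustrative
    bins = adjust_bins(bins, temp_c)
print(f"~{BASE_CLOCK_MHZ + bins * BIN_MHZ}MHz at bin {bins}, ~{temp_c:.1f}C")
```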

Of course total power consumption is still a technical concern here, though much less so. Technically NVIDIA is watching both the temperature and the power consumption and clamping down when either limit is hit. But since GPU Boost 2 does away with the concept of separate power targets – sticking solely with the TDP instead – there's quite a bit more room for boosting in Titan's design, as the card can keep on boosting right up until it hits the 250W TDP limit. Our Titan sample can boost its clockspeed by up to 19% (837MHz to 992MHz), whereas our GTX 680 sample could only boost by 10% (1006MHz to 1110MHz).
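For reference, the quoted headroom figures work out as follows; the Titan number rounds up from 18.5%:

```python
# Quick arithmetic check of the boost headroom quoted above.
samples = {"Titan": (837, 992), "GTX 680": (1006, 1110)}   # base, max boost (MHz)
for name, (base, peak) in samples.items():
    print(f"{name}: {base}MHz -> {peak}MHz, +{(peak / base - 1) * 100:.1f}%")
# Titan: +18.5% (~19%), GTX 680: +10.3% (~10%)
```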

Ultimately, however, whether GPU Boost 2 is power sensitive is a control panel setting, meaning that power sensitivity can be disabled. By default GPU Boost will monitor both temperature and power, but 3rd party overclocking utilities such as EVGA Precision X can prioritize temperature over power, at which point GPU Boost 2 can actually ignore the TDP to a certain extent to focus on temperature. So if nothing else there's quite a bit more flexibility with GPU Boost 2 than there was with GPU Boost 1.
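For readers who want to watch the two inputs GPU Boost 2 weighs on their own card, the sketch below polls GPU temperature, board power, and the current graphics clock through NVIDIA's NVML library using the pynvml Python bindings (assumed to be installed). It only reads the sensors; the temperature/power priority itself is set through tools such as EVGA Precision X, not through this code.

```python
# Read-only sensor polling via NVML (pynvml package assumed installed);
# this observes what GPU Boost is doing, it does not change any limits.
from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetTemperature, nvmlDeviceGetPowerUsage,
                    nvmlDeviceGetClockInfo, NVML_TEMPERATURE_GPU,
                    NVML_CLOCK_GRAPHICS)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)                      # first GPU
    temp_c = nvmlDeviceGetTemperature(gpu, NVML_TEMPERATURE_GPU)
    power_w = nvmlDeviceGetPowerUsage(gpu) / 1000.0          # NVML reports milliwatts
    clock_mhz = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS)
    print(f"{clock_mhz}MHz at {temp_c}C, drawing {power_w:.0f}W")
finally:
    nvmlShutdown()
```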

Unfortunately, because GPU Boost 2 is only implemented in Titan, it's hard to evaluate just how much "better" this is in any quantitative sense. We will be able to present specific Titan numbers on Thursday, but other than saying that our Titan maxed out at 992MHz at its highest boost bin of 1.162v, we can't directly compare it to how the GTX 680 handled things.

Comments

  • TheJian - Wednesday, February 20, 2013 - link

    http://www.guru3d.com/articles-pages/geforce_gtx_t...
    1176MHz from 876 (boost). Not bad for what is basically a $2500 K20 for $1000. I've never done the homework on it, but I don't think K20s overclock, though I could be wrong.

    Can't wait to see the review tomorrow. Clearly he'll bench it there and he has 3 :) You should get your answers then :)

    I'm wondering if some hacker will enable the K20 drivers, or if that's possible. It seems a lot of reviewers got 3, so you should have lots of data by the weekend.
  • Bill Brasky - Tuesday, February 19, 2013 - link

    There were rumors this card would launch at $799-$899, which made more sense. But for a grand this thing had better be pretty darn close to a 690.
  • wand3r3r - Tuesday, February 19, 2013 - link

    The price tag just makes this card a failure. It's a 580 replacement no matter how they label it, so they can shove it. They lost a potential customer...
  • karasaj - Tuesday, February 19, 2013 - link

    So what is the 680?
  • Sandcat - Tuesday, February 19, 2013 - link

    A GK104, which replaced the GF104.

    The GK110 is the replacement for the GF110, which was the GTX 580.
  • Ananke - Tuesday, February 19, 2013 - link

    The 680 was meant as a 560 Ti replacement...however, NVIDIA decided it turned out too good to be sold that cheap, and changed the model numbering...I have several close friends in marketing at NV :)
    However, NV has been using this GK110 core for HPC computing from the very beginning in the Quadro cards, since there they really cannot skimp on the double precision.
  • CeriseCogburn - Sunday, February 24, 2013 - link

    BS.
    The 680 core is entirely different, the rollout time is over half a year off, and that just doesn't happen on a whim in January 2012 with the post-mental-breakdown purported 7970 epic failure by AMD...

    So after the scat lickers spew the 7970 amd failure, they claim it's the best card ever, even now.
    R O F L

    Have it both ways rumor mongering retreads. No one will notice... ( certainly none of you do).
  • rolodomo - Tuesday, February 19, 2013 - link

    Their business model has become separating money from the wallets of the well-to-do who have no sense of value or technology (NVIDIA's PR admits this in the article). It is a business model, but a boutique one. It doesn't do much for their brand name in the view of the technorati either (NVIDIA: We Market to Suckers).
  • Wreckage - Tuesday, February 19, 2013 - link

    It's almost as fast as a pair of 7970's that cost $1100 at launch.

    AMD set the bar on high prices. Now that they are out of the GPU race, don't expect much to change.

    At least NVIDIA was able to bring a major performance increase this year. While AMD has become the new Matrox.
  • Stuka87 - Tuesday, February 19, 2013 - link

    AMD is out of the GPU race? What are you smoking? A $1000 card does not put AMD out of the GPU race. The 7970GE competes well with the 680 for less money (they go back and forth depending on the game).

    Now if this card were priced at $500, that would hurt AMD, as the prices on the 660/670/680 would all drop. But that's not the case, so your point is moot. Not to mention this card was due out a year ago and got delayed, which is why the GK104 was bumped up to the 680 slot.
