GPU Boost 2.0: Temperature-Based Boosting

With the Kepler family, NVIDIA introduced its GPU Boost functionality. Present on the desktop GTX 660 and above, GPU Boost allows NVIDIA’s GPUs to turbo up to frequencies above their base clock so long as there is sufficient power headroom to operate at those higher clockspeeds and the voltages they require. Boost, like turbo and other implementations, is essentially a form of performance min-maxing, allowing GPUs to offer higher clockspeeds for lighter workloads while still staying within their absolute TDP limits.
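
To make the min-maxing idea concrete, here is a minimal sketch of a power-limited boost decision: step the clock up one bin at a time while the estimated board power still fits under the TDP. The 13MHz bin size and 250W TDP come from this article; the clocks, function names, and toy power model are purely illustrative assumptions, not NVIDIA's actual tables or algorithm.

```python
# Minimal sketch of power-limited boosting (illustrative only, not NVIDIA's
# actual algorithm): raise the clock one bin at a time while the estimated
# board power for the current workload stays under the TDP.

BIN_MHZ = 13        # boost bin granularity per NVIDIA
TDP_WATTS = 250     # Titan's TDP

def power_limited_clock(base_mhz, max_boost_mhz, estimate_power_w):
    """Return the highest boost clock whose estimated power fits under the TDP."""
    clock = base_mhz
    while (clock + BIN_MHZ <= max_boost_mhz
           and estimate_power_w(clock + BIN_MHZ) <= TDP_WATTS):
        clock += BIN_MHZ
    return clock

# A light workload leaves plenty of power headroom, so the GPU boosts well above base.
light_load = lambda mhz: 180 + 0.07 * (mhz - 837)   # toy power model, in watts
print(power_limited_clock(837, 993, light_load))     # -> 993 in this toy example
```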

The first iteration of GPU Boost was based almost entirely around power considerations. With the exception of an automatic 1 bin (13MHz) step down at high temperatures to compensate for increased power consumption, whether GPU Boost could boost and by how much depended on how much power headroom was available. So long as there was headroom, GPU Boost could boost up to its maximum boost bin and voltage.
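
As a rough sketch of GPU Boost 1's temperature handling as described above: temperature only ever costs a single 13MHz bin, and everything else is governed by power. The threshold value below is an assumption for illustration, not a published NVIDIA figure.

```python
# Sketch of GPU Boost 1's coarse thermal compensation as described above:
# when the GPU is running hot, drop exactly one 13MHz bin; otherwise the
# power-limited clock stands. The 80C threshold is an assumed value.

BIN_MHZ = 13
HOT_THRESHOLD_C = 80   # illustrative assumption, not a published figure

def boost1_clock(power_limited_mhz, gpu_temp_c):
    """Apply GPU Boost 1's one-bin step down at high temperature."""
    if gpu_temp_c >= HOT_THRESHOLD_C:
        return power_limited_mhz - BIN_MHZ
    return power_limited_mhz
```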

For Titan, GPU Boost has undergone a small but important change with significant ramifications for how it works and how much it boosts by: with GPU Boost 2, NVIDIA has essentially moved from a power-based boost system to a temperature-based boost system. Or perhaps more precisely, a system that is predominantly temperature-based but is also capable of taking power into account.

GPU Boost 1’s greatest weakness, as explained by NVIDIA, is that it essentially made conservative assumptions about temperatures and the interplay between high temperatures and high voltages in order to keep from seriously impacting silicon longevity. The end result was that NVIDIA was picking boost bin voltages based on worst-case temperatures, which meant those conservative temperature assumptions translated into conservative voltages.

So how does a temperature-based system fix this? By better mapping the relationship between voltage, temperature, and reliability, NVIDIA can allow for higher voltages – and hence higher clockspeeds – because it can finely control which boost bin is hit based on temperature. As temperatures start ramping up, NVIDIA can ramp down the boost bins until an equilibrium is reached.
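
A minimal sketch of that ramp-down behavior, assuming a hypothetical temperature target; NVIDIA's actual controller works from its own voltage/temperature/reliability mapping rather than a simple loop like this.

```python
# Sketch of GPU Boost 2's temperature-driven behavior: shed one boost bin at a
# time while the GPU runs above its temperature target, so the clock settles at
# an equilibrium against the cooler. The 80C target is an assumption for this
# sketch, and read_temp_c stands in for whatever the hardware actually measures.

BIN_MHZ = 13
TEMP_TARGET_C = 80     # assumed target for illustration

def settle_boost(clock_mhz, base_mhz, read_temp_c):
    """Drop boost bins until the GPU falls back to its temperature target (or base clock)."""
    while read_temp_c(clock_mhz) > TEMP_TARGET_C and clock_mhz - BIN_MHZ >= base_mhz:
        clock_mhz -= BIN_MHZ
    return clock_mhz
```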

Of course, total power consumption is still a concern here, though much less so. NVIDIA is watching both temperature and power consumption and clamping down when either limit is hit. But since GPU Boost 2 does away with the concept of a separate power target – sticking solely with the TDP instead – Titan has quite a bit more room for boosting, as it can keep on boosting right up until it hits the 250W TDP limit. Our Titan sample can boost its clockspeed by up to 19% (837MHz to 992MHz), whereas our GTX 680 sample could only boost by 10% (1006MHz to 1110MHz).
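
For reference, those boost percentages follow directly from the quoted clocks:

```python
# Boost headroom computed from the base and peak clocks quoted above.
titan_base, titan_peak = 837, 992
gtx680_base, gtx680_peak = 1006, 1110

print(f"Titan:   {(titan_peak / titan_base - 1) * 100:.0f}%")    # ~19%
print(f"GTX 680: {(gtx680_peak / gtx680_base - 1) * 100:.0f}%")  # ~10%
```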

Ultimately, however, whether GPU Boost 2 is power sensitive is a control panel setting, meaning that power sensitivity can be disabled. By default GPU Boost will monitor both temperature and power, but 3rd party overclocking utilities such as EVGA Precision X can prioritize temperature over power, at which point GPU Boost 2 can ignore TDP to a certain extent and focus on temperature instead. So if nothing else there’s quite a bit more flexibility with GPU Boost 2 than there was with GPU Boost 1.
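
A sketch of how the two limits might interact under that setting; the function and its behavior are illustrative assumptions, since NVIDIA has not published the exact arbitration logic.

```python
# Hypothetical sketch of the dual-limit arbitration: by default both the power
# and temperature limits clamp the clock, but a third-party tool can prioritize
# the temperature target, letting the board run past its power target to some
# extent. Names and behavior here are assumptions for illustration.

def boost2_clock(power_limited_mhz, temp_limited_mhz, prioritize_temperature=False):
    """Return the governing boost clock given both limits."""
    if prioritize_temperature:
        return temp_limited_mhz                        # temperature target governs
    return min(power_limited_mhz, temp_limited_mhz)    # default: whichever clamps first
```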

Unfortunately, because GPU Boost 2 is only implemented in Titan, it’s hard to evaluate just how much “better” this is in any quantitative sense. We will be able to present specific Titan numbers on Thursday, but other than saying that our Titan maxed out at 992MHz at its highest boost bin of 1.162v, we can’t directly compare it to how the GTX 680 handled things.

Comments

  • CeriseCogburn - Monday, March 4, 2013 - link

    lol - DREAM ON about goodwill and maintaining it.

    nVidia is attacked just like Intel, only worse. They have the least amount of "goodwill" any company could possibly have, as characterized by the dunderheads all over the boards and the also whining reviewers who cannot stand the "arrogant know it all confident winners who make so much more money playing games as an nVidia rep"...

    Your theory is total crap.

    What completely overrides it is the simple IT JUST WORKS nVidia tech and end user experience.
    Add in the multiplied and many extra features and benefits, and that equals the money in the bank that lets the end user rest easy that new games won't become an abandoned black holed screen.

    Reputation ? The REAL reputation is what counts, not some smarmy internet crybaby loser with lower self esteem than a confident winner with SOLID products, the BEST of the industry.
    That's arrogance, that's a winner, that's a know it all, that's Mr. Confidence, that's the ca$h and carry ladies magnet, and that's what someone for the crybaby underdog loser crash crapster company cannot stand.
  • Galvin - Tuesday, February 19, 2013 - link

    Can this card do 10bit video or still limited to 8bit?
  • alpha754293 - Tuesday, February 19, 2013 - link

    Does this mean that Tesla-enabled applications will be able to make use of Titan?
  • Ryan Smith - Tuesday, February 19, 2013 - link

    It depends on what features you're trying to use. From a fundamental standpoint even the lowly GT 640 supports the baseline Kepler family features, including FP64.
  • Ankarah - Tuesday, February 19, 2013 - link

    Highly unusual for a company to have two of their products at the exact same price point, catering to pretty much the same target audience.

    I guess it could be viewed as a poor-man's-Tesla but as far as the gaming side goes, it's quite pointless next to the 690, not to mention very confusing to anyone other than those who are completely up-to-date on the latest news stories.
  • CeriseCogburn - Monday, March 4, 2013 - link

    Let's see, single GPU core fastest in the gaming world, much lower wattage, no need for profiles, constant FPS improvement - never the same or no scaling issues across all games, and you find it strange?

    I find your complete lack of understanding inexcusable since you opened the piehole and removed all doubt.
  • Voidman - Tuesday, February 19, 2013 - link

    Finally something I could be excited about. I have a hard time caring much about the latest smart phone or tablet. A new high end video card though is something different altogether. And then it turns out to be a "luxury product" and priced at 1k. Cancel excitement. Oh well, I'm happy with my 680 still, and I'm pretty sure I've still got overclocking room on it to boot. But for all those that love to hate on either AMD or Nvidia, this is what happens when one is not pushing the other. I have no doubt whatsoever that AMD would do the same if they were on top at the moment.

  • HanakoIkezawa - Tuesday, February 19, 2013 - link

    The price is a bit disappointing but not unexpected. I was hoping this would be 750-850, not so I could buy one but so that I could get a second 670 for a bit cheaper :D

    But in all seriousness, this coming out does not make the 680 or 670 any slower or less impressive. In the same way the 3970x's price tag doesn't make the 3930k any less of a compelling option.
  • johnsmith9875 - Tuesday, February 19, 2013 - link

    Why not just make the video card the computer and let the intel chip handle graphics???
  • Breit - Tuesday, February 19, 2013 - link

    Thanks Ryan, this made my day! :)

    Looking forward to part 2...
