GPU Boost: Turbo For GPUs

Now that we’ve had a chance to take a look at the Kepler architecture, let’s jump into features. We’ll start with the feature that’s going to have the biggest impact on performance: GPU Boost.

Much like we’ve seen with CPUs in previous years, GPUs are reaching a point where performance is limited by overall power consumption. Until the last couple of years, GPU power consumption was allowed to slowly drift up with each generation, allowing performance to scale to an incredible degree. However, for many of the same reasons NVIDIA has been focusing on efficiency in general, GPUs are now being pressured to do more without consuming more.

The problem is of course compounded by the fact that there is a wide range of possible workloads for a GPU, much as there is for a CPU. With video cards designed around specific TDPs for both power supply and heat dissipation reasons, the goal becomes one of maximizing performance within your assigned TDP.

The answer to that problem in the CPU space is turbo boosting – that is, increasing the clockspeed of one or more CPU cores so long as the chip as a whole remains at or under its TDP. By using turbo, Intel and AMD have been able to maximize the performance of lightly threaded applications by boosting a handful of cores to high speeds, while at the same time maximizing heavily threaded performance by boosting a large number of cores by a small amount or not at all. For virtually any CPU-bound workload the CPU can put itself into a state where the appropriate execution units are making the most of their TDP allocation.

Of course in the GPU world things aren’t that simple – for starters we don’t have a good analog for a lightly threaded workload – but the concept is similar. GPUs need to be able to run demanding tasks such as Metro 2033 or even pathological applications like FurMark while staying within their designated TDPs, and at the same time they need to deliver good performance for compute applications and games that aren’t quite so demanding. Or put another way, tasks that are GPU limited but aren’t maxing out every aspect of the GPU need to be able to get good performance without being held back by the need to keep heavy workloads in check.

In 2010 AMD took a stab at this scenario with PowerTune, which was first introduced on the Radeon HD 6900 series. With PowerTune AMD could set their clockspeeds relatively high, and should any application demand too much of the GPU, PowerTune would throttle the GPU down to avoid exceeding its TDP. In essence the GPU could be clocked too high and simply throttled back if it tried to draw too much power, allowing lighter workloads to operate at higher clockspeeds while keeping power consumption in check for heavy workloads.

With the introduction of Kepler NVIDIA is going to be tackling this problem for their products, and their answer is GPU Boost.

In a nutshell, GPU Boost is turbo for the GPU. With GPU Boost NVIDIA is able to increase the core clock of the GTX 680 beyond its 1006MHz base clock, and like turbo on CPUs this is based on the power load, the GPU temperature, and the overall quality of the GPU. Given the right workload the GTX 680 can boost by 100MHz or more, while under a heavy workload the GTX 680 may not move past 1006MHz.

GPU Boost adds a new wrinkle to performance, of course, but ultimately there are two numbers to pay attention to. The first is what NVIDIA calls the base clock: this is another name for the regular core clock, and it represents the minimum full-load clock for the GTX 680; when operating at its full 3D clocks, the GTX 680 will never drop below this number.

The second number is what NVIDIA calls the boost clock, and this one is far more nebulous, as it relates to the operation of GPU Boost itself. With GPU Boost NVIDIA does not have an explicit top clock; they’re letting chip quality play a significant role in GPU Boost. Because GPU Boost is based around power consumption and temperatures, higher quality GPUs that operate with lower power consumption can boost higher than lower quality GPUs with higher power consumption. In essence the quality of the chip determines its boost limit under normal circumstances.

Accordingly, the boost clock is intended to convey what kind of clockspeeds buyers can expect to see with the average GTX 680. Specifically, the boost clock is based on the average clockspeed of the average GTX 680 that NVIDIA has seen in their labs. This is what NVIDIA had to say about the boost clock in their reviewer’s guide:

The “Boost Clock” is the average clock frequency the GPU will run under load in many typical non-TDP apps that require less GPU power consumption. On average, the typical Boost Clock provided by GPU Boost in GeForce GTX 680 is 1058MHz, an improvement of just over 5%. The Boost Clock is a typical clock level achieved running a typical game in a typical environment

In other words, when the average GTX 680 is boosting it reaches 1058MHz on average.

Ultimately NVIDIA and their customers are going to go through some teething issues on this, and there’s no way around it. Although the idea of variable performance isn’t a new one – we already see this to some degree with CPU turbo – this is the first time we’ve seen something like this in the GPU space, and it’s going to take some time to get used to.

In any case while we can’t relate to you what the average GTX 680 does with GPU Boost, we can tell you about GPU Boost based on what we’ve seen with our review sample.

First and foremost, GPU Boost operates on the concept of steps, analogous to multipliers on a CPU. Our card has 9 steps, each 13MHz apart, ranging from 1006MHz to 1110MHz. And while it’s not clear whether every GTX 680 steps up in 13MHz increments, based on NVIDIA’s boost clock of 1058MHz this would appear to be the case, as that would be 4 steps over the base clock.
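
The step ladder we observed can be reproduced with a quick calculation. To be clear, the 13MHz increment and 9-step count are from our review sample only, and may differ on other cards:

```python
# Reconstruct the boost ladder observed on our sample (values from our
# card only; the increment and step count may differ on other GTX 680s).
BASE_CLOCK_MHZ = 1006
STEP_MHZ = 13
NUM_STEPS = 9

steps = [BASE_CLOCK_MHZ + i * STEP_MHZ for i in range(NUM_STEPS)]
print(steps)  # [1006, 1019, 1032, 1045, 1058, 1071, 1084, 1097, 1110]

# NVIDIA's 1058MHz boost clock lands exactly 4 steps above the base clock
assert steps[4] == 1058
```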

At each step our card uses a different voltage, listed in the table below. We should note that we’ve seen different voltages reported for the same step in some cases, so it’s not entirely clear what’s going on. In any case we’re listing the most common voltage we’ve recorded for each step.

GeForce GTX 680 GPU Boost Step Table
Frequency    Voltage
1110MHz      1.175v
1097MHz      1.150v
1084MHz      1.137v
1071MHz      1.125v
1058MHz      1.125v
1045MHz      1.112v
1032MHz      1.100v
1019MHz      1.075v
1006MHz      1.062v
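
For anyone scripting against these numbers, the same table as a lookup; again, these are the most common voltages recorded on our sample only, and other cards may ship with different voltage bins:

```python
# Frequency (MHz) -> most common voltage (V) recorded on our sample
BOOST_STEP_VOLTAGE = {
    1006: 1.062, 1019: 1.075, 1032: 1.100, 1045: 1.112,
    1058: 1.125, 1071: 1.125, 1084: 1.137, 1097: 1.150,
    1110: 1.175,
}

# Voltage rises (weakly) monotonically with frequency on our card
freqs = sorted(BOOST_STEP_VOLTAGE)
assert all(BOOST_STEP_VOLTAGE[a] <= BOOST_STEP_VOLTAGE[b]
           for a, b in zip(freqs, freqs[1:]))
```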

As for deciding what clockspeed to step up to, GPU Boost determines this based on power consumption and GPU temperature. NVIDIA has on-card sensors to measure power consumption at the rails leading into the GPU, and will only allow the video card to step up so long as it’s below the GPU Boost power target. This target isn’t published, but NVIDIA has told us that it’s 170W. Note that this is not the TDP of the card, which is 195W. Because NVIDIA doesn’t have a true throttling mechanism with Kepler, their TDP is higher than their boost target, as heavy workloads can push power consumption well over 170W even at 1006MHz.

Meanwhile GPU temperatures also play an important role in GPU boost. Our sample could only hit the top step (1110MHz) if the GPU temperature was below 70C; as soon as the GPU reached 70C it would be brought down to the next highest step of 1097MHz. This means that the top step is effectively unsustainable on the stock GTX 680, as there are few if any applications that are both intensive enough to require high clockspeeds and light enough to not push GPU temperatures up.
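
To make the two constraints concrete, here is a simplified sketch of the stepping behavior we observed. The function, its names, and the one-step-at-a-time policy are our own illustration, not NVIDIA's actual algorithm:

```python
# A simplified sketch of GPU Boost stepping as we observed it; this is
# our own illustration, not NVIDIA's actual algorithm.
STEPS = [1006 + i * 13 for i in range(9)]  # our card's boost ladder (MHz)

def next_boost_step(current_mhz, power_w, temp_c,
                    power_target_w=170.0, top_step_temp_limit_c=70.0):
    idx = STEPS.index(current_mhz)
    # Step up only while measured board power is under the boost target...
    if power_w < power_target_w and idx < len(STEPS) - 1:
        idx += 1
    # ...and step down once it goes over.
    elif power_w >= power_target_w and idx > 0:
        idx -= 1
    # Our sample would only hold the top step (1110MHz) below 70C.
    if idx == len(STEPS) - 1 and temp_c >= top_step_temp_limit_c:
        idx -= 1
    return STEPS[idx]

assert next_boost_step(1097, 150.0, 65.0) == 1110  # cool and under target
assert next_boost_step(1097, 150.0, 72.0) == 1097  # too hot for the top step
assert next_boost_step(1110, 180.0, 65.0) == 1097  # over the power target
```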

Finally, with the introduction of GPU Boost, overclocking has been affected as well. Rather than directly controlling the core clock, overclocking is accomplished through the combined manipulation of the GPU Boost power target and a GPU clock offset. Power target manipulation works almost exactly as you’d expect: you can adjust the GPU Boost power target anywhere from -30% to +32%, similar to how adjusting the PowerTune limit works on AMD cards. Increasing the power target allows the video card to draw more power, thereby allowing it to boost to higher steps than would normally be possible (but no higher than the top step), while decreasing the power target keeps the card from boosting at all.

The GPU offset meanwhile manipulates the steps themselves. By adjusting the GPU offset, all of the GPU Boost steps are shifted by a roughly equal amount, depending on what clocks the PLL driving the GPU can generate. For example, a +100MHz offset would raise the base step from 1006MHz to 1106MHz, and so on up to the top step, which would rise to 1210MHz.
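
As a sketch, computed from our card's ladder and ignoring PLL granularity (real hardware can only hit clocks the PLL can generate, so the shift is only roughly equal across steps):

```python
# Applying a GPU clock offset to the boost ladder; this ignores PLL
# granularity, so it is only an approximation of real hardware behavior.
def apply_gpu_offset(steps_mhz, offset_mhz):
    return [s + offset_mhz for s in steps_mhz]

stock = [1006 + i * 13 for i in range(9)]   # our card's ladder (MHz)
shifted = apply_gpu_offset(stock, 100)      # the +100MHz example
assert shifted[0] == 1106 and shifted[-1] == 1210
```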

While each factor can be adjusted separately, it’s adjusting both factors together that truly unlocks overclocking. Adjusting the GPU offset alone won’t achieve much if most workloads are limited by GPU Boost’s power target, and adjusting the power target alone won’t improve the performance of workloads that are already allowed to reach the highest step. By combining the two you can increase the GPU clock and at the same time raise the power target so that workloads are actually allowed to hit those new clocks.

On that note, overclocking utilities will be adding support for GPU Boost over the coming weeks. The first overclocking utility with support for GPU Boost is EVGA’s Precision X, the latest rendition of their Precision overclocking utility. NVIDIA supplied Precision X Beta 20 with our review samples, and as we understand it that will be made available shortly for GTX 680 buyers.

Finally, while we’ll go into full detail on overclocked performance in a bit, we wanted to quickly showcase the impact of GPU Boost, both on regular performance and on overclocking. First up, we ran all of our benchmarks at 2560 with the power target for GPU Boost set to -16%, which reduces the power target to roughly 142W. While GPU Boost cannot be disabled outright, this was enough to ensure that it almost never activated.
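
The percentage maps onto the 170W boost target straightforwardly. A quick sketch (the function name is ours, and the actual slider granularity is up to the overclocking utility):

```python
# Mapping the power target slider onto the 170W GPU Boost target; the
# function is our own illustration of the arithmetic.
POWER_TARGET_W = 170.0  # GPU Boost power target, distinct from the 195W TDP

def adjusted_power_target(percent):
    # Valid range exposed by the tools we used: -30% to +32%
    if not -30 <= percent <= 32:
        raise ValueError("power target adjustment out of range")
    return POWER_TARGET_W * (1 + percent / 100.0)

# The -16% setting used for this test works out to roughly 142W
assert abs(adjusted_power_target(-16) - 142.8) < 0.01
```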

As is to be expected, the impact of GPU Boost varies depending on the game, but overall we found that enabling GPU Boost on our card only improves performance by an average of 3%, and by no more than 5%. While this is effectively free performance, it is also a stark reminder that GPU Boost isn’t nearly as potent as turboing on a CPU – at least not yet. As there’s no real equivalent to the lightly threaded workload for GPUs, the need for a wide range of potential GPU Boost clocks is not nearly as great as the need for high turbo clocks on a CPU. Even a light GPU workload is relatively heavy when graphics itself is an embarrassingly parallel task.

Our other quick look is at overclocking. The following is what our performance looked like at 2560 with a power target of +16% (195W) and a GPU offset of +100MHz, compared to stock GPU Boost settings.

Overall raising the GPU offset is much more effective than raising the power target to improve performance, reflecting the fact that in our case most games were limited by the GPU Boost clock rather than the power target at least some of the time.

Comments

  • Arbie - Friday, March 23, 2012 - link

    "I've always said, choose your hardware by application, not by overall results"

    Actually, that is what I said. But I wasn't as pompous about it, which may have confused you.

    ;)
  • CeriseCogburn - Thursday, March 22, 2012 - link

    Well it's a good thing fair and impartial Ryan put the two games the 680 doesn't trounce the 7970 in up first in the bench lineup, so it would make AMD look very good to the chart and chan click-through crowd.
    Yeah, I like an alphabet that goes C for Crysis then M for Metro, so in fact A for AMD comes in first!
  • Sivar - Thursday, March 22, 2012 - link

    Many Anandtech articles not written by Anand have a certain "written by an intelligent, geeky, slightly insecure teenager" feel to them. While still much better than other tech websites (and I've been around them all for some time), Anand is a cut above.

    This article, and a few others you've written, show that you are really getting the hang of being a truly professional writer.
    - Great technical detail without paraphrasing marketing material.
    - Not even the slightest hint of "fanboyism" for one company over another.
    - Doesn't drag on and on or repeat the same thing several times in slightly different ways.
    - Anand, who usually takes the cool articles for himself, had the trust in you to let you do this one solo.

    I would request, however, that you hyperlink some of the acronyms used. Even after being a reader since the Geocities days, it's sometimes difficult to remember every term and three letter combination on an article with so much depth and breadth.
    Also, for the sake of mobile users and image quality, there really needs to be some internal discussion on when to use which image format. PNG-8 for large areas of flat or gradient color, charts, screen captures, and slides -- but only when the source is not originally a JPG (because JPG subtly corrupts the image so as to ruin PNG's chance of compression) and JPG for pretty much all photographs. I wrote a program to analyze images and suggest a format -- Look for "ImageGuide" on Google Code.

    In any case, the fact that I can think of only the most minor of suggestions says a lot, as opposed to when I read a certain other website named after its founder of a much shorter name.
  • Sabresiberian - Thursday, March 22, 2012 - link

    I agree, another thorough review by one of the better people doing it on the internet. Thanks Ryan!

    As far as the dig on Tomshardware, I don't quite agree there. I notice Chris Angelini wrote the GTX 680 article for that website, and I'm very much looking forward to reading another thorough review.

    ;)
  • Sivar - Thursday, March 22, 2012 - link

    Tom's may have improved greatly since I last gave it another chance, but since not long after they were bought out, I've found the reporting to be flagrantly sensationalist and light on fact. The entity that bought them out, and the journalists he hired, are well known for just that. Many times I read the author's conclusion and wondered if he was looking at the same bar charts that I was.

    To be blunt, at times when people quoted their site, I felt as if I'd shifted into an alternate dimension where otherwise knowledgeable people were comically oblivious to the most egregiously flawed journalism. It was as if a group of Nobel prize winners were unthinkingly quoting Bill O'Reilly or Michael Moore on a political matter as if it was assumed they were a paragon of truth and even-headedness.
  • Sabresiberian - Thursday, March 22, 2012 - link

    Very well said. (I especially like the comment using both a staunch conservative and flaming liberal as examples of poor source material.)

    I do tend to look at specific writers, and probably give Toms too much credit based on that more narrow view. I freely admit to having a somewhat fanboy feel for the site, too, since it was one of the first and set a mark, at one time, unreached by any other site I knew about.

    I have been a bit confused by some statements made by some writers on that site, conclusions that didn't seem to be supported by the data they published. Perhaps it's time to step up and comment when that happens, instead of just interpreting my confusion as a lack of careful reading on my part (which happens to the best of us).

    ;)
  • Nfarce - Sunday, March 25, 2012 - link

    "It was as if a group of Nobel prize winners were unthinkingly quoting Bill O'Reilly or Michael Moore on a political matter"

    Well Obama, Al Gore, and Arafat were each given a Nobel Prize, so I'd hardly consider that entity a good reference point for an analogy about validity. In any event, I welcome opinions from all sides. The mainstream "news" media long ago abandoned objective reporting. One is most informed by reading different takes on the same "facts" and formulating one's own opinion. Of course, you also have to research outside the spectrum for some information that the mainstream media will hide from time to time: like how bad off the US economy really is.
  • Ryan Smith - Thursday, March 22, 2012 - link

    Thanks for the kind words, though I'm not sure whether "slightly insecure teenager" is a compliment on my youthful vigor or a knock against my immaturity.;-)

    Anyhow, we usually use PNGs where it makes sense. All of my photo processing is done with Photoshop, so I know ahead of time whether JPG or PNG will spit out a smaller image, and any blurring that may result. Generally speaking we should be using the right format in the right place, but if you have any specific examples where it's not, drop me a line (it will be hard to keep track of this thread) and I'll take a look.
  • IlllI - Thursday, March 22, 2012 - link

    Ok, there seems to be some confusion here. Many times in the review you directly compare it to GF114 (which I think was never present in the 580 series), yet at the same time you say the 680 is a direct replacement for the 580.
    I don't think it is. What it DOES seem like, however, is that this 680 was indeed supposed to be the mainstream part, but since the ATI competition was so weak, NVIDIA just jacked up the card number (and price).
  • CeriseCogburn - Friday, March 23, 2012 - link

    So Nvidia should have dropped the 680, their GTX580($450+) killer in at $299...
    Charlie D's $299 rumor owns internet group think brains.
