GPU Boost 2.0: Overclocking & Overclocking Your Monitor

The first half of the GPU Boost 2.0 story is of course the fact that with 2.0 NVIDIA is switching from power based controls to temperature based controls. However, there is also a second story here, and that is the impact on overclocking.

With the GTX 680, overclocking capabilities were limited, particularly in comparison to the GeForce 500 series. The GTX 680 could have its power target raised (a form of guaranteed “overclocking”), and further overclocking could be achieved by using clock offsets. But perhaps most importantly, voltage control was forbidden, with NVIDIA going so far as to nix EVGA and MSI’s voltage-adjustable products after a short time on the market.

There are a number of reasons for this, and hopefully one day soon we’ll be able to get into NVIDIA’s Project Greenlight video card approval process in significant detail so that we can better explain this, but the primary concern was that without strict voltage limits some of the more excessive users might blow out their cards by setting voltages too high. And while the responsibility for this ultimately falls to the user, and in some cases the manufacturer of their card (depending on the warranty), it makes NVIDIA look bad regardless. The end result was that voltage control on the GTX 680 (and lower cards) was disabled for everyone, regardless of what a card was capable of.

With Titan this has finally changed, at least to some degree. In short, NVIDIA is bringing back overvoltage control, albeit in a more limited fashion.

For Titan cards, partners will have the final say in whether they wish to allow overvolting or not. If they choose to allow it, they get to set a maximum voltage (Vmax) figure in their VBIOS. The user in turn is allowed to increase their voltage beyond NVIDIA’s default reliability voltage limit (Vrel) up to Vmax. As part of the process however users have to acknowledge that increasing their voltage beyond Vrel puts their card at risk and may reduce the lifetime of the card. Only once that’s acknowledged will users be able to increase their voltages beyond Vrel.
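To put the Vrel/Vmax mechanism in concrete terms, here is a minimal sketch of the clamping logic involved. The function name and voltage values are purely illustrative (real overclocking utilities go through NVIDIA’s driver interfaces), but the behavior mirrors what’s described above: anything past Vrel requires the user’s acknowledgment, and nothing past the partner’s Vmax is ever granted.

    # Illustrative sketch only; names and voltage values are hypothetical.
    V_REL = 1.1625  # NVIDIA's default reliability voltage limit (illustrative)
    V_MAX = 1.2000  # partner-defined ceiling set in the VBIOS (illustrative)

    def clamp_requested_voltage(requested: float, acknowledged_risk: bool) -> float:
        """Clamp a requested core voltage to the allowed range."""
        if requested <= V_REL:
            return requested          # voltages up to Vrel are always permitted
        if not acknowledged_risk:
            return V_REL              # no acknowledgment: silently cap at Vrel
        return min(requested, V_MAX)  # acknowledged: allow up to the VBIOS Vmax

    print(clamp_requested_voltage(1.19, acknowledged_risk=False))  # 1.1625
    print(clamp_requested_voltage(1.19, acknowledged_risk=True))   # 1.19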

With that in mind, overclocking has also changed in some subtler ways beyond overvolting. Memory and core offsets are still in place, but with the switch from power based monitoring to temperature based monitoring, the power target slider has been augmented with a separate temperature target slider.

The power target slider is still responsible for controlling the TDP as before, but with the ability to prioritize temperatures over power consumption it appears to be somewhat redundant (or at least unnecessary) for more significant overclocking. That leaves us with the temperature slider, which is really a control for two functions.

First and foremost, the temperature slider controls Titan’s target temperature. By default this is 80C, and it may be turned all the way up to 95C. The higher the temperature setting, the more frequently Titan can reach its highest boost bins, in essence making this a weaker form of overclocking, just like the power target adjustment was on GTX 680.
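As a rough sketch of how a temperature-targeted boost loop behaves (the bin spacing, clocks, and one-bin-per-tick stepping below are invented for illustration and are not NVIDIA’s actual algorithm):

    # Hypothetical boost bins; Titan's real bin table is not public.
    BOOST_BINS_MHZ = [837, 850, 863, 876, 889, 902]

    def step_boost(bin_idx: int, gpu_temp_c: float, target_c: float) -> int:
        """Move one boost bin per control tick based on the temperature target."""
        if gpu_temp_c < target_c and bin_idx < len(BOOST_BINS_MHZ) - 1:
            return bin_idx + 1  # thermal headroom available: climb a bin
        if gpu_temp_c > target_c and bin_idx > 0:
            return bin_idx - 1  # over target: back off a bin
        return bin_idx

Raising the target simply gives the loop more thermal headroom before it backs off.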

The second function controlled by the temperature slider is the fan curve, which for all practical purposes follows the temperature slider. With modern video cards ramping up their fan speeds rather quickly once they get into the 80C range, merely increasing the temperature target alone wouldn’t be particularly desirable in most cases due to the extra noise it would generate, so NVIDIA tied the fan curve to the temperature slider. By doing so it ensures that fan speeds stay relatively low until temperatures start exceeding the temperature target. This seems a bit counterintuitive at first, but when put in the perspective of the goal – higher temperatures without an increase in fan speed – it starts to make sense.
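One way to picture the tie-in is a fan curve anchored to the temperature target rather than to fixed temperatures. The duty cycle figures in this sketch are invented for illustration:

    def fan_duty_percent(gpu_temp_c: float, target_c: float) -> float:
        """Stay quiet below the target, then ramp steeply above it."""
        idle_duty, ramp_per_degree, max_duty = 30.0, 5.0, 100.0
        over = gpu_temp_c - target_c
        if over <= 0:
            return idle_duty  # below target: hold a low, quiet fan speed
        return min(idle_duty + ramp_per_degree * over, max_duty)

    print(fan_duty_percent(85, 80))  # 55.0 - past the target, fan ramps up
    print(fan_duty_percent(85, 90))  # 30.0 - same temp, raised target: quiet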

Finally, in what can only be described as a love letter to the boys over at 120hz.net, NVIDIA is also introducing a simplified monitor overclocking option, which can be used to increase the refresh rate sent to a monitor in order to coerce it into operating at that higher rate. Notably, this isn’t anything that couldn’t be done before with some careful manipulation of the GeForce control panel’s custom resolution option, but with the option exposed in PrecisionX and other utilities, monitor overclocking has been reduced to a simple slider rather than a complex mix of timings and pixel counts.

Though this feature can technically work with any monitor, it’s primarily geared towards monitors such as the various Korean LG-based 2560x1440 monitors that have hit the market in the past year, a number of which ship with electronics capable of operating far in excess of their standard 60Hz. On the models that can handle it, modders have been able to push some of these 2560x1440 monitors to 120Hz and beyond, doubling their native 60Hz refresh rate and greatly improving smoothness to levels similar to a native 120Hz TN panel, but without the resolution and quality drawbacks inherent to those TN products.
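The underlying math is straightforward pixel clock arithmetic: the clock the monitor’s electronics must handle is the refresh rate multiplied by the total (active plus blanking) pixel count. A quick sketch, using typical reduced-blanking totals for 2560x1440 – illustrative figures only, as exact timings vary by monitor:

    def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: float) -> float:
        """Required pixel clock for a given set of display timings."""
        return h_total * v_total * refresh_hz / 1e6

    # 2560x1440 active area plus typical reduced blanking (illustrative)
    H_TOTAL, V_TOTAL = 2720, 1481

    print(pixel_clock_mhz(H_TOTAL, V_TOTAL, 60))   # ~241.7 MHz at stock 60Hz
    print(pixel_clock_mhz(H_TOTAL, V_TOTAL, 120))  # ~483.4 MHz at 120Hz

Note that ~483MHz is far beyond dual-link DVI’s nominal 330MHz pixel clock ceiling, which is part of why these overclocks depend on electronics willing to accept out-of-spec clocks, and why tightening the blanking intervals further helps.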

Of course it goes without saying that just like any other form of overclocking, monitor overclocking can be dangerous and risks breaking the monitor. On that note, out of our monitor collection we were able to get our Samsung 305T to accept 75Hz, but whether it was the panel or the driving electronics holding things back, running at 75Hz didn’t seem to have any impact on performance, smoothness, or responsiveness. This is truly a “your mileage may vary” situation.

157 Comments

  • bigboxes - Tuesday, February 19, 2013

    This is Wreckage we're talking about. He's trolling. Nothing to see here. Move along.
  • chizow - Tuesday, February 19, 2013

    I agree with his title, that AMD is at fault at the start of all of this, but not necessarily with the rest of his reasonings. Judging from your last paragraph, you probably agree to some degree as well.

    This all started with AMD's pricing of the 7970, plain and simple. $550 for a card that didn't come anywhere close to justifying the price against the last-gen GTX 580, a good card but completely underwhelming in that flagship slot.

    The 7970 pricing allowed Nvidia to:

    1) price their mid-range ASIC, GK104, at the flagship SKU position,
    2) undercut AMD to boot, making them look like saints at the time,
    3) delay the launch of their true flagship SKU, GK100/110, by nearly a full year, and
    4) jack up the prices of the GK110 as an ultra-premium part.

    I saw #4 occurring well over a year ago, which was my biggest concern over the whole 7970 pricing and GK104 product placement fiasco, but I had no idea Nvidia would be so usurious as to charge $1k for it. I was expecting $750-800....$1k....Nvidia can go whistle.

    But yes, long story short, Nvidia's greed got us here, but AMD definitely started it all with the 7970 pricing. None of this happens if AMD prices the 7970 in-line with their previous high-end in the $380-$420 range.
  • TheJian - Wednesday, February 20, 2013

    You realize you're dogging amd for pricing when they lost 1.18B for the year correct? Seriously you guys, how are you all not understanding they don't charge ENOUGH for anything they sell? They had to lay off 30% of the workforce, because they don't make any money on your ridiculous pricing. Your idea of pricing is KILLING AMD. It wasn't enough they laid off 30%, lost their fabs, etc...You want AMD to keep losing money by pricing this crap below what they need to survive? This is the same reason they lost the cpu war. They charged less for their chips for the whole 3yrs they were beating Intel's P4/presHOT etc to death in benchmarks...NV isn't charging too much, AMD is charging too LITTLE.

    AMD has lost 3-4B over the last 10yrs. This means ONE thing. They are not charging you enough to stay in business.

    This is not complicated. I'm not asking you guys to do calculus here or something. If I run up X bills to make product Y, and after selling Y can't pay X I need to charge more than I am now or go bankrupt.

    Nvidia is greedy because they aren't going to go out of business? Without Intel's money they are making 1/2 what they did 5yrs ago. I think they should charge more, but this is NOT gouging or they'd be making some GOUGING like profits correct? I guess none of you will be happy until they are both out of business...LOL
  • chizow - Wednesday, February 20, 2013

    1st of all, AMD as a whole lost money, AMD's GPU division (formerly ATI) has consistently operated at a small level of profit. So comparing GPU pricing/profits impact on their overall business is obviously going to be lost in the sea of red ink on AMD's P&L statement.

    Secondly, the massive losses and devaluation of AMD has nothing to do with their GPU pricing, as stated, the GPU division has consistently turned a small profit. The problem is the fact AMD paid $6B for ATI 7 years ago. They paid way too much, most sane observers realized that 7 years ago and over the past 5-6 years it's become more obvious. The former ATI's revenue and profits did not justify the $6B price tag and as a result, AMD was *FORCED* to write down their assets as there were some obvious valuation issues related to the ATI acquisition.

    Thirdly, AMD has said this very month that sales of their 7970/GHz GPUs in January 2013 alone exceeded sales of those cards in the previous *TWELVE MONTHS* prior. What does that tell you? It means their previous price points that steadily dropped from $550>500>$450 were more than the market was willing to bear given the product's price:performance relative to previous products and the competition. Only after they settled in on that $380/$420 range for the 7970/GHz edition along with a very nice game bundle did they start moving cards in large volume.

    Now you do the math, if you sell 12x as many cards in 1 month at $100 profit instead of 1/12x as many cards at $250 profit over the course of 1 year, would you have made more money if you just sold the higher volume at a lower price point from the beginning? The answer is yes. This is a real business case that any Bschool grad will be familiar with when performing a cost-value-profit analysis.
  • CeriseCogburn - Sunday, February 24, 2013

    Wow, first of all, basic common sense is all it takes, not some stupid idiot class for losers who haven't a clue and can't do 6th grade math.

    Unfortunately, in your raging fanboy fever pitch, you got the facts WRONG.

    AMD said it sold more in January than any other SINGLE MONTH of 2012 including "Holiday Season" months.

    Nice try there spanky, the brain farts just keep a coming.
  • frankgom23 - Tuesday, February 19, 2013

    Who wants to pay more for less
    no new features..., this is a paper launch of a useless board for the consumer, I don't even need to see official benchmarks, I'm completely disappointed.
    Maybe it's time to go back to ATI/AMD.
  • imaheadcase - Tuesday, February 19, 2013

    If you would actually READ the article you would know why.

    I love how people cry a river without actually knowing how the card will perform yet.
  • CeriseCogburn - Sunday, February 24, 2013

    Yes, go back, your true home is with losers and fools and crashers and bankrupt idiots who cannot pay for their own stuff.

    The last guy I talked to who installed a new AMD card for his awesome Eyefinity monitors gaming setup struggled for several days encompassing dozens of hours to get the damned thing stable, exclaimed several times he had finally achieved, and yet, the next day at it again, and finally took the thing, walked outside and threw it up against the brick wall "shattering it into 150 pieces" and "he's not going dumpster diving" he tells me, to try to retrieve a piece or part of it which might help him repair one of the two other DEAD upper range amd cards ( of 4 dead amd cards in the house ) he recently bought for mega gaming system.
    ROFL
    Yeah man, not kidding. He doesn't like nVidia by the way. He still is an amd fanboy.
    He is a huge gamer with multiple systems all running all day and night - and his "main" is "down"... needless to say it was quite stressful for him and has done nothing good for the very long friendship.
    LOL - Took it and in a seeing red rage and smashed that puppy to smithereens against the brick wall.

    So please, head back home, lots of lonely amd gamers need support.
  • iMacmatician - Tuesday, February 19, 2013

    "For our sample card this manifests itself as GPU Boost being disabled, forcing our card to run at 837MHz (or lower) at all times. This is why NVIDIA’s official compute performance figures are 4.5 TFLOPS for FP32, but only 1.3 TFLOPS for FP64. The former assumes that boost is enabled, while the latter is calculated around GPU Boost being disabled. The actual execution rate is still 1/3."

    But the 837 MHz base and 876 MHz boost clocks give 2·(876 MHz)·(2688 CCs) = 4.71 SP TFLOPS and 2·(837 MHz)·(2688 CCs)·(1/3) = 1.50 DP TFLOPS. What's the reason for the discrepancies?
  • Ryan Smith - Tuesday, February 19, 2013

    Apparently in FP64 mode Titan can drop down to as low as 725MHz in TDP-constrained situations. Hence 1.3TFLOPS, since that's all NVIDIA can guarantee.
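    For what it's worth, the three figures reconcile once the right clocks are plugged in (assuming 2 FLOPs per CUDA core per cycle):

        cores = 2688
        tflops = lambda clock_hz, rate: 2 * clock_hz * cores * rate / 1e12

        print(tflops(837e6, 1))    # ~4.50 - FP32 at the 837MHz base clock
        print(tflops(876e6, 1))    # ~4.71 - FP32 at the 876MHz max boost clock
        print(tflops(725e6, 1/3))  # ~1.30 - FP64 at the 725MHz floor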
