GPU Boost 2.0: Overclocking & Overclocking Your Monitor

The first half of the GPU Boost 2.0 story is of course the fact that with 2.0 NVIDIA is switching from power-based controls to temperature-based controls. However there is also a second story here, and that is the impact on overclocking.

With the GTX 680, overclocking capabilities were limited, particularly in comparison to the GeForce 500 series. The GTX 680 could have its power target raised (guaranteed “overclocking”), and further overclocking could be achieved through clock offsets. But perhaps most importantly, voltage control was forbidden, with NVIDIA going so far as to nix EVGA and MSI’s voltage-adjustable products after a short time on the market.

There are a number of reasons for this, and hopefully one day soon we’ll be able to get into NVIDIA’s Project Greenlight video card approval process in enough detail to explain it properly. The primary concern, however, was that without strict voltage limits some of the more aggressive users might blow out their cards by setting voltages too high. And while the responsibility for that ultimately falls on the user, and in some cases on the manufacturer of the card (depending on the warranty), it makes NVIDIA look bad regardless. The end result was that voltage control on the GTX 680 (and lower cards) was disabled for everyone, regardless of what a given card was capable of.

With Titan this has finally changed, at least to some degree. In short, NVIDIA is bringing back overvoltage control, albeit in a more limited fashion.

For Titan cards, partners will have the final say in whether they wish to allow overvolting. If they choose to allow it, they get to set a maximum voltage (Vmax) figure in their VBIOS. The user in turn is allowed to increase their voltage beyond NVIDIA’s default reliability voltage limit (Vrel), up to Vmax. As part of the process, however, users have to acknowledge that increasing their voltage beyond Vrel puts their card at risk and may reduce its lifetime. Only once that’s acknowledged will users be able to push their voltages beyond Vrel.
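
To make the gating concrete, here is a minimal sketch of the flow described above, written as illustrative Python rather than anything resembling NVIDIA’s actual driver interface; the Vrel and Vmax figures are placeholder values, since the real limits come from NVIDIA’s qualification process and the partner’s VBIOS.

```python
# Hypothetical sketch of the overvoltage gate described above; not NVIDIA's API.
# The voltage limits are made-up example values - real ones come from the VBIOS.

V_REL_MV = 1162   # NVIDIA's default reliability voltage limit (example value)
V_MAX_MV = 1200   # partner-defined ceiling set in the VBIOS (example value)

def set_core_voltage(requested_mv: int, user_acknowledged_risk: bool) -> int:
    """Return the voltage (in mV) that would actually be applied."""
    if requested_mv <= V_REL_MV:
        # Within the reliability limit: always allowed.
        return requested_mv
    if not user_acknowledged_risk:
        # Going past Vrel requires accepting the reduced-lifetime warning first.
        raise PermissionError("Acknowledge the overvoltage warning to exceed Vrel")
    # Past Vrel, requests are clamped to the partner's Vmax.
    return min(requested_mv, V_MAX_MV)

print(set_core_voltage(1150, user_acknowledged_risk=False))  # 1150
print(set_core_voltage(1250, user_acknowledged_risk=True))   # clamped to 1200
```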

Beyond overvolting, overclocking has also changed in some subtler ways. Memory and core offsets are still in place, but with the switch from power-based monitoring to temperature-based monitoring, the power target slider has been augmented with a separate temperature target slider.

The power target slider is still responsible for controlling the TDP as before, but with the ability to prioritize temperatures over power consumption, it becomes somewhat redundant (or at least unnecessary) for more significant overclocking. That leaves us with the temperature slider, which really controls two functions.

First and foremost, the temperature slider controls Titan’s target temperature. By default this is 80C, and it can be turned all the way up to 95C. The higher the temperature target, the more frequently Titan can reach its highest boost bins, in essence making this a weaker form of overclocking, much like the power target adjustment was on the GTX 680.
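
As a rough illustration of how temperature-target boosting behaves (this is our own simplified sketch, not NVIDIA’s actual algorithm, and the clock, bin, and power figures are merely representative), the controller can be thought of as stepping up one boost bin whenever there is thermal and power headroom, and stepping back down once a target is exceeded:

```python
# Simplified illustration of temperature-target boosting; not NVIDIA's real algorithm.
# Base clock, bin size, and limits are representative figures only.

BASE_CLOCK_MHZ = 837
BIN_STEP_MHZ = 13   # GPU Boost adjusts clocks in roughly 13MHz bins
MAX_BIN = 15        # arbitrary cap for this example

def next_boost_bin(current_bin: int, gpu_temp_c: float, board_power_w: float,
                   temp_target_c: float = 80.0, power_target_w: float = 250.0) -> int:
    """Step the boost bin up or down based on the temperature and power targets."""
    if gpu_temp_c > temp_target_c or board_power_w > power_target_w:
        return max(current_bin - 1, 0)        # over a limit: back off one bin
    return min(current_bin + 1, MAX_BIN)      # headroom available: step up one bin

bin_ = 0
for temp, power in [(70, 200), (76, 220), (81, 235), (83, 245), (79, 230)]:
    bin_ = next_boost_bin(bin_, temp, power)
    print(f"temp={temp}C power={power}W -> {BASE_CLOCK_MHZ + bin_ * BIN_STEP_MHZ}MHz")
```

Raising the temperature target simply moves the point at which that back-off kicks in, which is why it behaves as a mild overclock.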

The second function controlled by the temperature slider is the fan curve, which for all practical purposes follows the temperature slider. With modern video cards ramping up their fan speeds rather quickly once they get into the 80C range, merely increasing the temperature target on its own wouldn’t be particularly desirable in most cases due to the extra noise it would generate, so NVIDIA has tied the fan curve to the temperature slider. Doing so ensures that fan speeds stay relatively low until temperatures start exceeding the temperature target. This seems a bit counterintuitive at first, but when put in perspective of the goal – higher temperatures without an increase in fan speed – it starts to make sense.
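
Again as a sketch only (the curve shape and fan percentages below are invented for illustration; Titan’s real fan curve lives in its VBIOS), shifting the fan curve along with the temperature target looks something like this, with the fan holding a low speed until the GPU approaches whatever target has been set:

```python
# Rough illustration of a fan curve that follows the temperature target.
# The shape and percentages are made up; the real curve is defined in the VBIOS.

def fan_speed_pct(gpu_temp_c: float, temp_target_c: float = 80.0) -> float:
    idle_pct, max_pct = 30.0, 85.0
    ramp_start = temp_target_c - 10           # stay quiet until near the target
    ramp_end = temp_target_c + 15             # fully ramped well past the target
    if gpu_temp_c <= ramp_start:
        return idle_pct
    if gpu_temp_c >= ramp_end:
        return max_pct
    fraction = (gpu_temp_c - ramp_start) / (ramp_end - ramp_start)
    return idle_pct + fraction * (max_pct - idle_pct)

# Raising the target from 80C to 94C keeps the fan slower at any given temperature.
for target in (80.0, 94.0):
    speeds = [round(fan_speed_pct(t, target)) for t in (60, 75, 80, 85, 90, 95)]
    print(f"target={target:.0f}C -> fan% at 60/75/80/85/90/95C: {speeds}")
```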

Finally, in what can only be described as a love letter to the boys over at 120hz.net, NVIDIA is also introducing a simplified monitor overclocking option, which can be used to increase the refresh rate sent to a monitor in order to coerce it into operating at that higher refresh rate. Notably, this isn’t anything that couldn’t be done before with some careful manipulation of the GeForce control panel’s custom resolution option, but with the monitor overclocking option exposed in PrecisionX and other utilities, monitor overclocking has been reduced to a simple slider rather than a complex mix of timings and pixel counts.
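
For the curious, the “complex mix of timings and pixel counts” that the slider hides ultimately boils down to a pixel clock. The back-of-the-envelope estimate below uses rough reduced-blanking-style totals rather than any particular monitor’s EDID timings:

```python
# Back-of-the-envelope pixel clock estimate for a custom refresh rate.
# Blanking totals are rough reduced-blanking guesses, not a specific monitor's EDID.

def approx_pixel_clock_mhz(h_active: int, v_active: int, refresh_hz: float,
                           h_blank: int = 160, v_blank: int = 30) -> float:
    h_total = h_active + h_blank
    v_total = v_active + v_blank
    return h_total * v_total * refresh_hz / 1e6

for hz in (60, 75, 120):
    mhz = approx_pixel_clock_mhz(2560, 1440, hz)
    print(f"2560x1440 @ {hz}Hz needs roughly {mhz:.0f} MHz of pixel clock")
# Roughly 240, 300, and 480 MHz respectively; the last is well beyond dual-link
# DVI's ~330 MHz spec limit, which is why pushing these refresh rates depends on
# cooperative monitor electronics and good cabling.
```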

Though this feature can technically work with any monitor, it’s primarily geared towards monitors such as the various Korean LG-panel-based 2560x1440 monitors that have hit the market in the past year, a number of which have come with electronics capable of operating far in excess of their standard 60Hz refresh rate. On the models that can handle it, modders have been able to push some of these 2560x1440 monitors to 120Hz and beyond, doubling their native refresh rate and greatly improving smoothness to levels similar to a native 120Hz TN panel, but without the resolution and image quality drawbacks inherent to those TN products.

Of course it goes without saying that just like any other form of overclocking, monitor overclocking can be dangerous and risks breaking the monitor. On that note, out of our monitor collection we were able to get our Samsung 305T up to 75Hz, but whether due to the panel or the driving electronics, the overclock didn’t seem to have any impact on performance, smoothness, or responsiveness. This is truly a “your mileage may vary” situation.

Comments

  • vacaloca - Tuesday, February 19, 2013

    A while ago when the K20 released and my advisor didn't want to foot the bill, I ended up footing it myself. Looks like the K20 might be going to eBay since I don't need Hyper-Q MPI and GPUDirect RDMA, or ECC for that matter. I do suspect that it might be possible to crossflash this card with a K20 or K20X BIOS and mod the soft straps to enable the missing features... but the video outputs would probably be useless (and the warranty void, etc.) so it's not really an exercise worth doing.

    Props to NVIDIA for releasing this for us compute-focused people and thanks to AnandTech for the disclosure on FP64 enabling. :)
  • extide - Tuesday, February 19, 2013

    Can you please run some F@H benchmarks on this card? I would be very, very interested to see how well it folds. Also, could you provide some power consumption numbers (watts at system idle and watts when only the GPU is folding)?

    That would be great :)
    Thanks!
  • Ryan Smith - Tuesday, February 19, 2013

    OpenCL is broken with the current press drivers, so I won't have any more information until NVIDIA issues new drivers.
  • jimhans1 - Tuesday, February 19, 2013

    Alright, the whining about this being a $1000 card is just stupid; NVIDIA has priced this right in my eyes on the performance/noise/temperature front. They have never billed this as anything other than an extreme-style GPU, just like the 690. Yes, the 690 will outperform this in raw usage, but not by much I'm guessing, and it will run hotter, louder, and use more power than the Titan, not to mention the possible SLI issues that have plagued ALL SLI/CF-on-one-PCB cards to date. If you want THE high end MAINSTREAM card, you get the 680; if you want the EXTREME card(s), you get the Titan or 690.

    Folks, we don't yell at Ferrari or Bugatti for pricing their vehicles to their performance capabilities; nobody yelled at PowerColor for pricing the Devil 13 at $1000 even though the 690 spanks it on ALMOST all fronts for $100 LESS.

    Yes, I wish I could afford 1 or 3 of the Titans, but I am not going to yell and whine about the $1000 price because I CAN'T afford them. It gives me a goal to try and save my shekels to get at least 2 of them before year's end; hopefully the price may (but probably won't) have dropped by then.
  • chizow - Tuesday, February 19, 2013

    The problem with your car analogy is that Nvidia is now charging you Bugatti prices for the same BMW series you bought 2 years ago. Maybe an M3 level of trim this time around, but it's the same class of car, just 2x the price.
  • Sandcat - Wednesday, February 20, 2013

    The high end 28nm cards have all been exercises in gouging. At least they're being consistent with the 'f*ck the customer because we have a duopoly' theme.
  • Kevin G - Tuesday, February 19, 2013

    The card is indeed a luxury product. Like all consumer cards, it is crippled in some way compared to the Quadro and Tesla lines. Not castrating FP64 performance is big. I guess NVIDIA finally realized that the HPC market values reliability more than raw compute, hence why EDC/ECC is disabled. Ditto for RDMA, though I strongly suspect that RDMA is still used for SLI between GeForce cards - it's just locked out to other vendors' hardware.

    The disabling of GPU Boost for FP64 workloads is odd. Naturally the card should consume a bit more energy under FP64 workloads, which would result in either higher temps at the same frequency as FP32 or lower clocks at the same temperatures. The surprise is that users don't have the flexibility to choose or adjust those settings.

    Display overclocking has me wondering exactly what is being altered. DVI and DP operate at distinct link frequencies, and moving to a higher refresh rate at higher resolutions should also increase the required frequency. Cable quality would potentially have an impact here as well, though for lower resolutions, driving them at a higher refresh rate should still be within the cabling spec.
  • Kepe - Tuesday, February 19, 2013

    The comment section is filled with NVIDIA hate about how they dropped the ball, lost their heads, smoked too much, and so on. What you don't seem to understand is that this is not a mainstream product. It's not meant for those who look at performance/$ charts when buying their graphics cards. This thing is meant for those who have too much money on their hands, not the average Joe building his next gaming rig. And as such, this is a valid product at a valid price point. It's a bit like the X-series Intel processors: if you look at the performance compared to their more regular products, the 1000+ dollar price is completely ridiculous.

    You could also compare the GTX Titan to a luxury phone. They use extravagant building materials and charge a lot extra for the design and "bling", but raw performance isn't on the level of what you'd expect by just looking at the price tag.
  • jimhans1 - Tuesday, February 19, 2013

    I agree, the pricing is in line with the EXPECTED user base for the card; it is NOT a mainstream card.
  • Sandcat - Tuesday, February 19, 2013

    The disconnect regards the Gx110 chip. Sure, it's a non-mainstream card; however, people do have the impression that it is the lock-step successor to the 580, and as such should be priced similarly.

    NVIDIA does need to be careful here. They enjoy a duopoly in the market, but goodwill is hard to create and maintain. I've been waiting for the 'real' successor to the 580 to replace my CrossFire 5850s and wasn't impressed with the performance increase of the 680. Looks like it'll be another year... at least.

    :(
