Overclocked: Power, Temperature, & Noise

Our final task is our look at the GTX 690’s overclocking capabilities. NVIDIA has told us that with the GTX 690 they weren’t just looking to duplicate GTX 680 SLI’s performance, but also its overclocking capabilities. This is quite the lofty goal, since with the GTX 690 NVIDIA is effectively packing two GTX 680s onto a single board, leaving far less room for VRM circuitry and trace routing.

GeForce 600 Series Overclocking
                              GTX 690    GTX 680
Shipping Core Clock           915MHz     1006MHz
Shipping Max Boost Clock      1058MHz    1110MHz
Shipping Memory Clock         6GHz       6GHz
Shipping Max Boost Voltage    1.175v     1.175v

Overclock Core Clock          1040MHz    1106MHz
Overclock Max Boost Clock     1183MHz    1210MHz
Overclock Memory Clock        7GHz       6.5GHz
Overclock Max Boost Voltage   1.175v     1.175v

In practice the GTX 690 hasn’t quite kept up with the GTX 680 in some respects, while in others it has completely exceeded it. When it comes to the core clock we didn’t quite reach parity with our reference GTX 680; the GTX 680’s highest boost clock bin could hit 1210MHz, while the GTX 690’s highest boost clock bin topped out at 1183MHz, some 27MHz (2%) slower.

On the other hand, our memory overclock is so high as to be within the “this doesn’t seem physically possible” range. As we have discussed time and time again, GDDR5 memory buses are difficult to run at high clocks on a good day, never mind a bad day. With GF110 NVIDIA couldn’t get too far past 4GHz, and even with the GTX 680 NVIDIA was only shipping at 6GHz.

It would appear that no one has told NVIDIA’s engineers that 7GHz is supposed to be impossible, and as a result they’ve gone and done the unthinkable. Some of this is certainly down to the luck of the draw, but it doesn’t change the fact that our GTX 690 passed every last stability test we could throw at it at 7GHz. And what makes this particularly interesting is the difference between the GTX 680 and the GTX 690 – both are equipped with 6GHz GDDR5 RAM, but while the GTX 680 is equipped with Hynix the GTX 690 is equipped with Samsung. Perhaps the key to all of this is the Samsung RAM?
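To put that 7GHz figure in perspective, a quick back-of-the-envelope calculation shows what the memory overclock buys in raw bandwidth. This sketch assumes GK104’s 256-bit per-GPU memory bus, and treats the “GHz” figures as effective GDDR5 data rates (transfers per second), as quoted throughout this article:

```python
# Rough peak memory bandwidth per GPU at stock vs. overclocked data rates.
# Assumes a 256-bit bus (GK104); the "GHz" numbers are effective GDDR5
# data rates, so no additional multiplier is needed.
BUS_WIDTH_BITS = 256

def bandwidth_gbps(data_rate_ghz, bus_width_bits=BUS_WIDTH_BITS):
    """Peak bandwidth in GB/s: data rate (GT/s) times bus width in bytes."""
    return data_rate_ghz * (bus_width_bits / 8)

print(bandwidth_gbps(6.0))  # stock 6GHz: 192.0 GB/s per GPU
print(bandwidth_gbps(7.0))  # OC 7GHz:    224.0 GB/s per GPU
```

In other words, the jump from 6GHz to 7GHz is worth roughly 32GB/s of additional peak bandwidth per GPU.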

In any case, our final result was a +125MHz core clock offset and a +1000MHz memory clock offset, which translates into a base clock of 1040MHz, a max boost clock of 1183MHz, and a memory clock of 7GHz. This represents a 12%-14% core overclock and a 17% memory overclock, which is going to be enough to put quite the pep in the GTX 690’s step.
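The percentages above follow directly from the stock and overclocked clocks in the table; as a quick sanity check:

```python
# Verify the quoted overclock percentages from the stock and OC clocks (MHz).
def pct_gain(stock_mhz, oc_mhz):
    """Percentage increase from stock clock to overclocked clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(f"base clock: {pct_gain(915, 1040):.1f}%")    # ~13.7%
print(f"max boost:  {pct_gain(1058, 1183):.1f}%")   # ~11.8%
print(f"memory:     {pct_gain(6000, 7000):.1f}%")   # ~16.7%
```

The fixed +125MHz offset is a larger relative gain against the 915MHz base clock than against the 1058MHz boost clock, which is where the 12%-14% spread comes from.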

As always we’re going to start our look at overclocking in reverse, beginning with power, temperature, and noise. For the purpose of our testing we’ve tested our GTX 690 at two different settings: at stock clocks with the power target set to 135% (GTX 690 PT), and with our custom overclock alongside the same 135% power target (GTX 690 OC). This allows us to look at both full overclocking and the safer option of merely maxing out the boost clocks for all they’re worth.

As expected, merely increasing the power target to 135% was enough to increase the GTX 690’s power consumption, and overclocking further adds to that. Even with the power target increase, however, power consumption at the wall for the GTX 690 is still lower than the GTX 680 SLI by over 20W, which is quite impressive. As we’ll see in our section on performance this is more than enough to erase the GTX 690’s performance gap, meaning at this point it’s still consuming less power than the GTX 680 SLI while offering better performance than its dual-card cousin.

It’s only after outright overclocking that we finally see power consumption equalize with the GTX 680 SLI. The overclocked GTX 690 is within 10W of the GTX 680 SLI, though as we’ll see the performance is notably higher.

What does playing with clocks and the power target do to temperatures? The impact isn’t particularly bad, though we’re definitely reaching the highest temperatures we really want to hit. For the GTX 690 PT things are actually quite good under Metro, with the temperature not budging an inch even with the higher power consumption. Under OCCT however temperatures have risen 5C to 87C. Meanwhile the GTX 690 OC reaches 84C under Metro and a toasty 89C under OCCT. These should be safe temperatures, but I would not want to cross 90C for any extended period of time.

Finally we have load noise. Unsurprisingly, because load temperatures did not go up for the GTX 690 PT under Metro, load noise has not gone up either. On the other hand load noise under OCCT has gone up 3.5dB, making the GTX 690 PT just as loud as our GTX 680 SLI in its adjacent configuration. In practice the noise impact from raising the power target is going to trend closer to Metro than OCCT, but Metro is likely an overly optimistic scenario; there’s going to be at least a small increase in noise here.

The GTX 690 OC meanwhile approaches the noise level of the GTX 680 SLI under Metro, and shoots past it under OCCT. Considering the performance payoff some users will no doubt find this worth the noise, but it should be clear that overclocking like this means sacrificing the stock GTX 690’s quietness.

Comments

  • CeriseCogburn - Friday, May 4, 2012 - link

    I disagree
  • chadwilson - Thursday, May 3, 2012 - link

    I have some issues with this article, the first of course being availability. Checking the past week, I have yet to see any availability of the 680 besides $200+ over retail premium cards on ebay. How can you justify covering yet another paper launch card, one that for all intents and purposes nVidia can't make for a very long time, without blaring bold print caveats? There is a difference between ultra rare and non-existent.

    Is a card or chip really the fastest if it doesn't exist to be sold?

    Second, the issue of RAM, that's a problem in that most games are 32 bit, and as such, they can only address 3.5GB of RAM total between system and GPU RAM. This means you can have 12GB of RAM on your video card and the best you will ever get is 3GB worth of usage.

    Until games start getting written with 64 bit binaries (which won't happen until Xbox 720 since almost all PC games are console ports), anything more than 2-3GB GPU RAM is wasteful. We're still looking at 2014 until games even START using 64 bit binaries.

    Want it to change? Lobby your favorite gaming company. They're all dragging their feet, they're all complicit.
  • Ryan Smith - Thursday, May 3, 2012 - link

    Hi Chad;

    While I'm afraid we're not at liberty to discuss how many 680 and 690 cards NVIDIA has shipped, we do have our ears to the ground and as a result we have a decent idea as to how many have shipped. Suffice it to say, NVIDIA is shipping a fair number of cards; this is not a paper launch, otherwise we would be calling NVIDIA out on it. NVIDIA absolutely needs to improve the stock situation, but at this point this is something that's out of their hands until either demand dies down or TSMC production picks up.

    -Thanks
    Ryan Smith
  • silverblue - Thursday, May 3, 2012 - link

    The 690 is a stunning product... but I'm left wanting to see the more mainstream offerings. That's really where NVIDIA will make its money, but we're just left wondering about supply issues and the fact that AMD isn't suffering to the same degree.
  • CeriseCogburn - Sunday, May 6, 2012 - link

    A single EVGA GTX 680 SKU at Newegg has outsold the entire lineup of 7870 and 7850 cards combined, going by verified owner reviews.
    So if availability is such a big deal, you had better ask yourselves why the 7870 and 7850 combined cannot keep pace with a single EVGA 680 card selling at Newegg.
    Go count them up - have at it - you shall see.
    108 sales for the single EVGA 680, more than the entire combined lot of all SKUs in stock and out of the 7870 and 7850 combined total sales.
    So when you people complain, I check out facts - and I find you incorrect and failing almost 100% of the time.
    That's what happens when one repeats talking points like a sad PR politician, instead of checking available data.
  • ltcommanderdata - Thursday, May 3, 2012 - link

    Have you considered using WinZip 16.5 with its OpenCL accelerated file compression/decompression as a compute benchmark? File compression/decompression is a common use case for all computer users, so it could be the broadest application of GPGPU relevant to consumers if there is an actual benefit. The OpenCL acceleration in WinZip 16.5 is developed/promoted in association with AMD so it'll be interesting to see if it is hobbled on nVidia GPUs, as well as how well it scales with GPU power, whether it scales with SLI/dual GPU cards, and whether there are advantages with close IGP-CPU integration as with Llano and Ivy Bridge.
  • ViRGE - Thursday, May 3, 2012 - link

    Doesn't WinZip's OpenCL mode only work with AMD cards? If so, what use would that be in an NVIDIA review?
  • ltcommanderdata - Thursday, May 3, 2012 - link

    I actually don't know if it's AMD only. I know AMD worked on it together with WinZip. I just assumed that since it's OpenCL, it would be vendor/platform agnostic. Given AMD's complaints about use of vendor-specific CUDA in programs, if they developed an AMD-only OpenCL application, I would find that very disappointing.
  • ViRGE - Thursday, May 3, 2012 - link

    Going by their website it's only for AMD cards.

    "WinZip has been working closely with Advanced Micro Devices (AMD) to bring you a major leap in file compression technology. WinZip 16.5 uses OpenCL acceleration to leverage the significant power of AMD Fusion processors and AMD Radeon graphics hardware graphics processors (GPUs). The result? Dramatically faster compression abilities for users who have these AMD products installed! "
  • CeriseCogburn - Friday, May 4, 2012 - link

    Oh, AMD the evil company up to its no good breaking of OpenCL misdeeds again.
    Wow that's evil - the way it's meant to be unzipped.
