Power, Temperature, & Noise

As always, we’re wrapping up our look at a video card’s stock performance with a look at power, temperature, and noise. Even more so than for single-GPU cards, this is perhaps the most important set of metrics for a multi-GPU card. Poor cooling that results in high temperatures or ridiculous levels of noise can quickly sink a multi-GPU card’s chances. Ultimately, with a fixed power budget of 300W or 375W, the name of the game is dissipating that heat as quietly as you can without endangering the GPUs.
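As a rough illustration of where those 300W and 375W figures come from, here is a minimal sketch based on the standard PCI Express power delivery limits (75W from the x16 slot, 75W per 6-pin connector, 150W per 8-pin connector). The GTX 690's pair of 8-pin connectors gives it a 375W ceiling against its 300W TDP; treat the snippet as illustrative arithmetic, not a power model.

    # Illustrative arithmetic only: standard PCIe power delivery limits, in watts.
    PCIE_SLOT = 75   # power available through the x16 slot itself
    SIX_PIN   = 75   # per 6-pin PCIe power connector
    EIGHT_PIN = 150  # per 8-pin PCIe power connector

    def board_power_budget(six_pin: int = 0, eight_pin: int = 0) -> int:
        """Maximum in-spec board power for a given connector loadout."""
        return PCIE_SLOT + six_pin * SIX_PIN + eight_pin * EIGHT_PIN

    print(board_power_budget(six_pin=1, eight_pin=1))  # 300W (6-pin + 8-pin)
    print(board_power_budget(eight_pin=2))             # 375W (dual 8-pin, as on the GTX 690)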

GeForce GTX 600 Series Voltages
Ref GTX 690 Boost Load: 1.175v
Ref GTX 680 Boost Load: 1.175v
Ref GTX 690 Idle: 0.987v

It’s interesting to note that the GPU voltages on the GTX 680 and GTX 690 are identical; both idle at 0.987v, and both max out at 1.175v for the top boost bin. It would appear that NVIDIA’s binning process for the GTX 690 is looking almost exclusively at leakage; they don’t need to find chips that operate at a lower voltage, they merely need chips that don’t waste too much power.

NVIDIA has progressively brought down their idle power consumption and it shows. Where the GTX 590 would draw 155W at the wall at idle, we’re drawing 130W with the GTX 690. For a single GPU, NVIDIA’s idle power consumption is every bit as good as AMD’s; however, they don’t have any way of shutting off the second GPU like AMD does with ZeroCore Power, meaning that the GTX 690 still draws more power at idle than the 7970CF. Being able to shut off that second GPU really mitigates one of the few remaining disadvantages of a dual-GPU card, and it’s a shame NVIDIA doesn’t have something similar.

Long idle power consumption merely amplifies this difference. Now NVIDIA is running 2 GPUs while AMD is running none, which means the GTX 690 has us pulling 19W more at the wall while doing absolutely nothing.

Thanks to NVIDIA’s binning, the load power consumption of the GTX 690 looks very good here. Under Metro we’re drawing 63W less at the wall compared to the GTX 680 SLI, even though we’ve already established that performance is within 5%. The gap with the 7970CF is even larger; the 7970CF may have a performance advantage, but it comes at a cost of 175W more at the wall.

OCCT power is much the same story. Here we’re drawing 429W at the wall, an incredible 87W less than the GTX 680 SLI. In fact a GTX 690 draws less power than a single GTX 580. That is perhaps the single most impressive statistic you’ll see today. Meanwhile, compared to the 7970CF the difference at the wall is 209W. The true strength of multi-GPU cards is their power consumption relative to multi-card setups, and thanks to NVIDIA’s ability to get the GTX 690 so very close to the GTX 680 SLI, it’s absolutely sublime here.
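To put those wall numbers into rough perspective, here is a back-of-the-envelope sketch using only the figures quoted above. It deliberately mixes the gaming performance gap (within ~5% of the GTX 680 SLI) with the OCCT power figures and treats total system draw as if it were card draw, so take the output as directional rather than a proper perf-per-watt measurement.

    # Back-of-the-envelope comparison from the wall-socket figures quoted above.
    # Wall power includes the whole testbed (CPU, motherboard, drives), so these
    # ratios understate the gap between the cards themselves.
    gtx690_occt_w     = 429                   # measured at the wall under OCCT
    gtx680_sli_occt_w = gtx690_occt_w + 87    # GTX 680 SLI draws 87W more
    hd7970_cf_occt_w  = gtx690_occt_w + 209   # 7970CF draws 209W more

    rel_perf_vs_sli = 0.95  # assumption: GTX 690 performs within ~5% of a GTX 680 SLI

    perf_per_watt_ratio = rel_perf_vs_sli / (gtx690_occt_w / gtx680_sli_occt_w)
    print(f"GTX 680 SLI: {gtx680_sli_occt_w}W, 7970CF: {hd7970_cf_occt_w}W at the wall")
    print(f"GTX 690 perf per wall watt vs. GTX 680 SLI: ~{perf_per_watt_ratio:.2f}x")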

Moving on to temperatures, how well does the GTX 690 do? Quite well. Like all dual-GPU cards, its GPU temperatures aren’t as good as a single-GPU card’s, but they’re also no worse than any other dual-GPU setup’s. In fact, of all the dual-GPU cards in our benchmark selection this is the coolest, beating even the GTX 590. Kepler’s low power consumption really pays off here.

For load temperatures we’re going to split things up a bit. While our official testing protocol is to test multi-card configurations with the video cards directly next to each other, we’ve gone ahead and tested the GTX 680 SLI in both adjacent and spaced configurations, with the spaced configuration marked with an asterisk (*).

When it comes to load temperatures the GTX 690 once again does well for itself. Under Metro it’s warmer than most single GPU cards, but only barely so. The difference from a GTX 680 is only 3C and from a spaced GTX 680 SLI just 1C, while it’s 4C cooler than an adjacent GTX 680 SLI setup. Perhaps more importantly, Metro temperatures are 6C cooler than on the GTX 590.

As for OCCT, the numbers are different but the story is the same. The GTX 690 is 3C warmer than the GTX 680, 1C warmer than a spaced GTX 680 SLI, and 4C cooler than an adjacent GTX 680 SLI. Meanwhile temperatures are now 8C cooler than the GTX 590 and even 6C cooler than the GTX 580.

So the GTX 690 does well with power consumption and temperatures, but is there a noise tradeoff? At idle the answer is no; at 40.9dB it’s effectively as quiet as the GTX 680 and, incredibly enough, over 6dB quieter than the GTX 590. NVIDIA’s progress at idle continues to impress, even if they can’t shut off the second GPU.

When NVIDIA was briefing us on the GTX 690 they said that the card would be notably quieter than even a GTX 680 SLI, which is quite the claim given how quiet the GTX 680 SLI really is. So out of all the tests we have run, this is perhaps the result we’ve been the most eager to get to. The results are simply amazing. The GTX 690 is quieter than a GTX 680 SLI alright; it’s quieter than a GTX 680 SLI whether the cards are adjacent or spaced. The difference with spaced cards is only 0.5dB under Metro, but it’s still a difference. Meanwhile with that 55.1dB noise level the GTX 690 is doing well against a number of other cards here, effectively tying the 7970 and beating out every other multi-GPU configuration on the board.

OCCT is even more impressive, thanks to a combination of design and the fact that NVIDIA’s power target system effectively serves as a throttle for OCCT. At 55.8dB it’s only a hair louder than under Metro, and still a hair quieter than a spaced GTX 680 SLI setup. It’s also quieter than a 7970, a GTX 580, and every other multi-GPU configuration we’ve tested. The only cards it’s not quieter than are the GTX 680 and the 6970.
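For a sense of what those decibel gaps mean in physical terms, sound power scales as 10^(ΔdB/10), so even fractional differences are real. Below is a minimal converter applied to the gaps measured above; the common rule of thumb that perceived loudness doubles roughly every 10dB is an approximation layered on top of this.

    def sound_power_ratio(delta_db: float) -> float:
        """How many times more acoustic power a source that is delta_db louder emits."""
        return 10 ** (delta_db / 10)

    # Gaps from the measurements above:
    print(f"{sound_power_ratio(6.0):.1f}x")   # GTX 590 vs. GTX 690 at idle (~6dB): ~4x the sound power
    print(f"{sound_power_ratio(0.5):.2f}x")   # spaced GTX 680 SLI vs. GTX 690 under Metro (0.5dB): ~1.12x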

All things considered, the GTX 690 is not that much quieter than the GTX 590 under gaming loads, but NVIDIA has improved things just enough that they can beat their own single-GPU cards in SLI. At the same time, the GTX 690 consumes significantly less power for what amounts to a temperature tradeoff of only a couple of degrees. The fact that the GTX 690 can’t quite reach the GTX 680 SLI’s performance may have been disappointing thus far, but after looking at our power, temperature, and noise data, it’s clear the GTX 690 is a massive improvement on the GTX 680 SLI for what amounts to a very small gaming performance difference.

Comments

  • InsaneScientist - Sunday, May 6, 2012 - link

    Or don't...

    It's 2 days later, and you've been active in the comments up through today. Why'd you ignore this one, Cerise?
  • CeriseCogburn - Sunday, May 6, 2012 - link

    Because you idiots aren't worth the time, and last review the same silverblue stalker demanded the links to prove my points and he got them, and then never replied.
    It's clear what providing proof does for you people; look at the sudden 100% ownership of 1920x1200 monitors...
    ROFL
    If you want me to waste my time, show a single bit of truth telling on my point on the first page.
    Let's see if you pass the test.
    I'll wait for your reply - you've got a week or so.
  • KompuKare - Thursday, May 3, 2012 - link

    It is indeed sad. AMD comes up with really good hardware features like Eyefinity but then never polishes up the drivers properly. Looking at some of the Crossfire results is sad too: in Crysis and BF3, CF scaling is better than SLI (unsure, but I think the trifire and quadfire results for those games are even more in AMD's favour), but in Skyrim it seems that CF is totally broken.

    Of course, compared to Intel's, AMD's drivers are near perfect, but with a bit more work they could be better than Nvidia's too, rather than being mostly at 95% or so.

    Tellingly, JHH did once say that Nvidia were a software company, which was a strange thing for a hardware manufacturer to say. But this also seems to mean that they've forgotten the most basic thing which all chip designers should know: how to design hardware that works. Yes, I'm talking about bumpgate.

    See, despite all I said about AMD's drivers, I will never buy Nvidia hardware again after my personal experience of their poor QA. My 8800GT, my brother's 8800GT, this 8400M MXM I had, plus a number of laptops plus one nForce motherboard: they all had one thing in common, poorly made chips from BigGreen, and they all died way before they were obsolete.

    Oh, and as pointed out in the Anand VC&G forums earlier today:

    "Well, Nvidia has the title of the worst driver bug in history at this point-
    http://www.zdnet.com/blog/hardware/w...hics-card/7... "

    Killing cards with a driver is a record.
  • Filiprino - Thursday, May 3, 2012 - link

    Yep, that's true. They killed cards with a driver. They should implement hardware auto-shutdown, like CPUs have. As for nForce, I had one motherboard, the best nForce they made: the nForce 2 for the AMD Athlon. The rest of their mobo chipsets were bullshit, including the nForce 680.

    I don't think the QA is NVIDIA's fault, but rather the video card manufacturers'.
  • KompuKare - Thursday, May 3, 2012 - link


    "I don't think the QA is NVIDIA's fault, but rather the video card manufacturers'."


    No, it's 100% Nvidia's fault. Although maybe QA isn't the right word: I was referring to Nvidia using the wrong solder underfill for a few million chips (the exact number is unknown). They were mainly mobile parts, and Nvidia had to put $250 million aside to settle a class action.

    http://en.wikipedia.org/wiki/GeForce_8_Series#Prob...

    Although that wiki article is rather lenient towards Nvidia, since the bit about fan speeds is a red herring: more accurately, it was Nvidia which spec'd their chips to a certain temperature, and designs which run way below that will have put less stress on the solder, but to say it was poor OEM and AIB design which led to the problem is not correct. Anyway, the proper exposé was by Charlie D. in the Inquirer and later SemiAccurate.
  • CeriseCogburn - Friday, May 4, 2012 - link

    But in fact it was a bad heatsink design, thank HP, and view the thousands of heatsink repairs, including the "add a copper penny" method to reduce the giant gap between the HS and the NV chip.
    Charlie was wrong, a liar, again, as usual.
  • KompuKare - Friday, May 4, 2012 - link

    Don't be silly. While HP's DV6000s were the most notorious failures, and that was down to HP's poorly designed heatsink/cooling, bumpgate also saw Dells, Apples, and others:

    http://www.electronista.com/articles/10/09/29/suit...
    http://www.nvidiadefect.com/nvidia-settlement-t874...

    The problem was real, continues to be real, and also affects G92 desktop parts and certain nForce chipsets like the 7150.

    Yes, the penny shim trick will fix it for a while, but if you actually were to read up on the forums of technicians who fix laptops, that plus reflows are only temporary fixes because the actual chips are flawed. Re-balling with new, better solder is a better solution, but not many offer that fix since it involves hundreds of tiny solder balls per chip.

    Before blindly leaping to Nvidia's defence like a fanboy, please do some research!
  • CeriseCogburn - Saturday, May 5, 2012 - link

    Before blindly taking the big lie from years ago repeated above to attack nvidia for no reason at all, other than that all you have is years-old misinformation to wail on about while telling someone else more lies about it, check your own immense bias and lack of knowledge, since I had to point out the truth for you to find. You also forgot the DV9000, dv2000 and Dell systems with poor HS design, let alone Apple and AMD console video chip failings, and the fact that payment was made and restitution was delivered, which you did not mention because of your fanboy problems, obviously in amd's favor.
  • Ashkal - Thursday, May 3, 2012 - link

    In the price comparison in Final Words, you are not comparing against AMD products. I think AMD is better in price/performance ratio.
  • prophet001 - Thursday, May 3, 2012 - link

    I agree
