Power, Temperature, & Noise

As always, we’re wrapping up our look at a video card’s stock performance with a look at power, temperature, and noise. Unlike the GTX 660, the GTX 650 Ti does not have GPU boost, which means the GTX 650 Ti’s load voltage is fixed at a single value. This matters more for overclocking, but because NVIDIA is not trying to min-max performance at the cost of some extra power, it allows the GTX 650 Ti to really keep its power consumption down. As a reminder, NVIDIA’s TDP here is 110W, with no power target (though NVIDIA throws around a “typical” number of 80W).

GeForce GTX 650 Ti Series Voltages
Ref GTX 650 Ti Load: 1.087v
EVGA GTX 650 Ti Load: 1.05v
Zotac GTX 650 Ti Load: 1.087v
Gigabyte GTX 650 Ti Load: 1.087v

Without GPU boost, load voltages are quite low for desktop GeForce 600 cards. Instead of spiking at 1.175v we’re seeing a range from 1.05v to 1.087v. Meanwhile the idle voltage is typical for a GK106 card at 0.875v.

While we’re on the subject of voltages, it’s worth noting that while NVIDIA doesn’t have GPU boost active, this doesn’t mean they’ve thrown away Kepler’s power management system. Specifically, there’s still a hard 1.175v ceiling, and at high voltages NVIDIA will still step down the voltage based on temperature in order to combat leakage. When overvolting our reference GTX 650 Ti we saw a voltage step down at a relatively low 56C, and another at 74C. This only applies to overvolting though, and there’s no corresponding reduction in clockspeed in any scenario.
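
For readers curious how such a step-down scheme fits together, here’s a minimal sketch of the idea in code. Only the 1.175v ceiling and the approximate 56C and 74C trip points come from our observations; the step sizes and the function itself are hypothetical, purely for illustration.

```python
# Hypothetical illustration of a temperature-based voltage step-down of the sort
# described above. Only the 1.175v ceiling and the approximate 56C/74C trip
# points come from our testing; the step sizes are invented for illustration.

VOLTAGE_CEILING = 1.175  # hard cap enforced by Kepler's power management

def max_allowed_voltage(gpu_temp_c: float) -> float:
    """Return the highest voltage the card would permit at a given temperature."""
    if gpu_temp_c >= 74:
        return VOLTAGE_CEILING - 0.025   # second step down (illustrative size)
    if gpu_temp_c >= 56:
        return VOLTAGE_CEILING - 0.0125  # first step down (illustrative size)
    return VOLTAGE_CEILING

for temp_c in (45, 60, 80):
    print(f"{temp_c}C -> {max_allowed_voltage(temp_c):.4f}v max")
```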

Starting as always with idle, having already seen the GK106 based GTX 660 there are no grand surprises here. Even with the disabling of some functional units the GTX 650 Ti doesn’t do any better than the full-fledged GTX 660, which is not to say that this is bad. At an NV estimated idle TDP of 5W this is still better than the 7800 series, and is good enough to tie the 7770. The factory overclocked cards fare no worse here either, all of them leading our testbed to hit the same marks at the wall.

The one place NVIDIA can’t compete is in the long idle scenario with no active displays. The 7800 series still has a 5W advantage at the wall, though the benefits of something like AMD’s ZeroCore technology are not nearly as great here since the GTX 650 Ti can’t be SLI’d (and hence there’s never a headless card to shut off).

Moving on to load power consumption, we’re finally seeing a test where the GTX 650 Ti has a clear architectural/design advantage over the competition. Though not strictly comparable, the 7850 has an official TDP of 150W versus 110W for the GTX 650 Ti, a difference of 40W. Meanwhile under Metro the gap between the 7850 and the GTX 650 Ti is exactly 40W (before taking into consideration PSU efficiency, of course). Paper specs aside, the GTX 650 Ti is clearly intended to be a lower power card than the 7850, and here it delivers. If it can’t beat the 7850 in performance then it is going to need to beat the 7850 on power consumption.
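
It’s worth keeping in mind that our power numbers are measured at the wall, so PSU losses inflate any difference between cards. A quick back-of-the-envelope sketch of that relationship is below; the 85% efficiency figure is an assumed, illustrative value, not a measured property of our testbed’s PSU.

```python
# Back-of-the-envelope: relating the measured gap at the wall to the gap at the
# cards themselves. The 110W/150W TDPs are the official figures and the 40W wall
# delta is the Metro measurement; the 85% PSU efficiency is an assumed,
# illustrative value, not a measurement of the testbed's PSU.

tdp_gtx650ti = 110        # watts, official NVIDIA TDP
tdp_hd7850 = 150          # watts, official AMD TDP
measured_wall_delta = 40  # watts, gap at the wall under Metro
psu_efficiency = 0.85     # assumed for illustration

paper_delta = tdp_hd7850 - tdp_gtx650ti
approx_card_delta = measured_wall_delta * psu_efficiency

print(f"Paper TDP delta: {paper_delta}W")
print(f"Approximate delta at the cards: {approx_card_delta:.0f}W of the "
      f"{measured_wall_delta}W seen at the wall")
```

In other words, a 40W gap at the wall corresponds to a somewhat smaller gap at the cards themselves, so the wall numbers shouldn’t be read as an exact match for the 40W TDP difference on paper.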

Meanwhile our factory overclocked cards present an interesting lineup. All 3 are closely clustered together in spite of the fact that the Zotac and Gigabyte cards have an extra 1GB of GDDR5 RAM to power. GDDR5 has quite the reputation for being a power hog (as far as RAM goes), so it’s a bit surprising not to see a greater difference. The biggest driver of any power increase seems to be the factory overclocks themselves rather than the extra memory, which helps explain the marginally lower value we see for the Gigabyte card.

Turning to OCCT, the GTX 650 Ti doesn’t maintain the same large lead over the 7850 that it did with Metro thanks to AMD’s more aggressive throttling (and without GPU boost NVIDIA may as well not have any throttling), but reinforcing the fact that these cards are in two different power classes, the GTX 650 Ti still ends up drawing less power. In fact it draws less power than the GTX 550 Ti or the GTS 450, the latter of which is not typically a high power card to begin with. Even without GPU boost – or perhaps especially without GPU boost – NVIDIA’s high power efficiency is maintained, though at the cost of rendering performance notably weaker than the immediate competition.

A low power GPU combined with open air coolers often leads to very low idle temperatures, and the GTX 650 Ti lives up to that tradition. It’s simply not that often that we see GPUs hit temperatures only a couple of degrees above room temperature. And in the case of Gigabyte’s card with its oversized cooler, that’s the lowest idle GPU temperature we’ve ever recorded, once more proving that there’s no kill like overkill.

Moving on to load temperatures we get to see the effectiveness of an open air cooler combined with the relatively low power consumption of the GTX 650 Ti. At 59C under Metro and 65C under OCCT our reference GTX 650 Ti holds up amazingly well, and just wait until we get to the noise readings, since this is where the GTX 650 Ti and 7850 will really stand apart. Meanwhile even with their similar designs the EVGA and Zotac cards both end up being a bit cooler than the reference GTX 650 Ti. But the real winner is the Gigabyte card and its oversized cooler; 43C with Metro is unheard of, and 50C with OCCT is equally impressive.

Transitioning to noise measurements we start with idle noise, where there is no great surprise as most cards have long since ceased having loud idle states. The EVGA card is a bit disappointing though, since there’s no great reason why such an open air card should be much above 41dB(A).

Taking a look at load noise we finally get to see the full picture. Earlier we had high praise for NVIDIA’s reference design, and this is the reason why. NVIDIA almost always hits a good balance between power, temperature, and noise, and nowhere is this more evident than with the GTX 650 Ti. Peaking at 41.5dB(A) with OCCT it barely gets off of the noise floor. Meanwhile AMD’s reference 7850, complete with its blower, is almost 9dB(A) louder. Open air non-reference 7850s won’t be nearly as loud, but this is a reminder of what open air coolers can do, particularly when power consumption is low enough.
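
Since decibels are logarithmic, a gap of almost 9dB(A) is larger than it may sound. The conversion below is just the standard decibel relationship applied to our measured difference.

```python
# Standard decibel math applied to the ~9 dB(A) gap measured between the
# reference GTX 650 Ti and the reference 7850.

delta_db = 9.0
power_ratio = 10 ** (delta_db / 10)   # ratio of radiated sound power

print(f"{delta_db} dB(A) difference ~= {power_ratio:.1f}x the sound power")
# Roughly 8x the sound power; to the ear that reads as approaching twice as
# loud, since perceived loudness roughly doubles for every ~10 dB.
```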

The real winner for load noise generation however is not NVIDIA’s reference design or even the Gigabyte card and its oversized cooler; it’s Zotac and their fairly plain open air cooler. Admittedly we’re looking at just a 1dB(A) difference, but if noise is crucial the Zotac card looks particularly good. Meanwhile the EVGA card, though starting out rough, doesn’t end up doing too poorly at load here. A 0.3dB(A) increase in noise over idle is tiny, and it’s actually good enough to land it in the middle of the pack among our retail cards.

The surprising result is the Gigabyte card, which should have the easiest time keeping cool. It looks like Gigabyte’s fan curve is a bit more aggressive than those of the other GTX 650 Ti cards, which helps explain its amazingly low load temperatures, but it also means there’s a noise tradeoff. Given the temperatures we’re seeing, Gigabyte was a little too aggressive with their fan curve; had they backed off a bit they could have easily swept this entire section.
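
For reference, here’s a purely hypothetical sketch of what a conservative fan curve versus a more aggressive one looks like; none of the temperature or duty-cycle points below are Gigabyte’s (or NVIDIA’s) actual firmware values, they simply illustrate the tradeoff between temperatures and noise.

```python
import bisect

# Purely hypothetical illustration of a conservative vs. a more aggressive fan
# curve; none of these temperature or duty-cycle points are Gigabyte's (or
# NVIDIA's) actual firmware values.

def fan_duty(temp_c, curve):
    """Linearly interpolate a fan duty cycle (%) from (temp_C, duty_%) points."""
    temps, duties = zip(*curve)
    if temp_c <= temps[0]:
        return duties[0]
    if temp_c >= temps[-1]:
        return duties[-1]
    i = bisect.bisect_right(temps, temp_c)
    t0, t1 = temps[i - 1], temps[i]
    d0, d1 = duties[i - 1], duties[i]
    return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

conservative = [(30, 25), (60, 40), (80, 70), (90, 100)]   # ramps up late
aggressive   = [(30, 30), (50, 55), (70, 85), (80, 100)]   # ramps up early

for t in (40, 55, 70):
    print(f"{t}C: conservative {fan_duty(t, conservative):.0f}%, "
          f"aggressive {fan_duty(t, aggressive):.0f}%")
```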

Comments

  • TheJian - Tuesday, October 9, 2012 - link

    The 7850 is more money, it should perform faster. I'd expect nothing less. So what this person would end up with is 10-20% less perf (in the situation you describe) for 10-20% less money. ERGO, exactly what they should have got. So basically a free copy of AC3 :) Which is kind of the point. The 2GB card beating the 650TI in that review is $20 more. It goes without saying you should get more perf for more $. What's your point?

    You're wrong. In the page you point to (just looking at that, won't bother going through them all), the 650TI 1GB scores 32fps MIN, vs. 7770 25fps min. So unplayable on 7770, but playable on 650TI. Nuff said. Spin that all you want all day. The card is worth more than the 7770. That's OVER 20% faster at 1920x1080 4xAA in Witcher 2. You could argue for $139 maybe, but not with the AC3 AAA title, and physx support in a lot of games and more to come.
    http://www.geforce.com/games-applications/physx
    All games with physx so far. Usually had for free, no hit, see hardocp etc. Borderlands 2, Batman AC & Arkham Asylum, Alice Madness Returns, Metro2033, Sacred 2 FA, etc etc...The list of games is long and growing. This isn't talked about much, nor what these effects add to the visual experience. You just can't do that on AMD. Considering these big titles (and more, head to the site) use it, any future revs of these games (sequels etc) will likely use it also, and the devs now have great experience with physx. This will continue to become a bigger issue as we move forward. What happens when all new games support this, and there's no hit for having it on (hardocp showed they were winning WITH it on for free)? There's quite a good argument even now that a LOT of good games are different in a good way on NV based cards. Soon it won't be a good argument, it will be THE argument. Unfortunately for AMD/Intel havok never took off and AMD has no money to throw at devs to inspire them to support it. NV continues to make games more fun on their hardware (either on android tegrazone stuff, or PC stuff). Much tighter connections with devs on the NV side. Money talks; unfortunately for AMD, debt can't talk for you (except to say don't buy my stock, we're in massive debt) :)
  • jtenorj - Wednesday, October 10, 2012 - link

    No, you are wrong. Lower end nvidia cards (whether this card falls into that category or not is debatable) generally cannot run physx on high, but require it to be set to medium, low or off. AMD cards can run physx in a number of games on medium by using the cpu without a massive performance hit. There hasn't been a lot of time since nvidia got physx tech from ageia for game developers to include it in titles because development cycles are getting longer and longer. Still, I think most devs shy away from physx because it hurts the bottom line (more time to implement = more money spent on salaries and a later release, alienate 40% of the potential market by making it so the full experience is not an option for them, losing more money). Take a look at the havok page on wikipedia vs the physx page (which is more extensive than what even nvidia lists on their own site). Havok and other software physics engines are used in the vast majority of released and soon to be released titles because they will work with anyone's card. I'm not saying HD7770 is better than gtx650ti (it is in fact worse than the new card), but the HD7850 is a far better value (especially the 2GB version). Finally, it is possible to add a low end geforce like a gt610 to a higher end AMD primary as a dedicated physx card in some systems.
  • ocre - Thursday, October 11, 2012 - link

    but it doesn't alienate 40% of the market.

    You said this yourself:

    "AMD cards can run physx in a number of games on medium by using the cpu without a massive performance hit."

    Then try to turn it all around???? Clever? Doubtful!!

    And this is what all the AMD fanboys cried about. Nvidia purposefully crippling physX on the CPU. Nvidia evil for making physX nvidia only. But now they have improved their physX code on the CPU and every single game as of late offers acceptable physX performance on AMD hardware via the CPU. Of course you will only get fully fledged GPU accelerated physX with Nvidia hardware but you cannot really expect more, can you?

    Even if you're not capable of seeing the improvements Nvidia made, they are there. They have reached over and extended an olive branch to AMD users. They got physX to run better on multicore CPUs. They listened to complaints (even from AMD users) and made massive improvements.

    This is the thing with nvidia. They are listening and steadily improving, removing those negatives one at a time. It's gonna be hard for AMD fanboys to come up with negatives because nvidia is responding with every generation. PhysX is one example, the massive power efficiency improvement of kepler is another. Nvidia is proactive and looking for ways to improve their direction. All these complaints about Nvidia are getting addressed. There is nothing you can really say except that they are making good progress. But that will not stop AMD fans from desperately searching for any negative that they can grasp on to. More and more people are taking note of this progress, if you haven't noticed yourself.
  • CeriseCogburn - Friday, October 12, 2012 - link

    Oh, so that's why the crybaby amd fans have shut their annoying traps on that, not to mention their holy god above all amd/radeon videocards apu holy trinity company after decades of foaming the fuming rage amd fanboys into mooing about "proprietary Physx! " like a sick monkey in heat and half dead, and extolling the pure glorious god like and friendly neighbor gamer love of "open source" and spewwwwwwwing OpenCL as if they had it sewed all over their private parts and couldn't stop staring and reading the teleprompter, their glorious god amd BLEW IT- and puked out their proprietary winzip !
    R O F L

    Suddenly the intense and insane constant moaning and complaining and attacking and dissing and spewing against nVidia "proprietary" was gone...

    Now "winzip" is the big a compute win for the freak fanboy of we know which company. LOL
    P R O P R I E T A R Y ! ! ! ! ! ! ! ! ! ! 1 ! 1 100100000

    JC said it well : Looooooooooooooooooooooseeerrrr !
    (that's Jim Carey not the Savior)
  • CeriseCogburn - Friday, October 12, 2012 - link

    " You buy a GPU to play 100s of games not 1 game. "

    Good for you, so the $50 games times 100 equals your $5,000.00 gaming budget for the card.

    I guess you can stop moaning and wailing about 20 bucks in a card price now, you freaking human joke with the melted amd fanboy brain.
  • Denithor - Tuesday, October 9, 2012 - link

    Hopefully your shiny new GTX 650 Ti will be able to run AC3 smoothly...

    :D
  • chizow - Thursday, October 11, 2012 - link

    According to Nvidia, the 650Ti ran AC3 acceptably at 1080p with 4xMSAA on Medium settings: http://www.geforce.com/whats-new/articles/nvidia-g...

    "In the case of Assassin’s Creed III, which is bundled with the GTX 650 Ti at participating e-tailers and retailers, we recorded 36.9 frames per second using medium settings."

    That's not all that surprising to me though as the GTX 280 ran AC2/ACB Anvil engine games at around the same framerate. While AC3 will certainly be more demanding, the 650Ti is a good bit faster than the 280.

    I'm not in the market though for a GTX 650Ti, I'm more interested in the AC3 bundle making its way to other GeForce parts as I'm interested in grabbing another 670. :D
  • HisDivineOrder - Tuesday, October 9, 2012 - link

    Perhaps you might test without AA when dealing with cards in a sub-$200 price range, as that would seem the more likely use for the card. Not saying you can't test with AA, too, but to have all tests include AA seems like testing a new Volkswagen Bug with the raw speed run through a live fire training exercise you'd use for a Humvee.
  • RussianSensation - Tuesday, October 9, 2012 - link

    AA testing is often used to stress the ROP and memory bandwidth of GPUs. Also, it's what separates consoles from PCs. If a $150 GPU cannot handle AA but a $160-180 competitor can, it should be discussed. When GTX650Ti and its after-market versions are so closely priced to 7850 1GB/7850 2GB, and it's clear that 650Ti is so much slower, the only one to blame here is NV for setting the price at $149, not the reviewer for using AA.

    GTX560/560Ti/6870/6950 were all tested with AA and this card not only competes against HD7850 but gives owners of older cards a perspective of how much progress there has been with new generation of GPUs. Not using AA would not allow for such a comparison to be made unless you dropped AA from all the cards in this review.

    It sounds like you are trying to find a way to make this card look good but sub-$200 GPUs are capable of running AA as long as you get a faster card.

    HD7850 is 34% faster than GTX650Ti with 4xAA at 1080P and 49% faster with 8xAA at 1080P

    http://www.computerbase.de/artikel/grafikkarten/20...

    All that for $20-40 more. Far better value.
  • Mr Perfect - Tuesday, October 9, 2012 - link

    I thought GTX was reserved for high end cards, with lower tier cards being GT. I guess they gave up on that?
