Power, Temperature, & Noise

Last but not least as always is our look at the power consumption, temperatures, and acoustics of the GTX 560 Ti. While NVIDIA chose to optimize for both performance and power on the GTX 570/580, the GTX 560 Ti is almost exclusively optimized for performance. As a result NVIDIA’s official TDP has gone up by 10W, and as we’ll see in practice the difference is even greater than that.

GeForce 400/500 Series Load Voltage
Ref GTX 560 Ti: 0.950v
Ref GTX 460 1GB: 1.025v
Ref GTX 460 768MB: 0.987v

Starting with VIDs, we once again have only one card, so there’s not much data to draw from. Our GTX 560 Ti sample has a VID of 0.95v, which is actually lower than our reference GTX 460 1GB card’s VID of 1.025v. NVIDIA’s transistor optimizations allow them to run their cards at lower voltages than the earlier designs, but at the same time, based on our data, a lower VID appears to be critical for keeping the GTX 560 Ti’s power consumption from growing significantly versus the GTX 460. This is not wholly unexpected – GF104 never suffered from leakage nearly as badly as GF100 did.

Idle power is effectively a wash here. The GTX 560 Ti does have additional functional units which are now consuming power even when the GPU is idling, but at the same time the transistor changes are keeping idle power in check. At 161W it’s as good as the rest of our mid-range cards.

Under Crysis we get our first glimpse that while NVIDIA’s TDP may only be 10W higher, the real world difference is greater. While we cannot isolate just the power consumption of the video card, and we can explain at least some of the difference as being a result of a greater workload on the CPU due to a higher framerate, we can’t fully explain away the difference between the GTX 460 1GB and the GTX 560 Ti. With a 26W delta, we’re confident the extra power draw we’re seeing with the GTX 560 Ti is well over 10W – and keep in mind this is a game test where our numbers are similar to how NVIDIA defines their TDP. Performance per watt is always increasing, but for a mid-range card the GTX 560 Ti doesn’t do so hot here, and as a result it’s a bit worse than even the quite similar Radeon HD 6870.

Under FurMark the gap between the GTX 460 1GB and GTX 560 Ti shrinks some, even after we disable the GTX 560 Ti’s overcurrent protection. It’s now a 21W gap, which is still more than we’d expect given NVIDIA’s official TDP numbers. Furthermore AMD’s PowerTune technology makes NVIDIA look quite bad here – the 200W capped 6950 1GB rig is drawing 44W less, and even the 6970 rig is only drawing 9W more. As we said when AMD first introduced PowerTune, we’re not against TDP-limited technologies so long as they’re done on a truly application-wide basis – so hopefully at some point we’ll see NVIDIA’s overcurrent protection technology evolve into something closer to PowerTune instead of something that merely targets 2 of many stress testing applications.

The ultimate lesson here, however, is that it once again reiterates the importance of good case cooling when using open-ended cards that don’t fully exhaust their hot air, such as the GTX 460 and GTX 560 Ti. Our airy GPU test bed has no problem with these cards, but at the end of the day you’re looking at 200W of heat, only around half of which gets blown out of the case, leaving another 100W for the case to deal with. It’s a reasonable design choice on NVIDIA’s part, but it means you need to use the right case for the job. The GTX 560 Ti makes this all the more important, and I suspect we may be at the limits of what’s practical for a non-blower card.

What do you get when you combine a large, open-air video card with an equally large and airy GPU test rig? Very, very low idle temperatures. Last year the GTX 460 set a new mark on our rig at 34C for idle, but with a bigger cooler and similar idle power consumption numbers, the GTX 560 Ti takes this one step further. At 28C not only is the GTX 560 Ti several degrees cooler than the aforementioned GTX 460s, but it’s also only several degrees off of room temperature. This is something a blower design such as the GTX 570 or Radeon HD 6950 simply cannot match, even if idle power consumption is similar.

Compared to idle, things end up a little less rosy for the GTX 560 Ti, but they still look good. Even with a bigger cooler the GTX 560 Ti cannot match the GTX 460s, but it’s doing fairly well against everything else. The open-air design still gives it an advantage versus blowers, but not by quite as much – the 6950 is only 4C off. Given that we figure the actual power consumption of the GTX 560 Ti is around 20W more than the GTX 460 1GB’s, it looks like the GTX 560 Ti’s cooler can’t fully make up for the additional heat its GPU puts off.

Under FurMark the story is quite similar. The GTX 560 Ti again is several degrees warmer than the GTX 460, and AMD’s blowers do start catching up thanks to PowerTune. Otherwise at 79C the card still runs quite cool, merely not as cool as the GTX 460 before it. When we get to our noise data, however, we’ll see that NVIDIA may have optimized the GTX 560 Ti for noise ahead of temperatures more so than they did the GTX 460.

With that said, based on our numbers and TDP estimations, we’re all the more curious just how much case cooling is necessary for the GTX 560 Ti, and whether it will be as flexible as the GTX 460. It may be worth building a secondary GPU test rig with poor ventilation to see if the GTX 560 Ti is still suitable under sub-optimal conditions.

At idle virtually all of our cards run up against the noise floor, so there’s little to be surprised about here. The GTX 560 Ti is effectively as good as everything else.

It’s our load noise values that make us reconsider our earlier temperature data. While the GTX 560 Ti may run hotter than a GTX 460, NVIDIA clearly didn’t lose their knack when it comes to noise. The GTX 460 768MB set a new record for a mid-range card, and the GTX 560 Ti is the second-best, besting even the GTX 460 1GB by a bit over 2dB. When it comes to our GPU testbed the GTX 560 Ti is just shy of silent, which is quite an accomplishment for as much power as it consumes. This also handily illustrates why we don’t consider the Radeon HD 6870 to be much competition for the GTX 560 Ti – it may be cheaper, but it’s also a heck of a lot louder. It takes a 6950 to find an AMD card with similar performance that has acoustic qualities in the same neighborhood.

87 Comments

  • ggathagan - Tuesday, January 25, 2011 - link

    I believe you mean "Apparently Anandtech's efforts to find good writers were in vain."
  • phoible4 - Tuesday, January 25, 2011 - link

The GTX560 looks interesting. However, prices for 768MB 460s are hitting rock bottom. I just paid $90 for one from TigerDirect (after rebates), and it looks like there are a few under $130 on Newegg. It seems like it would cost about the same to run two 460s in SLI as one 560 (assuming your case can handle it), and I'd guess the SLI config would be faster in most games.

    I actually kind of expected NVidia to release a dual-chip 460 as their next-gen 580, and take a page out of AMD's playbook (wonder how hot/loud that would be).
  • Belard - Thursday, January 27, 2011 - link

The GF 460-768MB cards are slow compared to their 1GB versions; they run out of memory way too quickly. But for $90... that would be a great deal that's worthwhile. Newegg is showing $150 on average for the 768MB 460s, which is about $25 less than a newer 6850 card, which is easily faster. It's even faster than the 1GB 460 and costs less.
  • mosox - Tuesday, January 25, 2011 - link

[quote]AMD’s scramble to launch the Radeon HD 6950 1GB has produced a card with similar levels of performance and pricing as the GTX 560 Ti, making it impossible to just blindly recommend the GTX 560 Ti.[/quote]

What? The 6950 2GB is faster than the 560, and the 6950 2GB is FASTER than the 6950 2GB at every resolution except the highest ones like 2560x1600.

    This is from Tom's:

    Of course, mid-week, a 1 GB card showed up, so I ran it through our complete benchmark suite. In just about every case, the smaller frame buffer (and tighter memory timings) yields one or two more frames per second than the 2 GB model. It's not worth rehashing in a page full of charts. Literally, expect one or two more frames per second across the board.
  • mosox - Tuesday, January 25, 2011 - link

    Read that as The 6950 1GB is FASTER than the 6950 2GB, sorry.
  • Visual - Wednesday, January 26, 2011 - link

    you read that right - "tighter memory timings"
  • ritalinkid18 - Tuesday, January 25, 2011 - link

    I would just like to say, very nice article... well written and informative. I've been a fan of anandtech for many years and the GPU articles never disappoint.

    Is it just me or does anyone else find reading about Nvidia's architecture a lot more interesting?

Also, I really hate that the comments are filled with people who say you are biased towards NVIDIA. To all those people, PLEASE go read some other reviews. A majority of them praise the 560. This article is more critical of the 560's value than most.
  • jonks - Tuesday, January 25, 2011 - link

    "The GTX 560 is always faster than the GTX 470, but never immensely so; and at higher resolutions the GTX 470 still has an advantage."

    So the 560 is always faster than the 470 except when it's not. :)
  • poohbear - Tuesday, January 25, 2011 - link

    wow the gpu market is definitely intense! nvidia and AMD are neck & neck now, very nice time to buy a vid card!
  • 7Enigma - Tuesday, January 25, 2011 - link

    Thanks again Ryan and Anandtech for keeping the 4870 in your charts for 1920X1200 res. I've always read the new gpu reviews and been saddened that although the new cards are fast they were still not approaching 2X the performance of my 4870. With the constant name change with the same parts, or slightly faster parts, it's taken until just about now to have a card worth the upgrade.

    Now my question is will I see the performance improvement in GAMES using my C2D 8500 (OC'd to 3.8GHz), or do I need to rebuild the system with Sandy Bridge to actually see the 2X GPU performance?
