Power, Temperature, & Noise

Last but not least, as always, is our look at the power consumption, temperatures, and acoustics of the GTX 570. While NVIDIA’s optimizations in these areas were largely focused on bringing the GTX 480’s immediate successor, the GTX 580, to more reasonable levels, the GTX 570 certainly stands to gain some benefits too.

Starting with VIDs, we once again only have one card, so there’s not much data to draw on. Our GTX 570 sample has a lower VID than our GTX 580, but as we know, NVIDIA uses a range of VIDs for each card. From what we’ve seen with the GTX 470, we’d expect the average GTX 570 VID to be higher than the average GTX 580 VID, as NVIDIA is likely using ASICs with damaged ROPs/SMs for the GTX 570, along with ASICs that wouldn’t make the cut at a suitable voltage with all functional units enabled. As a result, the full power advantage of lower clockspeeds and fewer functional units won’t always be realized here.

GeForce GTX 500 Series Voltages
Ref 580 Load    Asus 580 Load    Ref 570 Load    Ref 570 Idle
1.037v          1.000v           1.025v          0.912v

With fewer functional units than the GTX 580 and a less leaky manufacturing process than the GTX 400 series, the GTX 570 is the first GF1x0 card to bring our test rig under 170W at idle. NVIDIA’s larger chips still do worse than AMD’s smaller chips here, which is driven home by the fact that a 6850 CF setup draws only 1W more at idle. Nevertheless this is a 4W improvement over the GTX 470 and a 21W improvement over the GTX 480; the former in particular showcases the process improvements, as power consumption has dropped despite the greater number of functional units.

Under load we once again see NVIDIA’s design and TSMC’s process improvements in action. Even though the GTX 570 is over 20% faster than the GTX 470 on Crysis, power consumption has dropped by 5W. The comparison to the GTX 480 is even more remarkable at a 60W reduction in power consumption for the same level of performance, a number well in excess of the power savings from removing 2 GDDR5 memory chips. It’s quite remarkable how a bunch of minor changes can add up to so much.
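To put those numbers in perspective, relative performance-per-watt can be estimated from a performance ratio and the change in power draw. The sketch below uses the article’s relative figures (~20% faster, 5W less under Crysis); the 300W baseline is a hypothetical stand-in for total system load power, not a measured value.

```python
def perf_per_watt_gain(perf_ratio, base_power_w, power_saved_w):
    """Relative performance-per-watt of the newer card vs. the older one.

    perf_ratio:    new/old performance (e.g. 1.20 for "20% faster")
    base_power_w:  old configuration's load power draw, in watts
    power_saved_w: watts saved by the new card (positive = draws less)
    """
    return perf_ratio * base_power_w / (base_power_w - power_saved_w)

# GTX 570 vs. GTX 470 in Crysis: ~20% faster while drawing 5W less
# (300W system baseline is assumed for illustration only).
print(f"{perf_per_watt_gain(1.20, 300, 5):.2f}x")  # ≈ 1.22x
```

Even with the modest 5W saving, the efficiency gain tracks the performance gain almost one-for-one, which is why the far larger 60W saving against the GTX 480 at equal performance is the more striking result.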

On the flipside however we have the newly reinstated FurMark, which we can once again use thanks to W1zzard of TechPowerUp’s successful disabling of NVIDIA’s power throttling features on the GTX 500 series. We’ve rebenched the GTX 580 and GTX 580 SLI along with the GTX 570, and now have fully comparable numbers once more.

It’s quite interesting that while the GTX 570 does so well under Crysis, it does worse here than the GTX 470. The official TDP difference is in the GTX 470’s favor, but not to this degree. We put more stock in the Crysis numbers than the FurMark numbers, but it’s worth noting that the GTX 570 can at times draw more power than the GTX 470. Also take note that the GTX 570 still beats the GTX 480 by 33W, and we’ve already established that the two deliver similar performance.

Of course the stand-outs here are AMD’s cards, which benefit from AMD’s smaller, more power-efficient chip designs. The Radeon HD 5870 draws 50W+ less than the GTX 570 in all situations, and even the 6850 CF wavers between being slightly worse and slightly better than the GTX 570. With the architecture and process improvements the GTX 570’s power consumption is in line with historically similar cards, but NVIDIA still cannot top AMD on power efficiency.

Up next are idle temperatures. With nearly identical GPUs on an identical platform it should come as little surprise that the GTX 570 performs just like a GTX 580 here: 37C. The GTX 570 joins the ranks of our coolest-idling high-end cards, managing to edge out the 5870 while clobbering the SLI and CF cards.

With the same cooling apparatus as the GTX 580 and lower power consumption, we originally expected the GTX 570 to edge out the GTX 580 in all cases, but as it turns out it’s not so simple. When it comes to Crysis, the 82C GTX 570 is 3C warmer than the GTX 580, bringing it in line with other cards like the GTX 285. More importantly, however, it’s 11-12C cooler than the GTX 480/470, driving home the importance of the card’s lower power consumption along with the vapor chamber cooler. It does not end up being quite as competitive with the 5870 and the SLI/CF cards, however, as those setups run between 0C and 5C cooler.

As for FurMark, temperatures approach the mid-to-upper 80s, putting the GTX 570 in the good company of most other high-end cards. The 5870 edges it out by 2C, while everything else is as warm or warmer. The GTX 480/470 in particular end up being 4C warmer, and as we’ll see, a good bit louder.

Once we look at our noise data, it’s clear NVIDIA has been tinkering with the fan parameters of the GTX 570’s cooler, as the card doesn’t quite match the GTX 580 here. At idle, for example, the GTX 570 just about hits the noise floor of our rig, emitting 1.5dB less noise than the GTX 580/480/470. For the purposes of our testing, it’s effectively a silent card at idle.

Under load, it’s finally apparent that NVIDIA has tweaked the GTX 570 to be quieter as opposed to cooler. For the slightly higher temperatures we saw earlier the GTX 570 is 2dB quieter than the GTX 580, 3.6dB quieter than the GTX 470, and 6.2dB quieter than the similarly performing GTX 480. For its performance level the GTX 570 effectively tops the charts here – the next quietest cards are the 5850 and 6870, a good pedigree to be compared to. On the other side of the chart we have the 5870 at 1.4dB louder, and the SLI/CF cards at anywhere between 1.2dB and 2.4dB louder.


54 Comments


  • xxtypersxx - Tuesday, December 07, 2010 - link

    If this thing can hit 900mhz it changes the price/performance picture entirely, why no overclock coverage in such a comprehensive review?

    Otherwise great write up as always!
  • Bhairava - Tuesday, December 07, 2010 - link

    Yes, good point.
  • vol7ron - Tuesday, December 07, 2010 - link

    Why do graphics cards cost more than cpu+mobo these days?

    I know there's a different design process and maybe there isn't as much of an economy of scale, but I'm just thinking about the days when it was the reverse.
  • Klinky1984 - Tuesday, December 07, 2010 - link

    Well, you're essentially buying a computer on a card these days: a high-performance GPU with fast, pricey RAM, all of which needs high-quality power components to run. GPUs are now computers inside of computers.
  • lowlymarine - Tuesday, December 07, 2010 - link

    I think it's simply that GPUs can't get cheaper to the extent that CPUs have, since the die sizes are so much larger. I certainly wouldn't say they're getting MORE expensive - I paid $370 for my 8800GTS back in early 2007, and $400 for a 6800 in early 2005 before that.
  • DanNeely - Tuesday, December 07, 2010 - link

    High end GPU chips are much larger than high end CPU chips nowadays. The GF110 has 3bn transistors; for comparison, a quad-core i7 only has 700m and a 6-core Athlon 900m, so you get 3 or 4 times as many CPUs from a wafer as you can GPUs. The quad-core Itanic and octo-core i7 are both around 2bn transistors, but cost more than most gaming rigs for just the chip.

    GDDR3/5 are also significantly more expensive than the much slower DDR3 used by the rest of the computer.
  • ET - Tuesday, December 07, 2010 - link

    They don't. A Core i7-975 costs way more than any graphics card. A GIGABYTE GA-X58A-UD9 motherboard costs $600 at Newegg.
  • ET - Tuesday, December 07, 2010 - link

    Sorry, was short on time. I'll add that you forgot to consider the price of the very fast memory on high end graphics cards.

    I do agree, though, that a combination of mid-range CPU and board and high end graphics card is cost effective.
  • mpschan - Wednesday, December 08, 2010 - link

    Don't forget that in a graphics card you're getting a larger chip with more processing power, a board for it to run on, AND memory. 1GB+ of ultra fast memory and the tech to get it to work with the GPU is not cheap.

    So your question needs to factor in cpu+mobo+memory, and even then that combination doesn't have the capability to process graphics at the needed rate.

    Generic processing that is slower at certain tasks will always be cheaper than specialized, faster processing that excels at said task.
  • slagar - Wednesday, December 08, 2010 - link

    High end graphics cards were always very expensive. They're for enthusiasts, not the majority of the market.
    I think prices have come down for the majority of consumers. Mostly thanks to AMD's moves, budget cards are now highly competitive, and offer acceptable performance in most games at acceptable quality. I think the high end cards just aren't as necessary as they were 'back in the day', but then, maybe I just don't play games as much as I used to. To me, it was always the case that you'd be paying an arm and a leg to have an upper tier card, and that hasn't changed.
