Power, Temperature, and Noise

Last but not least as always is our look at the power consumption, temperatures, and acoustics of the GTX 580. NVIDIA’s performance improvements were half of the GTX 580 story, and this is the other half.

Starting quickly with voltage: as we only have one card we can’t draw too much from what we know, but there are still some important nuggets. NVIDIA is still using multiple VIDs, so your mileage may vary. What’s clear from the start, though, is that NVIDIA’s operating voltages are higher than the GTX 480’s at both idle and load. This is the biggest hint that leakage has been seriously dealt with: running at low voltages is a common way to combat leakage, and NVIDIA evidently no longer needs to do so. Even with these higher voltages on a chip so similar to GF100, overall power usage is still going to be lower. And on that note, while the voltages have changed, the idle clocks have not; idle remains at 50.6MHz for the core.

GeForce GTX 480/580 Voltages
                  Load      Idle
Ref GTX 480       0.959v    0.875v
Ref GTX 580       1.037v    0.962v

Beginning with idle power, we see our second biggest sign that NVIDIA has been tweaking things specifically to combat leakage. Idle power consumption has dropped by 17W on our test system even though the idle clocks are the same and the idle voltage is higher. NVIDIA doesn’t provide an idle power specification, but based on neighboring cards idle power consumption can’t be far off from 30-35W. Amusingly, it still ends up being more than the 6870 CF, thanks to the combination of AMD’s smaller GPUs and the ULPS power saving mode for the slave GPU.

Looking at Crysis, we begin to see the full advantage of NVIDIA’s optimizations, and where a single GPU is more advantageous than multiple GPUs. Compared to the GTX 480, power consumption is down 10% (never mind the 15% performance improvement), and it comes in under all similar multi-GPU configurations. Interestingly, the 5970 still draws less power here, a reminder that we’re still looking at cards near the limits of the PCIe specification.
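Those two percentages combine into a sizable efficiency gain. As a back-of-the-envelope sketch using only the rounded figures quoted above (exact wattages vary by test system):

```python
# Rough performance-per-watt comparison from the quoted Crysis figures:
# the GTX 580 draws ~10% less power than the GTX 480 while performing ~15% faster.
power_ratio = 0.90   # GTX 580 power draw relative to the GTX 480
perf_ratio = 1.15    # GTX 580 performance relative to the GTX 480

# Performance per watt scales as perf / power.
perf_per_watt_gain = perf_ratio / power_ratio - 1
print(f"Perf/W improvement: {perf_per_watt_gain:.0%}")  # → Perf/W improvement: 28%
```

In other words, the modest-looking 10% power drop is worth nearly a 30% efficiency improvement once the performance gain is factored in.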

As for FurMark, NVIDIA’s power throttling has forced us to get a bit creative. FurMark is throttled to the point where the GTX 580 registers just 360W, thanks to a roughly 40% reduction in performance under that program. As a result, for the GTX 580 we’ve swapped out FurMark for another program that generates a comparable load, Program X. At this point we’re going to decline to name the program, as should NVIDIA throttle it too, we may be hard-pressed to determine if and when this happened.

In any case, under FurMark & X we can see that once again NVIDIA’s power consumption has dropped versus the GTX 480, this time by 27W, or around 6%. NVIDIA’s worst case scenario has notably improved, and in the process the GTX 580 is back under the Radeon HD 5970 in terms of power consumption. With that said, while NVIDIA has definitely improved power consumption, the GTX 580 is still a large, power-hungry GPU.

With NVIDIA’s improvements in cooling and in idle power consumption, no result is more dramatic than idle GPU temperatures. The GTX 580 isn’t just cooler, it’s cool, period. 37C is one of the best results among our midrange and high-end GPUs, and a massive departure from the GTX 480, which was at least warm all the time. As we’ll see, however, this kind of idle temperature does come with a small price.

The story under load is much the same as at idle: compared to the GTX 480, the GTX 580’s temperatures have dropped dramatically. At 79C it’s in the middle of the pack, beating a number of single- and multi-GPU setups, and really only losing to mainstream-class GPUs and the 6870 CF. While the GTX 480’s load temperatures always worried us, the GTX 580 leaves us with no such concerns.

Meanwhile under FurMark and Program X the gap has closed, though the GTX 580 remains in the middle of the pack. 87C is certainly toasty, but it’s still well below the thermal threshold and below the point where we’d be worried about it. Interestingly, however, the GTX 580 is actually a bit closer to its thermal threshold than the GTX 480 is; NVIDIA rated the 480 for 105C, while the 580 is rated for 97C. We’d like to say this vindicates our concerns about the GTX 480’s temperatures, but it’s more likely a result of the transistors NVIDIA is using.

It’s also worth noting that NVIDIA seems to have done away with the delayed fan ramp-up found on the GTX 480. The fan ramping on the GTX 580 is, as near as we can tell, much more traditional, with the fan immediately ramping up with higher temperatures. For the purposes of our tests, this keeps the temperatures from spiking as badly.

Remember when we said there was a small price to pay for such low idle temperatures? This is it. At 44.4dB, the GTX 580 is ever so slightly (and we do mean slightly) louder than the GTX 480; it also ends up a bit louder than the 5970 or 6870 CF. 44.4dB is not by any means loud, but if you want a card that’s whisper-silent at idle, the GTX 580 isn’t going to be able to deliver.

And last but not least is load noise. Between their improvements to power consumption and to cooling, NVIDIA put a lot of effort into the amount of noise the GTX 580 generates. Where the GTX 480 set new records for a single-GPU card, the GTX 580 is quieter than the GTX 285, the GTX 470, and even the Radeon HD 5870. In fact it’s only a dB off the 5850, a card we’d under most circumstances call the epitome of balance between performance and noise. Graphs alone cannot demonstrate just how much of a difference there is between the GTX 480 and GTX 580: the GTX 580 is not whisper quiet, but at no point in our testing did it ever get “loud”. It’s a truly remarkable difference, albeit one that comes at the price of pointing out just how lousy the GTX 480 was.

Often the mark of a good card is a balance between power, temperature, and noise, and NVIDIA seems to have finally found their mark. As the GTX 580 is a high-end card the power consumption is still high, but it’s no longer the abnormality that was the GTX 480. Meanwhile GPU temperatures have left our self-proclaimed danger zone, and at the same time the GTX 580 has become a much quieter card under load than the GTX 480. If you had asked us what NVIDIA needed to work on after the GTX 480, we would have said noise, temperature, and power consumption, in that order; the GTX 580 delivers on just what we would have wanted.

159 Comments

  • nitrousoxide - Tuesday, November 09, 2010 - link

    Delaying is something good because it indicates that Cayman can be very big, very fast and...very hungry, making it hard to build. What AMD needs is a card that can defeat the GTX 580, no matter how hot or power-hungry it is.
  • GeorgeH - Tuesday, November 09, 2010 - link

    Is there any word on a fully functional GF104?

    Nvidia could call it the 560, with 5="Not Gimped".
  • Sihastru - Tuesday, November 09, 2010 - link

    I guess once the GTX 470 goes EOL. If the GTX 460 had all its shaders enabled then the overclocked versions would have cannibalized GTX 470 sales. Even so, it will happen on occasion.
  • tomoyo - Tuesday, November 09, 2010 - link

    My guess is there will be GTX 580 derivatives with fewer cores enabled as usual, probably a GTX 570 or something. And then an improved GTX 460 with all cores enabled as the GTX 560.
  • tomoyo - Tuesday, November 09, 2010 - link

    Good to see nvidia made a noticeable improvement over the overly hot and power hungry GTX 480. Unfortunately way above my power and silence needs, but competition is a good thing. Now I'm highly curious how close the Radeon 69xx will come in performance or if it can actually beat the GTX 580 in some cases.
    Of course the GTX 480 is completely obsolete now, more power, less speed, more noise, ugly to look at.
  • 7eki - Tuesday, November 09, 2010 - link

    What we got here today is a higher-clocked, better-cooled GTX 480 with slightly better power consumption. All of that for only $80 MORE! Any first-served version of a non-reference GTX 480 is equipped with a much better cooling solution that gives higher OC possibilities and could kick the GTX 580's ass. If we compare the GTX 480 to a GTX 580 clock-for-clock we will get about 3% of a difference in performance. All thanks to 32 CUDA processors and a few more TMUs. How come the reviewers are NOW able to find pros in something that they used to criticise 7 months ago? Don't forget that AMD's about to break their Sweet Spot strategy just to cut you hypocrites' tongues. I bet that the 6990's going to be twice as fast as what we got here today. If we really got anything, cause I can't really tell the difference.
  • AnnonymousCoward - Tuesday, November 09, 2010 - link

    32W less for 15% more performance, still on 40nm, is a big deal.
  • 7eki - Wednesday, November 10, 2010 - link

    32W and 15% you say? No, it isn't a big deal since AMD's Barts GPUs released. Bear in mind that the GTX 580 still consumes more energy than a faster (in most cases) and one-year-older multi-GPU HD 5970. In that case even 60W would sound ridiculously funny. It's not more than a few percent improvement over the GTX 480. If you don't believe it, calculate how much longer you would have to play on your GTX 580 just to get back the ~$40 spent on power consumption compared to a GTX 480. Not to mention (again) that a non-reference GTX 480 provides much better cooling solutions and OC possibilities. Nvidia's digging their own grave. Just like they did by releasing the GTX 460. The only thing that's left for them right now is to trick the reviewers. But who cares. The GTX 580 isn't going to make them sell more mainstream GPUs. It isn't nvidia who's cutting HD 5970 prices right now. It was AMD, by releasing the HD 6870/50 and announcing the 6970. It should have been mentioned by all of you reviewers who treat the case seriously. Nvidia's a treacherous snake and the reviewers' job is not to let such things happen.
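    [Ed: the payback arithmetic in the comment above is easy to sketch. This assumes the ~32W delta cited earlier in the thread and a hypothetical electricity price of $0.10/kWh; both the rate and the $40 figure are assumptions from the comments, not measured values.]

```python
def hours_to_recoup(dollars: float, watts_saved: float, price_per_kwh: float) -> float:
    """Hours of load operation needed for a wattage saving to pay back a price difference."""
    kw_saved = watts_saved / 1000           # convert watts to kilowatts
    return dollars / (kw_saved * price_per_kwh)

# $40 of electricity at a 32W saving and an assumed $0.10/kWh:
print(round(hours_to_recoup(40, 32, 0.10)))  # → 12500
```

    At those assumed rates, recouping $40 via a 32W saving takes on the order of 12,500 hours of gaming, which is the commenter's point.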
  • Sihastru - Wednesday, November 10, 2010 - link

    Have you heard about the ASUS GTX580 Voltage Tweak edition that can be clocked up to 1100MHz? That's more than a 40% OC. Have you seen the EVGA GTX580 FTW yet?

    The fact that a single-GPU card is in some cases faster than a dual-GPU card built with two of the fastest competing GPUs tells a lot of good things about that single-GPU card.

    This "nVidia is the Antichrist" speech is getting old. Repeating it all over the interwebs doesn't make it true.
  • AnnonymousCoward - Wednesday, November 10, 2010 - link

    I'm with you that AMD still has the superior performance-per-power design. But with the 580, nvidia took Fermi from outrageous to competitive in that category, and it even wins by a wide margin on idle power. Looking at the charts, the 580 also has a vastly superior cooling system to the 5970's. Mad props to nvidia for turning things around.
