Power, Temperature, and Noise

Last but not least as always is our look at the power consumption, temperatures, and acoustics of the GTX 580. NVIDIA’s performance improvements were half of the GTX 580 story, and this is the other half.

Starting quickly with voltage, as we only have one card we can’t draw too much from what we know, but there are still some important nuggets. NVIDIA is still using multiple VIDs, so your mileage may vary. What’s clear from the start though is that NVIDIA’s operating voltages are higher than the GTX 480’s for both idle and load. This is the biggest hint that leakage has been seriously dealt with: lowering voltages is the usual step taken to combat leakage, so the fact that a chip so similar to GF100 can run at higher voltages and still come in at lower overall power points to improvements in the transistors themselves. And on that note, while the voltages have changed the idle clocks have not; idle remains at 50.6MHz for the core.

GeForce GTX 480/580 Voltages
                 Load      Idle
Ref GTX 480      0.959v    0.875v
Ref GTX 580      1.037v    0.962v
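
As a back-of-the-envelope illustration of why higher voltages plus lower total power point to reduced leakage, the toy model below splits power into a switching term and a leakage term. Every parameter in it (the effective capacitance, the leakage currents, the arbitrary units) is our own illustrative assumption; only the voltages and core clocks come from the cards’ specifications.

```python
# Toy model: why higher voltage + lower total power implies reduced leakage.
# All parameters here are illustrative assumptions, not NVIDIA figures.

def total_power(v, f_ghz, c_eff, i_leak):
    """Simplified GPU power model.

    dynamic (switching) power ~ C_eff * V^2 * f
    static (leakage) power    ~ V * I_leak
    """
    dynamic = c_eff * v ** 2 * f_ghz
    static = v * i_leak
    return dynamic + static

# GF100-like chip: 0.959v load voltage, 700MHz core, heavy leakage (assumed).
gf100_like = total_power(v=0.959, f_ghz=0.700, c_eff=250, i_leak=110)

# GF110-like chip: higher voltage (1.037v) and clock (772MHz) raise the
# switching term, so total power can only fall if leakage drops sharply.
gf110_like = total_power(v=1.037, f_ghz=0.772, c_eff=250, i_leak=40)

print(f"GF100-like: {gf100_like:.0f} (arbitrary units)")
print(f"GF110-like: {gf110_like:.0f} (arbitrary units)")  # lower despite higher V and f
```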

Beginning with idle power, we’re seeing our second biggest sign that NVIDIA has been tweaking things specifically to combat leakage. Idle power consumption has dropped by 17W on our test system even though the idle clocks are the same and the idle voltage higher. NVIDIA doesn’t provide an idle power specification, but based on neighboring cards idle power consumption can’t be far off from 30-35W. Amusingly it still ends up being more than the 6870 CF however, thanks to the combination of AMD’s smaller GPUs and ULPS power saving mode for the slave GPU.
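
For readers curious how a card-level idle figure can be backed out of wall measurements like these, here’s a minimal sketch. The PSU efficiency, the reference card’s idle draw, and the wall readings in it are hypothetical placeholder values, not our test data.

```python
# Sketch: backing a card-level idle power estimate out of wall measurements.
# PSU efficiency and the reference card's idle draw are illustrative assumptions.

PSU_EFFICIENCY = 0.85  # assumed efficiency at light (idle) load

def card_idle_estimate(system_wall_w, reference_wall_w, reference_card_w):
    """Estimate a card's idle draw from total system power at the wall.

    system_wall_w    - wall power with the card under test installed
    reference_wall_w - wall power with a reference card installed
    reference_card_w - known (or estimated) idle draw of the reference card
    """
    delta_dc = (system_wall_w - reference_wall_w) * PSU_EFFICIENCY
    return reference_card_w + delta_dc

# Hypothetical numbers: a reference card that idles around 20W, with the card
# under test drawing 15W more at the wall than the reference system.
print(card_idle_estimate(system_wall_w=170, reference_wall_w=155, reference_card_w=20))
# -> ~32.8W, the same ballpark as the 30-35W estimate above
```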

Looking at Crysis, we begin to see the full advantage of NVIDIA’s optimizations and where a single GPU is more advantageous than multiple GPUs. Compared to the GTX 480, NVIDIA’s power consumption is down 10% (never mind the 15% performance improvement), and power consumption comes in under all similar multi-GPU configurations. Interestingly the 5970 still draws less power here, a reminder that we’re still looking at cards near the peak of the PCIe specifications.
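
Taking the two figures above at face value (and ignoring that wall measurements include the rest of the system), the implied performance-per-watt gain works out like this:

```python
# Sketch: performance-per-watt change implied by the Crysis numbers above,
# treating "down 10% power" and "up 15% performance" as exact figures.
power_ratio = 0.90   # GTX 580 draws ~90% of the GTX 480's power
perf_ratio = 1.15    # ...while delivering ~115% of its performance

perf_per_watt_gain = perf_ratio / power_ratio - 1
print(f"~{perf_per_watt_gain:.0%} better performance per watt")  # ~28%
```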

As for FurMark, due to NVIDIA’s power throttling we’ve had to get a bit creative. FurMark is throttled to the point where the GTX 580 registers 360W, thanks to a roughly 40% reduction in performance under FurMark. As a result, for the GTX 580 we’ve swapped out FurMark for another program that generates a comparable load, which we’ll call Program X. At this point we’re going to decline to name the program, as should NVIDIA start throttling it too, we may be hard pressed to determine if and when that happened.
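
NVIDIA hasn’t documented exactly how the GTX 580’s power limiting works, so the sketch below is purely a conceptual illustration of clock-based power capping, not NVIDIA’s mechanism; the power limit, clock steps, and power model are all made-up values.

```python
# Conceptual sketch of clock-based power capping; NOT NVIDIA's implementation.
# The power limit, clock steps, and power model are made-up values.
import random

POWER_LIMIT_W = 300                                    # assumed board power target
CLOCK_STEPS_MHZ = [772, 700, 650, 600, 550, 500, 450]  # hypothetical throttle steps

def measured_board_power(clock_mhz, load_factor):
    """Stand-in for the board's current/voltage monitoring circuitry."""
    return load_factor * clock_mhz * 0.55 + random.uniform(-5.0, 5.0)

def throttle_step(current_step, load_factor):
    """Drop a clock step when over the limit, recover a step when well under it."""
    power = measured_board_power(CLOCK_STEPS_MHZ[current_step], load_factor)
    if power > POWER_LIMIT_W and current_step < len(CLOCK_STEPS_MHZ) - 1:
        return current_step + 1   # over the cap: clock down
    if power < POWER_LIMIT_W * 0.9 and current_step > 0:
        return current_step - 1   # comfortable headroom: clock back up
    return current_step

# A FurMark-like load (high load_factor) settles at a much lower clock than a
# typical game would, which is where the performance loss comes from.
step = 0
for _ in range(20):
    step = throttle_step(step, load_factor=0.9)
print(f"FurMark-like load settles around {CLOCK_STEPS_MHZ[step]}MHz")
```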

In any case, under FurMark and Program X we can see that once again NVIDIA’s power consumption has dropped versus the GTX 480, this time by 27W or around 6%. NVIDIA’s worst case scenario has notably improved, and in the process the GTX 580 is back under the Radeon HD 5970 in terms of power consumption. With that said, while NVIDIA has definitely improved power consumption, the GTX 580 is still a large, power-hungry GPU.

With NVIDIA’s improvements in cooling and in idle power consumption, there’s not a result more dramatic than idle GPU temperatures. The GTX 580 isn’t just cooler, it’s cool period. 37C is one of the best results out of any of our midrange and high-end GPUs, and is a massive departure from the GTX 480 which was at least warm all the time. As we’ll see however, this kind of an idle temperature does come with a small price.

The story under load is much the same as idle: compared to the GTX 480 the GTX 580’s temperatures have dramatically dropped. At 79C it’s in the middle of the pack, beating a number of single and multi GPU setups, and really only losing to mainstream-class GPUs and the 6870 CF. While we’ve always worried about the GTX 480 at its load temperatures, the GTX 580 leaves us with no such concerns.

Meanwhile under FurMark and Program X, the gap has closed, though the GTX 580 remains in the middle of the pack. 87C is certainly toasty, but it’s still well below the thermal threshold and below the point where we’d be worried about it. Interestingly however, the GTX 580 is actually just a bit closer to its thermal threshold than the GTX 480 is; NVIDIA rated the 480 for 105C, while the 580 is rated for 97C. We’d like to say this vindicates our concerns about the GTX 480’s temperatures, but it’s more likely that this is a result of the transistors NVIDIA is using.

It’s also worth noting that NVIDIA seems to have done away with the delayed fan ramp-up found on the GTX 480. The fan ramping on the GTX 580 is, as near as we can tell, much more traditional, with the fan immediately ramping up with higher temperatures. For the purposes of our tests, this keeps the temperatures from spiking as badly.

Remember where we said there was a small price to pay for such low idle temperatures? This is it. At 44.4dB, the GTX 580 is ever so slightly (and we do mean slightly) louder than the GTX 480; it also ends up being a bit louder than the 5970 or the 6870 CF. 44.4dB is not by any means loud, but if you want a card that’s whisper silent at idle, the GTX 580 isn’t going to be able to deliver.

And last but not least is load noise. Between their improvements to power consumption and cooling, NVIDIA put a lot of effort into reducing the amount of noise the GTX 580 generates. Where the GTX 480 set new records for a single-GPU card, the GTX 580 is quieter than the GTX 285, the GTX 470, and even the Radeon HD 5870. In fact it’s only a dB off of the 5850, a card we’d call the epitome of balance between performance and noise under most circumstances. Graphs alone cannot demonstrate just how much of a difference there is between the GTX 480 and GTX 580 – the GTX 580 is not whisper quiet, but at no point in our testing did it ever get “loud”. It’s a truly remarkable difference, albeit one that comes at the price of pointing out just how lousy the GTX 480 was.
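
For context on how to read decibel gaps like these, the standard relation between a dB difference and a sound power ratio is below; the rule of thumb that roughly 10dB reads as “twice as loud” is a general acoustics approximation, not part of our testing methodology.

```python
# The standard decibel relation: a difference of X dB corresponds to a
# 10^(X/10) ratio in sound power. Useful for reading the gaps quoted above.
def power_ratio_from_db(delta_db):
    return 10 ** (delta_db / 10)

print(f"1 dB gap  -> {power_ratio_from_db(1):.2f}x sound power")   # ~1.26x
print(f"3 dB gap  -> {power_ratio_from_db(3):.2f}x sound power")   # ~2.00x
print(f"10 dB gap -> {power_ratio_from_db(10):.0f}x sound power")  # roughly 'twice as loud' to the ear
```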

Often the mark of a good card is a balance between power, temperature, and noise, and NVIDIA seems to have finally found their mark. As the GTX 580 is a high-end card the power consumption is still high, but it’s no longer the abnormality that the GTX 480 was. Meanwhile GPU temperatures have left our self-proclaimed danger zone, and at the same time the GTX 580 has become a much quieter card under load than the GTX 480. If you had asked us what NVIDIA needed to work on with the GTX 480, we would have said noise, temperature, and power consumption, in that order; the GTX 580 delivers on just what we would have wanted.

Comments

  • spigzone - Tuesday, November 9, 2010

    Any 'bandwagon' here belongs to Nvidia.
  • mac2j - Tuesday, November 9, 2010

    Actually the new ATI naming makes a bit more sense.

    It’s not a new die shrink, but the 6xxx cards all do share some features not found at all in the 5xxx series, such as DisplayPort 1.2 (which could become very important if 120Hz and 240Hz monitors ever catch on).

    Also the Cayman 69xx parts are in fact a significantly original design relative to the 58xx parts.

    Nvidia to me is the worst offender ... cause a 580 is just a fully-enabled 480 with the noise and power problems fixed.
  • Sihastru - Tuesday, November 9, 2010

    If you think that stepping up the spec on the output ports warrants skipping a generation when naming your product, then look at the mini-HDMI port on the 580: it’s HDMI 1.4 compliant... the requirements for 120Hz displays are met.

    The GF110 is not a GF100 with all the shaders enabled. It looks that way to the uninitiated. GF110 has much more in common with GF104.

    GF110 has three types of transistors, graded by leakage, while the GF100 has just two. This gives you the ability to clock the core higher, while having a lower TDP. It is smaller in size than GF100 is, while maintaining the 40nm fab node. The GTX580 has a power draw limitation system on the board, the GTX480 does not...

    What else... support for full speed FP16 texture filtering which enhances performance in texture heavy applications. New tile formats which improve Z-cull efficiency...

    So how does displayport 1.2 warrant the 68x0 name for AMD but the few changes above do not warrant the 5x0 name for nVidia?

    I call BS.
  • Griswold - Wednesday, November 10, 2010

    I call your post bullshit.

    The 580 comes with the same old video engine as the GF100 - if it were so close to GF104, it would have that video engine and all the goodies and improvements it brings over the one in the 480 (and 580).

    No, the GTX580 is a fixed GF100, and most of what you listed there supports that, because it fixes what was broken with the 480. That's all.
  • Sihastru - Wednesday, November 10, 2010

    I'm not sure what you mean... maybe you're right... but I'm not sure... If you're referring to bitstreaming support, just wait for a driver update, the hardware supports it.

    See: http://www.guru3d.com/article/geforce-gtx-580-revi...

    "What is also good to mention is that HDMI audio has finally been solved. The stupid S/PDIF cable to connect a card to an audio codec, to retrieve sound over HDMI is gone. That also entails that NVIDIA is not bound to two channel LPCM or 5.1 channel DD/DTS for audio.

    Passing on audio over the PCIe bus brings along enhanced support for multiple formats. So VP4 can now support 8 channel LPCM, lossless format DD+ and 6 channel AAC. Dolby TrueHD and DTS Master Audio bit streaming are not yet supported in software, yet in hardware they are (needs a driver update)."

    NEVER rely just on one source of information.

    Fine, if a card more powerful than the GTX480 can't be named the GTX580, then why is it ok for a card performing lower than the HD5870 to be named HD6870... screw technology, screw refinements, talk numbers...

    Whatever...
  • Ryan Smith - Wednesday, November 10, 2010

    To set the record straight, the hardware does not support full audio bitstreaming. I had NV themselves confirm this. It's only HDMI 1.4a video + the same audio formats that GTX 480 supported.
  • B3an - Wednesday, November 10, 2010

    You can all argue all you want, but at the end of the day, for marketing reasons alone, NV really didn't have much of a choice but to name this card the 580 instead of 485 after ATI gave their cards the 6xxx series names. Which don't deserve a new series name either.
  • chizow - Tuesday, November 9, 2010

    No, ATI's new naming convention makes no sense at all. Their x870 designation has always been reserved for their single-GPU flagship part ever since the HD3870, and this naming convention has held true through both the HD4xxx and HD5xxx series. But the 6870 clearly isn't the flagship of this generation; in fact, it's slower than the 5870, while the 580 is clearly faster than the 480 in every aspect.

    To further complicate matters, ATI also launched the 5970 as a dual-GPU part, so single-GPU Cayman being a 6970 will be even more confusing and will also be undoubtedly slower than the 5970 in all titles that have working CF profiles.

    If anything, Cayman should be 5890 and Barts should be 5860, but as we've seen from both camps, marketing names are often inconvenient and short-sighted when they are originally designated......
  • Galid - Tuesday, November 9, 2010

    We're getting into philosophy here. Know what a sophism is? An argument that seems strong but isn't, because there's a flaw in it. The new 2011 Honda ain't necessarily better than the 2010 just because it's newer.

    They name it differently because it's changed and they wanna make you believe it's better, but history has proved that's not always the case. So "newer generation means better" is a false argument. Not everything new ''gotta'' be better in every way to live up to its name.

    But it's my opinion.
  • Galid - Tuesday, November 9, 2010

    It seems worse, but that rebranding is all ok in my mind as the 6870 comes in at a cheaper price than the 5870. So everyone can be happy about it. Nvidia did worse, rebranding some of the 8xxx series into 9xxx chips for a higher price with almost no change and no more performance. The 9600gt comes to my mind...

    What is the 9xxx series? A remake of a ''better'' 8xxx series. What is the GTS 3xx series? A remake of the GTX 2xx. What is the GTX 5xx... and so on. Who cares? If it's priced well, it's all ok. When I see someone going to Staples to get a 9600gt at 80$ when I know I can get a 4850 for almost the same price, I say WTF!!!

    The GTX580 deserves whatever name they want to give it. Whether someone tries to understand all that naming is up to them. But whoever wants to pay, say, 100$ for a card should get performance according to that, and that seems more important than everything else to me!
