Final Thoughts

Even though NVIDIA is only launching a single card today, there’s a lot to digest, so let’s get to it.

Since the GeForce GTX 580 arrived in our hands last week, we’ve been mulling over how to approach it. It boils down to two schools of thought: 1) do we praise NVIDIA for delivering a high-performance single-GPU card that strikes the right balance of performance and temperature/noise, or 2) do we give an indifferent thumbs-up to NVIDIA for finally delivering the card that we believe the GTX 480 should have been in the first place?

The answer, we’ve decided, is one of mild but well-earned praise. The GTX 580 is not a true next-generation successor to the GTX 480; it’s the GTX 480 after going back into the womb for 7 more months of development. Much like AMD, NVIDIA faced a situation where they had to build a new product without a die shrink, and had limited options as a result. NVIDIA chose wisely, and came back with a card that is at once decently faster than the GTX 480 and a refinement of it.

We could recognize the GTX 480 as the fastest single-GPU card on the market, but only while acknowledging that it was hot and loud at the same time. For buyers the GTX 480 was a tradeoff product: sure it’s fast, but is it too hot or too loud for me? The GTX 580 requires no such tradeoff. We can never lose sight of the fact that it’s a high-end card and is going to be more power hungry, louder, and hotter than many other cards on the market, but it’s not the awkward card that the GTX 480 was. For these reasons our endorsement of the GTX 580 is much more straightforward, so long as we make it clear that the GTX 580 is less an upgrade for GTX 480 owners and more an upgrade for owners of the GTX 285 and similar last-generation cards.

What we’re left with today is something much closer to the “traditional” state of the GPU market: NVIDIA has the world’s fastest single-GPU card, while AMD is nipping at their heels with multi-GPU products. Both the Radeon HD 5970 and a pair of Radeon HD 6870s in CrossFire are worthy competitors to the GTX 580: they’re faster, and in the case of the 6870 CF largely comparable in terms of power/temperature/noise. If you have a board capable of supporting a pair of 6870s and don’t mind the extra power draw it’s hard to go wrong, but only if you’re willing to put up with the limitations of a multi-GPU setup. It’s a very personal choice; we’d be willing to trade the performance for the simplicity of avoiding a multi-GPU setup, but we can’t speak for everyone.

So what’s next? A few different things. From the NVIDIA camp, we’re promised a quick launch of the rest of the GeForce 500 series. Given NVIDIA’s short development cycle we’d expect similarly refined GF10x parts, but this is very much a shot in the dark. Much more likely in the near term is a 3GB GTX 580, seeing as how NVIDIA's official product literature calls the GTX 580 the "GeForce GTX 580 1.5GB", a distinction that was never made for the GTX 480.

More interesting, however, will be what NVIDIA does with GF110, since it’s a more capable part than GF100 in every way. The GF100-based Quadros and Teslas only launched in the last few months, but they’re already out of date. Given NVIDIA’s power improvements in particular, GF110 seems like a shoo-in for at least one improved Quadro and Tesla card. We also expect 500 series replacements for some of the GF100-based GeForce cards (with the GTX 465 likely going away permanently).

Meanwhile the AMD camp is gearing up for launches of their own. The 6900 series is due before the year is out, bringing with it AMD’s new Cayman GPU. There’s little we know or can say about it at this point, but as a part positioned above the 6800 series we’re certainly hoping for a slugfest. At $500 the GTX 580 is pricey (much like the GTX 480 before it), and while that isn’t unusual for the high-end market, we wouldn’t mind seeing NVIDIA and AMD bring a high-intensity battle to the high end, something we’ve been sorely missing for the last year. Until we see the 6900 series we wouldn’t make any bets, but we can certainly look forward to it later this year.

Comments

  • wtfbbqlol - Thursday, November 11, 2010

    Most likely an anomaly. Just compare the GTX480 to the GTX470 minimum framerate. There's no way the GTX480 is twice as fast as the GTX470.
  • Oxford Guy - Friday, November 12, 2010

    It does not look like an anomaly since at least one of the few minimum frame rate tests posted by Anandtech also showed the 480 beating the 580.

    We need to see Unigine Heaven minimum frame rates, at the bare minimum, from Anandtech, too.
  • Oxford Guy - Saturday, November 13, 2010

    To put it more clearly... Anandtech only posted minimum frame rates for one test: Crysis.

    In those, we see the 480 SLI beating the 580 SLI at 1920x1200. Why is that?

    It seems to fit with the pattern of the 480 being stronger in minimum frame rates in some situations -- especially Unigine -- provided that the resolution is below 2K.

    I do hope someone will clear up this issue.
  • wtfbbqlol - Wednesday, November 10, 2010

    It's really disturbing how the throttling happens without any real indication. I was really excited reading about all the improvements NVIDIA made to the GTX 580, and then I read about this annoying "feature".

    When any piece of hardware in my PC throttles, I want to know about it. Otherwise it just adds another variable when troubleshooting a performance problem.

    Is it a valid test to rename, say, crysis.exe to furmark.exe and see if throttling kicks in mid-game?
  • wtfbbqlol - Wednesday, November 10, 2010

    Well, it looks like there is *some* official information about the current implementation of the throttling. (A rough code sketch of the policy it describes follows the comments.)

    http://nvidia.custhelp.com/cgi-bin/nvidia.cfg/php/...

    Copy and paste of the message:
    "NVIDIA has implemented a new power monitoring feature on GeForce GTX 580 graphics cards. Similar to our thermal protection mechanisms that protect the GPU and system from overheating, the new power monitoring feature helps protect the graphics card and system from issues caused by excessive power draw.

    The feature works as follows:
    • Dedicated hardware circuitry on the GTX 580 graphics card performs real-time monitoring of current and voltage on each 12V rail (6-pin, 8-pin, and PCI-Express).
    • The graphics driver monitors the power levels and will dynamically adjust performance in certain stress applications such as Furmark 1.8 and OCCT if power levels exceed the card’s spec.
    • Power monitoring adjusts performance only if power specs are exceeded AND if the application is one of the stress apps we have defined in our driver to monitor such as Furmark 1.8 and OCCT.
    - Real world games will not throttle due to power monitoring.
    - When power monitoring adjusts performance, clocks inside the chip are reduced by 50%.

    Note that future drivers may update the power monitoring implementation, including the list of applications affected."
  • Sihastru - Wednesday, November 10, 2010

    I never heard anyone from the AMD camp complaining about that "feature" with their cards, and all current AMD cards have it. And what would be the purpose of renaming your Crysis exe? Do you have problems with the "Crysis" name? You think the game should be called "Furmark"?

    So this is a non issue.
  • flyck - Wednesday, November 10, 2010

    The point of renaming is that NVIDIA uses name tags to identify whether it should throttle or not... suppose person X creates a stressful program and you use an older driver that does not include its name tag; you can break things...
  • Gonemad - Wednesday, November 10, 2010

    Big fat YES. Please do rename the executable from crysis.exe to furmark.exe, and tell us.

    Get FurMark and go the other way around: rename it to Crysis.exe, but be sure to have a fire extinguisher on the premises. Caveat emptor.

    Perhaps just renaming is not enough; some checksumming may be involved. It is pretty easy to change a checksum without altering the running code, though. Source comments are stripped at compile time, but executables still carry plenty of inert data (embedded strings, version info, padding), and changing any of it changes the checksum. The FurMark developers could do that themselves with any new build.

    Or open FurMark in a hex editor and change some bytes, ideally in a long run of zeros at the end of the file. Compilers usually pad executables out to round kilobyte sizes, filling with zeros, so the change shouldn't harm the running code, but it changes the checksum without changing the file size.

    If it works, rename it Program X. (A minimal byte-patching script along these lines is sketched after the comments.)

    Ooops.
  • iwodo - Wednesday, November 10, 2010

    The good thing about GPUs is that performance scales very well (if not linearly) with transistor count. One full-node die shrink doubles the transistor count and doubles the performance.

    Combine that with memory not being a bottleneck, since GDDR5 still has lots of headroom, and we are limited by the process rather than the design.
  • techcurious - Wednesday, November 10, 2010

    I didn't read through ALL the comments, so maybe this was already suggested. But can't the idle sound level be reduced simply by lowering the fan speed and compromising idle temperatures a bit? I bet you could sink below 40 dB if you are willing to put up with an acceptable 45 C temp instead of 37 C. 45 C is still an acceptable idle temp. (A toy fan-curve illustration of this tradeoff follows the comments.)
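
A rough sketch, in Python, of the power-monitoring policy described in the NVIDIA statement quoted above. Only the two-condition rule (a monitored stress app AND power over spec) and the 50% clock cut come from the statement; the function and variable names, the per-rail wattage breakdown, and the exact application list are illustrative assumptions. The 244W board power and 772MHz core clock are the GTX 580's official specifications.

    # Hypothetical model of the GTX 580 power-monitoring policy; not NVIDIA's
    # actual driver code. Names and rail values are illustrative assumptions.

    MONITORED_APPS = {"furmark.exe", "occt.exe"}  # driver-defined list, per the statement
    POWER_SPEC_WATTS = 244.0                      # GTX 580 official board power

    def effective_clock(app_name: str, rail_watts: dict, base_clock_mhz: float) -> float:
        """Return the clock the driver would allow under the described policy."""
        total_power = sum(rail_watts.values())    # 6-pin + 8-pin + PCIe slot rails
        over_spec = total_power > POWER_SPEC_WATTS
        is_stress_app = app_name.lower() in MONITORED_APPS

        # Both conditions must hold; real games are not on the list, so they
        # never throttle no matter how much power they draw.
        if over_spec and is_stress_app:
            return base_clock_mhz * 0.5           # "clocks ... are reduced by 50%"
        return base_clock_mhz

    # FurMark pulling ~300W gets cut from 772MHz to 386MHz; Crysis does not.
    print(effective_clock("furmark.exe", {"6pin": 110.0, "8pin": 140.0, "pcie": 50.0}, 772.0))
    print(effective_clock("crysis.exe", {"6pin": 110.0, "8pin": 140.0, "pcie": 50.0}, 772.0))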
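
For the checksum experiment proposed in the comments, a minimal sketch, again in Python. It assumes the trailing zero bytes really are unused padding; on a real executable you would want to confirm the region sits outside any code or data section before touching it. The filenames are whatever you pass on the command line; nothing here is specific to FurMark.

    import hashlib
    import sys

    def patch_padding(path_in: str, path_out: str) -> None:
        """Flip one byte inside the trailing zero padding of a file so its
        hash changes while the executable code is left untouched."""
        data = bytearray(open(path_in, "rb").read())

        # Find the run of zero bytes at the end of the file (common padding).
        end = len(data)
        start = end
        while start > 0 and data[start - 1] == 0:
            start -= 1
        if end - start < 16:
            sys.exit("No obvious zero padding found; refusing to risk corrupting code.")

        # Change a single byte in the middle of the padding run; the file
        # size stays exactly the same.
        data[(start + end) // 2] = 0xFF
        open(path_out, "wb").write(bytes(data))

    def sha256(path: str) -> str:
        return hashlib.sha256(open(path, "rb").read()).hexdigest()

    if __name__ == "__main__":
        src, dst = sys.argv[1], sys.argv[2]  # e.g. furmark.exe patched.exe
        patch_padding(src, dst)
        print(sha256(src))
        print(sha256(dst))  # differs, even though the code bytes are identical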
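
And on the fan speed suggestion, a toy illustration of the tradeoff expressed as a custom fan curve. The duty-cycle numbers are made-up placeholders chosen to show the shape of the idea, not measured values for the GTX 580.

    # Hypothetical idle-biased fan curve: accept a warmer idle (45C instead of
    # 37C) in exchange for a slower, quieter fan. Values are illustrative only.

    STOCK_CURVE = [(30, 40), (40, 45), (60, 60), (80, 85), (90, 100)]  # (temp C, fan %)
    QUIET_CURVE = [(30, 25), (45, 30), (60, 60), (80, 85), (90, 100)]  # slower below 60C

    def fan_duty(curve, temp_c):
        """Linearly interpolate the fan duty (%) for a given temperature."""
        if temp_c <= curve[0][0]:
            return curve[0][1]
        for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
            if temp_c <= t1:
                return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
        return curve[-1][1]

    # At its 45C idle the quiet curve runs the fan well below what the stock
    # curve uses at 37C, which is where the noise savings would come from.
    print(fan_duty(STOCK_CURVE, 37))  # 43.5
    print(fan_duty(QUIET_CURVE, 45))  # 30.0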
