The Test

For our look at the GTX 580 we will only be covering single-card performance. As a promotion for their OEM partners, NVIDIA would only make a second GTX 580 available to us if we also agreed to review a high-end gaming system. Because such a system was completely unnecessary for a GPU review we declined NVIDIA’s offer, and as a result we have just the one GTX 580 you’ll be seeing here today. We will look at SLI performance once we can acquire a second GTX 580 further down the line.

For our testing we’ll be using the latest version of our GPU benchmark suite, which was introduced back in our Radeon HD 6800 series review two weeks ago. We’re using the latest drivers from both AMD and NVIDIA here – Catalyst Hotfix 10.10d for the AMD cards, and ForceWare 262.99 for the NVIDIA cards.

Finally, as we mentioned earlier, AMD doesn’t have a direct competitor to the GTX 580. Their closest competitors are dual-GPU setups in the form of the closeout 5970 and the 6870 in CrossFire. Meanwhile NVIDIA has cut GTX 470 prices so far to the bone that you can pick up a pair of them for about the same price as a single GTX 580. Two slightly crippled GF100 cards versus one GF110 card will not be a fair fight…

CPU: Intel Core i7-920 @ 3.33GHz
Motherboard: Asus Rampage II Extreme
Chipset Drivers: Intel 9.1.1.1015
Hard Disk: OCZ Summit (120GB)
Memory: Patriot Viper DDR3-1333 3 x 2GB (7-7-7-20)
Video Cards: AMD Radeon HD 6870
AMD Radeon HD 6850
AMD Radeon HD 5970
AMD Radeon HD 5870
AMD Radeon HD 5850
AMD Radeon HD 5770
AMD Radeon HD 4870
NVIDIA GeForce GTX 580
NVIDIA GeForce GTX 480
NVIDIA GeForce GTX 470
NVIDIA GeForce GTX 460 1GB
NVIDIA GeForce GTX 460 768MB
NVIDIA GeForce GTX 285
NVIDIA GeForce GTX 260 Core 216
Video Drivers: NVIDIA ForceWare 262.99
AMD Catalyst 10.10d
OS: Windows 7 Ultimate 64-bit
Comments

  • wtfbbqlol - Thursday, November 11, 2010 - link

    Most likely an anomaly. Just compare the GTX 480 to the GTX 470 minimum framerates. There's no way the GTX 480 is twice as fast as the GTX 470.
  • Oxford Guy - Friday, November 12, 2010 - link

    It does not look like an anomaly since at least one of the few minimum frame rate tests posted by Anandtech also showed the 480 beating the 580.

    We need to see Unigine Heaven minimum frame rates, at the bare minimum, from Anandtech, too.
  • Oxford Guy - Saturday, November 13, 2010 - link

    To put it more clearly... Anandtech only posted minimum frame rates for one test: Crysis.

    In those, we see the 480 SLI beating the 580 SLI at 1920x1200. Why is that?

    It seems to fit with the pattern of the 480 being stronger in minimum frame rates in some situations -- especially Unigine -- provided that the resolution is below 2K.

    I do hope someone will clear up this issue.
  • wtfbbqlol - Wednesday, November 10, 2010 - link

    It's really disturbing how the throttling happens without any real indication. I was really excited reading about all the improvements NVIDIA made to the GTX 580, and then I read about this annoying "feature".

    When any piece of hardware in my PC throttles, I want to know about it. Otherwise it just adds another variable when troubleshooting performance problems.

    Is it a valid test to rename, say, crysis.exe to furmark.exe and see if throttling kicks in mid-game?
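
    One way to script that experiment is a few lines of Python; this is a minimal sketch, and the paths are hypothetical and depend on your install:

        # Sketch of the renaming test proposed above. Paths are hypothetical;
        # point them at your own install. If clocks only drop mid-game under
        # the decoy name, the driver is matching on the executable's name.
        import shutil
        import subprocess

        GAME_EXE = r"C:\Games\Crysis\Bin32\Crysis.exe"   # hypothetical path
        DECOY    = r"C:\Games\Crysis\Bin32\furmark.exe"  # same binary, stress-app name

        shutil.copyfile(GAME_EXE, DECOY)  # keep the original; launch the renamed copy
        subprocess.run([DECOY])           # play a level and watch clocks/framerates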
  • wtfbbqlol - Wednesday, November 10, 2010 - link

    Well it looks like there is *some* official information about the current implementation of the throttling.

    http://nvidia.custhelp.com/cgi-bin/nvidia.cfg/php/...

    Copy and paste of the message:
    "NVIDIA has implemented a new power monitoring feature on GeForce GTX 580 graphics cards. Similar to our thermal protection mechanisms that protect the GPU and system from overheating, the new power monitoring feature helps protect the graphics card and system from issues caused by excessive power draw.

    The feature works as follows:
    • Dedicated hardware circuitry on the GTX 580 graphics card performs real-time monitoring of current and voltage on each 12V rail (6-pin, 8-pin, and PCI-Express).
    • The graphics driver monitors the power levels and will dynamically adjust performance in certain stress applications such as Furmark 1.8 and OCCT if power levels exceed the card’s spec.
    • Power monitoring adjusts performance only if power specs are exceeded AND if the application is one of the stress apps we have defined in our driver to monitor such as Furmark 1.8 and OCCT.
    - Real world games will not throttle due to power monitoring.
    - When power monitoring adjusts performance, clocks inside the chip are reduced by 50%.

    Note that future drivers may update the power monitoring implementation, including the list of applications affected."
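
    Taken literally, that note describes a simple two-condition gate. Below is a toy model of the described behavior, not actual driver code; the app names, the 50% clock cut, and the per-rail monitoring come from the quote, while the function itself and the 244W board power figure for the GTX 580 are filled in for illustration:

        # Toy model of the power-monitoring gate described in NVIDIA's note.
        BLOCKLISTED_APPS = {"furmark.exe", "occt.exe"}  # "stress apps ... defined in our driver"
        POWER_SPEC_WATTS = 244                          # GTX 580 board power spec

        def clock_multiplier(app_name, rail_watts):
            """Clocks are halved only if BOTH conditions hold."""
            over_spec = sum(rail_watts.values()) > POWER_SPEC_WATTS  # 6-pin + 8-pin + PCIe
            blocklisted = app_name.lower() in BLOCKLISTED_APPS
            return 0.5 if (over_spec and blocklisted) else 1.0

        draw = {"6pin": 110, "8pin": 140, "pcie": 50}  # 300W total, over spec
        print(clock_multiplier("FurMark.exe", draw))   # 0.5 -- throttled
        print(clock_multiplier("crysis.exe", draw))    # 1.0 -- games are exempt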
  • Sihastru - Wednesday, November 10, 2010 - link

    I've never heard anyone from the AMD camp complain about that "feature", and all current AMD cards have it. And what would be the purpose of renaming your Crysis exe? Do you have problems with the "Crysis" name? You think the game should be called "Furmark"?

    So this is a non issue.
  • flyck - Wednesday, November 10, 2010 - link

    The point of renaming is that NVIDIA uses name tags to identify whether it should throttle or not. Suppose person X creates a stress program and you use an older driver that does not include its name tag; nothing gets throttled, and you can break things.
  • Gonemad - Wednesday, November 10, 2010 - link

    Big fat YES. Please do rename the executable from crysis.exe to furmark.exe, and tell us.

    Get FurMark and go the other way around: rename it to crysis.exe, but be sure to have a fire extinguisher on the premises. Caveat emptor.

    Perhaps just renaming is not enough and some checksumming is involved. It is pretty easy to change a checksum without altering the running code, though. If you can rebuild from source, changing something harmless that ends up in the binary, such as an embedded string, changes the checksum. But you can do it to the FurMark binary alone, too.

    Open the FurMark binary in a hex editor and change some bytes, ideally in a long run of zeros at the end of the file. Compilers and linkers usually pad executables out to round sizes with zeros, so the edit shouldn't harm the running code, but it changes the checksum without changing the file size. (This is scripted in the sketch after this comment.)

    If it works, rename it Program X.

    Ooops.
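
    The padding edit described above is easy to script. A minimal sketch in Python, assuming the binary really does end in zero padding and that the driver checksums the whole file (neither of which is documented):

        # Flip one byte inside the trailing zero padding of a binary: same file
        # size, same code path, different checksum. Work on a copy, and expect
        # no effect if the driver keys on other signals entirely.
        import hashlib

        def flip_padding_byte(src, dst):
            data = bytearray(open(src, "rb").read())
            start = len(data)
            while start > 0 and data[start - 1] == 0:  # measure the zero run
                start -= 1
            if len(data) - start < 16:
                raise RuntimeError("no obvious zero padding to edit")
            data[-8] = 0xAA  # stay well inside the padding
            open(dst, "wb").write(data)

        flip_padding_byte("furmark.exe", "programx.exe")
        for name in ("furmark.exe", "programx.exe"):
            print(name, hashlib.md5(open(name, "rb").read()).hexdigest())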
  • iwodo - Wednesday, November 10, 2010 - link

    The good thing about GPUs is that performance scales very well (if not linearly) with transistor count: one full-node die shrink doubles the transistor budget, and roughly doubles the performance.

    Combine that with the fact that memory isn't the bottleneck (GDDR5 still has plenty of headroom), and we are limited by the process, not the design.
  • techcurious - Wednesday, November 10, 2010 - link

    I didn't read through ALL the comments, so maybe this was already suggested. But can't the idle noise level be reduced simply by lowering the fan speed and compromising idle temperatures a bit? I bet you could sink below 40 dB if you were willing to put up with 45°C instead of 37°C, and 45°C is still a perfectly acceptable idle temperature.
