Normalized Clocks: Separating Architecture & SMs from Clockspeed Increases

While we were doing our SLI benchmarking we got several requests for GTX 580 results with normalized clockspeeds, in order to better separate which performance improvements are due to NVIDIA’s architectural changes and the enabling of the 16th SM, and which are due to the 10% higher clocks. So we’ve quickly run a GTX 580 at 2560x1600 with GTX 480 clockspeeds (700MHz core, 924MHz memory) to capture this data. Games that benefit most from the clockspeed bump are going to be memory bandwidth or ROP limited, while games showing the biggest improvements in spite of the normalized clockspeeds are shader/texture limited or benefit from the texture and/or Z-cull improvements.
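
To make the attribution concrete, it is just a pair of ratios. Here is a minimal sketch in Python of the arithmetic, assuming the two cards’ published stock core clocks; the framerates are invented purely for illustration:

    # Hypothetical sketch: factoring a GTX 580's speedup over a GTX 480 into
    # an architecture/SM part and a clockspeed part, given three measured
    # framerates. The fps values below are invented for illustration.

    GTX480_CORE, GTX580_CORE = 700, 772  # stock core clocks, MHz

    def decompose(fps_480, fps_580_downclocked, fps_580_stock):
        """Split the total speedup into two multiplicative factors."""
        arch_gain = fps_580_downclocked / fps_480         # SMs + arch tweaks
        clock_gain = fps_580_stock / fps_580_downclocked  # pure clockspeed
        total_gain = fps_580_stock / fps_480              # == arch * clock
        return arch_gain, clock_gain, total_gain

    # Invented example: 50 fps (GTX 480), 53 fps (GTX 580 at 480 clocks),
    # 58 fps (GTX 580 at stock clocks).
    arch, clock, total = decompose(50.0, 53.0, 58.0)
    print(f"architecture/SM: +{(arch - 1) * 100:.1f}%")
    print(f"clockspeed:      +{(clock - 1) * 100:.1f}%  "
          f"(core-clock ceiling: +{(GTX580_CORE / GTX480_CORE - 1) * 100:.1f}%)")
    print(f"total:           +{(total - 1) * 100:.1f}%")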

We’ll put two charts here: one with the actual framerates, and a second with all performance numbers normalized to the GTX 480’s performance.

Games showing the lowest improvement in performance with normalized clockspeeds are BattleForge, STALKER, and Civilization V (which is CPU limited anyhow). At the other end are HAWX, DIRT 2, and Metro 2033.

STALKER and BattleForge are consistent with our theory that the games benefiting the least when normalized are ROP or memory bandwidth limited, as both games only see a pickup in performance once we ramp up the clocks. At the other end, HAWX, DIRT 2, and Metro 2033 still benefit from the clockspeed boost on top of their already hefty gains from the architectural improvements and the extra SM. Interestingly, Crysis looks to be the paragon of the average case: it benefits somewhat from the arch/SM improvements, but not by a great deal.

A subset of our compute benchmarks is much more straightforward here: Folding@Home and SmallLuxGPU improve by 6% and 7% respectively from the increase in SMs (against a theoretical improvement of 6.7%), and end up 15% faster once the clockspeed boost is added back. From this it’s a safe bet that when GF110 reaches Tesla cards, the performance improvement for Tesla won’t be as great as it was for GeForce, since the architectural improvements were purely for gaming purposes. On the flip side, with so many SMs currently disabled on Tesla parts, if NVIDIA can get a 16 SM Tesla out the door, the performance increase should be massive.
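
Those ceilings fall straight out of the specifications; as a quick check of the arithmetic, here is a minimal sketch using the two cards’ stock core clocks and SM counts:

    # Theoretical scaling ceilings from the published specs: one extra SM
    # (15 -> 16) and the stock core clock difference between the two cards.

    sm_gain = 16 / 15       # 16th SM enabled: ~6.7% more shader hardware
    clock_gain = 772 / 700  # GTX 580 vs. GTX 480 stock core clock: ~10.3%
    combined = sm_gain * clock_gain

    print(f"SM increase alone: +{(sm_gain - 1) * 100:.1f}%")     # +6.7%
    print(f"clockspeed alone:  +{(clock_gain - 1) * 100:.1f}%")  # +10.3%
    print(f"combined ceiling:  +{(combined - 1) * 100:.1f}%")    # +17.6%

The measured 15% lands a bit below the ~17.6% combined ceiling, which is what you’d expect for workloads that aren’t perfectly shader-bound.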

Comments

  • slick121 - Thursday, November 11, 2010 - link

    Wow so true, this is a slap in the face, off to other sites for a more unbiased review.
  • Gonemad - Wednesday, November 10, 2010 - link

    ...blow a $1100 hole in your pocket? Yes it can!

    Can it make you consider purchasing a 1200W power supply as something not THAT preposterous? Yes it can! (well, the 480 pair already did, so...)

    Can it make considering a caseless or vapor mod case seem not that insane? That too!

    I guess waiting until this card reaches the price/performance charts will take a while. On the other hand as far as performance goes...
  • iwodo - Wednesday, November 10, 2010 - link

    One of the things I never liked about SLI or Crossfire is that it needs driver support for each specific game to take advantage of the 2nd GFX card.

    Have we solved this problem yet?
  • Spazweasel - Wednesday, November 10, 2010 - link

    I've never had to install game-specific drivers to take advantage of SLI in the games I play, and I've had an SLI rig for nearly three years (2x 8800GT). I just update my video drivers once every few months. It's true that there are often performance tweaks for individual games in a given driver version, but I've never found a game that just doesn't work under SLI with whatever driver version I have at hand. When did you last try?

    As for Crossfire... my "guest" PC is all-AMD (Athlon II X4 620, 4870, 785 chipset) and is a fine machine. Every time I consider going Crossfire on that rig, I check the various tech sites and game support sites, and see issues with Crossfire reported far more frequently than with SLI. This points to a situation that has existed for a while: AMD makes faster hardware for the money, but nVidia overall does a better job with drivers, particularly in multi-GPU scenarios, and, judging from the game developers I know, seems more interested in working closely with game devs.

    Which is why when friends ask me about gaming builds, my usual answer (depending upon the products both vendors have at the time) is "Single vid card, go with AMD, dual vid card, go with nVidia". There have been exceptions: 8800GT in its day was just plain the best, and 460 GTX until very recently was also the best single-GPU solution in its price bracket. The overall trend seems pretty steady with regard to single-GPU vs. multi-GPU, though.
  • Finally - Wednesday, November 10, 2010 - link

    "Single vid card, go with AMD, dual vid card, go download a proper brain"

    Who drops in another card after 2 years if there is a new card available that's not only about 100% faster but also brings new features to the table (e.g. DirectX 11, tessellation, Eyefinity, etc.)?
  • Sufo - Thursday, November 11, 2010 - link

    Um, I got two 5850s for less than the price of a single GTX 580, which they consistently outperform. Dual GPU is a legitimate solution, in the short term at least.

    You're right that it becomes a less sensible option after a fair amount of time, assuming the tech has moved on significantly. However, expect PC GPU tech to stagnate for a while (as evidenced by the very marginal improvements displayed by the 6xxx and 5xx series), at least until the next round of consoles is out.

    Right now is a great time to buy a top of the line system.
  • Finally - Thursday, November 11, 2010 - link

    You DO know that the marginal improvements from the 58xx to the 68xx series stem from the fact that the new top-of-the-line 69xx cards are yet to be launched?

    Yes, GPU tech will stagnate, because all developers have to master are third-rate console ports that only turn out so few fps because porting them over to the PC is done as quickly and cheaply as possible.

    If there was such a thing as a native PC game anymore, you probably would see all those DX11 features put into practice.

    Right now it's simply ridiculous. An HD 4870 or a GTX 580 will play any console-ported crap you throw at it... more performance has become irrelevant, as there is no game to demand it.

    Oh, right, there is Crysis.
    And it's been out since when, exactly?
    I'm really not in the mood to pick up this vegetation benchmark in disguise and look at it again...

    And then there are games that run with 200+ fps instead of 60+ fps. *yawn* Please wake me up, when you reach 500+ fps with your GTX580 SLI so I can walk over to my bed for some real deep sleep...
  • mapesdhs - Wednesday, November 10, 2010 - link

    Spazweasel, please see my site for useful info comparing 8800GT SLI vs. 4890 CF vs. GTX 460 SLI:

    http://www.sgidepot.co.uk/misc/pctests.html
    http://www.sgidepot.co.uk/misc/stalkercopbench.txt

    and these new pages under construction:

    http://www.sgidepot.co.uk/misc/uniginebench.txt
    http://www.sgidepot.co.uk/misc/x3tcbench.txt

    Hope this helps! :)

    Ian.
  • Spazweasel - Wednesday, November 10, 2010 - link

    Thanks, Ian... still happy with my 8800GT SLI setup though. :) It's been nothing but amazing for me. Not looking to upgrade just yet. Let's see what the 560 GTX looks like...
  • mapesdhs - Wednesday, November 10, 2010 - link

    Yep, 8800 GT SLI does run rather well, though as my results show it falls behind in newer games at higher res/detail.

    Summaries I've posted elsewhere show that if one is playing older games at lesser resolutions, then using a newer card to replace an older SLI setup (or instead of adding an extra older card) will not give much of a speed boost, if any (look at the 4890 data vs. 8800GT). For older games, newer cards only help if one also switches to a higher res/detail mode. Newer cards' higher performance is focused on newer features (e.g. SM3); performance levels for older features are often little changed.

    Ian.
