Normalized Clocks: Separating Architecture & SMs from Clockspeed Increases

While we were doing our SLI benchmarking we got several requests for GTX 580 results with normalized clockspeeds, in order to better separate which performance improvements are due to NVIDIA’s architectural changes and the enabling of the 16th SM, and which are due to the 10% higher clocks. So we’ve quickly run a GTX 580 at 2560 with GTX 480 clockspeeds (700MHz core, 924MHz memory) to capture this data. Games that benefit most from the clockspeed bump are going to be memory bandwidth or ROP limited, while games showing the biggest improvements in spite of the normalized clockspeeds are shader/texture limited or benefit from the texture and/or Z-cull improvements.
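The attribution described above can be sketched in a few lines of Python. This is a minimal illustration of the method, and the framerates below are made-up placeholders, not our benchmark data: with a third data point from the downclocked GTX 580, the total gain over a GTX 480 splits into an architecture/SM component and a clockspeed component.

```python
# Split a GTX 480 -> GTX 580 performance gain into two components,
# using a GTX 580 run at GTX 480 clocks as the middle data point.
def split_gain(fps_480: float, fps_580_norm: float, fps_580: float):
    arch_gain = fps_580_norm / fps_480 - 1    # arch + 16th SM, clocks held equal
    clock_gain = fps_580 / fps_580_norm - 1   # remaining gain from higher clocks
    return arch_gain, clock_gain

# Placeholder framerates for illustration only.
arch, clock = split_gain(fps_480=60.0, fps_580_norm=64.8, fps_580=71.3)
print(f"arch/SM: {arch:.1%}, clocks: {clock:.1%}")
```

Note the two components multiply rather than add: a game that gains 8% from the architecture and 10% from clocks ends up roughly 18.8% faster overall.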

We’ll put two charts here: one with the actual framerates, and a second with all performance numbers normalized to the GTX 480’s performance.

Games showing the lowest improvement in performance with normalized clockspeeds are BattleForge, STALKER, and Civilization V (which is CPU limited anyhow). At the other end are HAWX, DIRT 2, and Metro 2033.

STALKER and BattleForge are consistent with our theory that the games benefiting the least when normalized are ROP or memory bandwidth limited, as both games only see a pickup in performance once we ramp up the clocks. On the other end, HAWX, DIRT 2, and Metro 2033 still benefit from the clockspeed boost on top of their already hefty gains from the architectural improvements and the extra SM. Interestingly, Crysis looks to be the paragon of the average case, as it benefits some from the arch/SM improvements, but not a ton.

A subset of our compute benchmarks is much more straightforward here: Folding@Home and SmallLuxGPU improve 6% and 7% respectively from the increase in SMs (theoretical improvement: 6.6%), and after the clockspeed boost they are 15% faster. From this it’s a safe bet that when GF110 reaches Tesla cards, the performance improvement for Tesla won’t be as great as it was for GeForce, since the architectural improvements were purely for gaming purposes. On the flip side, with so many SMs currently disabled, if NVIDIA can get a 16 SM Tesla out, the performance increase should be massive.
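For reference, the theoretical ceiling for shader-bound workloads can be worked out from the two cards' published SM counts and core clocks. A quick back-of-the-envelope sketch (not part of our benchmark suite):

```python
# Theoretical compute scaling from GTX 480 to GTX 580, assuming a
# perfectly shader-bound workload. Uses published specs: 15 vs 16 SMs,
# 700MHz vs 772MHz core clock. Ignores memory bandwidth and ROPs.
gtx480 = {"sms": 15, "core_mhz": 700}
gtx580 = {"sms": 16, "core_mhz": 772}

sm_gain = gtx580["sms"] / gtx480["sms"] - 1                 # 16th SM enabled
clock_gain = gtx580["core_mhz"] / gtx480["core_mhz"] - 1    # higher clocks
combined = (1 + sm_gain) * (1 + clock_gain) - 1             # multiplicative ceiling

print(f"SM gain: {sm_gain:.1%}, clock gain: {clock_gain:.1%}, "
      f"combined: {combined:.1%}")
```

The ~17.6% combined ceiling is a bit above the measured 15%, which is expected: neither compute benchmark is perfectly shader-bound.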

GTX 580 SLI: Setting New Dual-GPU Records
82 Comments
  • anactoraaron - Wednesday, November 10, 2010 - link

    They must be under new management as of this year, because the best deals there these days are on watches and coffee makers, LOL. Seriously, when Best Buy starts to beat you on components and not just on laptops (BB has been beating Newegg on laptop prices for at least 2 years now), something is wrong. Newegg is not the company we have grown to love anymore, and it's sad. I haven't bought anything there this year (but continue to buy tech items)... and this comment made me laugh:
    When we first saw Newegg post their GTX 580s for sale our jaw dropped as they were all $50-$80 over NVIDIA’s MSRP
    -AND-
    However after checking out MWave, Tiger Direct, the EVGA Store, and others, we saw at least 1 card at MSRP at each store...
    So it's official
    NEWEGG = FAIL
  • Hauk - Wednesday, November 10, 2010 - link

    They run graphics card sales like the stock market. There's someone inside very skilled and knowledgeable about the industry. They're an 800,000-pound gorilla when it comes to graphics cards.
  • AnnonymousCoward - Thursday, November 11, 2010 - link

    I use newegg for the reviews, and Google Shopping to buy.
  • alha - Wednesday, November 10, 2010 - link

    Looking to upgrade a folding rig, and I have been watching the new NVIDIA cards with interest since the 4xx series rolled out. I read an early review of the 580, and one thing mentioned was that something like 380 million circuits (?) were disabled to get the heat down and gaming perf up. If my main concern is folding perf, heat and power consumption be damned, would the full-bore, un-gimped 480 be theoretically better for this? It seems the price for either version "superclocked" is currently pretty close, so $ isn't really a dealbreaker; I just want the best perf. Thoughts?
  • AnnonymousCoward - Thursday, November 11, 2010 - link

    I think the last page shows that the stream processors perform the same, whether in a 580 or a 480. As Ryan pointed out, Folding and SmallLux performed 6-7% higher, the same as the increase in SPs that the 580 has. To get a rough index of performance for any Fermi part, just multiply the number of SPs by the core clock.
  • IceDread - Thursday, November 11, 2010 - link

    Why are there no 5970s in Crossfire in the tests?
  • piroroadkill - Thursday, November 11, 2010 - link

    Yeah, I already commented on that much earlier. They said they didn't have a second 5970.
  • medi01 - Friday, November 12, 2010 - link

    The problem is that not only is "oh, we didn't have a second 5970" (a card that has been out for how many months?) a rather strange excuse, but it still doesn't justify claiming NVIDIA set a "new record" in a dual-GPU config.
  • Haydyn323 - Friday, November 12, 2010 - link

    They said "dual GPU" config. A 5970 has 2 GPUs in it already. Thus a single 5970 vs. 580 SLI is comparing 2 GPUs to 2 GPUs.

    Crossfire 5970s would be quad GPU vs. 2x 580 GPUs in SLI.
  • piroroadkill - Friday, November 12, 2010 - link

    This is about price points, not the technical details.
