Compute & Normalized Numbers

Moving on from our look at gaming performance, we have our customary look at compute performance, bundled with a look at theoretical tessellation performance. Unlike our gaming benchmarks where NVIDIA’s architectural enhancements could have an impact, everything here should be dictated by the core clock and SMs, with the GTX 570’s slight core clock advantage over the GTX 480 defining most of these tests.

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes.

The core clock advantage for the GTX 570 here is 4.5%; in practice it leads to a difference of less than 2% for Civilization V’s texture decompression test. Even the lead over the GTX 470 is a bit less than usual, at 23%. Nor should the lack of a competitive placement from an AMD product be a surprise, as NVIDIA’s cards consistently do well at this test, lending credit to the idea that it’s a compute application better suited for NVIDIA’s scalar processor design.
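The 4.5% figure above is simply the ratio of the two cards' reference core clocks. As a quick sanity check (assuming NVIDIA's published reference clocks of 732MHz for the GTX 570 and 700MHz for the GTX 480):

```python
# Theoretical clock-limited speedup of the GTX 570 over the GTX 480,
# computed from the cards' reference core clocks.
gtx570_clock_mhz = 732
gtx480_clock_mhz = 700

advantage_pct = (gtx570_clock_mhz / gtx480_clock_mhz - 1) * 100
print(f"Theoretical core clock advantage: {advantage_pct:.1f}%")
```

Since the cards have the same number of SMs, this ratio is the most any purely shader-bound test should gain; the sub-2% result in Civ V suggests its decompression test isn't entirely shader-bound.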

Our second GPU compute benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. While it’s still in beta, SmallLuxGPU recently hit a milestone by implementing a complete ray tracing engine in OpenCL, allowing them to fully offload the process to the GPU. It’s this ray tracing engine we’re testing.

SmallLuxGPU is rather straightforward in its requirements: compute and lots of it. The GTX 570’s core clock advantage over the GTX 480 drives a fairly straightforward 4% performance improvement, roughly in line with the theoretical maximum. The reduction in memory bandwidth and L2 cache does not seem to impact SmallLuxGPU. Meanwhile the advantage over the GTX 470 doesn’t quite reach its theoretical maximum, but the GTX 570 is still 27% faster.

However, as was the case with the GTX 580, all of the NVIDIA cards fall behind AMD’s faster cards here; the GTX 570 only lands between the 6850 and 6870 in performance, thanks to AMD’s compute-heavy VLIW5 design, which SmallLuxGPU excels at exploiting. The situation is quite bad for the GTX 570 as a result, with the top card being the Radeon 5870, which the GTX 570 trails by 27%.

Our final compute benchmark is a Folding @ Home benchmark. Given NVIDIA’s focus on compute for Fermi and in particular GF110 and GF100, cards such as the GTX 580 can be particularly interesting for distributed computing enthusiasts, who are usually looking for the fastest card in the coolest package.

Once more the performance advantage for the GTX 570 matches its core clock advantage. If not for the fact that a DC project like F@H is trivial to scale to multi-GPU configurations, the GTX 570 would likely be the sweet spot for price, performance, and power/noise.

Finally, to take another look at the GTX 570’s performance, we have the return of the normalized data view that we first saw with our look at the GTX 580. Unlike the GTX 580, which had similar memory/ROP capabilities to the GTX 480 but more SMs, the GTX 570 contains the same number of SMs with fewer ROPs and a narrower memory bus. As such, while the normalized dataset for the GTX 580 shows the advantage of GF110’s architectural enhancements and the higher SM count, the normalized dataset for the GTX 570 shows the architectural enhancements alongside the impact of lost memory bandwidth, ROPs, and L2 cache.
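One way to picture the normalized comparison is to scale the GTX 570's measured results down by its clock advantage, so that whatever difference remains reflects the architectural changes and the narrower memory subsystem rather than clock speed. A minimal sketch of that idea (the linear-scaling assumption and the helper name are ours, not part of the original test methodology):

```python
def normalize_to_gtx480_clock(fps_570: float,
                              clock_570_mhz: float = 732.0,
                              clock_480_mhz: float = 700.0) -> float:
    """Estimate what a GTX 570 would score at GTX 480 clocks,
    assuming performance scales linearly with core clock."""
    return fps_570 * (clock_480_mhz / clock_570_mhz)

# A hypothetical game where the GTX 570 scores 60fps normalizes to ~57.4fps;
# comparing that figure against the GTX 480's actual result isolates the
# architecture and memory-subsystem differences from the clock difference.
print(normalize_to_gtx480_clock(60.0))
```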

The results certainly paint an interesting picture. Just about everything is ultimately affected by the lack of memory bandwidth, L2 cache, and ROPs; if the GTX 570 didn’t normally have its core clock advantage, it would generally lose to the GTX 480 by small amounts. The standouts here include STALKER, Mass Effect 2, and BattleForge which are all clearly among the most memory-hobbled titles.

On the other hand we have DIRT 2 and HAWX, both of which show a 4% improvement even though our normalized GTX 570 is worse compared to a GTX 480 in every way except architectural advantages. Clearly these were some of the games NVIDIA had in mind when they were tweaking GF110.


54 Comments


  • Oxford Guy - Wednesday, December 08, 2010 - link

    Does Nvidia not want people to use Unigine since it showed the 480 beating the pants off the 580 in minimum frame rate at 1920x1200 and lower resolutions?

    I've noticed a definite lack of Unigine on review sites for the 570.
  • stangflyer - Tuesday, December 07, 2010 - link

Any idea why the 580 SLI takes such a huge dump going from 1920 res to 2560 res? It loses half its framerate! It has 1.5 gigs of memory vs the 5870's 1 gig, and the 5870 CrossFire goes from 50 fps at 1920 to 37 at 2560. The 580 SLI goes from 72 fps at 1920 to 36 at 2560.

    Any ideas??
  • SmCaudata - Tuesday, December 07, 2010 - link

It seems that AMD is finally getting CrossFire scaling right. The new 68xx cards are better than the old, but the 5870 is scaling as well as the Nvidia cards in a lot of cases. My guess is that with CrossFire or SLI the memory bandwidth is less of an issue. You don't fully double your framerate after all. It is likely more dependent on the GPU clock speed, which is an advantage for AMD.

    I am really just taking a guess here. The other option is that it is simply an immature driver and will be fixed later.
  • nitrousoxide - Wednesday, December 08, 2010 - link

Only when you use a dual-AMD-card configuration will you realize how much you suffer from its poor drivers. It's fast but buggy, and I've been waiting too long for AMD to finally come up with a Catalyst release that at least runs as stably as the nVidia driver. So please AMD, give us a nice driver!
  • Anchen - Tuesday, December 07, 2010 - link

    Hey,
    Good review overall for an apples to apples comparison. I would have liked to see what it did overclocked as some have mentioned. On the Metro 2033 page the article says the following:

    "While Metro was an outstanding game for the GTX 580 to show off its performance advantage, the situation is quite different for the GTX 470. Here it once again fulfills its role as a GTX 480 replacement, but it’s far more mortal when it comes to being compared to other cards. "

    In the first sentence shouldn't it be "...the situation is quite different for the GTX570." and not the 470?
  • sanityvoid - Tuesday, December 07, 2010 - link

Much as I love this site, the color schemes for the charts are really getting old. Why can't all the colors be the same EXCEPT for the one being reviewed? We're mostly all adults and can read, so the other GPUs in the charts could be left all one color.

Some other sites do this, and it is much easier to read what is actually being reviewed, even if the review color is always the same on each chart. As it stands, it just adds to the clutter of the charts. The human eye/brain gets distracted easily.

    Other than that, another good job on the article.
  • Ryan Smith - Wednesday, December 08, 2010 - link

    Thanks for the feedback.

The colors are still a work in progress. We had some requests for additional colors in GPU articles to highlight the products we're immediately comparing the reviewed product to, which is what I did for this article. Certainly if you guys think this is too much, we can go back to fewer colors.
  • ATimson - Wednesday, December 08, 2010 - link

    Personally, my problem isn't so much that there are other colors, as that there's no good way to tell what they mean.

    Maybe one color for "other cards with benchmarks", one color for "immediate competition" (instead of each their own color), and a third for the product proper?
  • sanityvoid - Thursday, December 09, 2010 - link

    I really like this idea. All one color for 'set' of reviews (if multiple), and one color for primary.

    BTW, I didn't know others were asking for more colors. I guess do what others want. For me, personally, I like the one color for primary and one color for all others. It is just the easiest for 'first glance' to be easily distinguishable.

    Peace.
  • kirankowshik - Wednesday, December 08, 2010 - link

I don't know why I should go for the Nvidia GTX 580/570 series when I am getting almost the same (or better) performance with ATI Radeon cards for a lower price. The ATI HD 5970 is almost $30 cheaper than the GTX 580 but outperforms it in every single test. The 5870 is not very close, but at least somewhat close, and the performance of the GTX 570 over the 5870 does not justify the $100 gap between the two. Anyway, I think NVIDIA is just producing cards for name's sake. With the HD 6900 series coming up, I will not be surprised if they offer a huge performance leap over the GTX 580/570 for the same price. Again, it will be like when ATI released their first batch of DX11 cards and NVIDIA was struggling hard to come up with an answer to them.
