Closing Thoughts

Unlike our normal GPU reviews, looking at multi-GPU scaling is much more about the tests than it is about the architectures. With AMD and NVIDIA both using the same basic alternate frame rendering strategy, there's not a lot to separate the two on the technology side. Whether a game scales poorly or well has much more to do with the game than the GPU.

               Radeon HD 6970         GeForce GTX 580
GPUs           1->2   2->3   1->3     1->2   2->3   1->3
Avg. FPS Gain  185%   127%   236%     177%   121%   216%
Min. FPS Gain  196%   140%   274%     167%   85%    140%

(Each figure is the larger configuration's framerate as a percentage of the smaller configuration's.)

In terms of average FPS gains for two GPUs, AMD has the advantage here. At under 10% it's not much of an advantage, but it is mostly consistent. The same can be said for three GPUs, where going from two cards to three nets AMD 127% scaling versus 121% for NVIDIA. The fact that the Radeon HD 6970 is normally the weaker card in a single-GPU configuration makes things all the more interesting, though. Are we seeing AMD close the gap thanks to CPU bottlenecks, or are we really looking at an advantage for AMD's CrossFire scaling? One thing is certain: CrossFire scaling has gotten much better over the last year; at the start of 2010 these numbers would not have been nearly as close.
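To be clear about how to read these percentages, they appear to be relative framerates rather than literal gains (185% on a 1->2 jump means the pair runs at 185% of a single card's speed, where 200% would be perfect scaling). A minimal sketch of that arithmetic, using hypothetical FPS numbers rather than the review's actual data:

```python
# Sketch of how the scaling percentages are derived.
# The FPS values below are hypothetical examples, not review data.

def scaling(fps_base: float, fps_upgraded: float) -> float:
    """Framerate of the larger configuration as a percentage of the
    smaller one's. 200% on a 1->2 jump would be perfect scaling;
    150% would be perfect on a 2->3 jump."""
    return 100.0 * fps_upgraded / fps_base

# Hypothetical average FPS with one, two, and three cards:
one_gpu, two_gpu, three_gpu = 50.0, 92.5, 118.0

print(f"1->2: {scaling(one_gpu, two_gpu):.0f}%")    # 185%
print(f"2->3: {scaling(two_gpu, three_gpu):.0f}%")  # 128%
print(f"1->3: {scaling(one_gpu, three_gpu):.0f}%")  # 236%
```

Note that the 1->3 figure is simply the product of the two intermediate steps (1.85 × ~1.28 ≈ 2.36), which is a quick sanity check on any scaling table of this form.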

Overall the gains for SLI or CrossFire in a dual-GPU configuration are very good, which fits well with the fact that most users will never have more than two GPUs. Scaling is heavily game dependent, but on average it’s good enough that you’re getting your money’s worth from a second video card. Just don’t expect perfect scaling in more than a handful of games.

As for triple-GPU setups, the gains are decent, but on average they're not nearly as good. A lot of this has to do with the fact that some games simply don't scale beyond two GPUs at all: Civilization V always comes out as a loss, and the GPU-heavy Metro 2033 makes limited gains at best. With a single monitor it's hard to tell whether this is solely due to poor scaling or due to CPU limitations, but CPU limitations alone do not explain it all. There are a couple of cases where a triple-GPU setup makes sense with a single monitor, particularly Crysis, but elsewhere framerates are already quite high with two GPUs, leaving little for a third GPU to add. I believe super-sample anti-aliasing is the best argument for a triple-GPU setup with one monitor, but that restricts our options to NVIDIA, as they're the only vendor with DX10/DX11 SSAA.

Minimum framerates with three GPUs do give us reason to pause for a moment and ponder some things. In the games we collect minimum framerate data for, Crysis and Battlefield: Bad Company 2, AMD has a massive lead in minimum framerates. In practice I don't completely agree with the numbers, and it's unfortunate that most games don't generate consistent enough minimum framerates to be useful. From the two games we do test AMD definitely has an advantage, but having watched and played a number of games I don't believe this is consistent for every title. I suspect the games we can generate consistent data for happen to be the ones that favor the 6970, and likely because of its VRAM advantage at that.

Ultimately triple-GPU performance and scaling cannot be evaluated solely on a single monitor, which is why we won’t be stopping here. Later this month we’ll be looking at triple-GPU performance in a 3x1 multi-monitor configuration, which should allow us to put more than enough load on these setups to see what flies, what cracks under the pressure, and whether multi-GPU scaling can keep pace with such high resolutions. So until then, stay tuned.

Comments

  • taltamir - Sunday, April 3, 2011 - link

    wouldn't it make more sense to use a Radeon 6970 + 6990 together to get triple GPU?

    nVidia triple GPU seems to lower min FPS, that is just fail.

    Finally: where are the Eyefinity tests? None of the results were relevant since all are over 60fps with dual SLI.
    Triple monitor+ would actually be interesting to see
  • semo - Sunday, April 3, 2011 - link

    Ryan mentions in the conclusion that a triple monitor setup article is coming.

    ATI seems to be the clear winner here but the conclusion seems to downplay this fact. Also, the X58 platform isn't the only one that has more than 16 PCIe lanes...
  • gentlearc - Sunday, April 3, 2011 - link

    If you're considering going triple-GPU, I don't see how scaling matters other than as an FYI. There isn't a performance comparison, just more performance. You're not realistically going to sell both your 580s and go get three 6970s. I'd really like it if you looked at lower-end cards capable of triple-GPU and their merit. The 5770 in CrossFire was a great way of extending the life of one 5770. Two 260s were another sound choice by enthusiasts looking for a low-price-tag upgrade.

    So, the question I would like answered is whether triple-GPU is a viable option for extending the life of your currently compatible mobo. Can going triple-GPU extend the life of your i7 920 as a competent gaming machine until a complete upgrade makes more sense?

    SNB-E will be the CPU upgrade path, but it will be available around the time the next generation of GPUs is out. Is picking up a 2nd and/or 3rd GPU going to be a worthy upgrade, or is the loss on selling three GPUs to buy the next-gen cards too much?
  • medi01 - Sunday, April 3, 2011 - link

    Besides, a $350 GPU is being compared to a $500 GPU. Or so it was the last time I checked on Froogle (and that was today, the 3rd of April 2011).
  • A5 - Sunday, April 3, 2011 - link

    AT's editorial stance has always been that SLI/XFire is not an upgrade path, just an extra option at the high end, and doubly so for Tri-fire and 3x SLI.

    I'd think buying a 3rd 5770 would not be a particularly wise purchase unless you absolutely didn't have the budget to get 1 or 2 higher-end cards.
  • Mr Alpha - Sunday, April 3, 2011 - link

    I use RadeonPro to set up per-application CrossFire settings. While it is a bummer that it doesn't ship with AMD's drivers, per-application settings are not an insurmountable obstacle for AMD users.
  • BrightCandle - Sunday, April 3, 2011 - link

    I found this program recently and it has been a huge help. While Crysis 2 has flickering lights (don't get me started on this game's bugs!), using RadeonPro I could fix the CF profile and play happily, without shouting at ATI to fix their CF profiles again.
  • Pirks - Sunday, April 3, 2011 - link

    I noticed that you guys never employ useful distributed computing/GPU computing tests in your GPU reviews. You tend to employ some useless GPU computing benchmarks like some weird raytracers or something, I mean stuff people would not normally use. But you never employ really useful tests like say distributed.net's GPU computation clients, AKA dnetc. Those dnetc clients exist in AMD Stream and nVidia CUDA versions (check out http://www.distributed.net/Download_clients - see, they have CUDA 2.2, CUDA 3.1 and Stream versions too) and I thought you'd be using them in your benchmarks, but you don't, why?

    Also check out their GPU speed database at http://n1cgi.distributed.net/speed/query.php?cputy...

    So why don't you guys use this kind of benchmark in your future GPU computing speed tests instead of useless raytracer? OK if you think AT readers really bother with raytracers why don't you just add these dnetc GPU clients to your GPU computing benchmark suite?

    What do you think Ryan? Or is it someone else doing GPU computing tests in your labs? Is it Jarred maybe?

    I can help with setting up those tests but I don't know who to talk to among AT editors

    Thanks for reading my rant :)

    P.S. dnetc GPU client scales 100% _always_, like when you get three GPUs in your machine your keyrate in RC5-72 is _exactly_ 300% of your single GPU, I tested this setup myself once at work, so just FYI...
  • Arnulf - Sunday, April 3, 2011 - link

    "P.S. dnetc GPU client scales 100% _always_, like when you get three GPUs in your machine your keyrate in RC5-72 is _exactly_ 300% of your single GPU, I tested this setup myself once at work, so just FYI... "

    So you are essentially arguing that running dnetc tests makes no sense, since they scale perfectly proportionally with the number of GPUs?
  • Pirks - Sunday, April 3, 2011 - link

    No, I mean the general GPU reviews here, not this particular one about scaling
