Mass Effect 2, Wolfenstein, and Civ V Compute

Mass Effect 2 is a game we figured would be CPU limited with three GPUs, so it’s quite surprising that it’s not. It does look like there’s a ceiling at around 200fps, but we can’t hit it at 2560 even with three GPUs. With two or more GPUs, however, you can be quite confident that your framerates will be nothing short of amazing.

For that reason, and because ME2 is a DX9-only game, we also gave it a shot with SSAA on both the AMD and NVIDIA setups at 1920. Surprisingly, it’s almost fluid in this test even with one GPU. Move to two GPUs and we’re looking at 86fps – again, this is with 4x supersampling going on. I don’t think we’re too far off from being able to supersample a number of games (at least the console ports) with this kind of performance.

Wolfenstein is quite CPU limited even with two GPUs, so we didn’t expect much with three GPUs. In fact the surprising bit wasn’t the performance, it was the fact that AMD’s drivers completely blew a gasket with this game. It runs fine with two GPUs, but with three GPUs it crashes almost immediately after launch. Short of a BSOD, this is the worst possible failure mode for an AMD setup, as AMD does not provide individual game settings for CrossFire, unlike NVIDIA, which allows enabling/disabling SLI on a game-specific basis. As a result, the only way to play Wolfenstein on a triple-GPU setup is to change CrossFire modes globally, which requires a hardware reconfiguration that takes several seconds and a couple of blank screens.

We only have one OpenGL game in our suite, so we can’t tell whether this is a broader AMD OpenGL issue or solely an issue with Wolfenstein. It’s disappointing to see AMD have this problem, though.

We don’t normally look at multi-GPU numbers with our Civilization V compute test, but in this case we had the data, so we wanted to throw it out there as an example of where SLI/CF and the concept of alternate frame rendering just don’t contribute much to a game. Texture decompression needs to happen on each card, so it can’t be divided up the way rendering can. As a result, additional GPUs reduce NVIDIA’s score, while a second GPU does end up helping AMD some, only for a third GPU to bring scores crashing down. None of these scores are worth worrying about; the result is still more than fast enough for the leader scenes the textures are used for, but it’s a nice theoretical example.
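To make the alternate-frame-rendering point a bit more concrete, here is a minimal toy model in Python. It is purely illustrative: the afr_fps and duplicated_compute_ms helpers, and every timing in them, are invented for the sketch and nothing here touches a real GPU. The idea is simply that AFR hands each GPU a different frame, so ideal throughput grows with the GPU count, while a task every GPU must repeat on its own copy of the data (such as decompressing the textures it will sample) gains nothing and can even lose a little to synchronization.

```python
# Toy model: AFR rendering vs. a compute task duplicated on every GPU.
# All timings are made-up illustrative numbers, not measurements.

def afr_fps(frame_time_ms: float, num_gpus: int) -> float:
    """Alternate frame rendering: each GPU renders a different frame,
    so ideal throughput scales with the number of GPUs."""
    return 1000.0 / frame_time_ms * num_gpus

def duplicated_compute_ms(task_time_ms: float, num_gpus: int,
                          sync_overhead_ms: float = 0.5) -> float:
    """A task each GPU must run on its own data cannot be split;
    extra GPUs only add driver/synchronization overhead."""
    return task_time_ms + sync_overhead_ms * (num_gpus - 1)

for gpus in (1, 2, 3):
    print(f"{gpus} GPU(s): AFR ~{afr_fps(10.0, gpus):.0f} fps, "
          f"duplicated compute ~{duplicated_compute_ms(5.0, gpus):.1f} ms")
```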

                     Radeon HD 6970            GeForce GTX 580
GPUs                 1->2    2->3    1->3      1->2    2->3    1->3
Mass Effect 2        180%    142%    158%      195%    139%    272%
Mass Effect 2 SSAA   187%    148%    280%      198%    138%    284%
Wolfenstein          133%    0%      0%        151%    96%     145%
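As a quick aid to reading the table, the 1->3 column is simply the product of the two incremental steps, since fps3/fps1 = (fps2/fps1) * (fps3/fps2). A minimal Python sketch using two rows from the table above (the combined_scaling helper is just for illustration; small mismatches against the listed 1->3 figures come from the per-step percentages being rounded from the raw framerates):

```python
# Cumulative scaling is the product of the per-step factors:
# fps3 / fps1 = (fps2 / fps1) * (fps3 / fps2)

def combined_scaling(step_1_to_2_pct: float, step_2_to_3_pct: float) -> float:
    """Combine two per-step scaling percentages into a 1->3 percentage."""
    return step_1_to_2_pct * step_2_to_3_pct / 100.0

# Rows taken from the table above
print(f"GTX 580, Wolfenstein: ~{combined_scaling(151, 96):.0f}%")   # table lists 145%
print(f"HD 6970, ME2 SSAA:    ~{combined_scaling(187, 148):.0f}%")  # table lists 280%
```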

Since Wolfenstein is so CPU limited, the scaling story in these games is really about Mass Effect 2. Again, dual-GPU scaling is really good, both with MSAA and SSAA; NVIDIA in particular achieves almost perfect scaling. What makes this all the more interesting is that with three GPUs the roles are reversed: scaling is still strong, but now it’s AMD achieving almost perfect scaling in Mass Effect 2 with SSAA, which is quite a feat given the uneven scaling of triple-GPU configurations overall. It’s just a shame that AMD doesn’t have an SSAA mode for DX10/DX11 games; if it were anything like their DX9 SSAA mode, it could certainly sell the idea of a triple-GPU setup to users looking to completely eliminate all forms of aliasing at any price.

As for Wolfenstein, with two GPUs NVIDIA has the edge, but they also had the lower framerate to begin with. With the game undoubtedly CPU limited even with two GPUs, there’s not much to draw from here.

Comments

  • taltamir - Sunday, April 3, 2011 - link

    wouldn't it make more sense to use a Radeon 6970 + 6990 together to get triple GPU?

    nVidia triple GPU seems to lower min FPS, that is just fail.

    Finally: where are the Eyefinity tests? None of the results are really relevant since all of them are over 60fps with dual SLI.
    Triple monitor+ would actually be interesting to see.
  • semo - Sunday, April 3, 2011 - link

    Ryan mentions in the conclusion that a triple monitor setup article is coming.

    ATI seems to be the clear winner here, but the conclusion downplays this fact. Also, the X58 platform isn't the only one that has more than 16 PCIe lanes...
  • gentlearc - Sunday, April 3, 2011 - link

    If you're considering going triple-GPU, I don't see how scaling matters other than as an FYI. There isn't a performance comparison, just more performance. You're not going to realistically sell both your 580s and go get three 6970s. I'd really like it if you looked at lower-end cards capable of triple-GPU and their merit. Crossfiring the 5770 was a great way of extending the life of one 5770. Two 260s were another sound choice by enthusiasts looking for a low price tag upgrade.

    So, the question I would like answered is whether triple-GPU is a viable option for extending the life of your currently compatible mobo. Can going triple-GPU extend the life of your i7 920 as a competent gaming machine until a complete upgrade makes more sense?

    SNB-E will be the CPU upgrade path, but it will be available around the time the next generation of GPUs is out. Is picking up a 2nd and/or 3rd GPU going to be a worthy upgrade, or is the loss on selling three GPUs to buy the next-gen cards too much?
  • medi01 - Sunday, April 3, 2011 - link

    Besides, a $350 GPU is being compared to a $500 GPU. Or so it was the last time I checked on Froogle (and that was today, April 3rd, 2011).
  • A5 - Sunday, April 3, 2011 - link

    AT's editorial stance has always been that SLI/XFire is not an upgrade path, just an extra option at the high end, and doubly so for Tri-fire and 3x SLI.

    I'd think buying a 3rd 5770 would not be a particularly wise purchase unless you absolutely didn't have the budget to get 1 or 2 higher-end cards.
  • Mr Alpha - Sunday, April 3, 2011 - link

    I use RadeonPro to set up per-application CrossFire settings. While it is a bummer that it doesn't ship with AMD's drivers, per-application settings are not an insurmountable obstacle for AMD users.
  • BrightCandle - Sunday, April 3, 2011 - link

    I found this program recently and it has been a huge help. While Crysis 2 has flickering lights (don't get me started on this game's bugs!), using RadeonPro I could fix the CF profile and play happily, without shouting at ATI to fix their CF profiles, again.
  • Pirks - Sunday, April 3, 2011 - link

    I noticed that you guys never employ useful distributed computing/GPU computing tests in your GPU reviews. You tend to employ some useless GPU computing benchmarks like some weird raytracers or something, I mean stuff people would not normally use. But you never employ really useful tests like, say, distributed.net's GPU computation clients, AKA dnetc. Those dnetc clients exist in AMD Stream and nVidia CUDA versions (check out http://www.distributed.net/Download_clients - see, they have CUDA 2.2, CUDA 3.1 and Stream versions too) and I thought you'd be using them in your benchmarks, but you don't. Why?

    Also check out their GPU speed database at http://n1cgi.distributed.net/speed/query.php?cputy...

    So why don't you guys use this kind of benchmark in your future GPU computing speed tests instead of a useless raytracer? OK, if you think AT readers really do bother with raytracers, why don't you just add these dnetc GPU clients to your GPU computing benchmark suite?

    What do you think Ryan? Or is it someone else doing GPU computing tests in your labs? Is it Jarred maybe?

    I can help with setting up those tests but I don't know who to talk to among AT editors

    Thanks for reading my rant :)

    P.S. dnetc GPU client scales 100% _always_, like when you get three GPUs in your machine your keyrate in RC5-72 is _exactly_ 300% of your single GPU, I tested this setup myself once at work, so just FYI...
  • Arnulf - Sunday, April 3, 2011 - link

    "P.S. dnetc GPU client scales 100% _always_, like when you get three GPUs in your machine your keyrate in RC5-72 is _exactly_ 300% of your single GPU, I tested this setup myself once at work, so just FYI... "

    So you are essentially arguing that running dnetc tests makes no sense, since they scale perfectly proportionally with the number of GPUs?
  • Pirks - Sunday, April 3, 2011 - link

    No, I mean the general GPU reviews here, not this particular one about scaling
