Crysis, BattleForge, Metro 2033, and HAWX

For the sake of completeness we have included both 2560x1600 and 1920x1200 results in our charts. However, with current GPU performance a triple-GPU setup only makes sense at 2560, so that’s the resolution we’re going to be focusing on for commentary and scaling purposes.

As we normally turn to Crysis as our first benchmark, it’s quite amusing to open with an exact tie. The triple GTX 580 setup exactly ties the triple 6970 setup at 2560x1600 with full Enthusiast settings, at 65.6fps. This is an appropriate allegory for AMD and NVIDIA’s relative performance as of late, as the two are normally very close when it comes to cards at the same price. It’s probably not the best start for the triple GTX 580 though, as it means NVIDIA’s lead at one and two cards has melted away by the third.

We have however finally established what it takes to play Crysis at full resolution on a single monitor with every setting turned up – it takes no fewer than three GPUs to do the job. Given traditional GPU performance growth curves, it should be possible to do this on a single GPU by early 2014 or so, only some 7 years after the release of Crysis: Warhead. If you want SSAA though, you may as well throw in another few years.

Moving on, it’s interesting to note that while we had a tie at 2560 with Enthusiast settings for the average framerate, the same cannot be said of the minimums. At 2560, no matter the quality, AMD has a distinct edge in the minimum framerate. This is particularly pronounced at 2560E, where moving from two to three GPUs causes a drop in the framerate on the GTX 580. This is probably a result of the differences in the cards’ memory capacity – additional GPUs require additional memory, and it seems the GTX 580 and its 1.5GB has reached its limit. We never seriously imagined we’d find a notable difference between 1.5GB and 2GB at this point in time, but here we are.

BattleForge is a shader-bound game that normally favors NVIDIA, and this doesn’t change with three GPUs. However even though it’s one of our more intensive games, three GPUs is simply overkill for one monitor.

Metro 2033 is the only other title in our current lineup that can challenge Crysis for the title of the most demanding game, and here that’s a bout it would win. Even with three GPUs we can’t crack 60fps, and we still haven’t enabled a few extra features such as Depth of Field. The 6970 and GTX 580 are normally close with one and two GPUs, and we see that relationship extend to three GPUs. The triple GTX 580 setup has the lead by under 2fps, but it’s not the lead one normally expects from the GTX 580.

Our next game is HAWX, a title that shifts us towards games that are CPU bound. Even with that it’s actually one of the most electrically demanding games in our test suite, which is why we use it as a backup for our power/temperature/noise testing. Here we see both the triple GTX 580 and triple 6970 crack 200fps at 2560, with the GTX 580 taking top honors.

Multi-GPU Scaling (% of previous configuration's performance)

                      Radeon HD 6970            GeForce GTX 580
GPUs                  1->2    2->3    1->3      1->2    2->3    1->3
Crysis G+E Avg        185%    134%    249%      181%    127%    230%
Crysis E              188%    142%    268%      184%    136%    252%
Crysis G+E Min        191%    141%    270%      181%    116%    212%
Crysis E Min          186%    148%    277%      185%     83%    155%
BattleForge           194%    135%    263%      199%    135%    269%
Metro 2033            180%    117%    212%      163%    124%    202%
HAWX                  190%    115%    219%      157%    117%    185%
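The percentages in the table follow a simple convention: each entry is the framerate of the larger configuration divided by the framerate of the smaller one. As a minimal sketch of the calculation – the framerate values below are illustrative assumptions, not figures from our test data:

```python
def scaling(fps_after: float, fps_before: float) -> float:
    """Performance of the larger GPU config as a percentage of the smaller one."""
    return fps_after / fps_before * 100

# Illustrative numbers only: a hypothetical card averaging 26.0fps alone,
# 48.1fps with two GPUs, and 65.0fps with three.
one_to_two = scaling(48.1, 26.0)    # ~185%
two_to_three = scaling(65.0, 48.1)  # ~135%
one_to_three = scaling(65.0, 26.0)  # ~250%
print(round(one_to_two), round(two_to_three), round(one_to_three))
```

Note that the 1->3 column is simply the product of the 1->2 and 2->3 columns, which is why near-linear dual-GPU scaling can still leave a mediocre triple-GPU total if the third card contributes little.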

Having taken a look at raw performance, what does the scaling situation look like? Altogether it’s very good. For a dual-GPU configuration the weakest game for both AMD and NVIDIA is Metro 2033, where AMD gets 180% and NVIDIA 163% of a single video card’s performance respectively. At the other end, NVIDIA manages almost perfect scaling for BattleForge at 199%, while AMD’s best showing is in the same game at 194%.

Adding in a third GPU shakes things up significantly, however. The theoretical best case for going from two GPUs to three is 150% – a third card can add at most half again the performance of two – and that appears to be a harder target to reach. At 142% under Crysis with Enthusiast settings AMD does quite well, which is why they close the overall performance gap there. NVIDIA doesn’t do quite as well, managing 136%. The weakest game for both, meanwhile, is HAWX, which is what we’d expect for a game passing 200fps and almost assuredly running straight into a CPU bottleneck.

The Crysis minimum framerates give us a moment’s pause though. AMD gets almost perfect scaling moving from two to three GPUs when it comes to minimum framerates in Crysis, while NVIDIA ends up losing performance with Enthusiast settings. This is likely less a story of GPU scaling and more a story of GPU memory, but regardless of the cause the outcome is a definite hit in performance. Thus while minimum framerate scaling from one to two GPUs is rather close between NVIDIA and AMD with full Enthusiast settings, and slightly in AMD’s favor with Gamer + Enthusiast, AMD has a definite advantage going from two to three GPUs in every game in this batch.

Sticking with average framerates and throwing out the clearly CPU-limited HAWX, neither side has a strong advantage moving from two GPUs to three; the average gain is 131%, or some 62% of the theoretical maximum gain. AMD does have a slight edge here, but keep in mind we’re looking at percentages, so AMD’s edge is often a couple of frames per second at best.
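That 62% figure comes from comparing the realized gain to the available headroom: going from two cards to three can at best add 50 points of scaling (150% versus the 100% baseline), so a 131% average captures 31 of the possible 50 points. A quick sketch of that arithmetic:

```python
def efficiency(scaling_pct: float, gpus_before: int, gpus_after: int) -> float:
    """Fraction of the theoretical scaling gain actually realized.

    scaling_pct is performance relative to the smaller config (100 = no gain).
    """
    theoretical_max = gpus_after / gpus_before * 100  # e.g. 3/2 -> 150%
    return (scaling_pct - 100) / (theoretical_max - 100)

# Average 2->3 scaling of 131% realizes 62% of the possible 50-point gain.
print(efficiency(131, 2, 3))  # 0.62
```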

Going from one GPU to two also gives AMD a minor advantage, with average performance of 186% for AMD versus 182% for NVIDIA. Much like we’ve seen in our individual GPU reviews though, the advantage almost constantly flip-flops based on the game being tested, which is why in the end the average gains are so close.


  • marc1000 - Monday, April 4, 2011 - link

    Ryan, if at all possible, please include a reference card for the "low-point" of performance. We rarely see good tests with mainstream cards, only the top tier ones.

    So if you can, please include a radeon-5770 or GTX460 - 2 of these cards should have the same performance as one of the big ones, so it would be nice to see how well they work by now.
  • Ryan Smith - Wednesday, April 6, 2011 - link

    These charts were specifically cut short as the focus was on multi-GPU configurations, and so that I could fit more charts on a page. The tests are the same tests we always run, so Bench or a recent article ( http://www.anandtech.com/show/4260/amds-radeon-hd-... ) is always your best buddy.
  • Arbie - Monday, April 4, 2011 - link

    Looking at your results, it seems that at least 99.9% of gaming enthusiasts would need nothing more than a single HD 6970. Never mind the wider population of PC-centric folk who read Anandtech.

    More importantly, this isn't going to change for several years. PC game graphics are now bounded by console capabilities, and those advance only glacially. In general, gamers with an HD 6850 (not a typo) or better will have no compelling reason to upgrade until around 2014! I'm very sad to say that, but think it's true.

    Of course there is some technical interest in how many more FPS this or that competing architecture can manage, but most of that is a holdover from previous years when these things actually mattered on your desktop. I'm not going to spend $900 to pack two giant cooling and noise problems into my PC for no perceptible benefit. Nor will anyone else, statistically speaking.

    The harm in producing such reports is that it spreads the idea that these multi-board configurations still matter. So every high-end motherboard that I consider for my next build packs in slots for two or even three graphics boards, and an NF-200 chip to make sure that third card (!) gets enough bandwidth. The mobos are bigger, hotter, and more expensive than they need to be, and often leave out stuff I would much rather have. Look at the Gigabyte P67A-UD7, for example. Full accommodation for pointless graphics overkill (praised in reviews), but *no* chassis fan controls (too mundane for reviewers to mention).

    I'd rather see Anandtech spend time on detailed high-end motherboard comparisons (eg. Asus Maximus IV vs. others) and components that can actually improve my enthusiast PC experience. Sadly, right now that seems to be limited to SSDs and you already try hard on those. Are we reduced to... fan controllers?

    Thanks,

    Arbie
  • erple2 - Tuesday, April 5, 2011 - link

    There are still several games that are not Console Ports (or destined to be ported to a console) that are still interesting to read about and subsequently benchmark. People will continue to complain that PC Gaming has been a steady stream of Console Ports, just like they have been since the PSX came out in late '95. The reality is that PC Gaming isn't dead, and probably won't die for a long while. While it may be true that EA and company generate most of their revenue from lame console rehash after lame console rehash, and therefore focus almost single-mindedly on that endeavor, there are plenty of other game publishers that aren't following that trend, thereby continuing to make PC Gaming relevant.

The last several motherboard reviews I've seen have more or less convinced me that motherboards just don't matter at all any more. Most (if not all) motherboards of a given chipset don't offer anything performance-wise over other competing motherboards.

    There are nice features here and there (Additional Fan Headers, more USB ports, more SATA Ports), but on the whole, there's nothing significant to differentiate one Motherboard from another, at least from a performance perspective.
  • 789427 - Monday, April 4, 2011 - link

I would have thought that someone would pay attention to whether throttling was occurring on any of the cards due to thermal overload.

    The reason is that due to differences in ventilation in the case, layout and physical card package, you'll have throttling at different times.

e.g. if the room was at a stinking hot 50C, the more aggressive the throttling, the greater the disadvantage to the card.

    Conversely, operating the cards at -5C would provide a huge advantage to the card with the worst heat/fan efficiency ratio.

    cb
  • TareX - Monday, April 4, 2011 - link

    I'm starting to think it's really getting less and less compelling to be a PC gamer, with all the good games coming out for consoles exclusively.

    Thank goodness for Arkham Asylum.
  • Golgatha - Monday, April 4, 2011 - link

    I'd like to see some power, heat, and PPD numbers for running Folding@Home on all these GPUs.
  • Ryan Smith - Monday, April 4, 2011 - link

The last time I checked, F@H did not have a modern Radeon client. If they did, we'd be using it much more frequently.
  • karndog - Monday, April 4, 2011 - link

C'mon man, you have an enthusiast rig with $1000 worth of video cards, yet you use a stock i7 at 3.3GHz??

    "As we normally turn to Crysis as our first benchmark it ends up being quite amusing when we have a rather exact tie on our hands."

Ummm, probably because you're CPU limited! Update to even a 2500K at 4.5GHz and I bet you'll see the Crossfire setup pull away from the SLI.
  • karndog - Monday, April 4, 2011 - link

Not trying to make fun of your test rig, if that's all you have access to. I'm just saying that people who are thinking about buying the tri-SLI / Crossfire video card setups reviewed here aren't running their CPUs at stock clock speeds, especially such low ones, which skews the results shown here.
