Crysis, BattleForge, Metro 2033, and HAWX

For the sake of completeness we have included both 2560x1600 and 1920x1200 results in our charts. However with current GPU performance a triple-GPU setup only makes sense at 2560, so that’s the resolution we’re going to be focusing on for commentary and scaling purposes.

As we normally turn to Crysis for our first benchmark, it's amusing to find an exact tie on our hands. The triple GTX 580 setup exactly ties the triple 6970 setup at 2560x1600 with full Enthusiast settings, both at 65.6fps. This is quite an appropriate allegory for AMD and NVIDIA's relative performance as of late, as the two are normally very close when it comes to cards at the same price. It's not the best start for the triple GTX 580, however, as it means NVIDIA's lead at one and two cards has melted away by the third.

We have however finally established what it takes to play Crysis at full resolution on a single monitor with every setting turned up – it takes no fewer than three GPUs to do the job. Given traditional GPU performance growth curves, it should be possible to do this on a single GPU by early 2014 or so, only some 7 years after the release of Crysis: Warhead. If you want SSAA though, you may as well throw in another few years.

Moving on, it’s interesting to note that while we had a tie at 2560 with Enthusiast settings for the average framerate, the same cannot be said of the minimums.  At 2560, no matter the quality, AMD has a distinct edge in the minimum framerate. This is particularly pronounced at 2560E, where moving from two to three GPUs causes a drop in the framerate on the GTX 580. This is probably a result of the differences in the cards’ memory capacity – additional GPUs require additional memory, and it seems the GTX 580 and its 1.5GB has reached its limit. We never seriously imagined we’d find a notable difference between 1.5GB and 2GB at this point in time, but here we are.

BattleForge is a shader-bound game that normally favors NVIDIA, and this doesn’t change with three GPUs. However even though it’s one of our more intensive games, three GPUs is simply overkill for one monitor.

Metro 2033 is the only other title in our current lineup that can challenge Crysis for the title of most demanding game, and it's a bout Metro would win. Even with three GPUs we can't crack 60fps, and we still haven't enabled a few extra features such as depth of field. The 6970 and GTX 580 are normally close with one and two GPUs, and we see that relationship extend to three GPUs. The triple GTX 580 setup has the lead by under 2fps, but it's not the lead one normally expects from the GTX 580.

Our next game is HAWX, a title that shifts us towards games that are CPU bound. Even with that it’s actually one of the most electrically demanding games in our test suite, which is why we use it as a backup for our power/temperature/noise testing. Here we see both the triple GTX 580 and triple 6970 crack 200fps at 2560, with the GTX 580 taking top honors.

                    Radeon HD 6970          GeForce GTX 580
GPUs                1->2   2->3   1->3      1->2   2->3   1->3
Crysis G+E Avg      185%   134%   249%      181%   127%   230%
Crysis E Avg        188%   142%   268%      184%   136%   252%
Crysis G+E Min      191%   141%   270%      181%   116%   212%
Crysis E Min        186%   148%   277%      185%    83%   155%
BattleForge         194%   135%   263%      199%   135%   269%
Metro 2033          180%   117%   212%      163%   124%   202%
HAWX                190%   115%   219%      157%   117%   185%

Having taken a look at raw performance, what does the scaling situation look like? Altogether it's very good. For a dual-GPU configuration the weakest game for both AMD and NVIDIA is Metro 2033, where AMD gets 180% and NVIDIA manages 163% of a single video card's performance. At the other end, NVIDIA manages almost perfect scaling in BattleForge at 199%, while AMD's best showing is in the same game at 194%.

Adding in a third GPU significantly shakes things up, however. The best-case scenario for going from two GPUs to three is 150%, which appears to be a harder target to reach. At 142% under Crysis with Enthusiast settings AMD does quite well, which is why it closes the overall performance gap there. NVIDIA doesn't do quite as well, managing 136%. The weakest game for both, meanwhile, is HAWX, which is what we'd expect for a game passing 200fps and almost assuredly running straight into a CPU bottleneck.

The Crysis minimum framerates give us a moment's pause though. AMD gets almost perfect scaling moving from two to three GPUs when it comes to minimum framerates in Crysis, while NVIDIA ends up losing performance here with Enthusiast settings. This is likely less a story of GPU scaling and more a story about GPU memory, but regardless the outcome is a definite hit to performance. Thus while minimum framerate scaling from one to two GPUs is rather close between NVIDIA and AMD with full Enthusiast settings, and slightly in AMD's favor with Gamer + Enthusiast, AMD has a definite advantage going from two to three GPUs in every game in this batch.

Sticking with average framerates and throwing out the clearly CPU-limited HAWX, neither side has a strong advantage moving from two GPUs to three; the average gain is 131%, or some 62% of the theoretical maximum. AMD does have a slight edge here, but keep in mind we're looking at percentages, so AMD's edge is often a couple of frames per second at best.
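For clarity, here is how that 62% figure falls out of the numbers: perfect scaling from two GPUs to three is 3/2, i.e. 150%, a 50-point gain, and the measured 131% average represents 31 of those 50 points. A quick sketch of the arithmetic:

```python
# Going from two GPUs to three, perfect scaling is 3/2 of the dual-GPU result.
ideal_two_to_three = 3 / 2 * 100   # 150%
average_measured = 131             # average 2->3 scaling in this batch of games

# Express the measured gain as a fraction of the ideal gain.
ideal_gain = ideal_two_to_three - 100   # 50 percentage points available
measured_gain = average_measured - 100  # 31 percentage points achieved
fraction = measured_gain / ideal_gain   # 0.62 -> "some 62% of the maximum"
print(f"{fraction:.0%}")
```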

Going from one GPU to two also gives AMD a minor advantage, with average performance of 186% for AMD versus 182% for NVIDIA. Much like we've seen in our individual GPU reviews though, the lead flip-flops based on the game being tested, which is why in the end the average gains are so close.

97 Comments

  • Sabresiberian - Tuesday, April 5, 2011 - link

    I've been thinking for quite a while that we need something different, and this is the primary reason why - I can't get all I want to install on any ATX mainboard I know of.

    ;)
  • Sabresiberian - Tuesday, April 5, 2011 - link

    I've always thought minimum frame rate is where the focus should be in graphics card tests (when looking at the frame rate performance aspect), instead of the average. It's the minimum frame rate that bothers people or even makes a game unplayable.

    Thanks!

    ;)
  • mapesdhs - Wednesday, April 6, 2011 - link


    I hate to say it but with the CPU at only 3.33GHz, the results don't really mean that much. I know the 920 used can't go higher, but it just seems a bit pointless to do all these tests when the results can't really be used as the basis for making a purchasing decision because of a very probable CPU bottleneck. Surely it would have been sensible for an article like this to replace the 920 with a 950 and redo the overclock to 4GHz+. The 950 is good value now as well. Or even the entry-level 6-core.

    Re slot spacing, perhaps if one insists on using P67 it can be hard to sort that out, but there *are* X58 boards which provide what one needs, e.g. the ASRock X58 Extreme6 does have double-slot spacing between each PCIe slot, so 3 dual-slot cards would have a fully empty slot between each card for better cooling. Do other vendors make a board like this? I couldn't find one after a quick check of the Gigabyte or ASUS sites. The only downside is that with all 3 slots used the Extreme6 operates slots 2 and 3 at 8x/8x; for many games this isn't an issue (it depends on the game), but I'm sure some would moan nonetheless.

    Would be interesting to know how that would compare though, i.e. a 4GHz 950 on an Extreme6 for these tests.

    Unless I missed it somehow, I'm a tad surprised Gigabyte doesn't make an X58 board with this type of slot spacing, or do they?

    Ian.
  • xAlex79 - Thursday, April 14, 2011 - link

    I am a bit disappointed, Ryan, in the way you put your conclusions.

    At the start of the article you highlight how you are going to look at TriFire and Tri-SLI and compare how they do for the value.

    Yet at the end, in your conclusion, there isn't a single mention or even adjusted scores considering value at all. And that makes NVIDIA look a lot better than they should. It is as if you completely forgot that three 580s cost you $1500 and that three 6970s cost you $900.

    Based on that and the fact YOU stated you would take value into account (and personally I think posting any kind of review without value nowadays is just irresponsible and biased), I am very disappointed with an otherwise very good set of tests.

    I also understand that this is labeled "Part 1" and that the value might come in "Part 2", but you should have CLEARLY outlined that in your conclusion were that the case. And given the quality of reviews that we have come to expect from AnandTech, the final numbers should ALWAYS include a value perspective.

    I will just outline that it is poor form and not very professional, and that in the end the people you should care about are us, your readers. Not how you look or try to look to hardware manufacturers. If this was a mistake, you should correct it ASAP. It does not make you look good.
  • L1qu1d - Friday, April 15, 2011 - link

    I wonder why they didn't opt for the 270.51 drivers and instead went with 3-month-old drivers?

    Compared to the tested drivers:

    GeForce GTX 580:

    Up to 516% in Dragon Age 2 (SLI 2560x1600 8xAA/16xAF Very High, SSAO on)
    Up to 326% in Dragon Age 2 (1920x1200 8xAA/16xAF Very High, SSAO on)
    Up to 11% in Just Cause 2 (1920x1200 8xAA/16xAF, Concrete Jungle)
    Up to 11% in Just Cause 2 (SLI 2560x1600 8xAA/16xAF, Concrete Jungle)
    Up to 7% in Civilization V (1920x1200 4xAA/16xAF, Max settings)
    Up to 6% in Far Cry 2 (SLI 2560x1600 8xAA/16xAF, Max settings)
    Up to 5% in Civilization V (SLI 1920x1200 8xAA/16xAF, Max settings)
    Up to 5% in Left 4 Dead 2 (1920x1200 noAA/AF, Outdoor)
    Up to 5% in Left 4 Dead 2 (SLI 2560x1600 4xAA/16xAF, Outdoor)
    Up to 4% in H.A.W.X. 2 (SLI 1920x1200 8xAA/16xAF, Max settings)
    Up to 4% in Mafia 2 (SLI 2560x1600 AA on/16xAF, PhysX = High)
  • Fony - Thursday, April 28, 2011 - link

    taking forever for the Eyefinity/Surround testing.
  • vipergod2000 - Thursday, May 5, 2011 - link

    The one thing that irks me is that the i7-920 was OCed to only ~3.3GHz, greatly reducing the scaling of 3 cards compared to other forum users who run 3 or 4 cards in CFX or SLI with fantastic scaling - but they pair them with an i7-2600K at 5GHz minimum or a 980X/990X at 4.6GHz+.
