Back to Article

  • masterchi - Sunday, April 03, 2011 - link

    "Going from 1 GPU to 2 GPUs also gives AMD a minor advantage, with the average gain being 182% for NVIDIA versus 186% for AMD"

    This should be 78.6 and 87.7, respectively.
  • Ryan Smith - Sunday, April 03, 2011 - link

    To be clear, I'm only including the average FPS (and not the min FPS) of all the games except HAWX (which is CPU limited). Performance is as the numbers indicate: 182% and 186% of a single card's performance, respectively.
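
    To put the averaging in code - a quick sketch with hypothetical per-game numbers, not the actual review data:

```python
# Hypothetical single- and dual-GPU average FPS per game (HAWX excluded as CPU limited)
single = {"Crysis": 40.0, "BadCompany2": 60.0, "Metro2033": 35.0}
dual   = {"Crysis": 73.0, "BadCompany2": 112.0, "Metro2033": 64.0}

# Each game's scaling: dual-GPU FPS as a fraction of single-GPU FPS
scalings = [dual[g] / single[g] for g in single]

# The quoted figure is the mean of those per-game ratios
average = sum(scalings) / len(scalings)
print(f"Average dual-GPU performance: {average:.0%} of a single card")
# -> Average dual-GPU performance: 184% of a single card
```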
  • eddman - Sunday, April 03, 2011 - link

    Doesn't make sense. Are you saying that, for example, 580 SLI is 2.77 times faster than a single 580?
  • SagaciousFoo - Sunday, April 03, 2011 - link

    Think of it this way: A single 580 card is 100% performance. Two cards equals 186% performance. So the SLI setup is 14% shy of being 2x the performance of a single card.
  • AnnihilatorX - Sunday, April 03, 2011 - link

    This is typical percentage jargon. It's the author's miss, really.
    When you say 186% average gain, you mean 2.86 times the performance.
    When you say 86% average gain, you mean 186% the performance, and that's what's actually meant here.
    The keyword "gain" there is unnecessary and misleading.

    80% increase -> x 1.8
    180% increase -> x 2.8
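
    The same arithmetic in a couple of lines of code, with made-up FPS numbers:

```python
single_fps = 50.0   # one card (hypothetical)
sli_fps = 93.0      # two cards (hypothetical)

performance = sli_fps / single_fps   # 1.86 -> "186% the performance"
gain = performance - 1.0             # 0.86 -> "86% gain" / "86% increase"

print(f"Performance: {performance:.0%}, gain: {gain:.0%}")
# -> Performance: 186%, gain: 86%
```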
  • SlyNine - Sunday, April 03, 2011 - link

    Correct, it should read 186% of (i.e. times) a single card.

    But I am making the assumption that gain means addition.
  • DaFox - Sunday, April 03, 2011 - link

    That's unfortunately how it works when it comes to multi-GPU scaling. Ryan is just continuing the standard trend set by everyone else in the industry.
  • Bremen7000 - Sunday, April 03, 2011 - link

    No, he's just using misleading wording. The chart should either read "Performance: 186%" or "Performance gain: 86%", etc. This isn't hard.
  • sigmatau - Monday, April 04, 2011 - link

    The OP is correct. You cannot use "gain" and include the 100% of the first card. This is simple percentages.

    If I have one gallon of gas in my car and add one gallon, I gain 100%.

    I also have a total of 200% of what I had at the start.
  • Kaboose - Sunday, April 03, 2011 - link

    Finally! It has taken a while, but finally - and thank you!!! Multi-monitor is exactly what we need in these reviews, especially the 6990 and 590 reviews.
  • Ryan Smith - Sunday, April 03, 2011 - link

    It took a while, but we finally have 3 120Hz 1080P monitors on the way. So we'll be able to test Eyefinity, 3D Vision, and 3D Vision Surround, all of which have been neglected around here.
  • Kaboose - Sunday, April 03, 2011 - link

    I await these tests with breathless anticipation!
  • veri745 - Sunday, April 03, 2011 - link

    While this article was very well written, I think it is hardly worth it without the multi-monitor data. No-one (sane) is going to get 3x SLI/CF with a single monitor, so it's mostly irrelevant.

    The theoretical scaling comparison is interesting, but I'm a lot more interested in the scaling at 3240x1920 or 5760x1080.
  • DanNeely - Sunday, April 03, 2011 - link

    This is definitely a step in the right direction; but with other sites having 3x 1920x1200 or even 3x 2560x1600 test setups you'll still be playing catchup.
  • RK7 - Sunday, April 03, 2011 - link

    Finally! I created an account just to write this comment :) That's what's been missing and what definitely needs to be tested! Especially 3D Vision Surround - it's good to know whether it's worth putting so much money into such a setup, because a single card may already be on the edge of playable performance for modern games in stereoscopic mode on a single monitor. A good example is Metro 2033, which is mind-blowing in 3D, but I found that with a single GTX 570@900MHz it's only playable at 1600x900 in 3D with maximum settings (no DoF or AA), and even then it can drop to ~12 fps in action scenes with heavy lighting... So if three cards can achieve good scaling and provide per-monitor performance on a 3-monitor setup close to a single card driving one monitor, then we're there and it's definitely worth it. But if the numbers are anything like those for single-monitor scaling, then folks should be aware that there's no way to get maximum visual quality on 3 monitors with current hardware...
  • Dustin Sklavos - Monday, April 04, 2011 - link

    Not completely neglected. I've added triple-monitor surround testing to my boutique desktop reviews whenever able. :)
  • Crazymech - Sunday, April 03, 2011 - link

    I'm having my doubts about the capabilities of the 920 OC'd to 3.33 GHz matched up with 3 of the most powerful single GPUs.

    I understand straying away from SB because of the lanes, but you could at least have upped the OC to 3.8-4.0, which many people do (and I would think most who consider a triple setup would).

    To underline it I point to the small differences between the 4.5 GHz 2600K and the lower-overclocked one in the boutique build reviews, with the highest-clocked CPU coupled with weaker GPUs nipping at the heels of the more powerful GPUs.

    I suggest you at least experiment in a single test (say Metro, for example, or Battlefield) with what a higher-clocked X58 (or the 980's 6 cores) could do for the setup.
    If I'm wrong, it would at least be good to know that.
  • BrightCandle - Sunday, April 03, 2011 - link

    The fact that Sandy Bridge has a PCIe lane limitation is grounds for testing its impact.

    Still, I would rather see the numbers on X58 and triple-screen gaming before seeing the impact SB makes on the performance of SLI/CF setups.
  • Ryan Smith - Sunday, April 03, 2011 - link

    For what it's worth, 3.33GHz is actually where this specific 920 tops out. It won't take 3.5GHz or higher, unfortunately.

    We'll ultimately upgrade our testbed to SNB - this article is the impetus for that - but that's not going to happen right away.
  • Crazymech - Monday, April 04, 2011 - link

    It won't take 3.5? Really? That amazes me.
    Though it's very unfortunate for the purposes of this test.

    The main focus is (of course) always on how a new GPU improves the framerate relative to a standard CPU, but once in a while it would be interesting to see what different CPUs do to the GPU and FPS as well. Like the old Doom III articles showing the Athlon dominating the Pentium IV.

    Thanks for the answer anyhow :).
  • taltamir - Sunday, April 03, 2011 - link

    wouldn't it make more sense to use a Radeon 6970 + 6990 together to get triple GPU?

    nVidia triple GPU seems to lower min FPS; that is just fail.

    Finally: where are the Eyefinity tests? None of the results are relevant since all are over 60fps with dual SLI.
    Triple monitor+ would actually be interesting to see.
  • semo - Sunday, April 03, 2011 - link

    Ryan mentions in the conclusion that a triple monitor setup article is coming.

    ATI seems to be the clear winner here, but the conclusion seems to downplay this fact. Also, the X58 platform isn't the only one that has more than 16 PCIe lanes...
  • gentlearc - Sunday, April 03, 2011 - link

    If you're considering going triple-GPU, I don't see how scaling matters other than as an FYI. There isn't a performance comparison, just more performance. You're not going to realistically sell both your 580s and go get three 6970s. I'd really like it if you looked at lower-end cards capable of triple-GPU and their merit. A crossfired 5770 was a great way of extending the life of one 5770. Two 260s were another sound choice for enthusiasts looking for a low-price-tag upgrade.

    So, the question I would like answered is whether triple GPU is a viable option for extending the life of your currently compatible mobo. Can going triple-GPU extend the life of your i7 920 as a competent gaming machine until a complete upgrade makes more sense?

    SNB-E will be the CPU upgrade path, but will be available around the time the next generation of GPUs is out. Is picking up a 2nd and/or 3rd GPU going to be a worthy upgrade, or is the loss from selling three GPUs to buy next-gen cards too much?
  • medi01 - Sunday, April 03, 2011 - link

    Besides, a $350 GPU is being compared to a $500 GPU. Or so it was the last time I checked on Froogle (and that was today, the 3rd of April 2011).
  • A5 - Sunday, April 03, 2011 - link

    AT's editorial stance has always been that SLI/XFire is not an upgrade path, just an extra option at the high end, and doubly so for Tri-fire and 3x SLI.

    I'd think buying a 3rd 5770 would not be a particularly wise purchase unless you absolutely didn't have the budget to get 1 or 2 higher-end cards.
  • Mr Alpha - Sunday, April 03, 2011 - link

    I use RadeonPro to set up per-application CrossFire settings. While it's a bummer it doesn't ship with AMD's drivers, per-application settings are not an insurmountable obstacle for AMD users.
  • BrightCandle - Sunday, April 03, 2011 - link

    I found this program recently and it has been a huge help. While Crysis 2 has flickering lights (don't get me started on that game's bugs!), using RadeonPro I could fix the CF profile and play happily, without shouting at ATI to fix their CF profiles again.
  • Pirks - Sunday, April 03, 2011 - link

    I noticed that you guys never employ useful distributed computing/GPU computing tests in your GPU reviews. You tend to employ some useless GPU computing benchmarks like some weird raytracers or something, I mean stuff people would not normally use. But you never employ really useful tests like say's GPU computation clients, AKA dnetc. Those dnetc clients exist in AMD Stream and nVidia CUDA versions (check out - see, they have CUDA 2.2, CUDA 3.1 and Stream versions too) and I thought you'd be using them in your benchmarks, but you don't. Why?

    Also check out their GPU speed database at

    So why don't you guys use this kind of benchmark in your future GPU computing speed tests instead of a useless raytracer? OK, if you think AT readers really bother with raytracers, why don't you just add these dnetc GPU clients to your GPU computing benchmark suite?

    What do you think Ryan? Or is it someone else doing GPU computing tests in your labs? Is it Jarred maybe?

    I can help with setting up those tests but I don't know who to talk to among AT editors

    Thanks for reading my rant :)

    P.S. dnetc GPU client scales 100% _always_, like when you get three GPUs in your machine your keyrate in RC5-72 is _exactly_ 300% of your single GPU, I tested this setup myself once at work, so just FYI...
  • Arnulf - Sunday, April 03, 2011 - link

    "P.S. dnetc GPU client scales 100% _always_, like when you get three GPUs in your machine your keyrate in RC5-72 is _exactly_ 300% of your single GPU, I tested this setup myself once at work, so just FYI... "

    So you are essentially arguing that running dnetc tests makes no sense, since they scale perfectly proportionally with the number of GPUs?
  • Pirks - Sunday, April 03, 2011 - link

    No, I mean the general GPU reviews here, not this particular one about scaling
  • DanNeely - Sunday, April 03, 2011 - link

    Does DNetc not have 4xx/5xx nVidia applications yet?
  • Pirks - Sunday, April 03, 2011 - link

    They have CUDA 3.1 clients that work pretty nicely with Fermi cards. Except that AMD cards pwn them violently; we're talking about an order of magnitude difference between 'em. Somehow RC5-72 code executes 10x faster on AMD than on nVidia GPUs. I could never find a precise explanation why; it must be related to a poor match between the RC5 algorithm and nVidia's GPU architecture or something.

    I crack RC5-72 keys on my AMD 5850 and it's almost 2 BILLION keys per second. Out of 86,000+ participants my machine is ranked #43 from the top (in the daily stats graph, but still, man... #43!). I'm gonna buy two 5870s sometime and my rig may just make it into the top 10!!! Out of 86,000!!! This is UNREAL, man...

    On my nVidia 9800 GT I was cracking like 146 million keys per second; this very low rate is so shameful compared to AMD :)))
  • DanNeely - Monday, April 04, 2011 - link

    It's not just dnetc; ugly differences in performance also show up in the milkyway@home and Collatz Conjecture projects on the BOINC platform. They're much larger than the 1/8 vs 1/5 (1/4 in 69xx?) FP64/FP32 differences between the two would justify; IIRC both are about 5:1 in AMD's favor.
  • Ryan Smith - Sunday, April 03, 2011 - link

    I love what the Dnet guys do with their client, and in the past it's been a big help to us in our articles, especially on the AMD side.

    With that said, it's a highly hand optimized client that almost perfectly traces theoretical performance. It doesn't care about cache, it doesn't care about memory bandwidth; it only cares about how many arithmetic operations can be done in a second. That's not very useful to us; it doesn't tell us anything about the hardware.

    We want to stick to distributed computing clients that have a single binary for both platforms, so that we're looking at the performance of a common OpenCL/DirectCompute codepath and how it performs on two different GPUs. The Dnet client just doesn't meet that qualification.
  • tviceman - Sunday, April 03, 2011 - link

    Ryan, are you going to be using the nvidia 270 drivers in future tests? I know they're beta, but it looks like you aren't using WHQL AMD drivers either (11.4 preview).
  • Ryan Smith - Sunday, April 03, 2011 - link

    Yes, we will. The benchmarking for this article was actually completed shortly after the GTX 590 launch, so by the time NVIDIA released the 270 drivers the article was already being written up.
  • ajp_anton - Sunday, April 03, 2011 - link

    When looking at the picture of those packed sardines, I had an idea.
    Why don't the manufacturers make the radial fan hole go all the way through the card? With three or four cards tightly packed, the middle card(s) would still have some air coming through the other cards, assuming the holes are aligned.
    Even with only one or two cards, the (top) card would have access to more fresh air than before.
  • semo - Sunday, April 03, 2011 - link

    Correct me if I'm wrong, but the idea is to keep the air flowing through the shroud body, not straight through the card past the fans. I think this is a moot point though, as I can't see anyone using a 3x GPU config without water cooling or something even more exotic.
  • casteve - Sunday, April 03, 2011 - link

    "It turns out adding a 3rd card doesn’t make all that much more noise."

    Yeah, I guess if you enjoy 60-65dBA noise levels, the 3rd card won't bother you. Wouldn't it be cheaper to just toss a hairdryer inside your PC? You'd get the same level of noise and room heater effect. ;)
  • slickr - Sunday, April 03, 2011 - link

    I mean, Mass Effect 2 is a console port, Civilization 5 is the worst game to choose to benchmark as it's a turn-based game and not real-time, and HAWX is 4 years outdated and basically a game that Nvidia made; it's not even funny anymore seeing how it gives the advantage to Nvidia cards every single time.

    Replace with:
    Crysis Warhead with Aliens vs Predator
    BattleForge with Shogun 2: Total War
    HAWX with Shift 2
    Civilization 5 with StarCraft 2
    Mass Effect 2 with Dead Space 2
    Wolfenstein with ArmA 2
    + add Mafia 2
  • A5 - Sunday, April 03, 2011 - link

    I'm guessing Ryan doesn't want to spend a month redoing all of their benchmarks over all the recent cards. Also the only one of your more recent games that would be at all relevant is Shogun 2 - SC2 runs well on everything, no one plays Arma 2, and the rest are console ports...
  • slickr - Sunday, April 03, 2011 - link

    Apart from Shift, none of those games is a console port.

    Mafia 2, SC2, ArmA 2, and Shogun 2 are PC-first, and Dead Space is also not a console port; it's a PC port to consoles.
  • Ryan Smith - Sunday, April 03, 2011 - link

    We'll be updating the benchmark suite in the next couple of months, in keeping with our twice-a-year schedule. Don't expect us to drop Civ V or Crysis, however.
  • Dustin Sklavos - Monday, April 04, 2011 - link

    Jarred and I have gone back and forth on this stuff to get our own suite where it needs to be, and the games Ryan's running have sound logic behind them. For what it's worth...

    Aliens vs. Predator isn't worth including because it doesn't really leverage that much of DX11, and nobody plays it because it's a terrible game. Crysis Warhead STILL stresses modern gaming systems. As long as it does that it'll be useful, and it at least provides a high-water mark for the underwhelming Crysis 2.

    Battleforge and Shogun 2 I'm admittedly not sure about, same with HAWX and Shift 2.

    Civ 5 should stay, but StarCraft II should definitely be added. There's a major problem with SC2, though: it's horribly, HORRIBLY CPU bound. SC2 is criminally badly coded given how long it's been in the oven and doesn't scale AT ALL with more than two cores. I've found situations even with Sandy Bridge hardware where SC2 is more liable to demonstrate how much the graphics drivers and subsystem hit the CPU rather than how the graphics hardware itself performs. Honestly my only justification for including it in our notebook/desktop suites is because it's so popular.

    Mass Effect 2 to Dead Space 2 doesn't make any sense; Dead Space 2 is a godawful console port while Mass Effect 2 is currently one of the best if not THE best optimized Unreal Engine 3 games on the PC. ME2 should get to stay almost entirely by virtue of being an Unreal Engine 3 representative, ignoring its immense popularity.

    Wolfenstein is currently the most demanding OpenGL game on the market. It may seem an oddball choice, but it really serves the purpose of demonstrating OpenGL performance. Arma 2 doesn't fill this niche.

    Mafia II's easy enough to test that it couldn't hurt to add it.
  • JarredWalton - Monday, April 04, 2011 - link

    Just to add my two cents....

    AvP is a lousy game, regardless of benchmarks. I also toss HAWX and HAWX 2 into this category, but Ryan has found a use for HAWX in that it puts a nice, heavy load on the GPUs.

    Metro 2033 and Mafia II aren't all that great either, TBH, and so far Crysis 2 is less demanding *and* less fun than either of the two prequels. (Note: I finished both Metro and Mafia, and I'd say both rate something around 70%. Crysis 2 is looking about 65% right now, but maybe it'll pick up as the game progresses.)
  • c_horner - Sunday, April 03, 2011 - link

    I'm waiting for the day when someone actually reports on the perceived usability of multi-GPU setups in comparison to a single high-end GPU.

    What I mean is this: oftentimes, even though you might be seeing an arbitrarily larger frame count, the lag and overall smoothness leave games nowhere near as playable and enjoyable as one that runs properly on a single GPU.

    Having tried SLI in the past, I was left with a rather large distaste for plopping down the cost of another high-end card. Not all games worked properly, not all games scaled well, some games would scale well in the areas they could render easily but minimum frame rates sucked, etc. etc., and the list goes on.

    When are some of these review sites going to post subjective and real world usage information instead of a bunch of FPS comparisons?

    There's more to the story here.
  • semo - Sunday, April 03, 2011 - link

    I think this review covers some of your concerns. It seems that AMD, with their latest drivers, achieves a better min FPS score compared to nVidia.

    I've never used SLI myself, but I would think that you wouldn't be able to notice the latency due to more than one GPU in-game. Wouldn't such latencies be in the microseconds?
  • SlyNine - Monday, April 04, 2011 - link

    And yet, those microseconds seemed like macroseconds; microstutter was one of the most annoying things ever! I hated my 8800GT SLI experience.

    Haven't been back to multi-videocard setups since.
  • DanNeely - Monday, April 04, 2011 - link

    Look at's reviews. Instead of FPS numbers from canned benches, they play the games and list the highest settings that were acceptable. Minimum FPS levels and, for SLI/xFire, microstuttering problems can push their recommendations down, because even when the average numbers look great the situation might actually not be playable.
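
    A quick sketch (with made-up frame times) of why a great average can still be unplayable:

```python
# Two hypothetical frame-time traces in milliseconds, both averaging 50 FPS
smooth  = [20, 20, 20, 20]   # single GPU: even frame pacing
stutter = [10, 30, 10, 30]   # AFR microstutter: alternating fast/slow frames

for name, frames in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000 * len(frames) / sum(frames)
    worst_fps = 1000 / max(frames)   # rate implied by the slowest frame
    print(f"{name}: {avg_fps:.0f} avg FPS, feels like {worst_fps:.0f} FPS")
# The stuttering trace reports the same 50 avg FPS but paces like ~33 FPS.
```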
  • robertsu - Sunday, April 03, 2011 - link

    How is microstuttering with 3 GPUs? Is there any in these new versions?
  • Ryan Smith - Sunday, April 03, 2011 - link

    I am not highly sensitive to microstuttering (aliasing on the other hand...). In my experience nothing here microstuttered, and the only thing that performed poorly was the NV 3xGTX580 setup under Bad Company 2.
  • james.jwb - Sunday, April 03, 2011 - link

    Ryan, do you think you could re-run just a single game (your choice) with the CPU overclocked to 4.0GHz or higher if you can (4.5GHz if possible)? 3 GPUs will surely react well to this; I'd love to see the results (for 2x, too).
  • james.jwb - Sunday, April 03, 2011 - link

    just at 2560 btw :)
  • DanNeely - Sunday, April 03, 2011 - link

    Ryan posted elsewhere that the 920 he's using is from the slow end of the bell curve and isn't stable above 3.33; so until he gets a new system this is as good as it gets.
  • PhantomKnight - Sunday, April 03, 2011 - link

    This is not the only way. There are flexible PCIe cables around on eBay. I don't know how well they would work, but it would be possible to use something like that to increase the room around the cards, etc.
  • 7Enigma - Tuesday, April 05, 2011 - link

    Interesting. I wonder if this would harm latency somehow (or reduce the amount of power the PCIe slot can supply due to essentially using an extension cord for the gpu).
  • masterkritiker - Sunday, April 03, 2011 - link

    While it is a great article, sadly it's a limited one. The dual/triple GPU test should have been done across multiple platforms (X58/890FX/P67), regardless of some platforms' limited PCIe lanes, like the P67. Let's be honest, SNB systems are the ones selling like mad right now; you'll hardly convince people with an X58 system to spend money upgrading to dual/triple GPUs while SNB-E is just around the corner and their X58 systems are still great for them. Trust me, people will "wait"! The test could have usefully guided anyone deciding whether to upgrade from single to dual/triple GPUs across multiple platforms, on either a single 1080p/1600p monitor or even multiple monitors. I guess we have to wait for that test (if it ever happens).
  • b1u3 - Monday, April 04, 2011 - link

    2x590GTX vs 2x6990...
  • ypsylon - Monday, April 04, 2011 - link

    Frankly, I can't get my head around this test. If you have ~$2000 to blow on 3 VGAs, then there is a good chance you will buy 2x 590 or 2x 6990. It is the more logical and convenient choice.

    Performance is only a tiny fraction lower vs 3x SLI/CF (at worst, and only with the reference, under-clocked models currently available), but you'll need only 2 cards, not 3. Easier to implement, less clutter inside, and if by any chance you own a motherboard with 3 slots of space between the 1st and 2nd x16 slot, then it is an absolute win-win.
  • piroroadkill - Monday, April 04, 2011 - link

    Quite. I don't know why 6990 CF, just like 5970 CF, was almost completely ignored by AnandTech.
  • Ryan Smith - Monday, April 04, 2011 - link

    There are 2 reasons for that:

    1) We can't immediately get another 6990. I know it seems odd that we'd have trouble getting anything, but vendors are generally uninterested in sampling cards that are reference, which is why we're so grateful to Zotac and PowerColor for the reference 580/6970.

    2) We actually can't run a second 6990 with our existing testbed. The Rampage II Extreme only has x16 slots at positions 2 and 4; position 6 is x8. The spacing needs for a 6990CF setup require 2 empty slots, meaning we'd have to install it in position 6. Worse yet is that position 6 is abutted by our Antec 1200W PSU - this isn't a problem with single-GPU cards as the blowers are well clear of the PSU, but a center-mounted fan like the 6990 would get choked just as if there was another card immediately next to it.

    We will be rebuilding our testbed for SNB and using a mobo with better spacing, but that's not going to happen right away. The point being that we're not ignoring the 590/6990 multiple card configurations, it's just not something we're in a position to test right now.
  • piroroadkill - Monday, April 04, 2011 - link

    As long as it's in the works, that's alright. Seems like you have your reasons for it being the way it is.
  • Rukur - Monday, April 04, 2011 - link

    This whole technology is stupid with monitors.

    Why don't you stitch together 3 projectors for a seamless canvas to play a game?
  • SlyNine - Monday, April 04, 2011 - link

    "This whole technology is stupid with monitors." Do you suppose neural interfaces will be here soon? Kick ass.
  • Rukur - Monday, April 04, 2011 - link

    Can you read more than one sentence?
  • monkeyshambler - Monday, April 04, 2011 - link

    Interesting stuff, but for a 3-card SLI/CrossFire setup what I'd really want to see is the framerates when every setting on the card is maxed,
    e.g. 24x AA, 16x AF, high-quality settings selected in the driver control panels, etc.
    Supplement this with the performance of triple SLI on 3 1920x1080 monitors @ 4x AA.
    Let's face it, if you're going to spend this sort of money (and likely on a watercooling rig too, as there's no way three cards are tolerable otherwise) you want a genuine show of why you should invest.
    The current resolutions just will never stretch the cards or enable them to differentiate significantly from a standard SLI setup.

    Hope we can see some of the above in a future article....
  • Rukur - Monday, April 04, 2011 - link

    I tend to agree. How is maxing everything any worse than half-inch monitor bezels all over your play area?

    The whole idea of Eyefinity is stupid, unless we all look through windows with 1-inch gaps while racing extreme cars.

    How about some projectors stitched together for real people to actually try?
  • erple2 - Tuesday, April 05, 2011 - link

    Wasn't there an analysis a while back comparing 1x, 2x, 4x, 8x and 16x AA? I thought the conclusion was that there's no discernible difference between 8x and 16x AA, and the differences between 4x and 8x were only visible on careful examination of static images. Under normal play, you couldn't actually tell any difference between them.

    Maybe I'm just remembering wrong.

    Also, I think that Ryan mentioned why they haven't yet done the triple monitor tests yet (lack of hardware).
  • DanNeely - Tuesday, April 05, 2011 - link

    That's generally correct. Tom's Hardware has run PCIe restriction tests roughly once per GPU generation. The only game that ever really suffered at x4 bandwidth was MS Flight Simulator.

    PCIe bandwidth can impact some compute tasks, though. Einstein@Home runs about 30% faster on a 460 in a 16x slot vs an 8x.
  • fepple - Monday, April 04, 2011 - link

    With my two 5870s I have a weird problem in CrossFire. I have two screens, a 24'' LCD and a 37'' LED TV. In CrossFire, if I play video on the second screen it gets some odd artifacts: black(ish) horizontal lines across the bottom of the screen. The only solution I've found is to take the cards out of CrossFire and plug the TV/screen into different cards for watching stuff.

    Annoying. Any thoughts?
  • marc1000 - Monday, April 04, 2011 - link

    Ryan, if at all possible, please include a reference card for the "low point" of performance. We rarely see good tests with mainstream cards, only the top-tier ones.

    So if you can, please include a Radeon 5770 or GTX 460 - 2 of these cards should have the same performance as one of the big ones, so it would be nice to see how well they work by now.
  • Ryan Smith - Wednesday, April 06, 2011 - link

    These charts were specifically cut short as the focus was on multi-GPU configurations, and so that I could fit more charts on a page. The tests are the same tests we always run, so Bench or a recent article ( ) is always your best buddy.
  • Arbie - Monday, April 04, 2011 - link

    Looking at your results, it seems that at least 99.9% of gaming enthusiasts would need nothing more than a single HD 6970. Never mind the wider population of PC-centric folk who read Anandtech.

    More importantly, this isn't going to change for several years. PC game graphics are now bounded by console capabilities, and those advance only glacially. In general, gamers with an HD 6850 (not a typo) or better will have no compelling reason to upgrade until around 2014! I'm very sad to say that, but I think it's true.

    Of course there is some technical interest in how many more FPS this or that competing architecture can manage, but most of that is a holdover from previous years when these things actually mattered on your desktop. I'm not going to spend $900 to pack two giant cooling and noise problems into my PC for no perceptible benefit. Nor will anyone else, statistically speaking.

    The harm in producing such reports is that it spreads the idea that these multi-board configurations still matter. So every high-end motherboard that I consider for my next build packs in slots for two or even three graphics boards, and an NF-200 chip to make sure that third card (!) gets enough bandwidth. The mobos are bigger, hotter, and more expensive than they need to be, and often leave out stuff I would much rather have. Look at the Gigabyte P67A-UD7, for example. Full accommodation for pointless graphics overkill (praised in reviews), but *no* chassis fan controls (too mundane for reviewers to mention).

    I'd rather see Anandtech spend time on detailed high-end motherboard comparisons (eg. Asus Maximus IV vs. others) and components that can actually improve my enthusiast PC experience. Sadly, right now that seems to be limited to SSDs and you already try hard on those. Are we reduced to... fan controllers?


  • erple2 - Tuesday, April 05, 2011 - link

    There are still several games that are not Console Ports (or destined to be ported to a console) that are still interesting to read about and subsequently benchmark. People will continue to complain that PC Gaming has been a steady stream of Console Ports, just like they have been since the PSX came out in late '95. The reality is that PC Gaming isn't dead, and probably won't die for a long while. While it may be true that EA and company generate most of their revenue from lame console rehash after lame console rehash, and therefore focus almost single-mindedly on that endeavor, there are plenty of other game publishers that aren't following that trend, thereby continuing to make PC Gaming relevant.

    The last several motherboard reviews I've seen have more or less convinced me that motherboards just don't matter much any more. Most (if not all) motherboards with a given chipset don't offer anything performance-wise over competing motherboards.

    There are nice features here and there (Additional Fan Headers, more USB ports, more SATA Ports), but on the whole, there's nothing significant to differentiate one Motherboard from another, at least from a performance perspective.
  • 789427 - Monday, April 04, 2011 - link

    I would have thought that someone would pay attention to whether throttling was occurring on any of the cards due to thermal overload.

    The reason is that due to differences in ventilation in the case, layout and physical card package, you'll have throttling at different times.

    e.g. if the room was at a stinking hot 50C, the more aggressive the throttling,the greater the disadvantage to the card.

    Conversely, operating the cards at -5C would provide a huge advantage to the card with the worst heat/fan efficiency ratio.

  • TareX - Monday, April 04, 2011 - link

    I'm starting to think it's really getting less and less compelling to be a PC gamer, with all the good games coming out for consoles exclusively.

    Thank goodness for Arkham Asylum.
  • Golgatha - Monday, April 04, 2011 - link

    I'd like to see some power, heat, and PPD numbers for running Folding@Home on all these GPUs.
  • Ryan Smith - Monday, April 04, 2011 - link

    The last time I checked, F@H did not have a modern Radeon client. If they did we'd be using it much more frequently.
  • karndog - Monday, April 04, 2011 - link

    C'mon man, you have an enthusiast rig with $1000 worth of video cards, yet you use a stock i7 at 3.3GHz??

    "As we normally turn to Crysis as our first benchmark it ends up being quite amusing when we have a rather exact tie on our hands."

    Ummm, probably because you're CPU limited! Update to even a 2500K at 4.5GHz and I bet you'll see the Crossfire setup pull away from the SLI.
  • karndog - Monday, April 04, 2011 - link

    Not trying to make fun of your test rig, if that's all you have access to. I'm just saying that the people thinking about buying the tri-SLI / tri-CFX video card setups reviewed here aren't running their CPUs at stock clock speeds, especially such low ones, which skews the results shown here.
  • Castiel - Monday, April 04, 2011 - link

    Why didn't you just use a P67 board equipped with an NF200 chip for testing? Using X58 is a step in the wrong direction.
  • UrQuan3 - Monday, April 04, 2011 - link

    Mr Smith,
    When you do the multi-monitor SLI/Crossfire review, could you briefly go over the different connection modes? The last time I messed with SLI, it forced all monitors to be connected to the first card. Since the cards in question only had two outputs, I had to turn off SLI to connect three monitors. This caused some strange problems for 3D software.

    Would you go over the options currently available in your next review?
  • Ryan Smith - Monday, April 04, 2011 - link

    When was this? That doesn't sound right; you need SLI to drive 3 monitors at the present time.
  • UrQuan3 - Thursday, April 07, 2011 - link

    Right this second I'm typing on a PC with 2 GTX 260s (not sure which revision) with two monitors plugged into the first and a third monitor plugged into the second. At the time, SLI would only allow monitors plugged into the first card. Of course, since IT doesn't trust us to do our own upgrades, I'm still running driver version 260.89.

    Of course, Windows supports multiple dissimilar cards with a monitor or two on each, even different brand cards. However, 3D support in this mode is, er, creative. In this mode most programs (games) can only drive one card's monitors. You can, however, have different programs running 3D on different cards' monitors.

    Since you'll have the hardware sitting on your desk, I'd love to see a quick test of the options.
  • BLHealthy4life - Monday, April 04, 2011 - link

    How the heck did you get the 11.4 preview to work with Crossfire??

    I have 6970 Crossfire and I cannot for the life of me get 11.4p to work. I have used 11.2 and 11.3 with no problems. I removed previous drivers with the ATI uninstaller followed by Driver Sweeper. Then I installed the 11.4p 3/7 and 3/29 builds and neither one of them works.

    I even went as far as to do TWO fresh installs of W7 x64 Ultimate and then install 11.4p, and the f*cking driver breaks Crossfire....
  • Ryan Smith - Monday, April 04, 2011 - link

    I'm afraid there's not much I can tell you. We did not have any issues with 11.4 and the 6970s whatsoever.
  • quattro_ - Monday, April 04, 2011 - link

    Did you use DOF when benching Metro? I find the HD 6990's score high! I only get 37fps average: 980X @ 4.4GHz with a single HD 6990 at stock clocks and the 11.4 preview driver.
  • Ryan Smith - Monday, April 04, 2011 - link

    No, we do not. Metro is bad enough; DOF crushes performance.
  • ClagMaster - Monday, April 04, 2011 - link

    I will never understand why people buy 2 or 3 graphics cards and require a 1200W power supply so they can get 10-20 more fps or slightly subtler eye candy.

    There are some things that are beyond the point of reason and fall into the madness of Captain Ahab. This is about as crazy as insisting on a .50 cal Browning target rifle rather than a more sensible .308 Win target rifle for 550m target shooting and white-tail deer hunting. The .308 Win is less punishing on the body and the pocketbook than the .50 Browning.

    I always believed in working with one (1) graphics card that takes up 1 slot and requires 65 to 85W of power. A 9600GT plays all my games on a 1600x1200 CRT just fine.
  • looper - Tuesday, April 05, 2011 - link

    Excellent post... well-said.
  • Sabresiberian - Tuesday, April 05, 2011 - link

    I've been thinking for quite a while that we need something different, and this is the primary reason why - I can't fit all I want to install on any ATX mainboard I know of.

  • Sabresiberian - Tuesday, April 05, 2011 - link

    I've always thought minimum frame rate is where the focus should be in graphics card tests (when looking at the frame rate performance aspect), instead of the average. It's the minimum frame rate that bothers people or even makes a game unplayable.


  • mapesdhs - Wednesday, April 06, 2011 - link

    I hate to say it, but with the CPU at only 3.33GHz the results don't really mean that much. I know the 920 used can't go higher, but it just seems a bit pointless to do all these tests when the results can't really be used as the basis for a purchasing decision because of a very probable CPU bottleneck. Surely it would have been sensible for an article like this to replace the 920 with a 950 and redo the OC to 4GHz+. The 950 is good value now as well. Or even the entry-level 6-core.

    Re slot spacing: perhaps if one insists on using P67 it can be hard to sort that out, but there *are* X58 boards which provide what one needs, e.g. the ASRock X58 Extreme6 does have double-slot spacing between each PCIe slot, so 3 dual-slot cards would have a fully empty slot between each card for better cooling. Do other vendors make a board like this? I couldn't find one after a quick check of the Gigabyte or ASUS sites. The only downside is that with all 3 slots used, the Extreme6 runs slots 2 and 3 at x8/x8; for many games this isn't an issue (depends on the game), but I'm sure some would moan nonetheless.

    It would be interesting to know how that would compare though, i.e. a 4GHz 950 on an Extreme6 for these tests.

    Unless I missed it somehow, I'm a tad surprised Gigabyte doesn't make an X58 board with this type of slot spacing, or do they?

  • xAlex79 - Thursday, April 14, 2011 - link

    I am a bit disappointed, Ryan, in the way you put your conclusions.

    At the start of the article you highlight how you are going to look at tri-CFX and tri-SLI and compare how they do on value.

    Yet at the end, in your conclusion, there isn't a single mention of value, or even value-adjusted scores, at all. And that makes NVIDIA look a lot better than it should. It is as if you completely forgot that three 580s cost you $1500 and three 6970s cost you $900.

    Based on that, and the fact YOU stated you would take value into account (and personally I think posting any kind of review without value nowadays is just irresponsible and biased), I am very disappointed with an otherwise very good set of tests.

    I also understand that this is labeled "Part 1" and that the value might come in "Part 2", but you should have CLEARLY outlined that in your conclusion were that the case. And given the quality of reviews we have come to expect from AnandTech, the final numbers should ALWAYS include a value perspective.

    I will just note that it is poor form and not very professional, and that in the end the people you should care about are us, your readers, not how you look or try to look to hardware manufacturers. If this was a mistake, you should correct it asap. It does not make you look good.
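    To make the value point concrete, here is a rough sketch of the kind of price/performance comparison the comment asks for. The prices are the ones quoted in the comment above ($1500 for three 580s, $900 for three 6970s); the average-FPS figures are hypothetical placeholders, NOT numbers taken from the review:

    ```python
    # Performance-per-dollar sketch. Prices come from the comment above;
    # the avg_fps values are hypothetical placeholders for illustration only.
    setups = {
        "3x GTX 580 (SLI)":       {"price": 1500, "avg_fps": 120.0},
        "3x HD 6970 (CrossFire)": {"price": 900,  "avg_fps": 105.0},
    }

    for name, s in setups.items():
        # Normalize to FPS per $100 spent so the two setups are comparable.
        fps_per_100 = s["avg_fps"] / s["price"] * 100
        print(f"{name}: {fps_per_100:.1f} FPS per $100")
    ```

    Even with made-up FPS numbers, the structure shows why a value-adjusted score can flip a raw-performance conclusion: a slower setup at 60% of the price can still win on FPS per dollar.
    
    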
  • L1qu1d - Friday, April 15, 2011 - link

    I wonder why they didn't opt for the 270.51 drivers and went with 3-month-old drivers instead?

    Compared to the tested drivers:

    GeForce GTX 580:

    Up to 516% in Dragon Age 2 (SLI 2560x1600 8xAA/16xAF Very High, SSAO on)
    Up to 326% in Dragon Age 2 (1920x1200 8xAA/16xAF Very High, SSAO on)
    Up to 11% in Just Cause 2 (1920x1200 8xAA/16xAF, Concrete Jungle)
    Up to 11% in Just Cause 2 (SLI 2560x1600 8xAA/16xAF, Concrete Jungle)
    Up to 7% in Civilization V (1920x1200 4xAA/16xAF, Max settings)
    Up to 6% in Far Cry 2 (SLI 2560x1600 8xAA/16xAF, Max settings)
    Up to 5% in Civilization V (SLI 1920x1200 8xAA/16xAF, Max settings)
    Up to 5% in Left 4 Dead 2 (1920x1200 noAA/AF, Outdoor)
    Up to 5% in Left 4 Dead 2 (SLI 2560x1600 4xAA/16xAF, Outdoor)
    Up to 4% in H.A.W.X. 2 (SLI 1920x1200 8xAA/16xAF, Max settings)
    Up to 4% in Mafia 2 (SLI 2560x1600 AA on/16xAF, PhysX = High)
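    (For anyone tripped up by the percentage debate earlier in the thread: an "up to X% gain" figure like the ones above means performance multiplied by 1 + X/100, so a 516% gain is 6.16x the old frame rate, not 5.16x. A quick sketch, using the percentages quoted from the driver notes:)

    ```python
    # Convert "up to X% gain" figures into speedup multipliers.
    # A gain of 516% means 6.16x the old performance, not 5.16x.
    def gain_to_multiplier(gain_percent: float) -> float:
        return 1.0 + gain_percent / 100.0

    for gain in (516, 326, 11, 7):
        print(f"{gain}% gain -> {gain_to_multiplier(gain):.2f}x")
    ```
    
    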
  • Fony - Thursday, April 28, 2011 - link

    taking forever for the Eyefinity/Surround testing.
  • vipergod2000 - Thursday, May 05, 2011 - link

    The one thing that irks me is that the i7-920 was only OC'd to ~3.3GHz, which greatly reduces the scaling of 3 cards. Other forum users running 3 or 4 cards in CFX or SLI get fantastic scaling, but they pair them with an i7-2600K at 5GHz minimum or a 980X/990X at 4.6GHz+.
