The Test, Power, Temps, and Noise

CPU: Intel Core i7-920 @ 3.33GHz
Motherboard: Asus Rampage II Extreme
Chipset Drivers: Intel 9.1.1.1015 (Intel)
Hard Disk: OCZ Summit (120GB)
Memory: Patriot Viper DDR3-1333 3x2GB (7-7-7-20)
Video Cards: AMD Radeon HD 6990
  AMD Radeon HD 6970
  PowerColor Radeon HD 6970
  EVGA GeForce GTX 590 Classified
  NVIDIA GeForce GTX 580
  Zotac GeForce GTX 580
Video Drivers: NVIDIA ForceWare 266.58
  AMD Catalyst 11.4 Preview
OS: Windows 7 Ultimate 64-bit

With that out of the way, let’s start our look at power, temperature, and noise. We have included our jury-rigged triple-CF setup in these results as a comparison point, but please keep in mind that it is not a viable long-term setup, which is why we have starred its results. These results also include the GTX 590 from last week, which carries its own handicap under FurMark due to NVIDIA’s OCP. That handicap does not apply to the triple-SLI setup, where we are able to bypass OCP.

Given NVIDIA’s higher idle power draw, there shouldn’t be any surprises here. Three GTX 580s in SLI make for a fairly wide gap of 37W – in fact even two GTX 580s in SLI draw 7W more than the triple 6970 setup. Multi-GPU configurations are always going to be a limited market opportunity, but if it were possible to completely power down unused GPUs, it would certainly improve these idle numbers.

With up to three GPUs, power consumption under load gets understandably high. Under FurMark in particular the triple GTX 580 setup comes just shy of 1200W thanks to our disabling of OCP – an amusingly absurd number. Meanwhile the triple 6970 setup picks up almost nothing over the dual 6970, which is clearly a result of AMD’s drivers lacking a 3-way CF profile for FurMark. Thus the greatest power load we can place on the triple 6970 is under HAWX, where it pulls 835W.

With three cards packed tightly together the middle card has the most difficult time, so it’s that card that sets the highest temperatures here. Even so, idle temperatures only tick up a couple of degrees in a triple-GPU configuration.

Even when we forcibly wedge the 6970s apart, the triple 6970 setup still ends up being the warmest under Crysis – and that’s after the added spacing dropped Crysis temperatures by 9C. Meanwhile the triple GTX 580 gets quite warm in its own right, but under Crysis and HAWX it’s nothing we haven’t seen before. FurMark is the only outlier: there temperatures stabilized at 95C, 2C under GF110’s thermal threshold. It’s safe, but I wouldn’t recommend running FurMark all day just to prove it.

With a third card in the mix idle noise creeps up some, but much like idle temperatures the increase isn’t significant. For some perspective though, we’re still looking at idle noise levels equivalent to a GTX 560 Ti running FurMark, so these setups are by no means silent.

It turns out that adding a third card doesn’t make all that much more noise. Under HAWX the triple GTX 580 does get 3dB louder, but under FurMark the difference is under a dB. The triple 6970 setup does better in both cases, but that has more to do with our jury-rigging and the fact that FurMark doesn’t scale to a third AMD GPU. Amusingly, the triple GTX 580 setup is still nearly 3dB quieter under FurMark than the 6990 even though we’ve disabled OCP, and under HAWX the difference is only 0.2dB in AMD’s favor. It seems it’s simply not possible to do worse than the 6990 without overvolting or overclocking.
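For context on what those gaps mean – this is a general acoustics relation, not one of our measurements – decibel differences map onto sound power ratios logarithmically, so a gap of roughly 3dB corresponds to about a doubling (or halving) of sound power:

\[
\Delta L = 10 \log_{10}\!\left(\frac{P_1}{P_2}\right)
\quad\Longrightarrow\quad
\frac{P_1}{P_2} = 10^{\Delta L/10},
\qquad
10^{3/10} \approx 2
\]

Perceived loudness grows more slowly than sound power (a roughly 10dB increase is commonly cited as sounding "twice as loud"), so differences of a few dB like these are audible but not dramatic.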

Comments

  • taltamir - Sunday, April 3, 2011

    Wouldn't it make more sense to use a Radeon 6970 + 6990 together to get triple GPU?

    NVIDIA triple GPU seems to lower min FPS; that is just fail.

    Finally: where are the Eyefinity tests? None of the results were relevant since they're all over 60fps with dual SLI.
    Triple monitor+ would actually be interesting to see.
  • semo - Sunday, April 3, 2011

    Ryan mentions in the conclusion that a triple monitor setup article is coming.

    ATI seems to be the clear winner here, but the conclusion seems to downplay this fact. Also, the X58 platform isn't the only one with more than 16 PCIe lanes...
  • gentlearc - Sunday, April 3, 2011

    If you're considering going triple-GPU, I don't see how scaling matters other than as an FYI. There isn't a performance comparison, just more performance. You're not realistically going to sell both your 580s and go get three 6970s. I'd really like it if you looked at lower-end cards capable of triple-GPU and their merit. A crossfired 5770 was a great way of extending the life of one 5770, and two 260s were another sound choice for enthusiasts looking for a low-priced upgrade.

    So, the question I would like answered is whether triple-GPU is a viable option for extending the life of your currently compatible mobo. Can going triple-GPU keep your i7-920 competent as a gaming machine until a complete upgrade makes more sense?

    SNB-E will be the CPU upgrade path, but it will arrive around the time the next generation of GPUs is out. Is picking up a 2nd and/or 3rd GPU going to be a worthy upgrade, or is the loss from selling three GPUs to buy the next-gen cards too much?
  • medi01 - Sunday, April 3, 2011

    Besides, a $350 GPU is being compared to a $500 GPU. Or so it was the last time I checked on Froogle (and that was today, April 3rd, 2011).
  • A5 - Sunday, April 3, 2011

    AT's editorial stance has always been that SLI/XFire is not an upgrade path, just an extra option at the high end, and doubly so for Tri-fire and 3x SLI.

    I'd think buying a 3rd 5770 would not be a particularly wise purchase unless you absolutely didn't have the budget to get 1 or 2 higher-end cards.
  • Mr Alpha - Sunday, April 3, 2011

    I use RadeonPro to set up per-application CrossFire settings. While it is a bummer it doesn't ship with AMD's drivers, per-application settings are not an insurmountable obstacle for AMD users.
  • BrightCandle - Sunday, April 3, 2011

    I found this program recently and it has been a huge help. While Crysis 2 has flickering lights (don't get me started on this game's bugs!), using RadeonPro I could fix the CF profile and play happily, without shouting at ATI to fix their CF profiles yet again.
  • Pirks - Sunday, April 3, 2011

    I noticed that you guys never employ useful distributed computing/GPU computing tests in your GPU reviews. You tend to use some useless GPU computing benchmarks like weird raytracers or something – I mean stuff people would not normally use. But you never employ really useful tests like, say, distributed.net's GPU computing clients, AKA dnetc. Those dnetc clients exist in AMD Stream and NVIDIA CUDA versions (check out http://www.distributed.net/Download_clients – see, they have CUDA 2.2, CUDA 3.1 and Stream versions too) and I thought you'd be using them in your benchmarks, but you don't. Why?

    Also check out their GPU speed database at http://n1cgi.distributed.net/speed/query.php?cputy...

    So why don't you guys use this kind of benchmark in your future GPU computing speed tests instead of a useless raytracer? Or, if you think AT readers really do bother with raytracers, why not just add these dnetc GPU clients to your GPU computing benchmark suite?

    What do you think Ryan? Or is it someone else doing GPU computing tests in your labs? Is it Jarred maybe?

    I can help with setting up those tests but I don't know who to talk to among AT editors

    Thanks for reading my rant :)

    P.S. dnetc GPU client scales 100% _always_, like when you get three GPUs in your machine your keyrate in RC5-72 is _exactly_ 300% of your single GPU, I tested this setup myself once at work, so just FYI...
  • Arnulf - Sunday, April 3, 2011

    "P.S. dnetc GPU client scales 100% _always_, like when you get three GPUs in your machine your keyrate in RC5-72 is _exactly_ 300% of your single GPU, I tested this setup myself once at work, so just FYI... "

    So you are essentially arguing that running dnetc tests makes no sense, since they scale perfectly proportionally with the number of GPUs?
  • Pirks - Sunday, April 3, 2011

    No, I mean the general GPU reviews here, not this particular one about scaling.
