  • MAIA - Tuesday, March 11, 2008 - link

    "After rebooting a few times to let windows do its thing, we installed the driver and all was well."

    This sentence is soooooo microsoft windows !!! :))

    Sorry .... had to say it.
  • dash2k8 - Tuesday, March 11, 2008 - link

    I'm just wondering: instead of piling on the number of GPUs, why hasn't a manufacturer just come out with ONE monstrous GPU that does away with the need for multiple video cards? If someone is crazy enough to spend moola on 4 GPUs, I imagine that person would be equally willing to buy ONE card that has the same horsepower. Just saying.
  • punko - Monday, March 10, 2008 - link

    Thanks Derek for a good review. As you indicated, this may be the future, and it's good to see the tech reach a point where it is ready for use and can be improved upon as all tech goes forward.

    It also sounds like you had a lot of help directly from AMD on this one.

  • gsellis - Monday, March 10, 2008 - link

    "but today a WHQL drier is available "

    Hey Derek, typo in the beginning. Still mirthful about this one. Water cooling and you needed it drier to work with all GPUs?
  • ltcommanderdata - Sunday, March 09, 2008 - link

    I'm just curious as to whether you've checked to see if quad channel memory has any benefit for multiple GPU situations? With 3 or 4 GPUs sucking data, I would presume the additional memory bandwidth provided by quad DDR2-800 would increase performance, especially since dual channel FB-DIMMs are not as efficient as the best dual channel DDR2 or DDR3 setups on desktop boards. It would be interesting to see the results of a 4x1GB setup on Skulltrail vs the 2x2GB setup you used.
  • cerwin13 - Saturday, March 08, 2008 - link

    Would it be wise to try this upgrade without SP1 installed with Vista 32? I am currently using 2x Radeon HD3870 X2s and would like to benchmark with these new drivers, but apparently SP1 isn't officially out yet?
  • DerekWilson - Saturday, March 08, 2008 - link

    other people had luck without SP1; it's not a requirement, but some of our editors did find that it helped with a lot of stuff ...

    you'll want to make sure you have hotfixes:


    as a minimum
  • Ananke - Saturday, March 08, 2008 - link

    XFX has Forceware 169.32; my guess is it was added after the 9600GT appeared. On Nvidia's official download site the highest version is 169.28.
  • Incisal flyer - Saturday, March 08, 2008 - link

    Derek, thanks for the very timely and detailed review. I'm going to be building a system for Flight Simulator X and have been trying to figure out the best graphics card(s) for that application. Have you considered benchmarking that sim? There's a lot of discussion right now on AVSIM etc. on what to do in terms of GPUs for people building new systems. There is a lot of back and forth on the advantages and disadvantages of different configs. I realize FSX is a bit of a niche product. Would FSX use multiple GPUs like two 3870 X2s, and are the potential headaches of that configuration worth it if you are not a computer geek? Or am I better off just getting a couple of Nvidia 8800s in SLI, or a single 3870 X2, and not hassling with the 4 GPU solution? Any help or advice would be appreciated. Thanks in advance for your time.

    Incisal Flyer
  • mmntech - Saturday, March 08, 2008 - link

    I'm running an HD 3850 256mb and I get 40fps average in low density areas, 12.4fps in London. Ultra quality of course with no AA and in game AF at 1440x900. That's DirectX 9.0 performance, which is all I could test since I don't run Vista. Flight Simulator has always been very CPU dependent, particularly concerning autogen scenery, and AI traffic along with the complex physics engine. Since FSX with SP1 can take advantage of up to four CPU cores, it might be worth starting off there. I did my tests using an Athlon 64 X2 3800+, everything at stock speeds with 2gb PC3200. If I were you, I'd go with the single 3870 X2 card. Cheaper than buying two separate 3870s. For nVidia, maybe two 8800GTS 640mb cards in SLI or better if you want the best performance. I'd wait for nVidia to release the 9800GX2 first though to see what cards offer the best performance per dollar.

    As for the article, I really wonder if using more than two cards is really practical. You can get almost the same performance with two 9600GTs as with three or four HD 3870s, but the two 9600GTs are far cheaper. This begs the question: is spending the extra $400 really worth it for such minimal gains? I know for some it is, but then why buy mid-range cards when a couple of 8800GTXs will cost the same in the end? Plus there's also the increased heat and power consumption from using four cards instead of two. I'd like to see more info on that.
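    The price/performance arithmetic above can be sketched in a few lines; the card prices and frame rates below are purely illustrative assumptions, not figures from the review:

```python
# Hypothetical 2008-era street prices (USD) and average frame rates.
# Both the prices and the fps numbers are illustrative assumptions.
configs = {
    "2x 9600GT":  {"price": 360, "fps": 58},
    "4x HD 3870": {"price": 760, "fps": 62},
}

for name, c in configs.items():
    value = c["fps"] / c["price"] * 100  # frames per second per $100 spent
    print(f"{name}: {value:.1f} fps per $100")
```

    On numbers like these, the cheaper pair delivers roughly twice the performance per dollar, which is the point about minimal gains for the extra $400.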
  • Incisal flyer - Monday, March 10, 2008 - link

    Thanks for the replies, Derek and mmntech. Mmntech, yes, my feelings exactly about quad (basically) crossfire. I'm no computer geek (more like a newbie really - I don't understand most of what I read in the forums and couldn't overclock a toaster if you held my mother hostage). Multiple crossfire sounds just too exotic at this point and would be more headache than it is worth. Thanks for your feedback and happy flying.

    The Flyer
  • DerekWilson - Saturday, March 08, 2008 - link

    i'm looking at fsx acceleration for future graphics articles ...

    no promises, but i've been testing it internally.
  • Sundox - Saturday, March 08, 2008 - link

    Isn't multi GPU the cheap way to go?
    I'm asking this because I can't picture a car race won by two slower cars against the faster car, or two knives cutting my steak more smoothly.
    To me, it looks like the problem is... coping with the problem: the companies just want to have the most powerful GPU, not the most efficient.
    I might be totally wrong.
  • coldpower27 - Saturday, March 08, 2008 - link

    It's more like a delivery race rather than a car race: who can deliver the total shipment fastest?

    Two smaller trucks pulling half the load each, or a single truck pulling the whole load? The larger truck's engine is more complex, and hence more difficult to build, versus the smaller trucks, whose smaller engines are easier.

  • Griswold - Saturday, March 08, 2008 - link

    Analogies like that do not always work just like that.

    Besides that, the car race example isn't that simple anyhow. Imagine a 24h race, which could easily be won by even a slower car, as long as it is more reliable than the faster one. Remember, in order to finish first, one must first finish. This, of course, has little to nothing to do with video cards; hence, analogies don't always work.
  • legoman666 - Saturday, March 08, 2008 - link

    analogies almost never work.
  • DerekWilson - Saturday, March 08, 2008 - link

    "Like a balloon, and... something bad happens!"
  • Simon128D - Saturday, March 08, 2008 - link

    I love the reviews and benchmarks here, I really do, but I'm getting sick and tired of seeing the test system being only a super high-end machine with hardware that the average person can't afford, and I think benchmarking with Skulltrail on its own is silly. This applies to other sites as well.

    Don't get me wrong, I enjoy seeing benchmarks from a high-end system like Skulltrail, but how many people actually have or can afford a system like that? I'd like to see more of a mid-range setup included in graphics benchmarks - that would give a more realistic viewpoint. A system, say, with a 780i or X38 chipset, a Q6600 and 4GB DDR2-800, etc.

    Just my thoughts.
  • DigitalFreak - Saturday, March 08, 2008 - link

    It's really the only way to make sure the games they're testing with aren't CPU limited.
  • DerekWilson - Saturday, March 08, 2008 - link

    that is key ... as is what ViRGE said above.

    in addition, people who want to run 4 GPUs in a system are not going to be the average gamer. this technology does not offer the return on investment anyone with a midrange system would want. people who want to make use of this will also want to eliminate any other bottlenecks to get the most out of it in their systems.

    not only does skulltrail help us eliminate bottlenecks and look at the potential of the graphics subsystem, in this case i would even make the argument that the system is a good match for the technology.
  • Sind - Saturday, March 08, 2008 - link

    I agree. I don't think Skulltrail is doing anyone favours in judging how these MGPU solutions would perform in an "average" system that a reader on Anand would be using. X38 seems very popular, as is 780i; I really don't think more than 1% of your traffic would ever use the system you used to do this review. I've read the other CrossfireX reviews from around the net, and most had no problems at all; in fact, most noted that it worked straight out with no messing around with the lengthy directions indicated in the article to get it to work.
  • ViRGE - Saturday, March 08, 2008 - link

    Something very, very important to keep in mind is that Skulltrail is the only board out right now that supports Crossfire and SLI. If AT wants to benchmark both technologies without switching the boards and compromising the results, this is the only board they can use.
  • Cookie Monster - Saturday, March 08, 2008 - link

    No 8800Ultra or GTX Tri-SLI for comparison?
  • DerekWilson - Saturday, March 08, 2008 - link

    we were looking at 2 card configurations here ... i'll check out three and four card configs later
  • JarredWalton - Saturday, March 08, 2008 - link

    Unfortunately, Tri-SLI requires a 780i motherboard. That's fine for Tri-SLI, but CrossFire (and CrossFireX) won't work on 780i AFAIK. I also think Skulltrail may have its own set of issues that prevent things from working optimally - but that's conjecture rather than actual testing. Derek and Anand have Skulltrail; I don't.
  • Slash3 - Saturday, March 08, 2008 - link

    ...graphs are both using the same image. The Oblivion Performance and 4xAA/16AF Performance line graphs (oblivionscale.png) are just duplicates and link to the same file. :)
  • JarredWalton - Saturday, March 08, 2008 - link

    Fixed, thanks.
  • slashbinslashbash - Saturday, March 08, 2008 - link

    Graphics really are fairly unusual in the computing world in that it is easily parallelized. While we're pretty quickly reaching a point of diminishing returns in the number of cores in a general-purpose CPU (8 is more than enough for any current desktop type of usage), the same point has not been reached for graphics. That is why we continue to see increasing numbers of pipelines in individual GPUs, and why we continue to see effective scaling to multiple cards and multiple GPUs per card. As long as there is memory bandwidth to support the GPU power, the GPU looks like it is capable of taking advantage of much more parallelization. I expect 1000+ pipes on a 2-billion-transistor+ GPU by 2011.

    So, I expect multi-GPU to remain with us, but any high-end multi-GPU setup will always be surpassed by a single-GPU solution within a generation or two.
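    The CPU-vs-GPU scaling contrast in the comment above is essentially Amdahl's law: a desktop workload with a meaningful serial fraction hits diminishing returns quickly, while pixel work with an almost-zero serial fraction keeps scaling. A minimal sketch (the 80% and 99.9% parallel fractions are assumed round numbers for illustration):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Amdahl's law: overall speedup on n_units parallel units when only
    parallel_fraction of the work can be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# A desktop app that is ~80% parallel vs. pixel shading that is ~99.9% parallel.
for n in (2, 4, 8, 1000):
    print(n, round(amdahl_speedup(0.80, n), 2), round(amdahl_speedup(0.999, n), 1))
```

    Even with 1000 units, the 80%-parallel workload can never exceed a 5x speedup, while the graphics-like workload is still scaling almost linearly.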
  • DerekWilson - Saturday, March 08, 2008 - link

    that's not the issue ... graphics is infinitely parallelizable ...

    the problems are die size and power.

    beyond a certain die size there is a huge drop off in the amount of money an IHV can make on their silicon. despite the fact that every chip could have been made larger, we are working with engineers, not scientists -- they have a budget.

    multiGPU allows IHVs to improve performance nearly linearly in some cases without the non-linear increase in cost they would see from (nearly) doubling the size of their GPU.


    then there is power. as dies shrink and we can fit more into a smaller space, will GPU makers still be able to make chips as big as R600 was? power density goes way up as die size goes down. power requirements are already crazy, and it could get very difficult to properly dissipate the heat from a chip with a small enough surface area and huge enough power output ...

    but spreading the heat out over two less powerful cards would help handle that.


    in short, multigpu isn't about performance ... it's about engineering, flexibility and profitability. we could always get better performance from a single GPU if it could be built to match the specs of a multiGPU config.
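    Derek's cost argument can be illustrated with a toy Poisson yield model; the defect density, wafer cost, and die areas below are invented round numbers for illustration, not real foundry figures:

```python
import math

def yield_rate(die_area_cm2, defects_per_cm2=0.5):
    # Poisson yield model: probability that a die has zero defects.
    return math.exp(-defects_per_cm2 * die_area_cm2)

def cost_per_good_die(die_area_cm2, wafer_cost=5000.0, wafer_area_cm2=706.9):
    # 300mm wafer area ~= 706.9 cm^2; edge loss ignored for simplicity.
    dies_per_wafer = wafer_area_cm2 / die_area_cm2
    good_dies = dies_per_wafer * yield_rate(die_area_cm2)
    return wafer_cost / good_dies

small = cost_per_good_die(1.9)      # one mid-range die
big = cost_per_good_die(2 * 1.9)    # one die with double the area
print(f"two small dies: ${2 * small:.2f}, one big die: ${big:.2f}")
```

    Doubling the die area halves the dies per wafer and cuts yield exponentially, so the big die ends up costing far more than twice as much - the non-linear cost increase described above.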
  • kilkennycat - Saturday, March 08, 2008 - link

    Multi-chip hybrid substrates with widely-spaced dies can help to spread out the heat rather nicely and help keep the overall yield up too, as Intel has demonstrated with the quad-core Core 2 processors. I fully expect hybrid substrates to become a popular interim solution to the need for massively-parallel processing GPUs - like IBM's 20-chip solution for their big number-crunchers. The hybrid/chip combo architecture can be designed to externally emulate a single GPU. Also a very nice way of adding some extra local memory if necessary.
  • DerekWilson - Saturday, March 08, 2008 - link

    i agree that this is good direction to go, but even with intel we've still got dual socket boards for multicore chips ...

    the real answer for the end user is always to get as fast a single card as possible, and if you need more than one, use as few and as powerful cards as you can.
  • e6600 - Saturday, March 08, 2008 - link

    no crysis benchies?
  • Slash3 - Saturday, March 08, 2008 - link

    Crysis is broken as a benchmark... despite all the pre-release hype, the game seems to scale very badly across multiple cores and multiple GPUs. It's kind of unfortunate, as if there's one game that could benefit from efficient scaling, it's Crysis.
  • JarredWalton - Saturday, March 08, 2008 - link

    I'm curious to see if version 1.2 fixes anything... it might. That just came out yesterday, so I don't think many have had a chance to look at whether or not performance changed.

    [Just checked]

    At least for single GPUs, I see no real change in performance. I haven't had a chance to test multi-GPU, and all I have right now is SLI and CrossFire. Could be that v1.2 will help more with 3-way and 4-way configs. We'll see.
  • DerekWilson - Saturday, March 08, 2008 - link

    there was no perf benefit at all from going to 3 or 4 gpus ... we saw this in our preview and when we tested the 8.3 driver. we mention that on the test page ...
