Final Words

Is multi-GPU really the future? Maybe. Advanced rendering techniques will increasingly rely on persistent dynamic objects created on the fly, however, and because those objects introduce dependencies between frames, they will likely continue to get in the way of scaling beyond two GPUs unless something fundamental changes in GPU design or graphics memory architecture is expanded to allow easier sharing of the workload. We can also take away from all this that it is a much better idea to have a smaller number of high-powered GPUs doing the work, as the return on investment for two fast GPUs will be higher than adding more copies of a lesser card to a system.
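
To make that scaling wall concrete, here is a minimal toy model - our own illustration with assumed numbers, not vendor code - of why an object that persists from one frame to the next hurts alternate frame rendering (AFR): once frame N reads what frame N-1 wrote, frames can no longer be processed fully in parallel.

```cpp
// afr_scaling.cpp -- toy Amdahl-style model of alternate frame rendering.
// All numbers are illustrative assumptions, not measurements.
#include <algorithm>
#include <cstdio>

int main() {
    const double frame_ms  = 20.0; // total GPU work per frame (assumed)
    const double serial_ms = 8.0;  // work tied to the previous frame's output,
                                   // e.g. a render target that persists
                                   // from frame to frame (assumed)

    for (int gpus = 1; gpus <= 4; ++gpus) {
        // AFR spreads whole frames across GPUs, so frame throughput is
        // limited both by total work divided across GPUs and by the
        // cross-frame serial chain every frame must wait on.
        double ms_per_frame = std::max(frame_ms / gpus, serial_ms);
        printf("%d GPU(s): %5.1f ms/frame  %4.0f fps  (%.2fx scaling)\n",
               gpus, ms_per_frame, 1000.0 / ms_per_frame,
               frame_ms / ms_per_frame);
    }
}
```

In this toy model two GPUs scale perfectly, but the cross-frame dependency caps three and four GPUs at the same frame rate - exactly the kind of wall described above.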

Then of course there are the ubiquitous driver questions. It is one thing to make a driver that works with the current graphics libraries; it's another to make it work optimally in all situations. Taking a (nearly) fully optimized driver and extending support to two GPUs adds another level of complexity, and we're still seeing numerous titles ship without properly working SLI and/or CrossFire support until after a driver update (and sometimes a game patch as well). The situation with two GPUs is actually quite good these days, particularly if you're willing to search forums and other technical sites for tweaks that help performance before the inevitable new drivers arrive. Unfortunately, CrossFireX takes a step back to the earlier days of CrossFire, and there are numerous titles where scaling is either nonexistent or much lower than we would expect. As we've seen in today's results, drivers matter - no other aspect of graphics is as likely to help (or hurt) performance.

Drivers aren't the only concern at present, of course. The installation process for CrossFireX is far more involved - with more potential conflicts - than any other graphics solution we've used recently. Some of the issues we experienced may be related to the Skulltrail platform rather than stemming specifically from CrossFireX; that's something we will only be able to understand with additional time. If you want our honest opinion, right now CrossFireX is sitting way out on the razor-sharp bleeding edge. When it works, the results can be very impressive. When it doesn't, the headaches, BSODs, uninstalling, reinstalling, and hacking that may be required are enough to make the best geeks cower in fear.

For now, the real benefit of multi-GPU is still in achieving higher levels of AA with little to no performance decrease. We've already looked at ATI's Super AA, and one feature that didn't quite make it into Catalyst 8.3 is worth revisiting when it arrives: an upcoming driver will let us enable edge detection with Super AA, which should deliver some insanely high quality antialiasing. While we tend to lean towards higher resolutions as a better option than higher AA modes (and suggest 4xAA with transparency AA as a good target), we will certainly look at this feature when it becomes available.

We aren't done with our investigation of multi-GPU technology and the impact of more than two GPUs on a system. There are issues we definitely need to go back and investigate further. For instance, does the ability to render four frames ahead significantly affect input lag? If multi-GPU technology really is the future of high-end graphics, we need to assess its current shortcomings so we can understand what direction will get us there.
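
As a rough starting point on the input lag question, here is a back-of-the-envelope sketch (frame rate and queue depths are assumptions) of how a render-ahead queue translates into added latency:

```cpp
// renderahead_lag.cpp -- rough input lag estimate for a render-ahead queue.
// Illustrative arithmetic only; real latency also includes the display,
// input sampling, and driver behavior we are not modeling here.
#include <cstdio>

int main() {
    const double fps = 60.0;              // assumed steady frame rate
    const double frame_ms = 1000.0 / fps; // ~16.7 ms per frame

    for (int frames_ahead = 1; frames_ahead <= 4; ++frames_ahead) {
        // Input sampled for a frame at the back of the queue is not seen
        // on screen until every queued frame ahead of it has displayed.
        double added_lag_ms = frames_ahead * frame_ms;
        printf("%d frame(s) ahead: up to ~%.0f ms of added lag\n",
               frames_ahead, added_lag_ms);
    }
}
```

At a steady 60 fps, a four-frame queue can add on the order of 67 ms before the image reaches the display - large enough to feel, and worth measuring rather than guessing at.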

Comments

  • DerekWilson - Saturday, March 8, 2008 - link

    that is key ... as is what ViRGE said above.

    in addition, people who want to run 4 GPUs in a system are not going to be the average gamer. this technology does not offer the return on investment anyone with a midrange system would want. people who want to make use of this will also want to eliminate any other bottlenecks to get the most out of it in their systems.

    not only does skulltrail help us eliminate bottlenecks and look at the potential of the graphics subsystem, but in this case i would even make the argument that the system is a good match for the technology.
  • Sind - Saturday, March 8, 2008 - link

    I agree. I don't think Skulltrail does anyone any favours when it comes to judging how these MGPU solutions would behave in the "average" system an AnandTech reader would actually be using. X38 seems very popular, as does 780i; I really don't think even 1% of your traffic would ever use the system featured in this review. I've read the other CrossFireX reviews from around the net, and most had no problems at all; in fact, most noted that it worked straight away without any of the lengthy steps this article needed to get it working.
  • ViRGE - Saturday, March 8, 2008 - link

    Something very, very important to keep in mind is that Skulltrail is the only board out right now that supports Crossfire and SLI. If AT wants to benchmark both technologies without switching the boards and compromising the results, this is the only board they can use.
  • Cookie Monster - Saturday, March 8, 2008 - link

    No 8800Ultra or GTX Tri-SLI for comparison?
  • DerekWilson - Saturday, March 8, 2008 - link

    we were looking at 2 card configurations here ... i'll check out three and four card configs later
  • JarredWalton - Saturday, March 8, 2008 - link

    Unfortunately, Tri-SLI requires a 780i motherboard. That's fine for Tri-SLI, but CrossFire (and CrossFireX) won't work on 780i AFAIK. I also think Skulltrail may have its own set of issues that prevent things from working optimally - but that's conjecture rather than actual testing. Derek and Anand have Skulltrail; I don't.
  • Slash3 - Saturday, March 8, 2008 - link

    ...graphs are both using the same image. The Oblivion Performance and 4xAA/16AF Performance line graphs (oblivionscale.png) are just duplicates and link to the same file. :)
  • JarredWalton - Saturday, March 8, 2008 - link

    Fixed, thanks.
  • slashbinslashbash - Saturday, March 8, 2008 - link

    Graphics really are fairly unusual in the computing world in that they are easily parallelized. While we're pretty quickly reaching a point of diminishing returns on the number of cores in a general-purpose CPU (8 is more than enough for any current desktop usage), the same point has not been reached for graphics. That is why we continue to see increasing numbers of pipelines in individual GPUs, and why we continue to see effective scaling to multiple cards and multiple GPUs per card. As long as there is memory bandwidth to support the GPU power, the GPU looks capable of taking advantage of much more parallelization. I expect 1000+ pipes on a 2-billion-transistor+ GPU by 2011.

    So, I expect multi-GPU to remain with us, but any high-end multi-GPU setup will always be surpassed by a single-GPU solution within a generation or two.
  • DerekWilson - Saturday, March 8, 2008 - link

    that's not the issue ... graphics is infinitely parallelizable ...

    the problems are die size and power.

    beyond a certain die size there is a huge drop off in the amount of money an IHV can make on their silicon. despite the fact that every chip could have been made larger, we are working with engineers, not scientists -- they have a budget.

    multiGPU allows IHVs to improve performance nearly linearly in some cases without the non-linear increase in cost they would see from (nearly) doubling the size of their GPU.

    ...

    then there is power. as dies shrink and we can fit more into a smaller space, will GPU makers still be able to make chips as big as R600 was? power density goes way up as die size goes down. power requirements are already crazy, and it could get very difficult to properly dissipate the heat from a chip with a small enough surface area and a huge enough power output ...

    but spreading the heat out over two less powerful cards would help handle that.

    ...

    in short, multigpu isn't about performance ... it's about engineering, flexibility and profitability. we could always get better performance from a single GPU if it could be built to match the specs of a multiGPU config.
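
slashbinslashbash's point that graphics parallelizes easily can be made concrete with a minimal sketch (our illustration, not production code): per-pixel shading has no cross-pixel dependencies, so rows of the framebuffer can be handed to as many workers as are available.

```cpp
// parallel_shade.cpp -- per-pixel work parallelizes trivially across threads.
// A minimal illustration of why graphics keeps scaling with more pipelines.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

const int W = 1920, H = 1200;
std::vector<unsigned> framebuffer(W * H);

// Stand-in "shader": each pixel depends only on its own coordinates.
unsigned shade(int x, int y) { return (x * 31u) ^ (y * 17u); }

void shadeRows(int y0, int y1) {
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < W; ++x)
            framebuffer[y * W + x] = shade(x, y);
}

int main() {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    // No pixel reads another pixel's result, so any row split is safe --
    // the same property GPUs exploit with hundreds of pipelines.
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(shadeRows, H * i / n, H * (i + 1) / n);
    for (auto& t : pool) t.join();
    printf("shaded %dx%d across %u threads\n", W, H, n);
}
```

No pixel ever reads another pixel's result, which is the same property that lets GPUs keep adding pipelines - and cards - productively.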
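
Derek's cost and power arguments above can likewise be sketched with a toy model. The yield formula below is the standard Poisson approximation; the defect density, wafer cost, and die sizes are invented for illustration.

```cpp
// die_economics.cpp -- toy model behind "non-linear cost from doubling a GPU".
// Defect density, die areas, and wafer cost are assumptions for illustration;
// the yield formula is the textbook Poisson approximation.
#include <cmath>
#include <cstdio>

int main() {
    const double wafer_cost  = 5000.0; // $ per processed wafer (assumed)
    const double wafer_area  = 706.9;  // cm^2 for a 300 mm wafer
    const double defect_dens = 0.5;    // defects per cm^2 (assumed)

    for (double die_cm2 : {2.0, 4.0}) {
        double yield = std::exp(-defect_dens * die_cm2); // Poisson yield
        double dies  = wafer_area / die_cm2;             // ignores edge loss
        double cost_per_good = wafer_cost / (dies * yield);
        printf("%.0f cm^2 die: yield %.0f%%, ~$%.0f per good die\n",
               die_cm2, 100 * yield, cost_per_good);
    }
    // Power density: shrink the same 150 W chip from 4 cm^2 to 2 cm^2 and
    // the heat flux doubles -- Derek's second point.
    printf("150 W: %.1f W/cm^2 at 4 cm^2 vs %.1f W/cm^2 at 2 cm^2\n",
           150.0 / 4.0, 150.0 / 2.0);
}
```

Doubling die area in this model roughly quintuples the cost per good die, and halving a chip's area at constant power doubles its heat flux - the two non-linearities that make a pair of mid-size GPUs attractive to an IHV.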
