… and Tearing it Down

Whether or not the problems we saw turn out to be isolated cases, it's always good to be cautious. Now, let's move on to the part that will absolutely affect everyone who runs CrossFireX at some point: uninstalling the hardware. Until now, if we needed to step down to one card, all we had to do was remove the hardware and everything would be fine. AMD tells us that because CrossFireX uses Windows Vista's Linked Display Adapter (LDA) technique to combine multiple physical graphics cards into one virtual device, this is no longer so simple.

Before removing a card from the system, you MUST disable CrossFireX or uninstall the driver. If you do not disable CrossFireX, the driver will fail to load the next time the system boots, and there is no way (as far as we can tell) to disable CrossFireX once the driver fails to load; at that point the driver has to be uninstalled anyway. In short, if you want to keep one card in the system, disable CrossFireX first or you will have to uninstall and reinstall your driver.

We've asked for more details about why this isn't something AMD can handle in the driver, and this is what we understand so far. Once LDA mode is set up, a specific number of physical devices is expected when the driver tries to load. If only one card is present, this looks the same as a failed link, and the driver won't load. AMD has said that this is the expected behavior based on how Vista handles LDA. Our position is that if this is the case, it is a design flaw in Vista that Microsoft needs to address.
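To make the failure mode concrete, here is a minimal sketch of the kind of check that would produce this behavior. The names and structure are hypothetical and are not AMD's actual driver code; the idea is simply that the LDA configuration saved at setup records how many physical adapters belong to the link, and at boot a missing card is indistinguishable from a broken link.

```python
# Hypothetical illustration only -- not AMD driver code.
# The LDA configuration persisted at setup records the expected
# number of physical adapters in the linked virtual device.

def try_load_display_driver(expected_adapters, present_adapters):
    """Sketch of why the driver refuses to load after a card is pulled.

    expected_adapters: count stored when CrossFireX/LDA mode was enabled
    present_adapters:  count of physical cards actually detected at boot
    """
    if present_adapters != expected_adapters:
        # One removed card looks exactly like a broken link, so the
        # load is aborted rather than falling back to a single GPU.
        raise RuntimeError(
            f"LDA link incomplete: expected {expected_adapters} adapters, "
            f"found {present_adapters}; driver will not load"
        )
    return "driver loaded"

# Example: CrossFireX was set up with 2 cards and the user removed one.
# try_load_display_driver(expected_adapters=2, present_adapters=1)
# raises RuntimeError, which is why CrossFireX must be disabled
# (or the driver uninstalled) before pulling the hardware.
```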

When we raised this with AMD, they said they would try to make the documentation on how to properly uninstall hardware as prominent as possible. While that's not an ideal solution, thorough documentation is definitely a good thing to have when problems are this likely to come up.


  • DerekWilson - Saturday, March 8, 2008 - link

    that is key ... as is what ViRGE said above.

    in addition, people who want to run 4 GPUs in a system are not going to be the average gamer. this technology does not offer the return on investment anyone with a midrange system would want. people who want to make use of this will also want to eliminate any other bottlenecks to get the most out of it in their systems.

    not only does skulltrail help us eliminate bottlenecks and look at the potential of the graphics subsystem, in this case i would even make the argument that the system is a good match for the technology.
  • Sind - Saturday, March 8, 2008 - link

    I agree, I don't think Skulltrail is doing anyone any favours as a way to judge how they could utilise these MGPU solutions in an "average" system like the ones Anand readers would be using. X38 seems very popular, as is 780i; I really don't think more than 1% of your traffic would ever utilise the system you used to do this review. I've read the other CrossfireX reviews from around the net, and most had no problems at all; in fact most noted that it worked straight away without the lengthy directions indicated in this article.
  • ViRGE - Saturday, March 8, 2008 - link

    Something very, very important to keep in mind is that Skulltrail is the only board out right now that supports Crossfire and SLI. If AT wants to benchmark both technologies without switching the boards and compromising the results, this is the only board they can use.
  • Cookie Monster - Saturday, March 8, 2008 - link

    No 8800Ultra or GTX Tri-SLI for comparison?
  • DerekWilson - Saturday, March 8, 2008 - link

    we were looking at 2 card configurations here ... i'll check out three and four card configs later
  • JarredWalton - Saturday, March 8, 2008 - link

    Unfortunately, Tri-SLI requires a 780i motherboard. That's fine for Tri-SLI, but CrossFire (and CrossFireX) won't work on 780i AFAIK. I also think Skulltrail may have its own set of issues that prevent things from working optimally - but that's conjecture rather than actual testing. Derek and Anand have Skulltrail; I don't.
  • Slash3 - Saturday, March 8, 2008 - link

    ...graphs are both using the same image. The Oblivion Performance and 4xAA/16AF Performance line graphs (oblivionscale.png) are just duplicates and link to the same file. :)
  • JarredWalton - Saturday, March 8, 2008 - link

    Fixed, thanks.
  • slashbinslashbash - Saturday, March 8, 2008 - link

    Graphics really are fairly unique in the computing world in that they are easily parallelized. While we're pretty quickly reaching a point of diminishing returns in the number of cores in a general-purpose CPU (8 is more than enough for any current desktop type of usage), the same point has not been reached for graphics. That is why we continue to see increasing numbers of pipelines in individual GPUs, and why we continue to see effective scaling to multiple cards and multiple GPUs per card. As long as there is memory bandwidth to support the GPU power, the GPU looks like it is capable of taking advantage of much more parallelization. I expect 1000+ pipes on a 2-billion-transistor+ GPU by 2011.

    So, I expect multi-GPU to remain with us, but any high-end multi-GPU setup will always be surpassed by a single-GPU solution within a generation or two.
  • DerekWilson - Saturday, March 8, 2008 - link

    that's not the issue ... graphics is infinitely parallelizable ...

    the problems are die size and power.

    beyond a certain die size there is a huge drop-off in the amount of money an IHV can make on its silicon. despite the fact that every chip could have been made larger, we are working with engineers, not scientists -- they have a budget.

    multiGPU allows IHVs to improve performance nearly linearly in some cases without the non-linear increase in cost they would see from (nearly) doubling the size of their GPU.

    ...

    then there is power. as dies shrink and we can fit more into a smaller space, will GPU makers still be able to make chips as big as R600 was? power density goes way up as die size goes down. power requirements are already crazy and it could get very difficult to properly dissipate the heat from a chip with a small enough surface area and a huge enough power output ...

    but spreading the heat out over two less powerful cards would help handle that.

    ...

    in short, multigpu isn't about performance ... it's about engineering, flexibility and profitability. we could always get better performance from a single GPU if it could be built to match the specs of a multiGPU config.
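A rough way to put numbers on the die-size cost argument in the comment above is the common exponential defect-density yield approximation. The sketch below uses entirely hypothetical wafer cost, wafer area, and defect density figures; it only illustrates why the cost of one good chip grows faster than linearly with die area, which is the non-linear cost increase that makes two smaller GPUs attractive.

```python
import math

# Hypothetical numbers for illustration; the yield model is the common
# exponential (Poisson) approximation: yield = exp(-defect_density * area).

WAFER_AREA_MM2 = 70000     # roughly a 300 mm wafer, ignoring edge losses
WAFER_COST = 5000          # assumed cost per processed wafer, in dollars
DEFECT_DENSITY = 0.004     # assumed defects per mm^2

def cost_per_good_die(die_area_mm2):
    """Cost of one working chip for a given die area."""
    dies_per_wafer = WAFER_AREA_MM2 / die_area_mm2
    yield_fraction = math.exp(-DEFECT_DENSITY * die_area_mm2)
    return WAFER_COST / (dies_per_wafer * yield_fraction)

small = cost_per_good_die(200)   # a mid-size GPU die
big = cost_per_good_die(400)     # a die with double the area
print(f"200 mm^2 die: ${small:.2f} per good chip")
print(f"400 mm^2 die: ${big:.2f} per good chip "
      f"({big / small:.1f}x the cost, not 2x)")

# Two smaller GPUs on one board cost roughly 2x the small-die price,
# while one doubled-up die costs considerably more than 2x, because
# fewer dies fit per wafer and each one is more likely to be defective.
```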
