X58 Multi-GPU Scaling

While we had some hope earlier in the year of unifying our SLI and CrossFire testbeds under Skulltrail, we had to scrap that project due to numerous testing difficulties. Today, we have another ray of hope. A single platform that can run both SLI and CrossFire would let us compare multi-GPU scaling more directly, since there would be fewer variables to consider.

So we ran a few tests today to see how the new X58 platform handles multi-GPU scaling. We compared CrossFire on X48 and SLI on 790i to multi-GPU scaling on X58 in Oblivion and Enemy Territory: Quake Wars to get a taste of what we should expect, and the results are a bit of a mixed bag.

Enemy Territory: Quake Wars

With Enemy Territory: Quake Wars, we see very consistent performance. Our single card numbers were very close to what we saw on other platforms, and the general degree of scaling was maintained. Both NVIDIA and AMD hardware actually scaled slightly better on X58 here than on the platforms we had been using. This is a good sign that accurate comparisons and good quality multi-GPU testing should be possible on X58 going forward.
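A quick note on how we quantify things: by "scaling" we simply mean the ratio of multi-GPU frame rate to single-GPU frame rate. The short Python sketch below walks through that arithmetic; the frame rates in it are made up purely to illustrate the calculation and are not our benchmark results.

    # Illustrative only: how multi-GPU "scaling" is computed.
    # The frame rates below are hypothetical, not measured results.

    def scaling_factor(single_gpu_fps, dual_gpu_fps):
        """Dual-card speedup over a single card (2.0 would be perfect scaling)."""
        return dual_gpu_fps / single_gpu_fps

    # e.g. a single card at 60 fps and two cards at 102 fps -> 1.7x speedup,
    # i.e. a 70% gain from adding the second GPU
    speedup = scaling_factor(60.0, 102.0)
    print("Speedup: %.2fx (%.0f%% scaling gain)" % (speedup, (speedup - 1) * 100))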

But then there's the Oblivion test.

Under Oblivion, NVIDIA hardware scaled less on X58, and our AMD tests were ... let's say inconclusive.

Oblivion

We have often had trouble with AMD drivers, especially when looking at CrossFire performance. The way AMD maintains and tests its drivers means that some games drop out of the test rotation for extended periods of time. This can result in games that used to work well with AMD hardware, or that used to scale well with CrossFire, no longer performing up to par or scaling as well as they should.

The fix, unfortunately, has consistently come only after review sites randomly stumble upon these problems. We usually see issues like this resolved very quickly, but that doesn't change the fact that they shouldn't happen in the first place.

In any event, the past couple of weeks have been more frustrating than usual when testing AMD hardware. We've switched drivers four times during testing, and we still have issues. Yes, three of those four drivers have been hotfix betas, but for people playing Far Cry 2 the hotfix is all they've got, and it's still broken even after three tries.

We certainly know that NVIDIA doesn't get everything right when it comes to drivers either. But we really feel like AMD's monthly driver release schedule wastes resources on unnecessary work that could be better used to benefit their customers. If hotfix drivers are going to come out anyway, AMD might as well make sure that every full release incorporates all the fixes from every hotfix and doesn't break anything the last driver fixed.

The point of all this is that our money is on the lack of scaling under Oblivion being caused by some aspect of the beta driver we are using rather than by X58 itself.

As for the NVIDIA results, we're a little more worried about those. It could be that we are also seeing a driver issue here, but it could just be that Oblivion does something that doesn't scale well with SLI on X58. We were really surprised to see this, as we expected comparable scaling. As the drivers mature, we'll definitely investigate multi-GPU scaling on X58 further.

Comments

  • Clauzii - Thursday, November 6, 2008 - link

    I still use PS/2. None of the USB keyboards I've borrowed or tried out would work in 'boot'. Also, I think a PS/2 keyboard/mouse doesn't lag as much, maybe because it has its own non-shared interrupt line.

    But I can see a problem with PS/2 in the future, with keyboards like the Art Lebedev ones. When that technology gets more pocket friendly, I'd gladly like to see upgraded but still dedicated keyboard/mouse connectors.
  • The0ne - Monday, November 3, 2008 - link

    Yes. I have the PS2 keyboard on-hand in case my USB keyboard can't get in :)
  • Strid - Monday, November 3, 2008 - link

    Ahh, makes sense. Thanks for clarifying!
  • Genx87 - Monday, November 3, 2008 - link

    After living through the hell that was ATI drivers back in 2003-2004 on a 9600 Pro AIW, I didn't learn, and I plopped money down on a 4850 and have had terrible driver quality since. More BSODs from the ATI driver than I have had in Windows in the past 5 years combined from anything. Back to NVIDIA for me when I get a chance.

    That said, this review is pretty much what I expected after reading the preview article in August. They are really trying to recapture market share in the 4-socket space, a place where AMD has been able to do well. This chip is designed for server work. I'll pick one up after my E8400 runs out of steam.
  • Griswold - Tuesday, November 4, 2008 - link

    You're just not clever enough to set up your system properly. I have two identical systems sitting here side by side, with the only difference being the video card (HD 3870 in one and an 8800 GT in the other), and the box with the NVIDIA card gives me an order of magnitude more headaches due to the crashing driver. While that also happens on the 3870 machine now and then, it's nowhere near as often. But the best part: neither of them produces a BSOD. That is why I know you're most likely the culprit (the alternative is faulty hardware or a pathetic overclock).
  • Lord 666 - Monday, November 3, 2008 - link

    The stock speed of a Q9550 is 2.83GHz, not 2.66GHz.

    Why the handicap?
  • Anand Lal Shimpi - Monday, November 3, 2008 - link

    My mistake, it was a Q9450 that was used. The Q9550 label was from an earlier version of the spreadsheet that got canned due to time constraints. I wanted a clock-for-clock comparison with the i7-920 which runs at 2.66GHz.

    Take care,
    Anand
  • faxon - Monday, November 3, 2008 - link

    Tom's Hardware published an article detailing that there would be a cap on how high you are allowed to clock your part before it would downclock itself back to stock. Since this is an integrated part of the core, you can only turn it off/up/down if they unlock it. The limit was supposedly a 130 watt thermal dissipation mark. What effect did this have in your tests on overclocking the 920?
  • Gary Key - Monday, November 3, 2008 - link

    We have not had any problems clocking our 920 to the 3.6GHz~3.8GHz level with proper cooling. The 920, 940, and 965 will all clock down as core temps increase above the 80C level. We noticed half step decreases above 80C or so and watched our core multipliers throttle down to as low as 5.5 when core temps exceeded 90C and then increase back to normal as temperatures were lowered.

    This occurred with stock voltages or with the VCore set to 1.5V, it was dependent on thermals, not voltages or clock speeds in our tests. That said, I am still running a battery of tests on the 920 right now, but I have not seen an artificial cap yet. That does not mean it might not exist, just that we have not triggered it yet.

    I will try the 920 on the Intel board that Toms used this morning to see if it operates any differently than the ASUS and MSI boards.
  • Th3Eagle - Monday, November 3, 2008 - link

    I wonder how close you came to those temperatures while overclocking these processors.

    The 920 to 3.6/3.8 is a nice overclock but I wonder what you mean by proper cooling and how close you came to crossing the 80C "boundary"?
