X58 Multi-GPU Scaling

While we had some hope earlier in the year of unifying our SLI and CrossFire testbeds under Skulltrail, we had to scrap that project due to numerous difficulties in testing. Today, we have another ray of hope. A single platform that can run both SLI and CrossFire would let us compare multi-GPU scaling more directly, as there would be fewer variables to consider.

So we ran a few tests to see how the new X58 platform handles multi-GPU scaling. We compared CrossFire on X48 and SLI on 790i to multi-GPU scaling on X58 in Oblivion and Enemy Territory: Quake Wars to get a taste of what we should expect. The results are a bit of a mixed bag.

Enemy Territory: Quake Wars

With Enemy Territory: Quake Wars, we see very consistent performance. Our single-card numbers were very close to those we saw on other platforms, and the general degree of scaling was maintained. Both NVIDIA and AMD hardware scaled slightly better on X58 here than on the hardware we had been using. This is a good sign that accurate comparisons and good quality multi-GPU testing may be possible on X58 going forward.
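
To make the comparison concrete, here is a minimal sketch of how a scaling figure like the ones discussed in this section can be computed. The frame rates below are hypothetical placeholders for illustration only, not our measured results.

    # Minimal sketch of deriving a multi-GPU scaling factor.
    # A factor of 2.0 would mean a perfect doubling from the second GPU.
    # All FPS values are hypothetical placeholders, not measured data.
    results = {
        "X58 SLI":       {"single_fps": 60.0, "dual_fps": 108.0},
        "790i SLI":      {"single_fps": 60.0, "dual_fps": 104.0},
        "X58 CrossFire": {"single_fps": 55.0, "dual_fps": 96.0},
        "X48 CrossFire": {"single_fps": 55.0, "dual_fps": 93.0},
    }

    for platform, fps in results.items():
        scaling = fps["dual_fps"] / fps["single_fps"]
        gain = (scaling - 1.0) * 100.0
        print(f"{platform}: {scaling:.2f}x scaling ({gain:.0f}% gain)")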

But then there's the Oblivion test.

Under Oblivion, NVIDIA hardware scaled less on X58, and our AMD tests were ... let's say inconclusive.

Oblivion

We have often had trouble with AMD drivers, especially when looking at CrossFire performance. The method AMD uses to maintain and test its drivers necessitates dropping some games from testing for extended periods of time. As a result, games that used to work well with AMD hardware or scale well with CrossFire can stop performing up to par or stop scaling as well as they should.

Unfortunately, the consistent path to a fix has been for review sites to stumble upon these problems by chance. We usually see quick resolutions to issues like this, but that doesn't change the fact that they shouldn't happen in the first place.

In any event, the past couple of weeks have been more frustrating than usual when testing AMD hardware. We've switched drivers four times during testing, and we still have issues. Yes, three of those four drivers have been hotfix betas, but for people playing Far Cry 2 the hotfix is all they've got, and it's still broken even after three tries.

We certainly know that NVIDIA doesn't have it all right when it comes to drivers either. But we really feel that AMD's monthly driver release schedule wastes resources on unnecessary work that could be better spent benefiting its customers. If hotfix drivers are going to come out anyway, AMD might as well make sure that every full release incorporates all the fixes from every hotfix and doesn't break anything the last driver fixed.

The point of all this: our money is on the lack of scaling under Oblivion being due to some aspect of the beta driver we are using rather than to X58 itself.

As for the NVIDIA results, we're a little more worried about those. It could be that we are seeing a driver issue here as well, but it could just be that Oblivion does something that doesn't scale well with SLI on X58. We were genuinely surprised by this, as we expected comparable scaling. As the drivers mature, we'll definitely test multi-GPU scaling on X58 further.

Comments

  • Kaleid - Monday, November 3, 2008 - link

    http://www.guru3d.com/news/intel-core-i7-multigpu-...
  • bill3 - Monday, November 3, 2008 - link

    Umm, seems the guru3d gains are probably explained by them using a dual-core Core 2 Duo versus a quad-core i7... Quad cores run multi-GPU quite a bit better, I believe.

  • tynopik - Monday, November 3, 2008 - link

    what about those multi-threading tests you used to run, with 20 tabs open in Firefox while running an AV scan while compressing some files while converting something else, etc. etc.?

    this might be more important for day-to-day performance than the standard desktop benchmarks
  • D3SI - Monday, November 3, 2008 - link

    So the low-end i7s are OC'able?

    What the hell is Tom's Hardware talking about? lol
  • conquerist - Monday, November 3, 2008 - link

    Concerning x264, Nehalem-specific improvements are coming as soon as the developers are free from their NDA.
    See http://x264dev.multimedia.cx/?p=40.
  • Spectator - Monday, November 3, 2008 - link

    Can they do some CUDA optimizations? I'm guessing that video hardware has more processors than a quad-core Intel :P

    If all this i7 is new news and does stuff xx faster with 4 cores, how does 100+ core video hardware compare?

    Yes, I'm messing, but giant Intel wants $1k for the best i7 CPU, when the likes of NVIDIA make bigger-transistor-count silicon using a lesser process, and others manufacture the rest of the video card, for $400-500?

    Where is the value for money in that? Chuckle.
  • gramboh - Monday, November 3, 2008 - link

    The x264 team has specifically said they will not be working on CUDA development as it is too time intensive to basically start over from scratch in a more complex development environment.
  • npp - Monday, November 3, 2008 - link

    CUDA optimizations? I bet you don't completely understand what you're talking about. You can't just optimize a piece of software for CUDA; you MUST write it from scratch for CUDA. That's the reason you don't see too much software for nVidia GPUs, even though the CUDA concept was introduced at least two years ago. You have the BadaBOOM stuff, but it's far from mature, and the reason is that writing a sensible application for CUDA isn't exactly an easy task. Take the time to look at how it works and you'll understand why.

    You can't compare the 100+ cores of your typical GPU with a quad core directly; they are fundamentally different in nature, with your GPU "cores" being rather limited in functionality. GPGPU is nice hype, but you simply can't offload everything onto a GPU.

    As a side note, top-notch hardware always carries a price premium, and Intel has had this tradition with high-end CPUs for quite a while now. There are plenty of people who absolutely need the fastest hardware around and won't hesitate to pay for it.
  • Spectator - Monday, November 3, 2008 - link

    Some of us want more info.

    A) How does the integrated thermal sensor work with temps of -50C and beyond?

    B) Can you circumvent the 130W max load sensor?

    C) What are all those connection points on the top of the processor for?

    lol. Where do I put the 2B pencil to join that sht up, so I don't have to worry about multiplier settings or temp sensors or wattage sensors?

    Hey, don't shoot the messenger, but those top-side chip contacts seem very curious and obviously must serve a purpose :P

  • Spectator - Monday, November 3, 2008 - link

    Wait, NO. I have thought about it..

    The contacts on the top side could be for programming the chip's default settings.

    You know it makes sense. Perhaps it's adjustable SRAM-style, rather than burning connections.

    Yes, some technical peeps can look at that. But still, I want the fame for suggesting it first. lmao.

    Have fun. But it does seem logical to build in some scope for alteration; it's a lot easier to manufacture one solid item and then mod your stock to suit the market when you feel it's necessary.

    Spectator.
