X58 Multi-GPU Scaling

While we had some hope earlier in the year of unifying our SLI and CrossFire testbed under Skulltrail, we had to scrap that project due to numerous difficulties in testing. Today, we have another ray of hope. Having a single platform that can run both SLI and CrossFire would give us a better ability to compare multi-GPU scaling, as there would be fewer variables to consider.

So we ran a few tests today to see how the new X58 platform handles multi-GPU scaling. We compared CrossFire on X48 and SLI on 790i to multi-GPU scaling on X58 in Oblivion and Quake Wars to get a taste of what we should expect. The results are a bit of a mixed bag.
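For clarity, when we talk about the "degree of scaling," we simply mean the multi-GPU frame rate relative to the single-card frame rate on the same platform. Below is a minimal sketch of that calculation in Python, using made-up frame rates rather than our measured results:

    # Minimal sketch: quantifying multi-GPU scaling from benchmark frame rates.
    # The frame rates used here are hypothetical placeholders, not measured results.

    def scaling_factor(single_gpu_fps: float, multi_gpu_fps: float) -> float:
        # 1.0 means no gain from the second GPU; 2.0 means perfect dual-GPU scaling.
        return multi_gpu_fps / single_gpu_fps

    speedup = scaling_factor(single_gpu_fps=60.0, multi_gpu_fps=105.0)
    print(f"Scaling: {speedup:.2f}x ({(speedup - 1.0) * 100:.0f}% gain from the second card)")

A result near 2.0x would indicate near-perfect scaling, while anything close to 1.0x means the second card is adding little; it is this ratio we compare across platforms.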

Enemy Territory: Quake Wars

With Enemy Territory: Quake Wars, we see very consistent performance. Our single-card numbers were very close to the numbers we saw on other platforms, and the general degree of scaling was maintained. Both NVIDIA and AMD hardware scaled slightly better on X58 here than on the hardware we had been using. This is a good sign that accurate comparisons and good quality multi-GPU testing should be possible on X58 going forward.

But then there's the Oblivion test.

Under Oblivion, NVIDIA hardware scaled less well on X58, and our AMD tests were ... let's say inconclusive.

Oblivion

We have often had trouble with AMD drivers, especially when looking at CrossFire performance. The method AMD uses to maintain and test its drivers necessitates dropping some games from testing for extended periods of time. This can sometimes cause games that used to work well with AMD hardware, or scale well with CrossFire, to stop performing up to par or to stop scaling as well as they should.

Unfortunately, the way these problems get caught is for review sites to randomly stumble upon them. We usually see resolutions to issues like this very quickly, but that doesn't change the fact that they shouldn't happen in the first place.

In any event, the past couple of weeks have been more frustrating than usual when testing AMD hardware. We've switched drivers four different times in testing, and we still have issues. Yes, three of those four drivers have been hotfix betas, but for people playing Far Cry 2 the hotfix is all they've got, and it's still broken even after three tries.

We certainly know that NVIDIA doesn't have it all right when it comes to drivers either. But we really feel like AMD's monthly driver release schedule wastes resources on unnecessary work that could be better used to benefit its customers. If hotfix drivers are going to come out anyway, AMD might as well make sure that every full release incorporates all the fixes from every hotfix and doesn't break anything the last driver fixed.

The point of all this is that our money is on the lack of scaling under Oblivion being caused by some aspect of the beta driver we are using, rather than by X58 itself.

As for the NVIDIA results, we're a little more worried about those. It could be that we are also seeing a driver issue here, but it could just be that Oblivion does something that doesn't scale well with SLI on X58. We were really surprised to see this, as we expected comparable scaling. As the drivers mature, we'll definitely investigate multi-GPU scaling on X58 further.

Comments

  • npp - Tuesday, November 4, 2008 - link

    Well, the funny thing is THG got it all messed up, again - they posted a large "CRIPPLED OVERCLOCKING" article yesterday, and today I saw a kind of apology from them - they seem to have overlooked a simple BIOS switch that prevents the load through the CPU from rising above 100A. Having a month to prepare the launch article, they didn't even bother to tweak the BIOS a bit. That's why I'm not taking their articles seriously, not because they are biased towards Intel or AMD - they are simply not up to the standards (especially those here @anandtech).
  • gvaley - Tuesday, November 4, 2008 - link

    Now give us those 64-bit benchmarks. We already knew that Core i7 would be faster than Core 2; we even knew how much faster.
    Now, it was expected that 64-bit performance would be better on Core i7 than on Core 2. Is that true? Draw a parallel between the following:

    Performance jump from 32- to 64-bit on Core 2
    vs.
    Performance jump from 32- to 64-bit on Core i7
    vs.
    Performance jump from 32- to 64-bit on Phenom
  • badboy4dee - Tuesday, November 4, 2008 - link

    and what are those numbers on the charts there? Are they frames per second? Higher is better then, if that's what they are. Charts need more detail or explanation to them dude!

    TSM
  • MarchTheMonth - Tuesday, November 4, 2008 - link

    I don't believe I saw this anywhere else, but are the cooler mounting spots on the mobo the same as on LGA 775, i.e. can we use (non-Intel) coolers that exist now with the new socket?
  • marc1000 - Tuesday, November 4, 2008 - link

    no, the new socket is different. The holes are 80mm apart; on Socket 775 they were 72mm apart.
  • Agitated - Tuesday, November 4, 2008 - link

    Any info on whether these parts provide an improvement on virtualized workloads, or maybe what the various VM companies have planned for optimizing their current software for Nehalem?
  • yyrkoon - Tuesday, November 4, 2008 - link

    Either I am not reading things correctly, or the 130W TDP does not look promising for an end user such as myself who requires/wants a low-powered, high-performance CPU.

    The future in my book is using less power, not more, and Intel does not right now seem to be going in this direction. To top things off, the performance increase does not seem to be enough to justify this power increase.

    Being completely off grid (100% solar / wind power), there seem to be very few options . . . I would like to see this change. Right now as it stands, sticking with the older architecture seems to make more sense.
  • 3DoubleD - Tuesday, November 4, 2008 - link

    130W TDP isn't much worse than previous generations of quad-core processors, which were ~100W TDP. Also, TDP isn't a measure of power usage, but of the thermal dissipation a system must provide to keep the operating temperature below a set value (e.g. Tjmax). So if Tjmax is lower for i7 processors than it is for past quad cores, the chip may use the same amount of power but have a higher TDP requirement. The article indicates that power draw has increased, but usually with a large increase in performance. Page 9 of the article shows that this chip has a greater performance/watt than its predecessors by a significant margin.

    If you are looking for something that is extremely low power, you shouldn't be looking at a quad-core processor. Go buy a laptop (or an EeePC-type laptop with an Atom processor). Intel has kept true to its promise of a 2% performance increase for every 1% power increase (i.e. a higher performance-per-watt value).

    Also, you would probably save more power overall if you just hibernate your computer when you aren't using it.
  • Comdrpopnfresh - Monday, November 3, 2008 - link

    Do the cores have access to one another's L2? If so, is it direct, through QPI, or through L3?
    Also, is the L2 inclusive in the L3, i.e. does the L3 contain the L2 data?
  • xipo - Monday, November 3, 2008 - link

    I know games are not the strong area of Nehalem, but there are 2 games I'd like to see tested: Unreal Tournament 3 and Half-Life 2: Episode Two... just to know how Nehalem handles those 2 engines ;D
