The Test

Our test configuration was as follows:

AMD Athlon 64 FX-55 (2.6GHz)

MSI K8N Neo4 Platinum/SLI

2 x 512MB Corsair DDR400

NVIDIA Graphics Cards:

NVIDIA GeForce 6600GT x 2
NVIDIA GeForce 6800GT x 2

NVIDIA 66.75 Drivers

Windows XP with DirectX 9.0c

Because of our limited time, and because we were thousands of miles away from our labs, we could only test the cards MSI had on hand at the time, which were NVIDIA-only.


Comments

  • PrinceGaz - Saturday, October 30, 2004 - link

    The geometry processing *should* be shared between the cards to some extent when the screen is divided between them, since each card only renders a certain area of the screen and can skip polygons that fall entirely outside that area. At least that's what I imagine happens, but I don't know for sure (a rough sketch of the idea follows at the end of this comment). And when each card renders alternate frames, the geometry calculation is effectively shared completely between the cards, as each has twice as long to work on it while maintaining the same framerate.

    As for the 105% improvement of the 6800GT SLI over a single 6800GT in Far Cry at 1600x1200, all I can say is: No! It's against the laws of physics! That, or the drivers are doing something fishy.
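
    A minimal sketch (in Python) of the region-skip idea in the comment above, assuming a simple horizontal split; the Triangle type, the band test, and all numbers are illustrative guesses, not NVIDIA's actual implementation:

        from dataclasses import dataclass

        @dataclass
        class Triangle:
            ys: tuple  # screen-space y coordinates of the three vertices

        def cull_for_band(triangles, band_top, band_bottom):
            # Keep only triangles that overlap this card's band of the screen;
            # anything wholly above or below the band is skipped entirely,
            # saving this card the geometry work for those polygons.
            return [t for t in triangles
                    if max(t.ys) >= band_top and min(t.ys) <= band_bottom]

        # Example: card A owns the top half of a 1200-line screen, card B the rest.
        scene = [Triangle(ys=(10, 40, 80)), Triangle(ys=(900, 950, 1100))]
        card_a = cull_for_band(scene, 0, 599)     # keeps only the first triangle
        card_b = cull_for_band(scene, 600, 1199)  # keeps only the second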
  • stephenbrooks - Saturday, October 30, 2004 - link

    --[I do not need a 21 years young webmaster’s article as a reference for making these claims,]--

    So what's wrong with 21 y/o webmasters then, HUH? I could be very offended by that. :)

    I'm surprised no one has picked up on this before, but I just love the 105% improvement of the 6800GT in Far Cry at the highest resolution. I'm surprised because normally, when a review has something like that in it, a load of people turn up and say "No! It's against the laws of physics!" Well, it isn't _technically_, but it makes you wonder what on Earth has gone on in that setup to make two cards more efficient per transistor than one (a quick sanity check on the arithmetic follows this comment).

    [Incidentally, does anyone here know _for sure_ whether or not the geometry calculation is shared between these cards via SLI?]
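
    For what it's worth, a quick sanity check on the arithmetic behind a "105% improvement" (the frame rates below are made up for illustration; the point is only that anything over 100% means more than double the single-card rate):

        single_fps = 40.0
        sli_fps = 82.0  # hypothetical: 105% higher than one card

        improvement = (sli_fps - single_fps) / single_fps  # 1.05, i.e. 105%
        per_card = sli_fps / (2 * single_fps)              # 1.025: each card is
                                                           # somehow 2.5% more
                                                           # effective than alone
        print(f"improvement: {improvement:.0%}, per-card scaling: {per_card:.1%}")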
  • PrinceGaz - Saturday, October 30, 2004 - link

    #61 Sokaku - your understanding of the new form of SLI (Scalable Link Interface) is incorrect. You are referring to Scan-Line Interleave, which was used with two Voodoo 2 cards.

    Using Scalable Link Interface, one card renders the upper part of the screen and the other card renders the lower part. Note that I say "part" of the screen instead of "half": the amount of the screen rendered by each card varies with scene complexity, so that each card carries a roughly equal load. If most of the action is occurring low down, the first card may render the upper two-thirds while the second card only does the lower third.

    The current form of SLI can also be used in an alternative mode where each card renders every other frame, so card A does frames 1, 3, 5, etc., while card B does frames 2, 4, 6, etc. (both modes are sketched after this comment).

    However, regardless of which method is used, SLI is only really viable with top-end cards such as the 6800 Ultra or 6800GT. It doesn't make sense to buy a second 6600GT later, when better cards will be available more cheaply, or to buy two 6600GTs together now, when a single 6800GT would be a better choice. The $800+ required for two SLI graphics cards means only a tiny minority will ever use it (though some fools will no doubt go and buy a second 6600GT a year later).
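
    A rough sketch of the two modes described above; the feedback rule for moving the split line is an illustrative guess, not NVIDIA's actual load-balancing algorithm:

        def rebalance_split(split, ms_top, ms_bottom, step=0.02):
            # Split-frame rendering: 'split' is the fraction of the screen
            # (from the top) rendered by card A. Nudge the dividing line
            # away from whichever card took longer on the last frame, so
            # both finish the next frame in roughly the same time.
            if ms_top > ms_bottom:
                split -= step   # card A overloaded: shrink its band
            elif ms_bottom > ms_top:
                split += step   # card B overloaded: grow card A's band
            return min(max(split, 0.1), 0.9)

        def card_for_frame(frame):
            # Alternate-frame rendering: card A takes odd frames, card B even.
            return "A" if frame % 2 else "B"

        # e.g. card A spent 9 ms on the top band and card B 5 ms on the bottom:
        new_split = rebalance_split(0.5, 9.0, 5.0)  # -> 0.48, card A's band shrinks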
  • Sokaku - Saturday, October 30, 2004 - link

    #49 "Do you have some reference that actually states this? Seems to me like it's just a blatant guess with not a lot of thought behind it.":

    I often wonder why people are this rude on the net; probably because they don't sit face to face with the ones they talk to. I do not need a 21 years young webmaster’s article as a reference for making these claims; I think, therefore I claim. And if you want a conversation beyond this point, sober up your language and tone.

    In an SLI configuration, card 1 renders the 1st scan line, card 2 the 2nd scan line, card 1 the third, card 2 the fourth, and so on.

    It is done this way because it's easier to keep the cards synchronized. If you had card 1 render the left half and card 2 render the right, card 1 might lag seriously whenever the left part of the scene is vastly more complex than the right part.

    So, in SLI both cards need to do the complete geometry processing for the entire frame. When the cards then render the pixels, they only have to do half each.

    Thus, a card needs a geometry engine that is twice as fast (at a given target resolution) as its own pixel rendering capacity would require, because the geometry engine must be prepared for the SLI situation.

    If the geometry engine were exactly matched to the pixel rendering capacity (yeah, I know, it all depends on the game and so on), you wouldn't gain anything from an SLI configuration, because the geometry engine would be the bottleneck.

    This didn't matter at all back in the Voodoo2 days, because all the geometry calculations were done by the CPU; the cards only did the rendering, so nothing was "wasted". Now the CPU offloads those calculations to the GPU, hence the need for twice the triangle calculation capacity on the GPU (a back-of-the-envelope model follows this comment).
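
    A back-of-the-envelope version of the argument above, assuming (purely for illustration) that a frame costs G ms of geometry work, which is duplicated on both cards under a scan-line or split-screen scheme, plus P ms of pixel work, which is shared:

        def sli_speedup(G, P, cards=2):
            # Single card: G + P per frame. Two cards sharing only the pixel
            # work: G + P/cards per frame, so geometry caps the possible gain.
            return (G + P) / (G + P / cards)

        print(sli_speedup(G=5, P=20))   # pixel-bound frame: ~1.67x from SLI
        print(sli_speedup(G=20, P=5))   # geometry-bound: ~1.11x, SLI barely helps
        print(sli_speedup(G=0, P=25))   # CPU does geometry (Voodoo2-style): 2.0x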
  • AtaStrumf - Saturday, October 30, 2004 - link

    First of all congrats to Anand for this exclusive preview!

    Now, to those who think the next GPU will outperform a 6800GT SLI setup: you must have been living under a rock for the last 2 years.

    How much faster is the 9800XT than the 9700 Pro? Not nearly as much as two 6800GTs are faster than a single 6800GT, is the correct answer!

    Now consider that the 9700 Pro and 9800XT are 3 generations apart, while the 6800GT and 6800GT SLI are, let's say, one generation apart in terms of time to market.

    How can you complain about that?!?! And don't forget that 6800GT is not a $500 high end card!

    If you get one 6800GT now and another in, say, 6 months, you're still way ahead performance-wise compared to buying the next top-of-the-line $500 GPU; you only spend about $200 more, and you get to do it in two payments.

    This is definitely a good thing, as long as there are no driver/chipset problems.

    Last but not least: just as we have seen with CPUs, GPUs will probably also hit the wall with the next generation, and SLI is to GPUs what dual core is to CPUs, only a hell of a lot better.

    My only gripe is that SLI chipset availability will probably be a big problem for quite some time to come, and I would not buy the first revision, so add an additional 4 months before this is a viable solution.

    Being stuck on S754 and AGP may seem like a problem, but I intend to buy a 6800GT AGP sometime next year and wait all this SLI/90nm/dual core business out. I'll let others be the free beta testers :-)
  • thomas35 - Saturday, October 30, 2004 - link

    Pretty cool review once again.

    Though most people seem to miss one big, glaring thing: SLI, while a nice, pretty toy for games, is going to have a huge impact on the 3D animation industry. Modern video cards have hugely powerful processors on them, but because AGP isn't duplex and can't work with more than one GPU at a time, video-card-based hardware rendering isn't used much. Now, with PCI-e and SLI, I can take two powerful professional cards (FireGLs and Quadros) and have them start helping to render animations. This means that rather than adding extra computers to cut down render times, I simply add a couple of video cards. And that, in turn, means I have more time to develop a high-quality animation than I would have had in the past.

    So in the middle of next year, I'll be buying a dual (dual-core) CPU system with dual video cards to render on, and I'll have the same power as I get from the 6 computers I own now in a standard ATX case.
  • Nick4753 - Saturday, October 30, 2004 - link

    Thanks Anand!!!!
  • Reflex - Saturday, October 30, 2004 - link

    Ghandi - Actually, the nVidia SLI solution should outperform X2 in most scenarios, because their driver intelligently load-balances the rendering job across both graphics chips rather than simply splitting the scene in half and giving each chip a half. Much of the time there is more action in one part of a scene than the other, so the second card would be largely wasted at those times. On average, the SLI solution should outperform X2.

    X2 has other drawbacks as well. Few techies really want to buy a complete system, preferring to build their own. So something like X2, which can only be acquired with a very overpriced PC that I could build on my own for a lot less money (and use nVidia's SLI if I really need that kind of power), is not a very attractive solution for a power user. You also point to drivers as an advantage, when they are really a drawback. What kind of experience does Alienware have with drivers? How long do they intend to support that hardware? I can get standard Ati or nVidia drivers dating back to 1996; will Alienware guarantee me that in 8 years that rig will still be able to run, at the least, an OS with full driver support? What kind of issues will they run into, given that they have never done this before? Writing drivers is NOT a simple process.

    I have nothing against the X2, but I do know it was first mentioned months ago and I have yet to see any evidence of it since. It would not surprise me if they ditched it and just went with nVidia's solution. At this point they are more or less just doing design work for Ati, as anyone who wants to do SLI with nVidia cards can now do it without paying a premium for Alienware's solution.

    Your comment about nVidia vs. Ati is kind of odd; which you prefer really depends on what games you play. Myself, I am perfectly happy with my Radeon 9600SE. Yes, it's crappy, but it plays Freedom Force and Civilization 3 just fine. ;)
  • GhandiInstinct - Saturday, October 30, 2004 - link

    I don't consider my statement any form of criticism; it is merely a realization that the high-end user might want to wait for X2, because the current consensus is that nVidia cards don't live up to the hype next to ATi's. Ask that high-end user: would he rather waste that $200 on dual nVidias or dual ATis?
  • SleepNoMore - Saturday, October 30, 2004 - link

    I think you could build this and RENT it out to friends and gawkers at 20 bucks an hour. Who knows, maybe some gaming arcade places will do this?

    'Cause that's what it is at this point: a really cool, ooh-ahh novelty. I'd rather rent it for a glimpse of the future than go broke trying to buy it. The future will come soon enough.

    The final touch, for humor: you just need to build it into a special case with a Vornado fan ;) to make it totally, unapologetically brute-force funky.
