While the world turned I was on a flight over to Taiwan to meet and discuss future products with just about every Taiwanese motherboard and video card manufacturer I could get a meeting with. The discussions yielded a great deal of important information, such as roadmap updates, a better understanding of some of the current supply shortages, and some insight into how the markets here in Taiwan and globally were holding up. While I'll talk about most of these topics in a separate article, I couldn't resist posting information on a very interesting product I managed to get some "alone time" with while in Taiwan.

Just a few weeks ago our own Wesley Fink and I traveled to NYC to meet with NVIDIA and, more importantly, to get some firsthand experience with the nForce4 and nForce4 SLI platforms. As you may know from our previous coverage on the topic, nForce4 SLI is the highest-end nForce4 offering, outfitted with a configurable number of PCI Express lanes. The beauty of a configurable number of PCI Express lanes is that you can have a single PCI Express x16 slot, or you can split that one slot into two x8 slots - which is perfect for installing two graphics cards.
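
As a quick point of reference on what that split means for bandwidth, here is a back-of-the-envelope sketch (the math assumes first-generation PCI Express at roughly 250 MB/s per lane, per direction):

    # Rough PCI Express bandwidth math for the x16 vs. dual-x8 configurations.
    # Assumes first-generation PCIe: ~250 MB/s per lane, per direction.
    PER_LANE_MBPS = 250

    for lanes in (16, 8):
        gbps = lanes * PER_LANE_MBPS / 1000
        print(f"x{lanes} slot: ~{gbps:.1f} GB/s per direction")
    # x16 slot: ~4.0 GB/s per direction
    # x8 slot:  ~2.0 GB/s per direction

Even at x8, each card in an SLI pair still has roughly the total bandwidth of an AGP 8X slot available in each direction.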

NVIDIA is less than a month away from sending final shipping nForce4 SLI boards out to reviewers, but we managed to get some quality benchmarking time with a pre-release nForce4 SLI board from MSI. The important thing to note here is that it was pre-release hardware and we had a very limited amount of time with it - not to mention that I'm about halfway around the world from my testing equipment, so forgive me if the selection of tests and benchmarks is not as complete as you're used to seeing on AnandTech.

There will be two versions of the MSI nForce4 SLI board shipping worldwide: in the US it will be called the MSI K8N Neo4 Platinum/SLI, while in the rest of the world it will be called the MSI K8N Diamond. There will be some slight differences in specs between the two, but nothing major.


The MSI motherboard we tested is actually the very first working sample of the K8N Neo4 Platinum/SLI; in fact, as of right now there are only four working nForce4 SLI samples at MSI in Taiwan, two of which happen to be in my hotel room. Despite the early nature of the motherboard, it was 100% stable and didn't crash once during our hours of testing, nor during the 12 hours of burn-in before that. There were some rendering issues during some of the testing, but we'd chalk those up to drivers that still need some work; one thing to keep in mind is that SLI is extremely driver intensive, and we'll explain why in a moment. Please be sure to read our nForce4 review and SLI preview before continuing with this review, to understand what's behind nForce4 and SLI.

We did not have time to run a full gamut of benchmarks, so all of our tests are limited to 1024 x 768, 1280 x 1024 and 1600 x 1200 with 4X AA enabled. We tested using an Athlon 64 FX-55 with 1GB of Corsair DDR400 under Windows XP Professional with DirectX 9.0c. Finding game benchmarks was a bit of a challenge in Taiwan, but despite the Chinese boxes, our copies of Doom 3 and Far Cry were essentially the English versions. We also included the Counter-Strike: Source Visual Stress Test in our impromptu test suite. But before we get to the benchmarks, let's talk a little bit about how you actually get SLI working.

Comments

  • PrinceGaz - Saturday, October 30, 2004 - link

    The geometry processing *should* be shared between the cards to some extent, as each card only renders a certain area of the screen when it is divided between them, so polygons that fall totally outside its area can be skipped. At least that's what I imagine happens, but I don't know for sure. Obviously, when each card is rendering alternate frames, the geometry calculation is effectively shared completely between the cards, as they each have twice as long to work on it to maintain the same framerate.

    As for the 105% improvement of the 6800GT SLI over a single 6800GT in Far Cry at 1600x1200, all I can say is: No! It's against the laws of physics! That, or the drivers are doing something fishy.
  • stephenbrooks - Saturday, October 30, 2004 - link

    --[I do not need a 21 years young webmaster’s article as a reference for making these claims,]--

    So what's wrong with 21 y/o webmasters then, HUH? I could be very offended by that. :)

    I'm surprised no-one's picked up on this before, but I just love the 105% improvement of the 6800GT on Far Cry at the highest res. I'm surprised because normally when a review has something like that in it, a load of people turn up and say "No! It's against the laws of physics!" Well, it isn't _technically_, but it makes you wonder what on Earth has gone on in that setup to make two cards more efficient per transistor than one.

    [Incidentally, does anyone here know _for sure_ whether or not the geometry calculation is shared between these cards via SLI?]
  • PrinceGaz - Saturday, October 30, 2004 - link

    #61 Sokaku - your understanding of the new form of SLI (Scalable Link Interface) is incorrect. You are referring to Scan-Line Interleave, which was used with two Voodoo2 cards.

    Using Scalable Link Interface, one card renders the upper part of the screen and the other card renders the lower part. Note that I say "part" of the screen instead of "half" - the amount of the screen rendered by each card varies depending on the complexity, so that each card has a roughly equal load. So if most of the action is occurring low down, the first card may render the upper two-thirds while the second card only does the lower third.

    The current form of SLI can also be used in an alternative mode where each card renders every other frame, so card A does frames 1, 3, 5, etc., while card B does frames 2, 4, 6, etc.
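
    Roughly, in Python-flavored pseudocode (the balancing heuristic and the names here are my own guesses to illustrate the idea, not NVIDIA's actual driver logic):

        # Guessed sketch of how an SLI driver might divide the work between GPUs.
        # Split-frame rendering: the horizontal split line moves from frame to
        # frame so that both cards carry a roughly equal share of the load.
        def sfr_split(split, time_top, time_bottom, height=1200, step=32):
            """Shift the split toward the card that finished its region faster."""
            if time_top > time_bottom:       # top card overloaded: shrink its region
                split = max(step, split - step)
            elif time_bottom > time_top:     # bottom card overloaded: grow top region
                split = min(height - step, split + step)
            return split                     # card A renders rows [0, split)

        # Alternate-frame rendering: the cards simply take turns on whole frames.
        def afr_owner(frame_number):
            return "card A" if frame_number % 2 else "card B"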

    However, regardless of which method is used, SLI is only really viable in conjunction with top-end cards such as the 6800Ultra or 6800GT. It doesn't make sense to buy a second 6600GT later when better cards will be available more cheaply, or to buy two 6600GTs together now when a single 6800GT would be a better choice. Therefore the $800+ required for two SLI graphics cards will mean only a tiny minority ever use it (though no doubt some fools will go and buy a second 6600GT a year later).
  • Sokaku - Saturday, October 30, 2004 - link

    #49 "Do you have some reference that actually states this? Seems to me like it's just a blatant guess with not a lot of thought behind it.":

    I often wonder how come people are this rude on the net, probably because they don’t sit face to face with the ones they talk to. I do not need a 21 years young webmaster’s article as a reference for making these claims, I think therefore I claim. And if you want a conversation beyond this point, sober up your language and tone.

    In an SLI configuration, card 1 renders the 1st scan line, card 2 the 2nd, card 1 the 3rd, card 2 the 4th, and so on.

    It is done this way because it’s easier to keep the cards synchronized. If you had card 1 render the left half and card 2 render the right, then card 1 may lag seriously if the left part of the scene is vastly more complex than the right part.

    So, in SLI both cards need to do the complete geometry processing for the entire frame. When the cards then render the pixels, they only have to do half each.

    Thus, a card needs a geometry engine that is twice as fast (at a given target resolution) as its own pixel rendering capacity would require, because the geometry engine must be prepared for the SLI situation.

    If the geometry engine were exactly matched to the pixel rendering process (yeah I know, it all depends on the game and all), you wouldn't gain anything from an SLI configuration, because the geometry engine would be the bottleneck.

    This didn't matter back in the Voodoo2 days, because all the geometry calculations were done by the CPU; the cards only did the rendering, and therefore nothing was "wasted". Now the CPU offloads the calculations to the GPU, hence the need to have twice the triangle calculation capacity on the GPU.
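
    To put rough numbers on my point (the pipelined-stages model below is my own simplification, not anything measured):

        # Toy model: geometry and pixel stages are pipelined, so frame time is
        # set by the slower stage. Under SLI, geometry is duplicated on both
        # cards while pixel work is split between them.
        def frame_time(geometry, pixels, cards=1):
            return max(geometry, pixels / cards)

        # Geometry engine exactly matched to pixel output: no gain from SLI.
        print(frame_time(1.0, 1.0) / frame_time(1.0, 1.0, cards=2))   # 1.0
        # Geometry engine twice as fast as one card's pixel output: full scaling.
        print(frame_time(0.5, 1.0) / frame_time(0.5, 1.0, cards=2))   # 2.0

    Note that a model like this tops out at exactly 2x, which is part of why the 105% Far Cry number discussed above looks so strange.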
  • AtaStrumf - Saturday, October 30, 2004 - link

    First of all congrats to Anand for this exclusive preview!

    Now, to those who think the next GPU will outperform a 6800GT SLI setup: you must have been living under a rock for the last 2 years.

    How much faster is the 9800XT than the 9700 Pro? Not nearly as much faster as two 6800GTs are over a single 6800GT - that's the correct answer!

    Now consider that the 9700 Pro and 9800XT are 3 generations apart, while the 6800GT and 6800GT SLI are, let's say, one generation apart in terms of time to market.

    How can you complain about that?!? And don't forget that the 6800GT is not a $500 high-end card!

    If you get one 6800GT now and one in, say, 6 months, you're still way ahead performance-wise compared to buying the next top-of-the-line $500 GPU; you only spend about $200 more, plus you get to do it in two payments.

    This is definitely a good thing as long as there are no driver/chipset problems.

    Last but not least: just as we have seen with CPUs, GPUs will probably also hit the wall with the next generation, and SLI is to GPUs what dual core is to CPUs - only a hell of a lot better.

    My only gripe is that SLI chipset availability will probably be a big problem for quite some time to come, and I would not buy the first revision, so add an additional 4 months before this is a viable solution.

    Being stuck on S754 and AGP may seem like a problem, but I intend to buy a 6800GT AGP sometime next year and wait all this SLI/90nm/dual core stuff out. I'll let others be free beta testers :-)
  • thomas35 - Saturday, October 30, 2004 - link

    Pretty cool review once again.

    Though most people seem to miss one big glaring thing: SLI, while a nice, pretty toy for games, is going to have a huge impact on the 3D animation industry. Modern video cards have hugely powerful processors on them, but because AGP isn't duplex and can't work with more than one GPU at a time, video-card-based hardware rendering isn't used much. Now, with PCI-E and SLI, I can take 2 powerful professional cards (FireGLs and Quadros) and have them start helping to render animations. This means that rather than adding extra computers to cut down on render times, I simply add in a couple of video cards. And that, in turn, means I have more time to develop a high quality animation than I would have had in the past.

    So in the middle of next year, I'll be buying a dual-CPU (dual core), dual video card system to render on. Then I'll have the same power that I currently get from the 6 computers I own, in a standard ATX case.
  • Nick4753 - Saturday, October 30, 2004 - link

    Thanks Anand!!!!
  • Reflex - Saturday, October 30, 2004 - link

    Ghandi - Actually the nVidia SLI solution should outperform X2 in most scenarios. The reason is that their driver will intelligently load balance the rendering job between both graphics chips rather than simply splitting a scene in half and giving each chip a half. Much of the time there is more action in one part of a scene than the other, so the second card would be completely wasted at those times. On average, the SLI solution should outperform X2.

    X2 has other drawbacks as well. Few techies really want to buy a complete system, preferring to build their own. So something like X2, which can only be acquired with a very overpriced PC that I could build on my own for a lot less money (and use nVidia's SLI if I really need that kind of power), is not a very attractive solution for a power user. You also point out the part about drivers as an advantage when it is truly a drawback. What kind of experience does Alienware have with drivers? How long do they intend to support that hardware? I can get standard Ati or nVidia drivers dating back to 1996; will Alienware guarantee me that in 8 years that rig will still be able to run, at the least, an OS with full driver support? What kind of issues will they have, seeing as they have never had to do that before? Writing drivers is NOT a simple process.

    I have nothing against the X2, but I do know it was first mentioned months ago and I have yet to see any evidence of it since. It would not surprise me if they ditched it and just went with nVidia's solution. At this point they are more or less just doing design work for Ati, as anyone who wants to do SLI with nVidia cards can now do it without paying a premium for Alienware's solution.

    Your comment about nVidia vs. Ati is kinda odd. It really depends on what games you play as to which you prefer. Myself, I am perfectly happy with my Radeon 9600SE; yes, it's crappy, but it plays Freedom Force and Civilization 3 just fine. ;)
  • GhandiInstinct - Saturday, October 30, 2004 - link

    I don't consider my statement any form of criticism; it is merely a realization that the high-end user might want to wait for X2, because the current consensus reveals that nVidia cards aren't living up to the hype compared to ATi. Ask that high-end user: would he rather waste that $200 on dual nVidias or dual ATis?
  • SleepNoMore - Saturday, October 30, 2004 - link

    I think you could build this and RENT it out to friends and gawkers at 20 bucks an hour. Who knows, maybe some gaming arcade places will do this?

    'Cause that's what it is at this point: a really cool, ooh-ahh novelty. I'd rather rent it for a glimpse at the future than go broke trying to buy it. The future will come soon enough.

    The final touch - for humor - you just need to build it into a special case with a Vornado fan ;) to make it totally, unapologetically brute-force funky.
