For months we’ve been waiting to take advantage of NVIDIA’s SLI, and it looks like the tier-one motherboard manufacturers will be doing their best to bring the first nForce4 SLI motherboards to market before the end of this year.  So is SLI all it’s cracked up to be?

With a final board and final drivers in hand, it’s time to take a final look at SLI to see whether NVIDIA squandered the opportunity to regain technology and performance leadership, or whether SLI really is everything it’s cracked up to be…

How SLI Works

NVIDIA’s Scalable Link Interface (SLI) is based on the simple principle of symmetric load distribution, meaning that the architecture depends on (and will only really work if) both GPUs receive exactly the same load.  The nature of NVIDIA’s SLI means that odd combinations are mostly off limits: cards with different clock speeds will simply be run at the lowest common clock speed by NVIDIA’s driver, but cards with different GPU feature sets (e.g. 16 pipes + 8 pipes) cannot be paired at all; the driver simply won’t let you enable the option.
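To make those pairing rules concrete, here is a minimal sketch of how a driver might validate an SLI pair, clamping mismatched clocks while rejecting mismatched GPUs outright.  This is our own illustration, not NVIDIA’s driver code, and every name in it is hypothetical:

    // Hypothetical sketch of the pairing rules described above; not actual
    // NVIDIA driver code.  Requires C++17 for std::optional.
    #include <algorithm>
    #include <optional>

    struct GpuInfo {
        int deviceId;       // identifies the GPU core: feature set, pipe count
        int coreClockMHz;
        int memClockMHz;
    };

    // Returns the clocks both cards will run at, or nothing if SLI is refused.
    std::optional<GpuInfo> validateSliPair(const GpuInfo& a, const GpuInfo& b) {
        if (a.deviceId != b.deviceId)
            return std::nullopt;  // different GPUs: the SLI option stays disabled
        GpuInfo shared = a;
        // Different clock speeds are tolerated, but both cards get clamped to
        // the lowest common clock so equal workloads finish in equal time.
        shared.coreClockMHz = std::min(a.coreClockMHz, b.coreClockMHz);
        shared.memClockMHz  = std::min(a.memClockMHz,  b.memClockMHz);
        return shared;
    }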

NVIDIA’s first task in assuring that the load distributed to both GPUs would be balanced and symmetrical was to equip its nForce4 SLI chipset with identical-width PCI Express graphics slots.  By default, PCI Express graphics cards use a x16 slot, which features 16 PCI Express lanes offering 8GB/s of total bandwidth.  Instead of outfitting the chipset with 16 more PCI Express lanes, NVIDIA simply allows the existing lanes to be reconfigured, by way of a small paddle card on the motherboard itself, as either a single x16 slot or two x8 slots.  The physical slots are both x16 slots, but electrically they can be configured as two x8 slots.  This won’t cause any compatibility issues with x16 cards, as they will just use fewer lanes for data transfers, and the real-world performance impact in games is negligible, which is what NVIDIA is counting on.
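As a quick sanity check on those numbers (our own arithmetic, assuming first-generation PCI Express signaling: 2.5GT/s per lane with 8b/10b encoding, i.e. 250MB/s per lane in each direction):

    #include <cstdio>

    int main() {
        const double perLaneMBs = 250.0;  // 2.5GT/s * 8/10 encoding = 250MB/s per direction

        // x16 slot: 16 lanes * 250MB/s * 2 directions = 8GB/s total bandwidth.
        std::printf("x16: %.1f GB/s total\n", 16 * perLaneMBs * 2 / 1000.0);

        // SLI mode: each x8 slot gets 8 lanes, or 4GB/s total -- half the
        // bandwidth per card, which games rarely come close to saturating.
        std::printf("x8:  %.1f GB/s total per card\n", 8 * perLaneMBs * 2 / 1000.0);
        return 0;
    }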

The next trick is to make sure that the GPUs receive exactly the same vertex data from the CPU, which is done by having the CPU send all vertex data to the primary GPU, which then forwards it on to the secondary GPU.  Once data arrives at the primary GPU via the PCI Express bus, all GPU-to-GPU communication is handled via NVIDIA’s video bridge.  The video bridge is a bus that connects directly to the GPU and is used for transferring data from the frame buffer of one GPU directly to the other.  NVIDIA isn’t offering much information on the interface, other than saying that it is capable of transferring data at up to 10GB/s.  While it is possible to route this GPU-to-GPU communication over the PCI Express bus, NVIDIA insists that it would be silly to do so because of latency issues and bandwidth constraints, and has no plans to move in that direction.

NVIDIA’s driver plays an important role in maintaining symmetry in the rendering by looking at the workload and making two key decisions: 1) determining the rendering method, and 2) depending on that method, determining the workload split between the two GPUs.

NVIDIA supports two main rendering methods: Alternate Frame Rendering (AFR) and Split Frame Rendering (SFR).  As the names imply, AFR has each GPU render a separate frame (e.g. GPU 1 renders all odd frames and GPU 2 renders all even frames), while SFR splits the rendering of a single frame between the two GPUs.  NVIDIA’s driver does not choose between AFR and SFR on the fly; instead, NVIDIA’s software engineers have profiled the majority of the top 100 games and created a profile for each and every one, determining whether it should default to AFR or SFR mode.  NVIDIA’s driver defaults to AFR as long as there are no dependencies between frames.  For example, some games implement slow-motion special effects by not clearing the frame buffer and instead rendering the next frame on top of the previous one, alpha blending the two frames together; in this case there is a frame-to-frame dependency and AFR cannot be used.
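A minimal sketch of what such a profile lookup might look like; the table entries and names below are invented for illustration, as NVIDIA hasn’t published its driver internals:

    #include <string>
    #include <unordered_map>

    enum class SliMode { SingleGpu, Afr, Sfr };

    // Hand-tuned per-game profiles, built by profiling titles ahead of time.
    // These entries are invented examples, not NVIDIA's actual table.
    const std::unordered_map<std::string, SliMode> kGameProfiles = {
        {"game_without_frame_deps.exe", SliMode::Afr},
        {"game_with_slowmo_blend.exe",  SliMode::Sfr},  // frame-to-frame dependency rules out AFR
    };

    SliMode selectMode(const std::string& exeName) {
        auto it = kGameProfiles.find(exeName);
        // Unprofiled games fall back to single-GPU rendering to avoid
        // rendering issues; the user can still force an SLI mode manually.
        return it == kGameProfiles.end() ? SliMode::SingleGpu : it->second;
    }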

If AFR can’t be used, SFR is used instead, but now the driver must determine how much of each frame to send to GPU 1 vs. GPU 2.  Since the driver can count on both GPUs being exactly the same speed (see why that’s important?), it makes an educated guess at what the load split should be.  The educated guess comes from a history table that stores the load each GPU was placed under for the past several frames.  Based on the outcomes stored in this history table, NVIDIA’s driver predicts what the rendering split between the two GPUs should be for future frames and adjusts the load factor accordingly.  This should all sound very familiar to anyone who has ever heard of a branch predictor in a CPU, and just like a branch predictor, there is a penalty for predicting incorrectly.  If NVIDIA’s driver predicts incorrectly, one GPU will finish its rendering task much sooner than the other, leaving it nothing to do but wait until the other GPU is done, thus reducing the overall performance potential of the SLI setup.
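Here is a rough sketch of the idea.  NVIDIA’s actual heuristic isn’t public, so this is our own stand-in: it smooths recent per-frame render times into a prediction for the next split:

    #include <algorithm>

    class SfrLoadBalancer {
        double split_ = 0.5;  // fraction of the frame (the top portion) GPU 1 renders

    public:
        double currentSplit() const { return split_; }

        // After each frame, use the measured render times to predict a split
        // at which both GPUs should finish the next frame at the same moment.
        void recordFrame(double gpu1Ms, double gpu2Ms) {
            // Per-unit-area rendering cost of each GPU's region last frame.
            double cost1 = gpu1Ms / split_;
            double cost2 = gpu2Ms / (1.0 - split_);
            // Equalize predicted times: target * cost1 == (1 - target) * cost2.
            double target = cost2 / (cost1 + cost2);
            // Blend toward the target instead of jumping: the retained history
            // damps wild swings when scene complexity changes suddenly.
            split_ = std::clamp(0.75 * split_ + 0.25 * target, 0.1, 0.9);
        }
    };

A misprediction here (say, the bottom half of the scene suddenly becomes much more complex) is exactly the penalty described above: one GPU finishes early and idles until the other catches up.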

By now you can begin to see where the performance benefits of SLI come into play.  Pair two 6800GT cards together and you effectively have a 32-pipe 6800GT with twice the memory bandwidth, a configuration that you won’t see in a single card for quite some time.  At the same time, you should see that SLI does carry a bit of overhead, so at lower, CPU-bound resolutions you can expect SLI to be slightly slower than a single card.  Then again, you don’t buy an SLI setup to run at lower resolutions.

Once both GPUs have completed their rendering, whether in AFR or SFR mode, the secondary GPU sends its frame buffer to the primary GPU via NVIDIA’s video bridge.  The important thing here is that the data is sent digitally, so there’s no loss in image quality as a result of SLI.  The primary GPU recombines the data and outputs the final completed frame (or frames) through its outputs.  Sounds simple enough, right?
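In SFR mode, that recombination step amounts to a lossless copy of the secondary GPU’s scanlines into the primary GPU’s frame buffer.  A toy illustration (ours, not driver code) of why nothing is lost:

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Copies the secondary GPU's portion of the frame (rows splitRow onward)
    // into the primary GPU's frame buffer.  The transfer is a bit-exact
    // digital copy, which is why SLI costs nothing in image quality.
    void recombineSfr(std::vector<uint32_t>& primaryFb,
                      const std::vector<uint32_t>& secondaryRows,
                      size_t width, size_t splitRow) {
        std::memcpy(primaryFb.data() + splitRow * width,
                    secondaryRows.data(),
                    secondaryRows.size() * sizeof(uint32_t));
    }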

Surprisingly enough, throughout all of our testing, we didn’t encounter any rendering issues in SLI mode.  NVIDIA insists that they have tested quite a few of the top 100 games to ensure that there aren’t any issues with SLI mode and it does seem that they’ve done a good job with their driver.  If the driver hasn’t been profiled with a game, it will default to single-GPU mode to avoid any rendering issues, but the user can always force SLI mode if they wish. 

Comments

  • CrystalBay - Wednesday, November 24, 2004 - link

    #61 Good point. I also wonder how well it runs at super-high resolutions...
  • PrinceGaz - Wednesday, November 24, 2004 - link

    We've seen profiles for many games in nVidia drivers since the 5x.xx series; I expect the SLI setting is just something they've added to those profiles for when the cards are running in SLI mode. If in doubt, I'd have thought SLI AFR would be fine for most games that don't have a profile defined (assuming you can choose the SLI mode), or SFR for those games that use motion-blurring (usually certain types of racing games).
  • Pampero - Wednesday, November 24, 2004 - link

    Where does the review talk about the profiles?
    Is there a list of the games that can enable SLI?

  • Gatak - Wednesday, November 24, 2004 - link

    Where are the 2048x1536 tests? It is clear that 1600x1200 is no match for the SLI setup in most games. Why not do the testing at a higher resolution?

    If 2048x1536 ran smoothly, then the demand for better monitors would be stronger, giving manufacturers a reason to make better monitors for us =).
  • PrinceGaz - Wednesday, November 24, 2004 - link

    #55 - If SLI can only be used on games nVidia have profiled, like you say, and the user cannot force it to run in SLI AFR or SFR mode themselves, then that's a serious problem for people who play non-mainstream games and might be considering SLI.

    After reading this article, I now believe more than ever that the only people who should be seriously considering SLI are those who are willing to buy two 6800GT or 6800 Ultra cards in one go, in order to have a top-of-the-range system. The upgrade option doesn't make sense.

    Anand didn't compare the costs of an SLI upgrade against a non-SLI upgrade; instead he compared buying a second 6600GT later on when they're cheaper, to buying a high-end card initially and *not* doing any upgrade. Of course it's going to be more expensive if you buy a high-end card from the outset.

    The true upgrade alternative is that instead of buying a second (now cheaper) 6600GT to achieve roughly 6800GT performance, you would sell your 6600GT while it can still fetch a good price and put the money towards a (now also cheaper) 6800GT, or maybe a mid-range next-generation card that has the required performance. When you look at how much prices fall on high-end cards when something even a little faster comes out, pushing them nearer to mid-range cards, it should be obvious that replacing the card with a faster one is a more cost-effective option for anyone considering an upgrade at a later date than buying a second identical card on top of the one you already have.

    Yes, there's the hassle of selling your first card, but not only do you have total flexibility over what you upgrade to (with SLI you have none); you also don't need an SLI mobo, you won't have two graphics cards generating excess noise, and you'll have a lot more PCI/PCI-e slots left free for other cards.
  • bob661 - Wednesday, November 24, 2004 - link

    IMHO, SLI is for people like me and also for people that need to have the latest and greatest. People like me don't upgrade every 3 months, 6 months, or even a year. I upgrade every 2.5 to 3 years. It would be nice to be able to run 2006 or 2007 games on 2004 technology. Who knows, this might extend my upgrades to 4 years. ;-)
  • Momental - Wednesday, November 24, 2004 - link

    #56 I would imagine that nVidia is aware of this and, who knows, they may implement a utility within their driver that automatically flashes the BIOS of the "older" card if one is detected. Either that, or they could write something into the driver to search for another GPU and, once it's found, ask you if you would like to flash its BIOS upon restart. And voila!

    The fact of the matter is that it's way too early to speculate as to whether or not SLI is a viable and cost-effective solution. Something tells me that it will be, because it's not like the "next big thing", i.e. cards that are twice as fast, is right around the corner. If it were, then I'd say no; it wouldn't be worth it, for the reasons stated by #42.
  • nserra - Wednesday, November 24, 2004 - link

    I think the issue of the cards having to have the exact same BIOS is ENORMOUS. So buy one now and buy the other later will "not" be possible. I doubt that a year-old card has the same BIOS as a brand-new one.

    Too many “issues”…
  • Elliot - Wednesday, November 24, 2004 - link

    I want one of these SLI boards, but the article says that you can force the driver to enable SLI in games without profiles in the Nvidia drivers, and this is not true.

    If no SLI profile exists for a game, there is no SLI rendering. It is not possible to force SLI mode or generate your own profile. According to NVIDIA, however, the driver already contains over 50 profiles for games running with SLI. For newer titles, this means that SLI system owners have to wait for a new driver, and even then there is no guarantee that SLI will be possible with a particular game. So this is not very good news.
