Introduction

Ever since its launch in the fall of last year, nForce4 has brought us SLI capability. PCI Express slot configuration on nForce4 SLI motherboards must be selected with a paddle that can be flipped to provide either two x8 connections or full x16 bandwidth to one slot with the other disabled. This gives an nForce4 SLI motherboard added flexibility, but that flexibility comes with limitations. Today, NVIDIA sheds the shackles of the paddle selector and limited bandwidth with the new nForce4 SLI X16 chipset.



In addition to the increased bandwidth and ease of use come quite a few extra niceties. Boards based on nForce4 SLI X16 will have more I/O options on top of the added PCI Express bandwidth. The introduction of a new enthusiast part will also push prices down on existing products, and NVIDIA will begin selling its current nForce4 SLI solution at mainstream prices. Aside from cheaper being better, this should increase adoption of the SLI platform, giving the mainstream user some reason to care about SLI. It also adds value to budget options like 6600 and 6200 SLI. Everything seems to be coming up roses for NVIDIA's dual-GPU business right now, with ATI's CrossFire still waiting in the wings.

With this introduction also comes quite a surprise from Dell. NVIDIA will be supplying core logic to the previously Intel-only volume computer manufacturer, making nForce4 SLI X16 the first non-Intel chipset for dudes to get in their Dells. This is quite a big announcement and will really help to boost NVIDIA's already successful chipset business. It also gives some glimmer of hope for Dell, as non-Intel hardware on the motherboard may mean that Dell is capable of making good decisions in the processor department as well. While it is unlikely that we will see AMD-based Dell systems anytime soon, it's nice to know that the line between volume discounts and unfair business practices is clear enough to allow Dell to make the right choice for performance once in a while. At least now, one of its chipset vendors supports AMD as well as Intel.

Unfortunately, we don't yet have a board in hand to run performance numbers on the new configuration, but that won't stop us from talking about what's new under the hood.

Comments

  • PrinceGaz - Tuesday, August 9, 2005 - link

    You can easily test to see if there is any performance difference between x8 and x16 PCIe with a standard nF4 SLI board. Just drop one card (ideally a 7800GTX) in the first graphics-card slot, and run tests with the paddle set to single-card mode. That gives you the PCIe x16 results. Now set the paddle to SLI mode and re-run the tests with the same single card. It will now be running at PCIe x8 and you can see if there is any drop in performance. Voila! :)
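
    A minimal sketch of how such an A/B run could be scripted (the game binary and its output format here are hypothetical; any title with a timedemo mode and parsable FPS output would do). Run it once per boot, flipping the paddle and rebooting between runs:

        import statistics, subprocess, sys

        def run_timedemo(runs=3):
            """Run a hypothetical timedemo and return the mean FPS it reports."""
            fps = []
            for _ in range(runs):
                out = subprocess.run(["game", "+timedemo", "demo1"],  # hypothetical command
                                     capture_output=True, text=True).stdout
                # assume the last line looks like "demo1: 1234 frames, 98.7 fps"
                fps.append(float(out.split()[-2]))
            return statistics.mean(fps)

        # e.g. "python bench.py x16" in single-card mode, then "python bench.py x8"
        # after flipping the paddle to SLI mode and rebooting with the same card
        label = sys.argv[1] if len(sys.argv) > 1 else "unlabeled"
        print(f"{label}: {run_timedemo():.1f} fps (mean of 3 runs)")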
  • Fluppeteer - Tuesday, August 9, 2005 - link

    The thing about graphics slot bandwidth is that it's *always* much less than native on-card bandwidth. Any game which is optimized to run quickly will, therefore, do absolutely as much as possible out of on-card RAM. You'd be unlikely to see much difference in a game between a 7800GTX on an 8- or 16-lane slot (or even a 4-lane slot). If you want to see much difference, put in a 6200TC card which spends all its time using the bus.

    There *is* a difference if you're sending lots of data backwards and forwards. This tends to be true of Viewperf (and you've got a workstation card which is trying to do some optimization, which is why the nForce4 Pro workstation chipset supports this configuration), or - as mentioned - in GPGPU work. It might also help for cards without an SLi connector, where the image (or some of it) gets transferred across the PCI-e bus.

    This chipset sounds like they've just taken an nForce4 Pro (2200+2050 combo) and pulled one of the CPUs out. It does make my Tyan K8WE (NF4Pro-based dual 16-lane slots, dual Opteron 248s) look a bit of an expensive path to have taken, even though I've got a few bandwidth advantages. Guess I'll have to save up for some 275s so I don't look so silly. :-)
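
    The rough numbers behind that on-card vs. slot bandwidth argument, as a back-of-the-envelope sketch (PCIe 1.x payload rates; the 7800 GTX figure is its published 256-bit, 1.2GHz-effective GDDR3 bandwidth):

        # PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoded -> 250 MB/s per lane per direction
        LANE_MB_S = 250
        for lanes in (1, 4, 8, 16):
            print(f"x{lanes:<2d} slot: {lanes * LANE_MB_S / 1000:4.1f} GB/s per direction")

        # 7800 GTX local memory: 256-bit bus * 1.2 GHz effective GDDR3 = 38.4 GB/s
        card_gb_s = 256 / 8 * 1.2
        print(f"7800 GTX on-card: {card_gb_s:.1f} GB/s")
        # even a full x16 slot (~4 GB/s) is roughly a tenth of that, which is why
        # a well-optimized game stays in on-card RAM and barely notices x8 vs x16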
  • PrinceGaz - Tuesday, August 9, 2005 - link

    I wasn't suggesting measuring the difference between x8 and x16 with a TC card; it was for people who are worried that there is some performance hit with current SLI setups running at x8, which this new chipset will solve. I'm well aware that performance suffers terribly if the card runs out of onboard memory, and I was not suggesting that. Besides, anyone with a TC card won't be running in SLI mode anyway, so the x8 vs x16 issue is irrelevant there.

    I agree there is unlikely to be much difference between x8 and x16 in games, but it would be nice to test it just to be sure. Any difference there is could be maximised by running tests at low resolutions (such as 640x480), as that will simulate the effect that the x8 bus limitation would have on a faster graphics card at higher resolutions. It's all about how many frames the system can send over the bus to the card.

    Actually my new box has a 6800GT in it and an X2 4400+ running at 2.6GHz, so I'll do some tests this evening then flick all the little switches (it's a DFI board) and re-run them, then report back with the results. I doubt there'll be much difference.
  • Fluppeteer - Tuesday, August 9, 2005 - link

    Sorry, should've been clearer - I didn't mean to suggest a bandwidth comparison test either, just to say that where you don't see a difference with the 7800 you might with the 6200TC. Not that I'd expect all that many owners of this chipset to be buying 6200s.

    I'd be interested in the results of your experiment, but you might also be interested in: http://graphics.tomshardware.com/graphic/20041122/... (which is the source of my assertions) - although not as many games are tested as I'd thought I remembered. Still, the full lane count makes a (minor) difference to Viewperf, but not to (at least) Unreal Tournament.

    Of course, this assumes that my statement about how much data goes over the bus is correct. The same may not apply to other applications - responsiveness in Photoshop, or video playback (especially without GPU acceleration) at high resolutions. Anyone who's made the mistake of running a 2048x1536 display off a PCI card and then waited for Windows to try to fade to grey around the "shutdown" box (it locks the screen - chug... chug...) will have seen the problem. But you need to be going some for 8 lanes not to be enough.

    It's true that you're more likely to see an effect at 640x480 - simulating the fill rate of a couple of generations of graphics cards to come, at decent resolution. The TH results really show when pre-7800 cards become fill limited.

    My understanding was that, in non-SLi mode, the second slot works but in single-lane config. Is that right? I'd like to see *that* benchmarked...

    Ah, wonderful toys, even if we don't really need them. :-)
  • PrinceGaz - Tuesday, August 9, 2005 - link

    Yes, when an nF4 SLI mobo is set to single-card mode, the second slot does run at x1, so it is still very useful, assuming companies start making PCIe TV-tuner cards, soundcards, etc. in the next year or two. Apparently Creative's new X-Fi will be PCI-only at first, which is lame beyond belief. Running a graphics card over the 250MB/s bi-directional bandwidth of an x1 PCIe link would have quite an impact, I'm sure.
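
    To put that x1 figure in perspective, a quick illustrative calculation of how many uncompressed 32-bit frames 250MB/s could even carry (ignoring all other traffic):

        LANE_MB_S = 250  # PCIe 1.x, one lane, one direction
        for w, h in ((1024, 768), (1600, 1200)):
            frame_mb = w * h * 4 / 1e6  # 4 bytes per pixel
            print(f"{w}x{h}: {frame_mb:.1f} MB/frame -> "
                  f"~{LANE_MB_S / frame_mb:.0f} fps ceiling if every frame crossed the bus")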
  • Fluppeteer - Wednesday, August 10, 2005 - link

    Re. the X-Fi, I don't see the bandwidth requirements needing more than PCI (not that I know anything about sound); I'm sure they can make a version with a PCI-e bridge chip once people start having motherboards without PCI slots (which, given how long ISA stuck around, will probably be a while). If even the Ageia cards are starting out as PCI, I'd not complain too much yet.

    Apparently the X-Fi *is* 3.3V compatible, which at least means I can stick it in a PCI-X slot. (For all the above claims about PCI sticking around, my K8WE has all of *one* 5V 32-bit PCI slot, and that's between the two PCI-Es. I hope Ageia works with 3.3V too...)
  • nserra - Tuesday, August 9, 2005 - link

    "Obviously, Intel is our key processor technology partner and we are extremely familiar with their products. But we continue to look at the technology from AMD and if there is a unique advantage that we believe will benefit the customer, sure, we will look at it."
  • jamori - Monday, August 8, 2005 - link

    I'm curious as to whether or not they fixed the problem with displaying on two monitors without rebooting into non-SLI mode. I'm planning to buy a new motherboard this week, and am going with the ultra version instead of SLI for this reason alone.

    I figure I'll spend less on a motherboard and more on a videocard that will actually do what I want it to.
  • Doormat - Monday, August 8, 2005 - link

    Anandtech, please make sure to test the CPU utilization when using the onboard Ethernet with ActiveArmor on production boards. I'd like to see if they revised that at all, since CPU utilization was so high on the current revision of the boards. In fact, most nVidia nForce Pro motherboards for Opterons don't use the included nVidia Ethernet; they use Broadcom or some other chip because performance is so bad.
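
    A minimal sketch of how that could be measured, assuming iperf is installed, a second machine on the LAN runs 'iperf -s', and the psutil package is available (the address and duration are placeholders):

        import subprocess
        import psutil

        # saturate the NIC for 30 seconds against a hypothetical LAN server
        load = subprocess.Popen(["iperf", "-c", "192.168.0.2", "-t", "30"],
                                stdout=subprocess.DEVNULL)
        samples = []
        while load.poll() is None:
            samples.append(psutil.cpu_percent(interval=1))  # % CPU over each 1 s window
        print(f"mean CPU utilization during transfer: {sum(samples) / len(samples):.1f}%")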
  • Anemone - Monday, August 8, 2005 - link

    Just cuz: if you own the mobo for a few years, there will be things to stick in x1 and x4 slots, I'm sure.

    Nice going Nvidia :)
