The New AMD and Intel Chipsets

The move to nForce4 SLI X16 is more of an upgrade than an overhaul. It expands NVIDIA's core logic from one chip to two. The current nForce4 MCP acts as the southbridge and connects to the new AMD nForce4 System Platform Processor (SPP) via the HyperTransport link that would normally connect it to the processor. This gives 8GB/s of bandwidth between the AMD MCP and SPP. The added latency over the HT link shouldn't be very high, and we don't expect it to have an impact on anything. The SPP and MCP each provide an x16 PCI Express link along with a few other choice features.
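
As a rough check on that figure, here is a minimal sketch of the arithmetic behind an 8GB/s HyperTransport link, assuming the common 16-bit-per-direction link clocked at 1GHz with double data rate transfers (theoretical peaks, not measurements):

```python
# Theoretical peak bandwidth of a 16-bit, 1GHz, double-data-rate
# HyperTransport link (illustrative arithmetic, not a measurement).

link_width_bytes = 16 / 8   # 16 bits per direction = 2 bytes per transfer
clock_mhz = 1000            # 1GHz base clock
transfers_per_clock = 2     # data moves on both clock edges (DDR)

per_direction_gbs = link_width_bytes * clock_mhz * transfers_per_clock / 1000
aggregate_gbs = per_direction_gbs * 2  # both directions run simultaneously

print(f"Per direction: {per_direction_gbs:.1f} GB/s")  # 4.0 GB/s
print(f"Aggregate:     {aggregate_gbs:.1f} GB/s")      # 8.0 GB/s, the quoted figure
```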

As the Intel core logic solution already incorporates an SPP, the upgrade for the Intel nForce4 SLI X16 is even simpler. The MCP included on the current Intel chipsets simply has its PCI Express lanes disabled; enabling them is all that NVIDIA needs to do. The total number of available PCI Express lanes on Intel nForce4 SLI X16 based systems comes to 40 once the SPP and MCP are added together, and these lanes can be divided among up to 9 slots. AMD based systems will offer 38 lanes over up to 7 slots. This means that we could see a bunch of x1 or x2 slots, but since PCI Express cards can plug into larger slots and this solution has lanes to spare, we'd like to see some larger connectors on these consumer motherboards. There aren't any widely available parts that can make full use of this bandwidth today, but motherboards that cost upwards of $200 should be somewhat future-proof and flexible.
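
To put the lane counts in perspective, here is a small illustrative sketch; the slot layouts below are hypothetical examples that fit the stated lane budgets, not NVIDIA reference designs, and the per-lane rate assumes first-generation PCI Express (250 MB/s per lane, per direction):

```python
# Hypothetical slot layouts that fit the advertised lane budgets.
# Real boards vary by manufacturer; these splits are examples only.

PCIE1_LANE_GBS = 0.25  # PCIe 1.x: 250 MB/s per lane, per direction

layouts = {
    "Intel nForce4 SLI X16 (40 lanes, up to 9 slots)": [16, 16, 4, 1, 1, 1, 1],
    "AMD nForce4 SLI X16 (38 lanes, up to 7 slots)":   [16, 16, 4, 1, 1],
}

for board, slots in layouts.items():
    print(board)
    for width in slots:
        gbs = width * PCIE1_LANE_GBS
        print(f"  x{width:<2} slot: {gbs:.2f} GB/s each direction")
    print(f"  total lanes used: {sum(slots)}\n")
```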

NVIDIA states that motherboards shipping with the nForce4 SLI X16 chipset will generally include all the enthusiast bells and whistles, such as dual gigabit network connections and 6 to 8 SATA ports. Supporting all of these options alongside up to 40 PCI Express lanes (38 for AMD systems) and 5 PCI slots, these new motherboards will cater to almost workstation-level I/O needs. For example configurations of the Intel and AMD solutions, take a look at these block diagrams provided by NVIDIA.

These configurations can vary depending on the manufacturer of the motherboard.


61 Comments

  • akugami - Monday, August 8, 2005 - link

    This will boost video graphics performance about as much as 8x AGP did over the then cutting-edge 4x AGP, which is to say slim to none. As others have stated, there is no known graphics card capable of fully utilizing the 8x AGP bus, much less a 16x PCI-Express bus. The GeForce 7800 doesn't come in AGP flavors, so we don't know if it shows a significant performance difference between 4x and 8x AGP.
  • JarredWalton - Monday, August 8, 2005 - link

    A few thoughts about the bandwidth increases now offered with the new chipset. First, for transfers from system RAM to the GPUs, this is completely useless. I also have to wonder what the link between the MCP and SPP is - it would have to have 8 GB/s of bandwidth to make the second X16 slot the same speed as the primary SPP slot. Hmmmm.... the most I've heard of for an NB to SB interconnect is about 1 GB/s. Two HyperTransport channels running at 1000 MHz would provide enough bandwidth, but I seriously doubt that's present.

    Now, even if the NB to SB connection were fast enough, dual-channel PC3200 DDR only offers 6.4 GB/s of bandwidth - less than that of a single X16 slot (the rough numbers are sketched below, after this comment). So SATA controllers sitting on X4 connections combined with two GPUs on X16 connections will now be possible, but the actual performance probably wouldn't be any different than SATA controllers on an X2 connection with two GPUs on X8 connections. Maybe we'll get quad-channel DDR2-667 RAM with socket M2 to make this a realizable performance boost? (/sarcasm)

    There is a use case for it, though: GPGPU for one, and potentially SLI without the extra connector. Board-to-board SLI transfers over the internal X16 should be at least as fast as the proprietary connector, I'd think. That last one is especially interesting. If current SLI takes an X16 channel and breaks it into two X8 channels, how about a board with four X8 connections and four physical X16 slots for quad-GPU SLI? It wouldn't surprise me at all to find that NVIDIA has a team working on that exact project.

    As the article states, the biggest deal about this launch for gamers is that prices will drop on SLI boards. Maybe then I'll be able to stomach recommending SLI for a mid-range system. The even bigger deal for NVIDIA is that they now have an "in" with Dell. THAT is freaking huge! I don't think Dell actually sells that many XPS systems, but then I don't think that many Intel SLI setups have been purchased as a whole. Dell has marketing power, and they WILL find ways to convince people to buy Intel SLI PCs.
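
A quick back-of-the-envelope sketch of the bandwidth figures in the comment above, using standard theoretical peaks (dual-channel PC3200 at 400 MT/s on 64-bit channels; PCIe 1.x at 250 MB/s per lane, per direction):

```python
# Comparing theoretical peak memory bandwidth against a PCIe x16 slot.
# All figures are standard rated peaks, not measured numbers.

pc3200_channel_gbs = 0.400 * 8              # 400 MT/s * 8 bytes = 3.2 GB/s
dual_channel_gbs = 2 * pc3200_channel_gbs   # 6.4 GB/s for dual channel

x16_one_way_gbs = 16 * 0.25                 # 4.0 GB/s per direction
x16_aggregate_gbs = 2 * x16_one_way_gbs     # 8.0 GB/s counting both directions

print(f"Dual-channel PC3200:  {dual_channel_gbs:.1f} GB/s")
print(f"PCIe x16, one way:    {x16_one_way_gbs:.1f} GB/s")
print(f"PCIe x16, aggregate:  {x16_aggregate_gbs:.1f} GB/s")
# System RAM can't keep even one x16 slot's aggregate link busy,
# which is the crux of the argument that X16 SLI adds little today.
```
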
  • ceefka - Tuesday, August 9, 2005 - link

    4 graphics cards? Looks like the PC is turning into a gaming console and losing its general purpose, unless you're a stock broker maybe ;-)

    Having this abundance of PCI-E lanes looks like a step toward abandoning PCI(-X). The nF4 boards have issues with professional sound cards on the PCI bus. It is a pity that all these gadgets and all this extra performance have downgraded the PCI bus instead of enhancing it.

    I believe it is time for card manufacturers to develop more PCI-E based cards. It seems like chipset manufacturers aren't willing to spend the time to preserve good bandwidth for the old PCI bus.
  • ChiefNutz - Monday, August 8, 2005 - link

    NVIDIA graphics cards communicate through the "crossbar" connector on the top of the cards, so even having just 1 HT link between the SB & NB wouldn't be that big of a deal. I don't think it would saturate the HT link either, thanks to the crossbar. But this setup would be nice in a system if they got rid of the x16 crap and just gave you the straight channels like the 2200pro and 2250 - say, an x8 or two x4 slots.

    What I really want to know is: will they now support RAID 5 on a non-nForce Pro AMD system?? The Intel edition has it and so does the 2200pro. Where is the non-ECC love!
  • JarredWalton - Monday, August 8, 2005 - link

    Yes, they have their crossbar controller, but they still get information from the CPU and main memory. If you ignore GPU-to-CPU/RAM communication over the PCIe bus, then there is no difference whatsoever between SLI X8 and SLI X16. (Which is likely the case anyway.)

    RAID 5 appears to be coming with the nForce4 SLI X16 chipsets to both platforms. We just neglected to mention it:

    quote:

    World-Class features for both AMD and Intel platforms
    * ActiveArmor secure networking engine with NVIDIA Firewall
    * NVIDIA nTune
    * MediaShield with 4 SATA 3Gb/s ports and RAID 5
  • afrost - Monday, August 8, 2005 - link

    So now nForce boards will have two really hot chips that need loud little fans or elaborate heat pipes? I hope that with this new generation of nForce chips, they figure out a way to cut down some of the heat output. The nForce 3 was perfectly fine, but the 4 gets toasty.

    This is one of the main reasons that I am looking forward to the ATI boards.... I like passive chipset coolers.

  • Anton74 - Monday, August 8, 2005 - link

    Perhaps now that it is a two-chip solution rather than one, each of the chips will run a bit cooler than if they were combined, hopefully allowing for simple passive cooling with good (aftermarket) heatsinks like that blue Zalman one, the ZM-NB47J. As long as they don't put those chips in inconvenient places...
  • Gerbil333 - Tuesday, August 9, 2005 - link

    That was my first thought when I read this. The current nF4 chips run way too hot. I really hope the new two-chip design runs cooler.
  • virtualrain - Monday, August 8, 2005 - link

    Doesn't this solution completely do away with the need to either open the case and flip a switch to enable SLI, or select it electronically and reboot (e.g. on the ASUS A8N-SLI Premium)?

    If so, that's a positive move even if there is no performance gain.

    One of the appeals of ATI's CrossFire solution is the expanded flexibility and ease of use. I think this evens that part of the playing field somewhat.
  • Calin - Tuesday, August 9, 2005 - link

    As I remember, the old NVIDIA SLI boards had a switch to distribute the PCI-E lanes as 1x16 (a single usable slot) or 2x8 (two usable slots). There might have been some kind of extra connection to allow a single x16 slot plus one x4 slot (20 PCI-E lanes used) or two x8 slots (16 PCI-E lanes used and 4 unused); the lane accounting is sketched below this comment.
    It would be great to be able to change the SLI/non-SLI configuration from the drivers.
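
A small sketch of the lane accounting described in the comment above; the 20-lane budget and the two switch modes are illustrative assumptions based on the comment, not a documented board design:

```python
# Hypothetical sketch: a paddle switch routes a 20-lane budget
# into one of two slot modes, as described in the comment above.

LANE_BUDGET = 20  # lanes assumed available to the graphics slots

modes = {
    "single-card": [16, 4],  # one x16 slot plus one x4 slot: all 20 lanes used
    "SLI":         [8, 8],   # two x8 slots: 16 lanes used, 4 left idle
}

for mode, slots in modes.items():
    used = sum(slots)
    slot_list = " + ".join(f"x{w}" for w in slots)
    print(f"{mode:>11}: {slot_list:<9} -> {used} lanes used, "
          f"{LANE_BUDGET - used} unused")
```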
