The Kicker(s)

On server setups with available HyperTransport links to the processors, up to three nForce Pro 2050 MCPs can be added alongside a 2200 MCP.

As each MCP makes an HT connection to a processor, a dual or quad processor setup is required for a server to take advantage of three 2050 MCPs (in addition to the 2200), but where that kind of I/O power is needed, the processing power would be an obvious match. It is also possible for a single Opteron to connect to one 2200 and two 2050 MCPs for slightly less than the maximum I/O, as any available HyperTransport link can be used to connect to one of NVIDIA's MCPs. Certainly, the flexibility that AMD and NVIDIA now offer third party vendors has increased by orders of magnitude.
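
As a rough illustration of that link budgeting, here is a minimal sketch in Python. The three HyperTransport links per Opteron are part of the processor spec; treating each MCP as occupying exactly one free link, and the simple coherent-link accounting for multiprocessor setups, are assumptions made only for this example.

```python
# Toy HyperTransport link budget for the topologies described above.
LINKS_PER_OPTERON = 3  # each Opteron exposes three HT links

def free_io_links(n_cpus: int, coherent_link_endpoints: int) -> int:
    """Each 2200 or 2050 MCP occupies one free HT link on some processor."""
    return n_cpus * LINKS_PER_OPTERON - coherent_link_endpoints

# Single Opteron, no coherent links: 3 free links -> one 2200 plus two 2050s.
print(free_io_links(1, 0))  # 3
# Two Opterons joined by one coherent link (one endpoint consumed on each CPU):
print(free_io_links(2, 2))  # 4 -> room for one 2200 plus three 2050s
```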

So, what does a maximum of 4 MCPs in one system get us? Here's the list:

  • 80 PCI Express lanes across 16 physical connections
  • 16 SATA 3Gb/s channels
  • 4 GbE interfaces

Keep in mind that these features are all part of MCPs connected directly to physical processors over 1GHz HyperTransport links. Other onboard solutions with massive I/O have provided multiple GbE, SATA, and PCIe lanes through a south bridge, or even the PCI bus. The potential for onboard scalability with zero performance hit is staggering. We wonder whether spreading all of this I/O across multiple HyperTransport links and processors might even increase performance over the traditional AMD I/O topology.
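
As a quick sanity check on those totals, here is a small sketch that multiplies out per-MCP resources. The per-MCP figures are simply the four-MCP totals above divided by four, and the dictionary keys are labels invented for this example.

```python
# Tally of platform I/O as a function of MCP count (per-MCP figures derived
# from the article's four-MCP maximum configuration).
PER_MCP = {"pcie_lanes": 20, "pcie_connections": 4, "sata_3gbps_ports": 4, "gbe_ports": 1}

def platform_io(n_mcps: int) -> dict:
    return {name: count * n_mcps for name, count in PER_MCP.items()}

print(platform_io(4))  # max server config: 80 lanes, 16 connections, 16 SATA, 4 GbE
print(platform_io(2))  # DP workstation (2200 + 2050): 40 lanes, 8 connections, 8 SATA, 2 GbE
```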


Server configurations can hang an MCP off of each processor in an SMP system.

The fact that NVIDIA's RAID can span all of the storage devices in the system (all 16 SATA 3Gb/s ports plus the four PATA devices from the 2200) is quite interesting as well. Performance will be degraded by mixing PATA and SATA devices (as the system will have to wait longer on the PATA drives), so most will likely want to keep RAID limited to SATA. We are excited to see what kind of performance we can get from this kind of system.
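
To illustrate the point about waiting on the slower PATA drives, here is a deliberately crude model in which a striped array is limited by its slowest member. The MB/s figures are made up for illustration, and nothing here describes how NVIDIA's RAID engine actually schedules I/O.

```python
# Crude stripe model: every stripe-sized request touches all members, so
# sustained throughput is roughly the slowest drive times the member count.
def stripe_throughput(drive_mb_s: list[float]) -> float:
    return min(drive_mb_s) * len(drive_mb_s)

print(stripe_throughput([60.0, 60.0, 60.0, 60.0]))  # four SATA drives at ~60 MB/s -> ~240 MB/s
print(stripe_throughput([60.0, 60.0, 60.0, 40.0]))  # one slower PATA member drags it to ~160 MB/s
```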

So, after everyone catches their breath from the initial "wow" factor of the maximum nForce Pro configuration, keep in mind that it will only be used in very high end servers employing InfiniBand, 10 GbE, and other huge I/O interfaces. Boards will be configured with multiple x8 and x4 PCIe slots. This will be a huge boon to the high end server market. We've already shown the scalability of the Opteron at the quad level to be significantly higher than that of Intel's Xeon (single front side bus limitations hurt large I/O), and unless Intel has a hidden card up its sleeve, this new boost will put AMD based high end servers well beyond the reach of anything that Intel could put out in the near term.

On the workstation side, NVIDIA will be pushing dual processor platforms with one 2200 and one 2050 MCP. This will allow the following:

  • 40 PCI Express lanes across 8 physical connections
  • 8 SATA 3Gb/s channels
  • 2 GbE interfaces


This is the workstation layout for a DP nForce Pro system.

Again, it will be interesting to see whether a vendor comes out with a single processor workstation board that uses an Opteron 2xx to enable both a 2200 and a 2050. That way, a single processor workstation could be had with all the advantages of the dual-MCP platform.


IWill is shipping its DK8ES as we speak, and boards should be available soon.

While NVIDIA hasn't announced SLI versions of the Quadro or support for them, boards based on the nForce Pro will support SLI Quadro configurations when they become available. And the most attractive feature of the nForce Pro when talking about SLI is this: two full x16 PCI Express slots. Not only do both physical x16 slots get all 16 electrical connections, but NVIDIA has enough headroom left for an x4 slot and some x1 slots as well. In fact, this feature is a big enough boon even without the promise of SLI in the future. The nForce Pro offers full bandwidth and support for multiple PCI Express cards, meaning that multimonitor support on PCI Express just got a shot in the arm.

One thing that has been missing since the move away from PCI graphics cards is support for multiple high performance cards in one system. Due to the nature of AGP, only a single AGP card can be installed; running a PCI card alongside it is possible, but PCI cards offer much lower performance. Finally, those who need multiple graphics cards will have a solution that supports more than one of the latest and greatest cards. This means two 9MP monitors for some, and more than two monitors for others. And each graphics card gets a full x16 connection back to the rest of the system.


Tyan motherboards based on the nForce Pro are also shipping soon, according to NVIDIA.

The two x16 connections will also become important for workstation and server applications that wish to take advantage of GPUs for scientific, engineering, or other vector-based applications that lend themselves to huge arrays of parallel floating point hardware. We've seen plenty of interest in doing everything from fluid flow analysis to audio editing on graphics cards. It's the large bi-directional bandwidth of PCI Express that enables this, and now that we'll have two PCI Express graphics cards with full bandwidth, workstations have even more potential.
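
To put the bandwidth in perspective, here is a rough transfer-time calculation for a full x16 link; first generation PCI Express carries roughly 250 MB/s per lane in each direction, simultaneously, and the 512 MB working set is just an example figure.

```python
# Rough one-way transfer time over a PCI Express 1.x link of a given width.
LANE_MB_PER_S = 250  # per-lane, per-direction throughput for first generation PCIe

def one_way_transfer_ms(megabytes: float, lanes: int = 16) -> float:
    return megabytes / (lanes * LANE_MB_PER_S) * 1000

print(f"{one_way_transfer_ms(512):.0f} ms")  # ~128 ms to stream a 512 MB working set to (or from) the card
```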

Again, workstations will be able to run RAID across all eight SATA devices and four PATA devices if desired. We can expect mid-range and lower end workstation solutions to have only a single nForce Pro 2200 MCP on them. NVIDIA sees only the very high end of the workstation market pairing the 2200 with a 2050.

Perhaps the only downside to the huge number of potential PCI Express lanes is that NVIDIA's MCPs cannot share lanes between MCPs, combined with the somewhat limited number of physical PCI Express connections. In other words, it will not be possible to build a motherboard with five x16 PCI Express slots or ten x8 PCI Express slots, even though there are 80 lanes available. Of course, having four x16 slots is nothing to sneeze at. On the flip side, it's not possible to put one x16, two x4, and six x1 PCIe slots on a dual processor workstation board. Even though a good 10 PCI Express lanes would still be left over, there are no more physical connections to be had (each MCP provides only four). Again, this is an extreme example and not likely to be a real world problem, but it's worth noting nonetheless.
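
A tiny feasibility check makes the constraint concrete; the 40-lane, eight-connection budget is that of the dual MCP workstation described earlier, and this simplified check ignores the fact that slots must also be divided sensibly between the two MCPs.

```python
# Check a proposed slot layout against a total lane and connection budget.
def layout_fits(slot_widths: list[int], lanes: int = 40, connections: int = 8) -> bool:
    return sum(slot_widths) <= lanes and len(slot_widths) <= connections

# 1 x16, 2 x4, and 6 x1: only 30 lanes used, but nine slots against eight connections.
print(layout_fits([16, 4, 4, 1, 1, 1, 1, 1, 1]))  # False
```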


Comments

  • smn198 - Friday, January 28, 2005 - link

    It does do RAID-5!
    http://www.nvidia.com/object/IO_18137.html

    w00t!
  • smn198 - Friday, January 28, 2005 - link

    #18
    It can do RAID-5 according to http://www.legitreviews.com/article.php?aid=152

    Near bottom of page:
    "Update: NVIDIA contacted us to let us know that RAID 5 is also supported on the 2200 and 2050. They also didn't hesitate to point out that when the 2200 is matched with three 2050's, the RAID array can be spanned across 16 drives!"

    However, NVIDIA's site does not mention it! http://www.nvidia.com/object/feature_raid.html

    I wonder. Would be nice!
  • DerekWilson - Friday, January 28, 2005 - link

    #50,

    Each lane in PCIe consists of a serial up link and down link. This means that x16 actually has 4GB/s up and down at the same time (thus the 8GB/s number everyone always quotes). Saying 8GB/s bandwidth without saying 4 up and 4 down is a little misleading, because all of that bandwidth can't move in one direction when needed.

    #53,

    4x SATA 3Gb/s -> 12Gb/s -> 1.5GB/s + 2GbE -> 0.25GB/s + USB 2.0 ~-> .5GB/s = 2.25 GB/s ... so this is really manageable bandwidth, especially as it's unlikely for all of this to be moving while all 5GB/s up and down of the 20 PCIe lanes are moving at the same time.

    It's more likely that we'll see video cards leaving around 30% of the PCI Express b/w nearly idle (as, again, upload is often not used). Unless using the 2 x16 SLI ... we're still not quite sure how much bandwidth this will use over the top and through the PCIe bus. But one card is definitely going to send data back upstream.

    Each MCP has a 16x16 HT link @ 1GHz to the system... Bandwidth is 8GB/s (4 up and 4 down) ...
  • guyr - Thursday, January 27, 2005 - link

    Can anyone explain how these MCPs work regarding throughput? What kind of clock rate do they have? 4 SATA II drives alone is 12 Gbps. Add 2 GigE and that is 14. Throw in 8 USB 2.0 and that's almost an additional 4 Gbps. So if you add everything up, it looks to be over 20 Gbps! Oops, sorry, forgot about 20 lanes of PCIe. Anyway, has anyone identified a realistic throughput that can be expected? These specs are wonderful, but if the chip can only pass 100 MB/s, it doesn't mean anything.
  • jeromechiu - Thursday, January 27, 2005 - link

    #12, if you have a gigabit switch that supports port trunking, then you could use BOTH of the gigabit ports for faster intranet file-transfer. Hell! Perhaps you could add another two 4-port gigabit adaptors and give your PC a sort-of-10Gbps connection to the switch! ;)
  • philpoe - Wednesday, January 26, 2005 - link

    Being a newbie to PCI-E, if I read a PCI-Express FAQ correctly, aren't the x16 slots in use for graphics cards today 1 way only? Too bad the lanes can't be combined, or you could get to a 1-way x32 slot (apparently in the PCI-E spec). In any case, 4 x8 full duplex cards would be just the ticket for Infiniband (making all that Gbe worthless?) and 4 x2 slots for good measure :). Just think of 16x SATA-300 drives attached and RAID. Talk about a throughput monster.
    Imagine Sun, with the corporate-credible Solaris OS selling such a machine.
  • DerekWilson - Tuesday, January 25, 2005 - link

    #32 henry, and anyone who saw my wrong math :-)

    You were right in your setup even though you only mentioned hooking up 4 x1 lanes -- 2 more could have been connected. Oops. I've corrected the article to reflect a configuration that actually can't be done (for real this time, I promise). Check my math again to be sure:

    1 x16, 2 x4, 6 x1

    that's 9 slots with only 8 physical connections, still with 10 lanes left over. In the extreme, I could have said that you can't do 9 x1 connections on one board, but I wanted to maintain some semblance of reality.

    Again, it looks like the nForce Pro is able to throw out a good deal of firepower ....
  • ceefka - Tuesday, January 25, 2005 - link

    Plus I can't wait to see a rig like this doing benchies :-)
  • ceefka - Tuesday, January 25, 2005 - link

    In one word: amazing!

    Some of this logic eludes me, however.

    There's no board that can fully exploit the theoretical connectivity of a 4-way opteron config with these chipsets?
  • SunLord - Tuesday, January 25, 2005 - link

    I'd pay up to $450 for a dual cpu/chipset board as long as it gave me 2x16, 1x4 and 1-3x1 connectors... as I see no use for pci-x when pci-e cards are coming out... Would make for one hell of a workstation to replace my aging athlon mp using tyan thunder k7 pro board. Even if the onboard raid doesn't do raid 5, I can use the 4x slot for a sata2 raid card with little to no impact! Though 2 gigabit ports is kinda overkill. mmm 8x74GB(136GB) raptor raid 0/1 and 12x500GB(6TB) Raid 5 3Ware/AMCC controller.

    I can dream can't I? No clue what I would do with that much diskspace though... and still have enough room for 4 dvd-+rw dual layer burners hehe
