NVIDIA nForce Pro 2200 MCP and 2050 MCP

There will be two different MCPs in the nForce Professional lineup: the nForce Pro 2200 and the nForce Pro 2050. The 2200 is a full-featured MCP, and while the 2050 doesn't have all the functionality of the 2200, they are based on the same silicon. The feature set of the NVIDIA nForce Pro 2200 MCP is just about the same as that of the nForce 4 SLI and is as follows:

  • 1 1GHz 16x16 HyperTransport Link
  • 20 PCI Express lanes configurable over 4 physical connections
  • Gb Ethernet w/ TCP/IP Offload Engine (TOE)
  • 4 SATA 3Gb/s
  • 2 ATA-133 channels
  • RAID and NCQ support (RAID can span SATA and PATA)
  • 10 USB 2.0
  • PCI 2.3

The 20 PCI Express lanes can be spread out over 4 controllers at the motherboard vendor's discretion via NVIDIA's internal crossbar connection. For instance, a board based on the 2200 could employ 1 x16 slot and 1 x4 slot, or 1 x16 and 3 x1 slots. It cannot host more than 4 physical connections or 20 total lanes. Technically, NVIDIA could support configurations like x6, which don't match the PCI Express spec. This may prove interesting if any vendors decide to bend the rules, but server and workstation products will likely stick to the guidelines.
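
To make the crossbar constraint concrete, here is a minimal sketch in Python (the function and names are ours, not NVIDIA's) that checks whether a proposed slot layout fits the 2200's limits of 4 physical connections and 20 total lanes:

```python
# Minimal sketch of the nForce Pro 2200's PCI Express constraints:
# at most 4 physical connections and 20 lanes total, with an optional
# check that every link width is one defined by the PCI Express spec.
STANDARD_WIDTHS = {1, 2, 4, 8, 16}

def is_valid_layout(widths, max_links=4, max_lanes=20, spec_only=True):
    """Return True if a list of slot widths fits the MCP's limits."""
    if len(widths) > max_links:
        return False  # more physical connections than the MCP provides
    if sum(widths) > max_lanes:
        return False  # more lanes than the crossbar can allocate
    if spec_only and any(w not in STANDARD_WIDTHS for w in widths):
        return False  # e.g. an x6 link would bend the spec
    return True

print(is_valid_layout([16, 4]))        # True: the 1 x16 + 1 x4 example
print(is_valid_layout([16, 1, 1, 1]))  # True: the 1 x16 + 3 x1 example
print(is_valid_layout([16, 8]))        # False: 24 lanes exceeds 20
print(is_valid_layout([6, 6, 4, 4]))   # False: x6 isn't a spec width...
print(is_valid_layout([6, 6, 4, 4], spec_only=False))  # ...unless rules bend
```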

Maintaining SATA and PATA support is a good thing, especially with 4 SATA 3Gb/s channels, 2 PATA channels (for 4 devices), and support for RAID on both. Even better is the fact that NVIDIA's RAID solution can be applied across a mixed SATA/PATA environment. Our initial investigation of NCQ wasn't all that impressive, but hardware is always improving, and applications in the professional space are a good fit for NCQ.


This is the layout of a typical system with the nForce Pro 2200 MCP.

The nForce Pro 2050 MCP, the cut-down version of the 2200 that will be used as an I/O add-on, supports these features:

  • 1 1GHz 16x16 HyperTransport Link
  • 20 PCI Express lanes configurable over 4 physical connections
  • Gb Ethernet w/ TCP/IP Offload Engine (TOE)
  • 4 SATA 3Gb/s

Again, the PCI Express controllers and lanes are configurable. Dropping in a 2050 to add another 20 lanes, another GbE port, and 4 more SATA connections is an obvious advantage, but there is more.
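
As a rough illustration of the arithmetic, here is a quick sketch (feature counts taken from the two lists above; the dictionary names are illustrative, not NVIDIA's) tallying what a 2200 plus a 2050 would provide:

```python
# Hypothetical tally of what a board gains by pairing a 2050 with a 2200.
# Feature counts come from the two lists above; the names are illustrative.
NFORCE_PRO_2200 = {"pcie_lanes": 20, "pcie_links": 4, "gbe_ports": 1,
                   "sata_3gbps_ports": 4, "ata133_channels": 2,
                   "usb2_ports": 10}
NFORCE_PRO_2050 = {"pcie_lanes": 20, "pcie_links": 4, "gbe_ports": 1,
                   "sata_3gbps_ports": 4}  # the cut-down MCP drops the rest

def combined_io(*mcps):
    """Sum each I/O feature across every MCP attached to the system."""
    totals = {}
    for mcp in mcps:
        for feature, count in mcp.items():
            totals[feature] = totals.get(feature, 0) + count
    return totals

# One 2200 + one 2050: 40 PCIe lanes over 8 links, 2 GbE, 8 SATA ports.
print(combined_io(NFORCE_PRO_2200, NFORCE_PRO_2050))
```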

As far as we can tell from this list, the only new feature introduced over nForce 4 is the TCP/IP Offload Engine in the GbE. Current nForce 4 SLI chipsets are capable of all the other functionality discussed in the nForce Pro 2200 MCP, although there may be some server-level error reporting built into the core logic of the Professional series that we are not aware of. After all, those extra two million transistors had to go somewhere.

But that is definitely not all there is to the story. In fact, the best part is yet to come.


Comments

  • SDA - Monday, January 24, 2005 - link

    Thanks, Kris, but I do know that PCI-X != PCI-Express... a lot of people use it to mean that by mistake, though, so I'm not sure what the author meant by PCI-X on the last page of the article.

    Also, technically, PCI-X isn't quite 64-bit PCI. 64-bit PCI is, well, 64-bit PCI; the main difference between it and PCI-X is that PCI-X also runs at a faster clock (133MHz, or 266MHz for 2.0). Obsolete PC technology is one of the few things I have any knowledge about, heh.
  • REMF - Monday, January 24, 2005 - link

    my mistake Derek, got the diagram muddled up with those hideous dual boards that connect all the memory through CPU0 and route it via HT to CPU1.

    mixed up memory with IO, silly me.
  • DerekWilson - Monday, January 24, 2005 - link

    nf pro supports ncq and not tcq ...

    I also updated the article ... MCPs are more flexible than I thought and NVIDIA has corrected me on a point --

    one 2200 and two 2050s can connect to an Opteron 150. Dual and quad servers are able to connect to 4 MCPs total (2 per processor for dual, 1 per processor for quad).

    With 8-way servers, it's possible to build even more I/O into the system. NVIDIA says they're mostly targeting 2 and 4 way, but with 8 way systems, there are topologies that essentially connect two 4-way setups together. In these cases, 6 MCPs could be used, giving even more I/O ...

    #21 ---

    Every Opteron has 3 HT links ... the difference between a 1xx, 2xx, and 8xx is the number of coherent HT links. In a dual core setup, either AMD could use one of the 3 HT links for core to core comm, or they could add an HT link for intra core comm.
  • pio!pio! - Monday, January 24, 2005 - link

    If I'm reading this correctly...with all those PCI Express slots and multiple MCP's and multiproc's...the number of traces in the mobo should be astronomically high..I wonder how expensive the motherboards will be
  • jmautz - Monday, January 24, 2005 - link

    Please correct my memory/misunderstanding...

    I thought the reason AMD could make a dual-core Opteron so easily was because they attached both cores via the unused HyperTransport connector. Doesn't that mean there are no available HyperTransport connectors on which to attach the 2050? (at least on the 2xx models)

    Thanks.
  • DerekWilson - Monday, January 24, 2005 - link

    #18

    capable of RAID 0, 1, 0+1 ... same as NF4. The overhead of RAID 5 would require a much more powerful processor (or performance would be much slower).

    #15

    Quad and 8-way scientific systems with 4 video cards in them doing general purpose scientific computing (or any vector fp math app) come to mind as a very relevant app ... I could see clusters of those being very effective in crunching large science/math/engineering problems.

    #12/#13

    NUMA and memory bandwidth has nothing to do with NVIDIA's nForce 4 or nForce Pro, or even AMD's chipsets.

    Each Opteron has its own on-die memory controller, and the motherboard vendor can opt to implement a system that allows or disallows NUMA as they see fit. What's required is a BIOS that has APIC 2, no node interleaving, and can build an SRAT. Also, the motherboard must allow physical memory to be attached to each processor's memory controller. It's really a BIOS and physical layout issue.

    The NVIDIA core logic does do a lot for being a single chip. But we should remember that it doesn't need to act as a memory controller, as Intel's northbridge must. The nForce has no effect on memory config.
  • tumbleweed - Monday, January 24, 2005 - link

    The Tech Report mentioned that the nForce Pro supports TCQ instead of just NCQ - is that wrong, or was that just not mentioned here?
  • Doormat - Monday, January 24, 2005 - link

    Perhaps I missed it, but what RAID modes is it capable of? 0/1/5? I'd love to have a board with 8 SATA-II ports and dual opteron processors and run RAID 5 as a file server (with 64b linux of course). Let the CPUs do the parity calcs (since that'd be the only thing its used for). Mmmm... 8x400GB in RAID-5.
  • jmautz - Monday, January 24, 2005 - link

    Thanks, I see that now. When I missed it the first time, I went back and looked at the summary specs on page 3 and didn't see it listed.

    Thanks again.
  • ProviaFan - Monday, January 24, 2005 - link

    #14 / jmautz:

    On page 2 of the article, there is this statement:
    "NVIDIA has also informed us that they have been validating AMD's dual core solutions on nForce Professional before launch as well. NVIDIA wants its customers to know that it's looking to the future, but the statement of dual core validation just serves to create more anticipation for dual core to course through our veins in the meantime. Of course, with dual core coming down the pipe later this year, the rest of the system can't lag behind."
