Final Words

We can't wait to get our hands on a board. Now that NVIDIA has announced this approach to scaling I/O over HyperTransport connections, we wonder why the industry hasn't been pushing it all along. In hindsight, it seems obvious that using the processors' spare HT links would be advantageous and relatively simple in an Opteron environment, especially when all of the core logic fits on a single chip. Kudos to NVIDIA for bringing the 2200 and 2050 combination to market.

Though much of the nForce Professional series is very similar to the nForce 4, NVIDIA has likely made good use of those two million extra transistors. We can't be exactly sure what they were spent on - most likely the TCP/IP offload engine, and possibly some server-level error reporting logic - but beyond that, nForce Pro is essentially the same as the nForce 4.

The flexibility that the nForce Pro 2050 MCP offers vendors is enormous. We've already seen what everyone has tried with the NF4 Ultra and SLI chipsets, and now that there is a part designed for scalability and multiple configurations, we are sure to see some ingenious designs spring forth.

NVIDIA mentioned that many of its partners wanted a launch in December, and told us that IWill and Tyan are already shipping boards, though we aren't sure how widespread availability is yet; we will have to follow up with both companies. As far as we are concerned, the faster NVIDIA can get nForce Professional out the door, the better.

The last thing to look at is how the new NVIDIA solution compares to its competition from Intel. Here's a handy comparison chart for those who want to know what they can get in terms of I/O from NVIDIA and from Intel on their server and workstation boards.


Server/Workstation Platform Comparison

                       NVIDIA nForce Pro (single)   NVIDIA nForce Pro (quad)   Intel E7525/E7520
PCI Express Lanes      20 lanes                     80 lanes                   24 lanes
SATA                   4 SATA II                    16 SATA II                 2 SATA 1.0
Gigabit Ethernet MAC   1                            4                          1
USB 2.0                10                           10                         4
PCI-X Support          No                           No                         Yes
DDR/DDR2               DDR                          DDR                        DDR2

NVIDIA didn't build PCI-X support into the nForce Pro itself, but Opteron boards based on it can still offer PCI-X when paired with the appropriate AMD-8000 series chips. It's clear how far beyond Lindenhurst and Tumwater (the E7520 and E7525) the nForce Pro will scale in dual and quad Opteron solutions. Even in a single MCP configuration, NVIDIA's configurable PCI Express controller offers a lot of flexibility, while Intel's solutions are locked into either one x16 slot plus one x8 (E7525) or three x8 links (E7520). Each x8 connection to the MCH can instead run two physical devices (up to two x4 links), and if the motherboard vendor adds Intel's additional PCI hub for more PCI-X slots, either four or eight of those PCI Express lanes go away.
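To make the lane math a bit more concrete, here is a rough, illustrative Python sketch of how a proposed slot layout runs up against these budgets. The assumed constraints (20 lanes and at most four physical PCI Express links per MCP, with a single link unable to span two MCPs) come from this article and from the discussion in the comments below; the greedy first-fit placement and the function name are purely our own illustration, not NVIDIA's actual configuration logic.

```python
# A rough, hypothetical sketch of the nForce Pro lane budget, not vendor code.
# Assumed constraints (from the article and the comments below): each MCP
# exposes 20 PCI Express lanes on at most 4 physical links, and a single
# link cannot span two MCPs.

LANES_PER_MCP = 20
LINKS_PER_MCP = 4

def layout_fits(slot_widths, mcp_count):
    """Greedy first-fit check: can every requested slot (x1/x4/x8/x16)
    be placed on some MCP without exceeding its lane or link budget?"""
    lanes_left = [LANES_PER_MCP] * mcp_count
    links_left = [LINKS_PER_MCP] * mcp_count

    # Place the widest slots first so the x16 links aren't starved of lanes.
    for width in sorted(slot_widths, reverse=True):
        for i in range(mcp_count):
            if lanes_left[i] >= width and links_left[i] > 0:
                lanes_left[i] -= width
                links_left[i] -= 1
                break
        else:
            return False  # no single MCP can host this slot
    return True

if __name__ == "__main__":
    # Single 2200: a x16 graphics slot plus a x4 uses the full 20 lanes.
    print(layout_fits([16, 4], mcp_count=1))            # True
    # Quad Opteron with a 2200 and three 2050s (80 lanes total):
    print(layout_fits([16, 16, 16, 16], mcp_count=4))   # True
    print(layout_fits([16] * 5, mcp_count=4))           # False: a 5th x16 won't fit
    print(layout_fits([8] * 8 + [4] * 4, mcp_count=4))  # True: the max-bandwidth layout
```

If those assumed constraints hold, the sketch also shows why a quad configuration tops out at four x16 slots even with 80 lanes on paper: no single MCP can lend lanes to its neighbors.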

Unfortunately, there isn't a whole lot more that we can say until we get our hands on hardware for testing. Professional series products can take longer to reach our lab, so it may be some time before we can get a review out, but we will try our best to get boards as soon as possible. Of course, these boards will cost a lot, and the more exciting the board, the less affordable it will be. That won't stop us from reviewing them, though. On paper, this is definitely one of the most intriguing advancements that we've seen in AMD-centered core logic, and it could be one of the best things ever to happen to high-end AMD servers.

On the workstation side, we are very interested in testing a full 2 x16 PCI Express SLI setup, as well as the multiple display possibilities of such a system. It's an exciting time for the AMD workstation market, and we're really looking forward to getting our hands on systems.



  • ProviaFan - Monday, January 24, 2005 - link

    For a long time, making a quad CPU workstation was pretty much not an option, because there was no way to connect an AGP graphics card for good 3D performance (yes, there is a PCI-X Parhelia, and no, that doesn't count). The only one I can remember was an SGI desktop system with 4 PIII's, though maybe the graphics on that were integrated (though of course they weren't bad, unlike Intel's).

    Now, with a quad CPU system and PCI-E, it will be possible to do whatever you want with those x16 slots, including using a high-performance graphics card (or two, which is something that used to be reserved for Sun systems with their proprietary graphics connectors). Or, with dual core, you could have a virtually 8-way workstation, though I'm not sure what the benefit of that would be outside of complex scientific calculations or 3D rendering.

    The sad part is there's no freakin way that I'll be able to afford that... :(
  • jmautz - Monday, January 24, 2005 - link

    I may have missed it, but I didn't see anything about support for dual-core processors. Was this mentioned? I would love to get a dual-core dual Opt board with all PCIe slots (2x16, 1x4, 4x1 would be nice).
  • R3MF - Monday, January 24, 2005 - link

    update on the Abit DualCPU board:

    Chipset

    * nVidia CrushK8-04 Pro Chipset

    > it does appear to use the nForce4 chipset, so one immediate question springs to mind: why, if they can get NUMA memory on dual CPU boards with the nForce4, can they not do the same with the nForce Pro?


  • R3MF - Monday, January 24, 2005 - link

    what does this mean for the new Abit DualCPU board:
    http://forums.2cpu.com/showthread.php?s=ef43ac4b9b...

    one core-logic chip, yet with NUMA memory, presumably this means it is not an nForce Pro board if i understand anandtechs diagrams correctly.....?

    i like the sound of the Abit board:
    2x CPU
    2x NUMA memory per CPU
    2x SLI
    4x SATA2 slots
    1x GigE with Active Armour (my guess)

    best of all i am not paying for stuff i will never use like:
    second GigE socket
    PCI-X
    registered memory

    the only thing it lacks is a decent sound solution, but then every nForce4 suffers the same lack. hopefully someone will come out with a decent PCI-E dolby-digital soundcard...............
  • R3MF - Monday, January 24, 2005 - link

    @ 10 - i don't think so. the active armour is a DSP that does the necessary computations to run the firewall, as opposed to letting the CPU do the grunt work.
  • Illissius - Monday, January 24, 2005 - link

    Isn't the TCP offload engine thingy just ActiveArmor with a different name?
  • R3MF - Monday, January 24, 2005 - link

    two comments:

    the age of cheap dual opteron speed demons is not yet upon us, because although you only need one 2200 chip to have a dual CPU rig, the second CPU connects via the first, so you only get 6.4GB/s of bandwidth as opposed to 12.8GB/s. yes you can pair a 2200 and a 2050 together, but i bet they will be very pricey!

    the article makes mention of SLI for Quadro cards, presumably driver specific to accommodate data sharing over two different PCI-E bridges as opposed to one PCI-E bridge as is the case with nForce4 SLI. this would seem to indicate that regular PCI-E 6xxx series cards will not be able to be used in an SLI configuration on nForce Pro boards, as the ability will not be enabled in the driver. am i right?
  • DerekWilson - Monday, January 24, 2005 - link

    The Intel way of doing PCI Express devices is MUCH simpler. 3 dedicated x8 PCI Express ports on their MCH. These can be split in half and treated as two logical x4 connections or combined into a x16 PEG port. This is easy to approach from a design and implementation perspective.

    Aside from NVIDIA having a more complex crossbar that needs to be set up at boot by the BIOS, allowing 20 devices would mean NVIDIA would potentially have to set up and stream data to 20 different PCI Express devices rather than 4 ... I'm not well versed in the issues of PCI Express, or NVIDIA's internal architecture, but this could pose a problem.

    There is also absolutely no reason (or physical spec) to have 20 x1 PCI Express lanes on a motherboard ;-)

    I could see an argument for having 5 physical connections in case there was a client that simply needed 5 x4 connections. But that's as far as I would go with it.

    The only big limitation is that the controllers can't span MCPs :-)

    meaning it is NOT possible to have 5 x16 PCI Express connectors on a quad Opteron mobo with the 2200 and 3 2050s. Nor is it possible to have 10 x8 slots. Max bandwidth config would be 8 x8 slots and 4 x4 slots ... or maybe throw in 2 x16, 4 x8, 4 x4 ... That's still too many connectors for conventional boards ... I think I'll add this note to the article.

    #7 ... There shouldn't be much reason for MCPs to communicate explicitly with each other. It was probably necessary for NVIDIA to extend the RAID logic to allow it to span across multiple MCPs, and it is possible that some glue logic may have been necessary to allow a boot device to be located on a 2050, for instance. I can't see it being 2M transistors' worth of MCP-to-MCP kind of stuff, though.
  • ksherman - Monday, January 24, 2005 - link

    I agree with #6... I think the extra transistors would be used to allow all the chips to communicate.
  • mickyb - Monday, January 24, 2005 - link

    One has to wonder what the 2 million extra transistors are for. I would be surprised if it was "just" to allow multiple MCPs. Sounds like a lot of logic. I am also surprised about the 4 physical connector limit. I didn't realize the PCI-E lanes had to be partitioned off like so. I assumed that if there were 20 lanes, they could create up to 20 connectors.
