DualNet

DualNet's suite of options brings several enterprise-class networking technologies to the general desktop, such as teaming, load balancing, and fail-over, along with hardware-based TCP/IP acceleration. Teaming combines the two integrated Gigabit Ethernet ports into a single 2-Gigabit Ethernet connection, improving link throughput while providing fail-over redundancy. TCP/IP acceleration reduces CPU utilization by offloading CPU-intensive packet processing tasks to a dedicated hardware processor, combined with optimized driver support.

While all of this sounds impressive, the actual impact for the average desktop user is minimal. On the other hand, a user setting up a game server/client for a LAN party or building a home gateway machine will find these options very valuable. Overall, features like DualNet are better suited to the server and workstation market. We suspect these options are included (not that we are complaining) because NVIDIA's professional workstation/server chipsets are based upon the same core logic.


NVIDIA now integrates dual Gigabit Ethernet MACs on the same physical chip. This allows the two Gigabit Ethernet ports to be used individually or combined depending on the needs of the user. Previous NF4 boards offered a single Gigabit Ethernet MAC, with motherboard suppliers having the option to add a second Gigabit port via an external controller chip. That approach often resulted in two different driver sets, with various controller chips residing on either the PCI Express or PCI bus, and typically delivered worse performance than a well-implemented dual-PCIe Gigabit Ethernet solution.

Teaming


Teaming allows both of the Gigabit Ethernet ports in NVIDIA DualNet configurations to be used in parallel to set up a 2-Gigabit Ethernet backbone. Multiple computers can be connected simultaneously at full gigabit speeds while the resulting traffic is load balanced. When teaming is enabled, each gigabit link within the team maintains its own dedicated MAC address while the combined team shares a single IP address.

Transmit load balancing uses the destination (client) IP address to assign outbound traffic to a particular gigabit connection within the team. When data needs to be transmitted, the network driver uses this assignment to determine which gigabit connection will carry the traffic, keeping the load balanced across all the gigabit links in the team. If at any point one of the links is underutilized, the algorithm dynamically reassigns connections to restore the balance. Receive load balancing uses a connection steering method to distribute inbound traffic between the two gigabit links in the team; when the gigabit ports are connected to different servers, inbound traffic is distributed between the links in the team.
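The destination-based assignment described above can be sketched as a simple hash over the client IP address. This is an illustrative sketch only; the link names and the modulo hash are our assumptions, not NVIDIA's actual driver logic.

```python
import ipaddress

# Hypothetical two-port team; the names are illustrative.
TEAM_LINKS = ["gige0", "gige1"]

def pick_tx_link(dest_ip: str, links=TEAM_LINKS) -> str:
    """Assign outbound traffic to a link by hashing the destination IP.

    A given client always maps to the same link, preserving per-client
    packet ordering, while clients as a whole are spread across the team.
    """
    key = int(ipaddress.ip_address(dest_ip))
    return links[key % len(links)]

# Different clients may land on different links...
print(pick_tx_link("192.168.1.10"))
print(pick_tx_link("192.168.1.11"))
# ...but the same client always uses the same link.
assert pick_tx_link("192.168.1.10") == pick_tx_link("192.168.1.10")
```

A static hash like this keeps each TCP connection on one physical link, which avoids out-of-order delivery; the dynamic rebalancing NVIDIA describes would sit on top of such an assignment.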


The integrated fail-over technology ensures that if one link goes down, traffic is instantly and automatically redirected to the remaining link. If, for example, a file is being downloaded, the download continues without packet loss or data corruption. Once the lost link has been restored, the team is re-established and traffic begins to flow over the restored link again.
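The fail-over behavior amounts to rerouting around a dead link and returning once it recovers. A minimal sketch, assuming hypothetical link names and a simple up/down table (not NVIDIA's driver code):

```python
# State of each link in a hypothetical two-link team.
link_up = {"gige0": True, "gige1": True}

def route(preferred: str) -> str:
    """Use the preferred link if it is up; otherwise fail over to any
    surviving link so in-flight transfers continue uninterrupted."""
    if link_up.get(preferred):
        return preferred
    for name, up in link_up.items():
        if up:
            return name
    raise RuntimeError("all links in the team are down")

assert route("gige0") == "gige0"   # normal operation
link_up["gige0"] = False           # cable pulled / link lost
assert route("gige0") == "gige1"   # traffic redirected automatically
link_up["gige0"] = True            # link restored
assert route("gige0") == "gige0"   # team re-established
```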

NVIDIA quotes an average throughput improvement of 40% when using teaming, although this number can go higher. In its multi-client demonstration, NVIDIA achieved a 70% improvement in throughput using six client machines. In our own internal test we measured about a 36% improvement in throughput with our video streaming benchmark while playing Serious Sam II across three client machines. For those without a Gigabit network, DualNet can also team two 10/100 Fast Ethernet connections. Once again, this is a feature set that few desktop users will truly be able to exploit at the current time, but we commend NVIDIA for its forward thinking as we can see this type of technology being useful in the near future.

TCP/IP Acceleration

NVIDIA TCP/IP Acceleration is a networking solution that includes both a dedicated processor for accelerating network traffic processing and optimized drivers. The current nForce 590 SLI and nForce 680i SLI MCPs have TCP/IP acceleration and hardware offload capability built into both native Gigabit Ethernet controllers. This capability will typically lower the CPU utilization rate when processing network data at gigabit speeds.


In software solutions, the CPU is responsible for processing all aspects of the TCP protocol: checksumming, ACK processing, and connection lookup. Depending upon network traffic and the types of packets being transmitted, this can place a significant load on the CPU. With hardware acceleration, packet data is processed and checksummed inside the MCP instead of being handed to the CPU for software-based processing, which improves both overall throughput and CPU utilization.
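To give a sense of the per-packet work being offloaded, here is the standard Internet checksum (RFC 1071) that a software stack computes on the CPU for every segment. This is generic reference arithmetic, not NVIDIA's implementation:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, per RFC 1071.

    A software TCP/IP stack runs this over every segment on the CPU;
    hardware offload moves exactly this kind of work into the MCP.
    """
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

# RFC 1071 worked example: words 0001 f203 f4f5 f6f7 -> checksum 0x220D
assert internet_checksum(bytes.fromhex("0001f203f4f5f6f7")) == 0x220D
```

Touching every byte of every packet this way is precisely why gigabit-rate traffic drives up CPU utilization in a pure software stack.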

NVIDIA dropped the ActiveArmor slogan for the nForce 500 release, and it is no different for the nForce 600i series. Thankfully, the ActiveArmor firewall application was jettisoned into deep space, as NVIDIA pointed out that the basic features it provided will be part of Windows Vista. We also suspect NVIDIA was influenced to drop ActiveArmor by the reported data corruption issues on the nForce4, caused in part by overly aggressive CPU utilization settings, customer support headaches, issues with Microsoft, and quite possibly hardware "flaws" in the original nForce MCP design.

We have found a higher degree of stability with the new TCP/IP acceleration design, but this stability comes at a price: if TCP/IP acceleration is enabled via the control panel, certain network traffic will bypass third-party firewall applications. We noticed CPU utilization rates near 14% with the TCP/IP offload engine enabled and near 26% without it.

Comments

  • Wesley Fink - Thursday, November 9, 2006 - link

    The other time you might need a fan on the northbridge is when using water cooling or phase-change cooling. There is no airflow spillover from water-cooling the CPU like there is with the usual fan heatsink on the CPU, so the auxiliary fan might be needed in that situation.
  • Wesley Fink - Thursday, November 9, 2006 - link

    The 680i does NOT require active northbridge cooling and is shipped as a passive heatpipe design. At 80nm it is much cooler than the 130nm NVIDIA chipsets. The fan you see in the pictures is an included accessory for massive overclocking, much like Asus includes auxiliary fans in their top boards.

    In our testing we really did not find the stock fanless board much of a limitation in overclocking as the northbridge did not get particularly hot at any time. We installed the fan when we were trying to set the OC record and left it on for our 3 days at 2100 FSB. Since it is a clip and 3 screws to install we left it on.
  • IntelUser2000 - Monday, November 13, 2006 - link

    quote:

    The 680i does NOT require active northbridge cooling and is shipped as a passive heatpipe design. At 80nm it is much cooler than the 130nm NVIDIA chipsets. The fan you see in the pictures is an included accessory for massive overclocking, much like Asus includes auxiliary fans in their top boards.

    In our testing we really did not find the stock fanless board much of a limitation in overclocking as the northbridge did not get particularly hot at any time. We installed the fan when we were trying to set the OC record and left it on for our 3 days at 2100 FSB. Since it is a clip and 3 screws to install we left it on.


    That's funny. A cooler running one consuming more power. Must be the die size is much larger :D.
  • yacoub - Thursday, November 9, 2006 - link

    ah okay thanks for that clarification! =)
  • yacoub - Thursday, November 9, 2006 - link

    nTune would be a lot more interesting if it wasn't so slow to respond to page changes, cumbersome, and a gigantic UI real-estate hog.

    The same functionality in a slimmer, more configurable, and more efficient UI design would be highly desirable.
  • yacoub - Thursday, November 9, 2006 - link

    and actually, that goes for the entire NVidia display/GPU settings configuration panel.
  • Khato - Wednesday, November 8, 2006 - link

    Each CPU is going to have a max FSB clock that it'll run stably at for the same reason that it has a max core logic frequency. The main difference here is that you have two possible barriers: signal degradation due to the analog buffers not being designed for such high speed, and then whatever buffer logic there is in the CPU to clock-cross from FSB to core not liking the higher frequency. I'm kinda leaning towards the buffer logic being the limiting factor, since I'd expect the manufacturing variance in the analog buffers to be minimal. That and the described 75MHz variance in top FSB frequency between various processors sounds reasonable for non-optimized logic.
  • Staples - Wednesday, November 8, 2006 - link

    I have no need for SLI. It makes the board more expensive, and an SLI setup is just not worth it to me. I was about to buy a P965 chipset but now I am interested in the 650i Ultra. Will we see a review of this chipset in the future? Most of it seems to be exactly the same as the 680i, however it does lack some features and I am afraid that those missing features may affect performance. As it stands now, do you expect the 650i Ultra to perform identically to the 680i SLI?
  • Gary Key - Wednesday, November 8, 2006 - link

    quote:

    As it stands now, do you expect the 650i Ultra to perform identically to the 680i SLI?


    We do not, but we do expect the 650i SLI to perform close to it. We will have 650i boards in early December for review. :)
  • Pirks - Wednesday, November 8, 2006 - link

    is this functionality where you can overclock your CPU and FSB and memory on the fly without rebooting Windows available only on nForce mobos? I'm a stability freak and I want to be able to raise and lower my clocks and voltage on the fly, similar to the way Macs do this - they spin their fans under load and become totally quiet when idle - I wanna do the same so that my rig is dead quiet when idle/doing word/inet/email/etc and becomes noisy and fast OCed beast when firing up Crysis or something. and I want this Mac-style WITHOUT rebooting Windows

    so do I have to buy nVidia mobo for that?

    600i series only or earlier nForce 4 or 5 series will do as well?

    I still can't dig what's up with these "dynamic BIOS updates that _require_ reboot to work" - so can you OC without rebooting or not? if yes - what are these BIOS options that nTune changes that DOES require reboot?

    could you happy nTune owners enlighten me on that stuff? thanks ;)
