New Feature: LinkBoost

One of the feature sets unique to the nForce 590 SLI MCP - and highly touted by NVIDIA - is called LinkBoost. If a GeForce 7900 GTX is detected on an nForce5 system, LinkBoost automatically increases the PCI Express (PCIe) and MCP HyperTransport (HT) bus speeds by 25%, raising the bandwidth available on each PCIe and HT link from 8GB/s to 10GB/s. Because this technology raises the PCI Express clock by 25%, NVIDIA requires the video card to be certified for the feature to engage automatically. Currently the 7900 GTX is the only certified card, although you can set the bus speeds manually and achieve the same or a better overclock, depending on your components.
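The bandwidth figures follow directly from the clock increase, since link bandwidth scales linearly with link clock. A quick sanity check (illustrative only; the constants are the published link speeds, not values read from hardware):

```python
# LinkBoost raises the PCIe and HT reference clocks by 25%;
# link bandwidth scales linearly with clock speed.
BASE_BANDWIDTH_GBPS = 8.0   # default PCIe x16 / HT link bandwidth
LINKBOOST_FACTOR = 1.25     # 25% automatic overclock

boosted = BASE_BANDWIDTH_GBPS * LINKBOOST_FACTOR
print(f"{BASE_BANDWIDTH_GBPS:.0f}GB/s -> {boosted:.0f}GB/s")  # 8GB/s -> 10GB/s
```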

In essence, NVIDIA is guaranteeing that the chipset's PCIe and HT interconnect links are qualified to run at up to 125% of their default speeds without issue. While LinkBoost is an interesting idea, in practice it did not change our test scores outside the normal margin of error: the 25% increase in PCIe and HT speeds yielded virtually the same performance as the system with LinkBoost disabled. The reason is that the extra bandwidth is applied in areas that have minimal impact on overall system performance.

LinkBoost is part of a package of easy-to-use automatic overclocking features on the nForce5 designed for the OC newbie. If you fit in that category and are excited about the 25% LinkBoost speed increase, understand clearly that it yielded little to no real performance gain in our testing. The true potential of this technology would have been realized on AM2 if the MCP's link to the CPU/memory subsystem had been dynamically increased from the base 8GB/s as well, but NVIDIA does not control AMD CPU certification and thus leaves that link at stock speed.

The end result is that the Northbridge-to-CPU HyperTransport link remains at 8GB/s; only the link between the MCP and SPP, along with the PEG slots, gets the increased bandwidth. Feeding a 10GB/s link from an 8GB/s link means you are still effectively limited to 8GB/s. Increasing the Northbridge-to-CPU bandwidth might improve performance slightly, but HyperTransport is rarely the bottleneck in current systems, as you will see in our performance results.
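The bottleneck argument above can be expressed as a simple minimum over the links in the data path. This is a conceptual model of our own, not anything from NVIDIA's documentation:

```python
def effective_bandwidth(link_speeds_gbps):
    """A chain of links is limited by its slowest segment."""
    return min(link_speeds_gbps)

# The CPU<->Northbridge HT link stays at 8GB/s while the
# LinkBoosted MCP<->SPP and PEG links run at 10GB/s.
path = [8.0, 10.0, 10.0]
print(effective_bandwidth(path))  # still limited to 8.0 GB/s
```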

New Feature: FirstPacket

As part of the overhaul of the networking features in the NVIDIA nForce 500 series, FirstPacket is a packet-prioritization technology that lets latency-sensitive applications and games share the upstream bandwidth of a broadband connection more effectively. Essentially, it allows the user to assign a higher transmit-queue priority to traffic from latency-sensitive applications and games; the prioritization applies to outbound traffic only.

FirstPacket is embedded in the hardware with driver support specifically designed to reduce latency for networked games and other latency-sensitive traffic such as Voice over IP (VoIP). When network traffic saturates a connection, latency increases, which in turn can result in dropped packets; these cause jitter and delay in VoIP connections, or higher ping times to the game server, resulting in stutters and degraded gameplay.

In the typical PC configuration, the operating system, network hardware, and driver software are unaware of latency issues and therefore cannot reduce them. The standard interfaces through which applications send and receive data treat all traffic identically. As a result, latency-tolerant, large-packet applications such as FTP clients or web browsers can fill the outbound pipe without regard to the needs of small-packet, highly latency-sensitive applications like games or VoIP.

FirstPacket operates by creating an additional transmit queue in the network driver. This queue provides expedited packet transmission for applications the user designates as latency-sensitive. Giving those applications preferential access to the upstream bandwidth usually results in improved performance and lower ping times. FirstPacket setup and configuration is handled through a new Windows-based driver control panel that is very simple to use.
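Conceptually, the driver's extra transmit queue behaves like a two-level priority scheduler: packets from user-designated applications always drain before best-effort traffic. A rough sketch of the idea (the class, names, and structure here are our own illustration, not NVIDIA's driver code):

```python
from collections import deque

class FirstPacketQueue:
    """Toy model of a two-level outbound transmit queue:
    latency-sensitive apps get an expedited queue that is
    always drained before the normal best-effort queue."""

    def __init__(self, expedited_apps):
        self.expedited_apps = set(expedited_apps)
        self.expedited = deque()
        self.normal = deque()

    def enqueue(self, app, packet):
        if app in self.expedited_apps:
            self.expedited.append(packet)
        else:
            self.normal.append(packet)

    def dequeue(self):
        # Expedited traffic (games, VoIP) always transmits first.
        if self.expedited:
            return self.expedited.popleft()
        if self.normal:
            return self.normal.popleft()
        return None

# Hypothetical application names for illustration.
q = FirstPacketQueue(expedited_apps={"bf2.exe", "skype.exe"})
q.enqueue("ftp.exe", "bulk-1")
q.enqueue("bf2.exe", "game-1")
print(q.dequeue())  # game-1 transmits ahead of bulk-1
```

Note that this only reorders packets the local machine sends, which is exactly why FirstPacket cannot help with inbound traffic.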

In our LAN testing, we witnessed ping time improvements of 27% to 43% while streaming video from our media server and playing Serious Sam II across three machines on the network. We saw improvements of 15% to 23% while uploading files via BitTorrent and playing Battlefield 2 on various servers with VoIP conversations running on Skype during gameplay. The drawback at this time is that only outbound packets are prioritized, so if you spend more time downloading than uploading, FirstPacket will have little impact for you. In NVIDIA's defense, however, they cannot control the behavior or quality of service of other networked clients, so FirstPacket addresses what NVIDIA can control: uploads.

Basic Platform Features: DualNet, Teaming, and TCP/IP Acceleration


Comments

  • Googer - Wednesday, May 24, 2006 - link

    AM2 Now Shipping
  • Doormat - Wednesday, May 24, 2006 - link

    The media shield feature looks nice. Buy two drives in a RAID-0 array for the OS and whatnot, then a RAID-5 array for all your important stuff (saved games, documents, pictures, etc.). Having both arrays on one chipset is nice.
  • Pirks - Wednesday, May 24, 2006 - link


    Then the RAID-5 array for all your important stuff (saved games, documents, pictures, etc)
    Why would you penalize your write speed with RAID5 when there is RAID1? Why not get RAID1 instead of RAID5 and enjoy: 1) reliability (same as RAID5); 2) speed (same as a single drive for writing, faster than a single drive for reading); 3) low price (no need for more than two hard drives)?
  • mino - Wednesday, May 24, 2006 - link

    AND lower available capacity for the money you pay. You see, four 300GB drives in RAID5 bring you 900GB of (cheap and reliable) storage. Doing that with four drives in RAID1 (or 0+1 for that matter) means e.g. 2x400GB + 2x500GB, which is SIGNIFICANTLY more expensive.

    Remember, there are guys with 10 drives; in any situation where you can economically justify 3+ drives for storage, RAID5 is the most cost-effective way.
  • JarredWalton - Thursday, May 25, 2006 - link

    Too bad the integrated RAID 5 solutions from NVIDIA only work with 3 drives (and potentially one hot-swap). Maybe I'm mistaken, but I'm pretty sure you can't run 4, 5, or 6 drives in a single RAID 5 array using the NVIDIA controller. That's why you can do two RAID 5 arrays with 3 drives in each array. Problem is, doing RAID 5 without a lot of RAM for the RAID controller can really hurt (write) performance.
  • nordicpc - Wednesday, May 24, 2006 - link

    Something I noticed yesterday while looking through the AM2 reviews that incorporated both ATI and nVidia's chipsets was the huge disparity in power usage, as much as 40 watts in some cases.

    Charlie D. has brought this up over at the Inq as well.

    Not only do you need a huge chunk of copper with 3 heatpipes to eliminate the fan on nVidia's 5x0 series, but it seems you'll also be paying a bit extra on the power bill. For what? Some extra networking options that most of us never use because they are so dodgy.

    Where's the power consumption page on here?
  • Gary Key - Wednesday, May 24, 2006 - link


    Where's the power consumption page on here?

    They are coming in a different article, as we just started receiving our ATI AM2, nF550, and other boards. The pull-in by AMD was a stretch for the board suppliers, who had planned on rolling out the AM2 series during Computex and shipping at that time. NVIDIA was caught trying to qualify drivers for both the video and platform side in half the time. We just received final AM2 chips on Saturday morning. ;-)
  • NullSubroutine - Wednesday, May 24, 2006 - link

    meh
  • fitten - Wednesday, May 24, 2006 - link

    I concur.
