New Feature: LinkBoost

One of the feature sets unique to the nForce 590 SLI MCP - and one highly touted by NVIDIA - is LinkBoost. If a GeForce 7900 GTX is detected in an nForce5 system, LinkBoost automatically increases the PCI Express (PCIe) and MCP HyperTransport (HT) bus speeds by 25%, raising the bandwidth available on each PCIe and HT link from 8GB/s to 10GB/s. Because this technology overclocks the PCI Express bus by 25%, NVIDIA requires the video card to be certified before the feature engages automatically. At present the 7900 GTX is the only certified card, although you can set the bus speeds manually and achieve the same or a better overclock depending upon your components.
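The arithmetic behind those numbers can be sketched quickly. The figures below (a 16-lane PCIe 1.x link at 250MB/s per lane per direction, and LinkBoost's 25% bump) come from the article; the function name is purely illustrative.

```python
def link_bandwidth_gbs(lanes, mbs_per_lane_per_dir=250, directions=2):
    """Aggregate bandwidth of a first-generation PCIe link in GB/s."""
    return lanes * mbs_per_lane_per_dir * directions / 1000

base = link_bandwidth_gbs(16)   # x16 PEG slot: 8.0 GB/s total
boosted = base * 1.25           # LinkBoost's 25% clock increase

print(base, boosted)  # 8.0 10.0
```

The same 25% multiplier applied to the 8GB/s HT link between the SPP and MCP yields the 10GB/s figure NVIDIA quotes.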



In essence, NVIDIA is guaranteeing that its chipset's PCIe and HT interconnect links are qualified to run at up to 125% of their default speeds without issue. While LinkBoost is an interesting idea, its actual implementation did not change our test scores outside the normal margin of error: the 25% increase in PCIe and HT speeds yielded virtually the same performance as our system with LinkBoost disabled. The reason is that the extra bandwidth is being applied in areas that have minimal impact on system performance.

LinkBoost is part of a package of easy-to-use automatic overclocking features on the nForce5 designed for the OC newbie. If you fit in that category and are excited about the 25% LinkBoost speed increase, you need to clearly understand that it yielded little to no real performance gain. The true potential of this technology would be realized on an AM2 CPU only if the MCP's link to the CPU/memory subsystem were also dynamically increased above the base 8GB/s level, but NVIDIA does not control AMD CPU certification and thus leaves that link at stock speed.

The end result is that the Northbridge to CPU HyperTransport link remains at 8GB/s, and only the link between the MCP and SPP along with the PEG slots receive increased bandwidth. Feeding a 10GB/s link from an 8GB/s one means you are still effectively limited to 8GB/s. It is possible that increasing the Northbridge to CPU bandwidth could improve performance slightly, but HyperTransport performance is rarely the bottleneck in current systems, as you will see in our performance results.
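The bottleneck argument above reduces to a one-liner: the end-to-end throughput of a serial chain of links is capped by its slowest hop. The link names and speeds below follow the article; the code itself is a minimal illustrative sketch.

```python
def effective_bandwidth(link_speeds_gbs):
    """Throughput of a serial chain of links is capped by the slowest hop."""
    return min(link_speeds_gbs.values())

# Without LinkBoost: every hop in the CPU -> SPP -> GPU path runs at 8 GB/s.
stock = effective_bandwidth({"cpu_to_spp_ht": 8.0, "spp_to_gpu_pcie": 8.0})

# With LinkBoost: the PCIe/SPP-MCP side rises to 10 GB/s, but the CPU
# HyperTransport link that NVIDIA cannot certify stays at 8 GB/s.
boosted = effective_bandwidth({"cpu_to_spp_ht": 8.0, "spp_to_gpu_pcie": 10.0})

print(stock, boosted)  # 8.0 8.0 -- the 25% boost buys nothing end to end
```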

New Feature: FirstPacket

As part of the overhaul of the networking features in the NVIDIA nForce 500 series, FirstPacket is a packet prioritization technology that lets latency-sensitive applications and games share the upstream bandwidth of a broadband connection more effectively. Essentially, it allows the user to assign a higher transmit-queue priority to the network packets of latency-sensitive applications and games; note that it applies to outbound traffic only.

FirstPacket is embedded in the hardware and offers driver support specifically designed to reduce latency for networked games and other latency-sensitive traffic like Voice over IP (VoIP). When network traffic saturates a connection, latency increases, which in turn can result in dropped packets that create jitter and delay in VoIP calls, or in higher ping times to the game server, producing stutter and degraded gameplay.



In the typical PC configuration, the operating system, network hardware, and driver software are unaware of latency issues and therefore unable to reduce them. The standard interfaces that applications use to send and receive data treat all traffic identically. As a result, latency-tolerant, large-packet applications like FTP clients or web browsers can fill the outbound pipe without regard for the needs of small-packet, highly latency-sensitive applications like games or VoIP.



FirstPacket operates by creating an additional transmit queue in the network driver. This queue provides expedited packet transmission for applications the user designates as latency-sensitive. Giving these applications preferential access to the upstream bandwidth usually results in improved performance and lower ping times. FirstPacket setup and configuration is handled through a new Windows-based driver control panel that is very simple to use.
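The two-queue idea described above can be sketched as a strict-priority scheduler. NVIDIA's actual driver internals are not public, so the class, method names, and application names below are hypothetical; this is only a minimal illustration of the concept.

```python
from collections import deque

class PriorityTransmitScheduler:
    """Toy model of a FirstPacket-style outbound scheduler."""

    def __init__(self, prioritized_apps):
        self.prioritized_apps = set(prioritized_apps)
        self.expedited = deque()    # latency-sensitive traffic (games, VoIP)
        self.best_effort = deque()  # bulk traffic (FTP, web uploads)

    def enqueue(self, app, packet):
        # Packets from user-designated apps go to the expedited queue.
        if app in self.prioritized_apps:
            self.expedited.append(packet)
        else:
            self.best_effort.append(packet)

    def dequeue(self):
        # Expedited packets always transmit first; bulk traffic waits.
        if self.expedited:
            return self.expedited.popleft()
        if self.best_effort:
            return self.best_effort.popleft()
        return None

sched = PriorityTransmitScheduler(prioritized_apps={"game.exe", "voip.exe"})
sched.enqueue("ftp.exe", "bulk-1")
sched.enqueue("game.exe", "ping-1")
print(sched.dequeue())  # ping-1 -- the game packet jumps the bulk upload
```

A real driver would also have to avoid starving the bulk queue entirely, but the sketch captures why designated applications see lower ping times under upload load.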

In our LAN testing, we witnessed ping time improvements of 27% to 43% while streaming video from our media server and playing Serious Sam II across three machines on the network. We saw ping time improvements of 15% to 23% while uploading files via BitTorrent and playing Battlefield 2 on various servers, with VoIP conversations running on Skype during gameplay. The drawback at this time is that only outbound packets are prioritized, so if you spend more time downloading than uploading, FirstPacket will have little impact for you. However, in NVIDIA's defense, they cannot control the behavior or quality of service of other networked clients, so FirstPacket addresses the side NVIDIA can control - namely uploads.

Basic Platform Features: DualNet, Teaming, and TCP/IP Acceleration

  • DigitalFreak - Wednesday, May 24, 2006 - link

    Does NTune 5 also work with NF4 boards?
  • Gary Key - Wednesday, May 24, 2006 - link

    quote:

    Does NTune 5 also work with NF4 boards?


    Yes, but depending upon BIOS support, several of the new features will not be active. We have an updated BIOS coming for an nF4 board so we can verify which features do and do not work with full nF4 BIOS support.
  • nullpointerus - Wednesday, May 24, 2006 - link

    Does nTune 5 support multiple profiles and automatic profile switching? If so, do these things actually work properly? Unfortunately, nTune 3 was a mess on my MSI board.
  • Gary Key - Wednesday, May 24, 2006 - link

    quote:

    Does nTune 5 support multiple profiles and automatic profile switching? If so, do these things actually work properly? Unfortunately, nTune 3 was a mess on my MSI board.


    Yes to multiple profiles, and they work correctly - what is your definition of automatic profile switching? You can set up custom rules that dictate how the system should operate under different conditions: a game profile for maximum performance, for example, or a DVD profile that instructs the system to go into "quiet mode" once a DVD is inserted for watching a movie. We are still testing the rules setup, but so far it works. We only received the kits last Friday, so all major features were tested first, but I am following up on the bells and whistles now. nTune 5 probably deserves a small but separate article on its features. We just received a new build last night, so testing begins again today.

    We did report a bug to NVIDIA, as the motherboard settings screen will not refresh correctly after loading a new profile. We had to exit to the main control panel and then return to the performance section for a refresh. I personally have close to 30 profiles set up for our test suites at this time. It is just a matter
  • DigitalFreak - Wednesday, May 24, 2006 - link

    quote:

    At the top of the product offering, the nForce 590 SLI consists of two chips, the C51Xe SPP and the MCP55PXE. This solution offers dual X16 PCI-E lanes for multiple graphics card configurations. While other features have changed, the overall design is very similar to the nForce4 SLI X16. The total number of PCI-E lanes is now 46, with 18 lanes coming from the SPP. Of those 18, two are used to link to the MCP and the remaining 16 are for the PEG slot.

    Uh... I thought that the SPP & MCP were connected via HT? If only 2 PCI-E lanes were used, that's only ~500MB/s of bandwidth between the two.
  • JarredWalton - Wednesday, May 24, 2006 - link

    Sorry - that was my fault and I'll edit it. Written while not thinking, I guess.
  • R3MF - Wednesday, May 24, 2006 - link

    "If TCP/IP acceleration is enabled via the new control panel, then third party firewall applications must be switched off in order to use the feature."

    this statement presumes that a non-third-party firewall (i.e. the nVidia firewall application) would work fine with the TCP/IP acceleration function...?

    nVidia: here is a great function, but you can't use it without getting haXXoR3d

    ???
  • Wesleyrpg - Wednesday, May 24, 2006 - link

    hey anand,

    where's this dodgy nforce4 networking article that you've been promising for weeks?

  • Gary Key - Wednesday, May 24, 2006 - link

    quote:

    where's this dodgy nforce4 networking article that you've been promising for weeks?

    The nF4 tests with driver sets going back to the 5 series are complete; we are waiting on release versions of the new 9.x platform drivers to see what actual changes have been made since 6.85 on the nF4 X16 boards.
  • Wesleyrpg - Thursday, May 25, 2006 - link

    can people with the 'normal' nforce4 chipset use the 6.85 drivers, or are we stuck with the bodgy 6.70 drivers?
