Exar's Neterion Solution

SR-IOV will be supported in ESX 5.0 and in the successor of Windows Server 2008. Since VMware’s ESX is the dominant hypervisor in most datacenters, this means that a large portion of already virtualized servers will have to wait a year or more before they can get the benefits of SR-IOV.

Exar, whose Neterion NICs pioneered multiple device queues, saw a window of opportunity. Besides the standard SR-IOV and VMware NetQueue support, the X3100 NICs also have a proprietary SR-IOV implementation.

Proprietary solutions only make sense if they offer enough advantages. Neterion claims that the NIC chip has extensive hardware support for network prioritization and quality of service. That hardware support should be superior to hypervisor traffic shaping, especially on the receive side. After all, if bursty traffic causes the NIC to drop packets on the receive side, there is nothing the hypervisor can do: it never saw those packets pass. To underline this, Neterion equips the X3100 with a massive 64MB receive buffer; for comparison, the competitors have at best a 512KB receive buffer. This huge receive buffer should ensure that QoS is guaranteed even if relatively long bursts of network traffic occur.
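To put those buffer sizes in perspective, here is a quick back-of-the-envelope sketch (our illustration, not a Neterion figure). It estimates how long each buffer can absorb a burst arriving at the full 10Gbps line rate while the host drains the buffer more slowly; the 6Gbps drain rate is an assumed value purely for illustration.

```python
# Rough estimate of how long a NIC receive buffer can absorb a traffic burst
# before it overflows and packets are dropped. The 6 Gbit/s drain rate is an
# assumption (how fast the host/hypervisor empties the buffer), not a measured value.

LINE_RATE_BPS = 10e9    # 10GbE ingress rate
DRAIN_RATE_BPS = 6e9    # assumed rate at which the host empties the buffer

def burst_tolerance_ms(buffer_bytes: int) -> float:
    """Time until the buffer overflows when input exceeds output."""
    net_fill_rate_bps = LINE_RATE_BPS - DRAIN_RATE_BPS
    return buffer_bytes * 8 / net_fill_rate_bps * 1000  # milliseconds

for name, size in [("X3100 (64MB)", 64 * 2**20),
                   ("typical competitor (512KB)", 512 * 2**10)]:
    print(f"{name}: {burst_tolerance_ms(size):.2f} ms of sustained overload")
```

Even under this generous assumption, the 512KB buffer overflows after roughly a millisecond of sustained overload, while the 64MB buffer can ride out bursts of well over a hundred milliseconds.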

Neterion NICs can be found in IBM, HP, Dell, Fujitsu, and Hitachi machines. Neterion is part of Exar and thus also has access to a worldwide distribution channel. The typical price of this NIC is around $745.

The Competition: Solarflare

Solarflare is a relatively young company, founded in 2001. The main philosophy of Solarflare has been “make Ethernet [over copper] better” (we added “over copper”). Solarflare NICs support optical media too, but Solarflare made headlines with their 10G Ethernet copper products. In 2006, Solarflare was the first with a 10GBase-T PHY. 10GBase-T allows 10 Gigabit speeds over the very common and cheap CAT5E and the reasonably priced CAT6 and CAT6A UTP cables. Solarflare is also strongly advocating the use of 10GbE in the HPC world, claiming that the latency of 10GbE can be as low as 4 µs. In June of this year, Solarflare launched the SFN5121T, a dual-port 10GBase-T NIC with a very reasonable 12.9W power consumption, especially for a UTP-based 10GbE product. In January of this year, the company decided to start selling NIC adapters directly to end-users.

As we got an SFP+ Neterion X3120, we took a look at the optical brother of the SFN5121T, the SFN5122F. Both Solarflare NICs support SR-IOV and make use of PCIe 2.0. The SFP+ SFN5122F should consume only a very low 4.9W for the complete card. Solarflare pulled this off by designing their own PHYs and reducing the chip count on the NIC. Although our power measurement methods (measured at the wall) are too crude to determine exact power consumption, we can confirm that the Solarflare NIC consumed the least of the three NICs we tested.

The Solarflare NICs are only slightly more expensive than the others: prices on the web were typically around $815.

The Oldies

It is always interesting to get some “historical perspective”. Do these new cards outperform the older ones by a large margin? We included the Neterion XFrame-E, the multi-queue pioneer, for one test, and used the Intel 82598, the 10GbE price breaker, as the historical reference in every benchmark. We'll try to add the Intel 82599, which also supports SR-IOV, has a larger receive buffer and more queues, and is priced around $700. We plugged those NICs into our Supermicro Twin² for testing.

Comments

  • Kahlow - Friday, November 26, 2010 - link

    Great article! The argument between fiber and 10gig E is interesting, but from what I have seen it is so application and workload dependent that you would need a 100 page review to figure out which media is better for which workload.
    Also, in most cases your disk arrays are the real bottleneck, and maxing out your 10gig E or your FC isn’t the issue.

    It is good to have a reference point though and to see what 10gig translates to under testing.

    Thanks for the review,
  • JohanAnandtech - Friday, November 26, 2010 - link

    Thanks.

    I agree that it highly depends on the workload. However, there are lots and lots of smaller setups out there that are now using unnecessarily complicated and expensive setups (several physically separated GbE and FC networks). One of our objectives was to show that there is an alternative. As many readers have confirmed, a dual 10GbE setup can be a great solution if you're not running some massive databases.
  • pablo906 - Friday, November 26, 2010 - link

    It's free and you can get it up and running in no time. It's gaining a tremendous amount of users because of the recent Virtual Desktop licensing program Citrix pushed. You could double your XenApp (MetaFrame Presentation Server) license count and upgrade them to XenDesktop for a very low price, cheaper than buying additional XenApp licenses. I know of at least 10 very large organizations that are testing XenDesktop and preparing rollouts right now.

    What gives? VMware is not the only hypervisor out there.
  • wilber67 - Sunday, November 28, 2010 - link

    Am I missing something in some of the comments?
    Many are discussing FCoE and I do not believe any of the NICs tested were CNAs, just 10GE NICs.
    FCoE requires a CNA (Converged Network Adapter). Also, you cannot connect them to a garden variety 10GE switch and use FCoE. And don't forget that you cannot route FCoE.
  • gdahlm - Sunday, November 28, 2010 - link

    You can use software initiators with switches that support 802.3x flow control. Many web-managed switches do support 802.3x, as do most 10GE adapters.

    I am unsure how that would affect performance in a virtualized, shared environment, as I believe it pauses at the port level.

    If your workload is not storage or network bound it would work, but I am betting that when you hit that hard knee in your performance curve, things get ugly pretty quickly.
  • DyCeLL - Sunday, December 5, 2010 - link

    Too bad HP Virtual Connect couldn't be tested (a blade option).
    It splits the 10Gb NICs into a maximum of 8 NICs for the blades. It can do this for both fiber and Ethernet.
    Check: http://h18004.www1.hp.com/products/blades/virtualc...
  • James5mith - Friday, February 18, 2011 - link

    I still think that 40Gbps InfiniBand is the best solution. By far it seems to have the best $/Gbps ratio of any of the platforms. Not to mention it can carry pretty much any traffic type you want.
  • saah - Thursday, March 24, 2011 - link

    I loved the article.

    I just reminded myself that VMware published official drivers for the ESX4 recently: http://downloads.vmware.com/d/details/esx4x_intel_...
    The ixgbe version is 3.1.17.1.
    Since the post says that it "enables support for products based on the Intel 82598 and 82599 10 Gigabit Ethernet Controllers," I would like to see the test redone with an 82599-based card and recent drivers.
    Would it be feasible?
