ESX 4.0 Performance

Let us see what these NICs can do in a virtualized environment. After some testing on ESX 4.1 we had to fall back to ESX 4.0 Update 2, as there were lots of driver issues, and this was certainly not solely the NIC vendors' fault. Apparently, VMDirectPath is broken in ESX 4.1 and a bug report has been filed; Update 1 should take care of this.

We started NTttcp from the Windows 2008 node:

NTttcp -m 4,0,[ip number] -a 4 -t 120
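
For those who want to replicate the test, a rough reading of those switches (based on the standard NTttcp options; verify against your own NTttcp build):

-m 4,0,[ip number]   four worker threads, pinned to logical CPU 0, paired with the endpoint at [ip number]
-a 4                 asynchronous I/O with four outstanding buffers per thread
-t 120               run length of 120 seconds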

On the “virtualized node”, we created four virtual machines with Windows 2008. Each VM gets four network load threads. In other words, there are 16 threads active, all sending network traffic through one NIC port.

The Intel chip delivered the highest throughput at 9.6 Gbit/s, followed closely by the Solarflare (9.2 Gbit/s) and the Neterion X3100 (9.1 Gbit/s). The old Xframe-E was not capable of delivering more than 3.4 Gbit/s. The difference between the top three is hardly worth discussing: few people are going to notice a bandwidth increase of 5%. However, notice that the Neterion NIC load balances the traffic the most fairly over the four VMs: all virtual machines get the same bandwidth, about 2.2 to 2.3 Gbit/s. The Solarflare SF5122F and Intel 82598 are not bad either; their lowest per-VM bandwidth was 1.8 Gbit/s. Bandwidth tests with the Ixia Chariot 5.4 test suite gave the same numbers. We also measured response times.
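
A quick sanity check on those fairness numbers: four VMs at roughly 2.28 Gbit/s each add up to the Neterion's 9.1 Gbit/s aggregate, so every VM really does get an equal slice. If we assume a single VM on the Intel or Solarflare NIC sits at the 1.8 Gbit/s floor, the other three have to average roughly 2.5 to 2.6 Gbit/s to reach the 9.2 and 9.6 Gbit/s totals.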

Again, the Neterion X3100 chip stands out with a low response time in all virtual machines. The Solarflare SF5122 stumbles here, as one VM sees twice the latency of the others. Let us see how much CPU power these NICs needed while load balancing the network traffic over the virtual machines. This test was done on the Xeon E5504 (2GHz) and the Xeon X5670 (2.93GHz); Hyper-Threading was disabled on all CPUs.
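
For readers who want to log the CPU numbers themselves, esxtop in batch mode can capture the counters for the duration of a run; a minimal sketch (the sampling interval and iteration count are our own choice, not necessarily the settings used for the graphs below):

esxtop -b -d 5 -n 24 > nic-cpu-load.csv

This writes 24 samples at 5-second intervals, two minutes of data to match the 120-second NTttcp run; the per-VM %USED columns in the resulting CSV show where the cycles go.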

[Chart: Total CPU load, network load over four VMs]

Nine gigabits per second of paravirtualized network traffic is enough to swamp the dual quad-core 2GHz Xeon in most cases. Although it is one of the slowest Xeons currently available, it is still striking that no fewer than eight of these cores are needed just to run the benchmark and handle the network traffic. So be warned that these 10GbE NICs require some serious CPU power. The Solarflare chip gives the low-end Xeon some breathing room, while the Neterion chip needs the most CPU power for its almost perfect load balancing.
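
To put that CPU cost in perspective, a back-of-the-envelope estimate (our own arithmetic, lumping benchmark and hypervisor overhead together):

8 cores x 2.0 GHz = 16 billion cycles per second
9 Gbit/s ≈ 1.125 GB/s of payload
16 x 10^9 cycles / 1.125 x 10^9 bytes ≈ 14 CPU cycles per byte moved

Fourteen cycles per byte is not a lot once the guest TCP/IP stacks, the VMM transitions, and the virtual switch all take their share, which is why the 2GHz Xeon ends up saturated.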

But the Neterion chip has a secret weapon: it is the only NIC that can make virtual functions available in VMware ESX. Once you do this, the CPU load is a lot lower: we measured only 63%. This lower CPU load is accompanied by a small dip in network bandwidth: we achieved 8.1 Gbit/s instead of 9.1 Gbit/s.
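
For reference, handing one of those functions to a VM with VMDirectPath comes down to flagging the device for passthrough on the host (Configuration > Advanced Settings in the vSphere Client) and then adding it to the VM as a PCI device, which ends up as entries roughly like the following in the VM's .vmx file (a sketch from memory of ESX 4.0, with an example PCI address; verify the exact keys on your own build):

pciPassthru0.present = "TRUE"
pciPassthru0.id = "04:00.1"

Each VM gets its own function, so the four test VMs each point at a different PCI address.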

Once we switch to one of the fastest Xeons available, the picture changes: the Intel and Neterion chips make better use of the extra cores and the higher clock speed.

Comments

  • fr500 - Wednesday, November 24, 2010 - link

    I guess there is LACP or PAgP and some proprietary solution.

    A quick google told me it's called cross-module trunking.
  • mlambert - Wednesday, November 24, 2010 - link

    FCoE, iSCSI (not that you would, but you could), FC, and IP all across the same link. Cisco offers VCP LACP with CNA as well. 2 links per server, 2 links per storage controller; that's not many cables.
  • mlambert - Wednesday, November 24, 2010 - link

    I meant VPC and Cisco is the only one that offers it today. I'm sure Brocade will in the near future.
  • Zok - Friday, November 26, 2010 - link

    Brocade's been doing this for a while with the Brocade 8000 (similar to the Nexus 5000), but their new VDX series takes it a step further for FCoE.
  • Havor - Wednesday, November 24, 2010 - link

    These network adapters are really nice for servers, but I don't need a managed NIC; I just really want affordable 10Gbit over UTP or STP.

    Even if it's only 30~40m / 100ft, because just like with 100Mbit networks in the old days, my HDs are outperforming my network by more than a little.

    Wondering when 10Gbit will become common on mobos.
  • Krobar - Thursday, November 25, 2010 - link

    Hi Johan,

    Wanted to say nice article first of all; you pretty much make the IT/Pro section what it is.

    In the descriptions of the cards and the conclusion you didn't mention Solarflare's "Legacy" Xen netfront support. This only works for paravirtualized Linux VMs and requires a couple of extra options at kernel compile time, but it runs like a train and requires no special hardware support from the motherboard at all. None of the other brands support this.
  • marraco - Thursday, November 25, 2010 - link

    I once made a summary of the total cost of the network in the building where I work.

    The total cost of the network cables was far larger than the cost of the equipment (at least at my country's prices). Also, solving any cable-related problem was complete hell. There were hundreds of cables, all entangled above the false ceiling.

    I would happily replace all that with two or three cables and cheap switches at the end. Selling the cables would pay for the new equipment and even turn a profit.

    Each computer has its own cable to the central switch. A crazy design.
  • mino - Thursday, November 25, 2010 - link

    If you go 10G for cable consolidation, you'd better forget about cheap switches.

    The real savings are in the manpower, not the cables themselves.
  • myxiplx - Thursday, November 25, 2010 - link

    If you're using a Supermicro Twin2, why don't you use the option for the on-board Mellanox ConnectX-2? Supermicro have informed me that with a firmware update these will act as 10G Ethernet cards, and Mellanox's 10G Ethernet range has full support for SR-IOV:

    Main product page:
    http://www.mellanox.com/content/pages.php?pg=produ...

    Native support in XenServer 5:
    http://www.mellanox.com/content/pages.php?pg=produ...
  • AeroWB - Thursday, November 25, 2010 - link

    Nice Article,

    It is great to see more tests around virtual environments. What surprises me a little bit is that at the start of the article you say that ESXi and Hyper-V do not support SR-IOV yet, so I was kind of expecting a test with Citrix XenServer to show the advantages of that. Unfortunately it's not there. I hope you can do that in the near future.
    I work with both VMware ESX and Citrix XenServer; we have a live setup of both. We started with ESX and later added a XenServer system, but as XenServer is getting more mature and gains more and more features, we will probably replace the ESX setup with XenServer (as it is much, much cheaper) when maintenance runs out in about a year, so I'm really interested in tests on that platform.
