Benchmark Configuration

We used a point-to-point configuration to eliminate the need for a switch. We have one machine that we use as the other “end of the network” and one machine on which we measure throughput and CPU load. We used Ixia IxChariot to test the network performance.

Server One ("the other end of the network"):
Supermicro SC846TQ-R900B chassis
Dual Intel Xeon 5160 “Woodcrest” at 3GHz
Supermicro X7DBN Rev1.00 Motherboard
Intel 5000P (Blackford) Chipset
4x4GB DDR2-667 Kingston Value Ram CAS 5
BIOS version 03/20/08

Server Two (for measurements):
Supermicro A+ 2021M-UR+B chassis
Dual AMD Opteron 8389 “Shanghai” at 2.9GHz
Supermicro H8DMU+ Motherboard
NVIDIA MCP55 Pro Chipset
8x2GB of Kingston DDR2-667 Value RAM CAS 5
BIOS version 080014 (12/23/2009)

NICs

Both servers were equipped with the following NICs:

  • Two dual-port Intel PRO/1000 PT Server adapters (82571EB) (four ports in total)
  • One Supermicro AOC-STG-I2 dual-port 10Gbit/s Intel 82598EB
  • One Neterion Xframe-E 10Gbit/s

We tested the NICs under CentOS 5.4 x64 (kernel 2.6.18) and VMware ESX 4 Update 1.

Important note: the NICs used are not the latest and greatest. For example, Neterion already has a more powerful 10Gbit NIC out, the Xframe 3100. We tested with what we had available in our labs.

Drivers CentOS 5.4
Neterion Xframe-E: 2.0.25.1
Supermicro AOC-STG-I2 dual-port: 2.0.8-k3, 2.6.18-164.el5

Drivers ESX 4 Update 1 b208167
Neterion Xframe-E: vmware-esx-drivers-net-s2io-400.2.2.15.19752-1.0.4.00000
Supermicro AOC-STG-I2 dual-port: vmware-esx-drivers-net-ixgbe-400.2.0.38.2.3-1.0.4.164009
Intel PRO/1000 PT Server adapter: vmware-esx-drivers-net-e1000e-400.0.4.1.7-2vmw.1.9.208167
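On the CentOS side, the driver names and versions listed above can be double-checked with ethtool. A minimal sketch, assuming the 10Gbit port shows up as eth2 (the interface name is a placeholder):

```shell
# Report the kernel driver, driver version, firmware version, and PCI
# bus address for a network interface (eth2 is a placeholder name;
# substitute the actual name of the 10GbE port)
ethtool -i eth2
```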

49 Comments

  • fredsky - Tuesday, March 09, 2010 - link

    we do use 10GbE at work, and I spent a long time finding the right solution:
    - CX4 is outdated: huge cables, short runs, power hungry
    - XFP is also outdated, and fiber only
    - SFP+ is THE thing to get: very low power, and it can be used with copper twinax AS WELL as fiber. You can get a 7m twinax cable for $150.

    And the BEST cards available are Myricom: very powerful for a decent price.
    Reply
  • DanLikesTech - Tuesday, March 29, 2011 - link

    CX4 is old? outdated? I just connected two VM host servers using CX4 at 20Gb (40Gb aggregate bandwidth)

    And it cost me $150. $50 for each card and $50 for the cable.
    Reply
  • DanLikesTech - Tuesday, March 29, 2011 - link

    And not to mention the low latency of InfiniBand compared to 10GbE.

    http://www.clustermonkey.net/content/view/222/1/
    Reply
  • thehevy - Tuesday, March 09, 2010 - link

    Great post. Here is a link to a white paper that I wrote to provide some best-practice guidance for using 10G with VMware vSphere 4.

    Simplify VMware vSphere* 4 Networking with Intel® Ethernet 10 Gigabit Server Adapters white paper -- http://download.intel.com/support/network/sb/10gbe...

    More white papers and details on Intel Ethernet products can be found at www.intel.com/go/ethernet

    Brian Johnson, Product Marketing Engineer, 10GbE Silicon, LAN Access Division
    Intel Corporation
    Linkedin: www.linkedin.com/in/thehevy
    twitter: http://twitter.com/thehevy
    Reply
  • emusln - Tuesday, March 09, 2010 - link

    Be aware that VMDq is not SR-IOV. Yes, VMDq and NetQueue are methods for splitting the data stream across different interrupts and cpus, but they still go through the hypervisor and vSwitch from the one PCI device/function. With SR-IOV, the VM is directly connected to a virtual PCI function hosted on the SR-IOV capable device. The hypervisor is needed to set up the connection, then gets out of the way. This allows the NIC device, with a little help from an iommu, to DMA directly into the VM's memory, rather than jumping through hypervisor buffers. Intel supports this in their 82599 follow-on to the 82598 that you tested. Reply
  • megakilo - Tuesday, March 09, 2010 - link

    Johan,

    Regarding 10Gb performance on native Linux: I have tested Intel 10Gb (the 82598 chipset) on RHEL 5.4 with iperf/netperf. It runs at 9.x Gb/s with a single-port NIC and at about 16Gb/s with a dual-port NIC. I just have a little doubt about the Ixia IxChariot benchmark since I'm not familiar with it.

    -Steven
    Reply
  • megakilo - Tuesday, March 09, 2010 - link

    BTW, in order to reach 9+ Gb/s, iperf/netperf has to run multiple threads (about 2-4) and use a large TCP window size (I used 512KB). Reply
  • JohanAnandtech - Tuesday, March 09, 2010 - link

    Thanks. Good feedback! We'll try this out ourselves. Reply
  • sht - Wednesday, March 10, 2010 - link

    I was surprised by the poor native Linux results as well. I got > 9 Gbit/s with a Broadcom NetXtreme using nuttcp as well. I don't recall whether multiple threads were required to achieve those numbers. I don't think they were, but perhaps using a newer kernel helped; the Linux networking stack has improved substantially since 2.6.18. Reply
  • themelon - Tuesday, March 09, 2010 - link

    Did I miss where you mention this or did you completely leave it out of the article?

    Intel has had VMDq in Gig-E for at least 3-4 years in the 82575/82576 chips. Basically, anything using the igb driver instead of the e1000g driver.
    Reply
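To make emusln's SR-IOV point above concrete: a hedged sketch of how virtual functions were typically created on the 82599 via the legacy ixgbe module parameter. The VF count is illustrative, and this assumes a kernel and driver build with max_vfs support; nothing here was tested as part of the article.

```shell
# Legacy method of enabling SR-IOV on an Intel 82599 (ixgbe):
# create 4 virtual functions per port at module load time.
# (max_vfs is the historical ixgbe parameter; the count is illustrative.)
modprobe ixgbe max_vfs=4

# Each VF then appears as its own PCI function, which the hypervisor
# can hand directly to a VM so the NIC DMAs into guest memory:
lspci | grep -i "virtual function"
```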

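megakilo's iperf recipe above (multiple streams plus a large TCP window) can be sketched as follows, assuming classic iperf 2.x flags; the IP address, stream count, and duration are placeholders:

```shell
# Receiver, on "the other end of the network", with a 512KB TCP window:
iperf -s -w 512K

# Sender: 4 parallel TCP streams (-P 4) with the same window size,
# roughly what the comments above report was needed to approach
# line rate on a 10Gbit link:
iperf -c 192.168.100.1 -w 512K -P 4 -t 60
```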