Benchmark Configuration

We used a point-to-point configuration to eliminate the need for a switch: one machine acts as the other “end of the network,” while on the second machine we measure throughput and CPU load. We used Ixia IxChariot to test network performance.

Server One ("the other end of the network"):
Supermicro SC846TQ-R900B chassis
Dual Intel Xeon 5160 “Woodcrest” at 3GHz
Supermicro X7DBN Rev1.00 Motherboard
Intel 5000P (Blackford) Chipset
4x4GB DDR2-667 Kingston Value Ram CAS 5
BIOS version 03/20/08

Server Two (for measurements):
Supermicro A+ 2021M-UR+B chassis
Dual AMD Opteron 8389 “Shanghai” at 2.9GHz
Supermicro H8DMU+ Motherboard
NVIDIA MCP55 Pro Chipset
8x2GB of Kingston DDR2-667 Value RAM CAS 5
BIOS version 080014 (12/23/2009)

NICs

Both servers were equipped with the following NICs:

  • Two dual-port Intel PRO/1000 PT Server adapters (82571EB; four ports in total)
  • One Supermicro AOC-STG-I2 dual-port 10Gbit/s Intel 82598EB
  • One Neterion Xframe-E 10Gbit/s

We tested the NICs under CentOS 5.4 x64 (kernel 2.6.18) and VMware ESX 4 Update 1.

Important note: the NICs used are not the latest and greatest. For example, Neterion already has a more powerful 10Gbit NIC out, the Xframe 3100. We tested with what we had available in our labs.

Drivers CentOS 5.4
Neterion Xframe-E: 2.0.25.1
Supermicro AOC-STG-I2 dual-port: 2.0.8-k3, 2.6.18-164.el5

Drivers ESX 4 Update 1 b208167
Neterion Xframe-E: vmware-esx-drivers-net-s2io-400.2.2.15.19752-1.0.4.00000
Supermicro AOC-STG-I2 dual-port: vmware-esx-drivers-net-ixgbe-400.2.0.38.2.3-1.0.4.164009
Intel PRO/1000 PT Server adapter: vmware-esx-drivers-net-e1000e-400.0.4.1.7-2vmw.1.9.208167
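For readers reproducing the CentOS setup, the loaded driver name and version for each NIC can be confirmed with `ethtool -i`. A minimal sketch follows; the `extract_driver` helper and the sample output are illustrative assumptions, not part of the original test setup:

```shell
# extract_driver: pull the "driver" and "version" fields out of
# `ethtool -i <iface>` output. Helper name and sample text below are
# illustrative, not from the article's configuration.
extract_driver() {
    awk -F': ' '/^driver:/ {d=$2} /^version:/ {v=$2} END {print d " " v}'
}

# On a live system you would run, e.g.:
#   ethtool -i eth4 | extract_driver
# Here we feed it sample text resembling the ixgbe driver listed above:
sample="driver: ixgbe
version: 2.0.8-k3
firmware-version: 0.9-3"

echo "$sample" | extract_driver   # prints: ixgbe 2.0.8-k3
```

Running this for each interface is a quick way to verify that the intended driver revision, rather than a stock in-kernel one, is actually loaded before benchmarking.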

Comments

  • radimf - Wednesday, March 10, 2010 - link

Hi,
    thanks for article!
    Btw I am reading your site because of your virtualization articles.

    I planned almost 3 years ago for IT project with only a 1/5 of complete budget for small virtualization scenario.
    If you want redundancy, it can't get much simpler than that:
    - 2 ESX servers
    - one SAN + one NFS/iSCSI/potentially FC storage for D2D backup
    - 2 TCP switches, 2 FC switches

    The world moved on, IT changed, the EU subsidy took too long to process - we finished last summer what was planned years ago...

    My 2 cents from small company finishing small IT virtualization project?
    FC saved my ass.

    iSCSI was on my list (DELL gear), but went FC instead (HP) for a lower total price (thanks, crisis :-)

    HP hardware looked sweet on the spec sheets, and the actual HW is superb, BUT.... the FW sucked BIG TIME.
    It took HP half a year to fix it.

    HP 2910al switches do have option for up to 4 10gbit ports - that was the reason I bought them last summer.
    Coupled with DA cables - a very cheap way to get 10gbit to your small VMware cluster. (viable 100% now)

    But unfortunately the FW (at that time) was so bad that 3 out of 4 supplied DA cables did not work at all (out of the box).
    Thanks to HP - they changed our DA for 10gbit SFP+ SR optics! :-)

    After installation we had several issues with "dead ESX cluster".
    Not even ping worked!
    FC worked flawlessly through these nightmares.
    Switches again...
    Spanning tree protocol bug ate our cluster.

    Now we are happy finally. Everything works as advertised.
    10gbit primary links are backed up by 1gbit stand-by.
    Insane backup speeds of whole VMs compared to our legacy SMB solution to nexenta storage appliance.

  • JohanAnandtech - Monday, March 8, 2010 - link

    Thank you. Very nice suggestion, especially since we already started to test this out :-). It will have to wait until April though, as we have a lot of server CPU launches this month.
  • Lord 666 - Monday, March 8, 2010 - link

    Aren't the new 32nm Intel server platforms coming with standard 10gbe nics? After my SAN project, going to phase in the new 32nm cpu servers mainly for AES-NI. The 10gbe nics would be an added bonus.
  • hescominsoon - Monday, March 8, 2010 - link

    It's called Xsigo (pronounced zee-go) and solves the I/O issue you are trying to solve here for VM I/O bandwidth.
  • JohanAnandtech - Monday, March 8, 2010 - link

    Basically, it seems like using InfiniBand to connect each server to an InfiniBand switch. And that InfiniBand connection is then used by software that offers both a virtual HBA and a virtual NIC. Right? Innovative, but starting at $100k, it looks expensive to me.
  • vmdude - Monday, March 8, 2010 - link

    "Typically, we’ll probably see something like 20 to 50 VMs on such machines."

    That would be a low vm per core count in my environment. I typically have 40 vms or more running on a 16 core host that is populated with 96 GB of Ram.
  • ktwebb - Sunday, March 21, 2010 - link

    Agreed. With Nehalems it's about a 2 VMs per core ratio in our environment. And that's conservative. At least with vSphere and overcommit capabilities.
  • duploxxx - Monday, March 8, 2010 - link

    All depends on design and application type; we typically have 5-6 VMs on a 12-core 32GB machine, and about 350 of those, running in a constant 60-70% CPU utilization range.
  • switcher - Thursday, July 29, 2010 - link

    Great article and comments.

    Sorry I'm so late to this thread, but I was curious to know what the vSwitch is doing during the benchmark? How is it configured? @emuslin notes that SR-IOV is more than just VMDq, and AFAIK the Intel 82598EB doesn't support SR-IOV, so what we're seeing is the boost from NetQueue. What support for SR-IOV is there in ESX these days?

    It'd be nice to see SR-IOV data too.
