Benchmark Configuration

We used a point-to-point configuration to eliminate the need for a switch: one machine serves as the other “end of the network”, while the second machine is the one on which we measure throughput and CPU load. We used Ixia IxChariot to test network performance.
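
For readers who want to reproduce a rough version of this setup without IxChariot, the sketch below shows the basic idea of a single-stream, point-to-point TCP throughput probe. The peer address and port are hypothetical, and a real tool such as IxChariot drives many concurrent streams and records CPU load as well; this is only a minimal illustration.

```python
# Minimal single-stream TCP throughput probe between two directly connected
# machines. Address and port are placeholders, not the actual test scripts.
import socket
import sys
import time

PEER = "192.168.10.2"        # assumed address of "the other end of the network"
PORT = 5001                  # arbitrary test port
CHUNK = b"\x00" * (1 << 20)  # 1 MiB payload per send
DURATION = 10                # seconds to transmit

def receiver():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    while conn.recv(1 << 20):  # drain data until the sender disconnects
        pass

def sender():
    s = socket.create_connection((PEER, PORT))
    sent, start = 0, time.time()
    while time.time() - start < DURATION:
        s.sendall(CHUNK)
        sent += len(CHUNK)
    elapsed = time.time() - start
    s.close()
    print("~%.2f Gbit/s over a single TCP stream" % (sent * 8 / elapsed / 1e9))

if __name__ == "__main__":
    receiver() if "recv" in sys.argv[1:] else sender()
```

Run it with the "recv" argument on the remote machine first, then without arguments on the machine where you measure.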

Server One ("the other end of the network"):
Supermicro SC846TQ-R900B chassis
Dual Intel Xeon 5160 “Woodcrest” at 3GHz
Supermicro X7DBN Rev1.00 Motherboard
Intel 5000P (Blackford) Chipset
4x4GB DDR2-667 Kingston ValueRAM CAS 5
BIOS version 03/20/08

Server Two (for measurements):
Supermicro A+ 2021M-UR+B chassis
Dual AMD Opteron 8389 “Shanghai” at 2.9GHz
Supermicro H8DMU+ Motherboard
NVIDIA MCP55 Pro Chipset
8x2GB DDR2-667 Kingston ValueRAM CAS 5
BIOS version 080014 (12/23/2009)

NICs

Both servers were equipped with the following NICs:

  • Two dual-port Intel PRO/1000 PT Server adapters (82571EB), four ports in total
  • One Supermicro AOC-STG-I2 dual-port 10Gbit/s Intel 82598EB
  • One Neterion Xframe-E 10Gbit/s

We tested the NICs using CentOS 5.4 x64 (kernel 2.6.18) and VMware ESX 4 Update 1.

Important note: the NICs used are not the latest and greatest. For example, Neterion already has a more powerful 10Gbit NIC out, the Xframe 3100. We tested with what we had available in our labs.

Drivers CentOS 5.4
Neterion Xframe-E: 2.0.25.1
Supermicro AOC-STG-I2 dual-port: 2.0.8-k3, 2.6.18-164.el5

Drivers ESX 4 Update 1 b208167
Neterion Xframe-E: vmware-esx-drivers-net-s2io-400.2.2.15.19752-1.0.4.00000
Supermicro AOC-STG-I2 dual-port: vmware-esx-drivers-net-ixgbe-400.2.0.38.2.3-1.0.4.164009
Intel PRO/1000 PT Server adapter: vmware-esx-drivers-net-e1000e-400.0.4.1.7-2vmw.1.9.208167
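
To double-check which driver and version each port is actually loaded with under Linux, the information the kernel reports can be read back per interface. The snippet below is a small convenience wrapper around ethtool -i; the interface names are assumptions that will differ per system, and it targets a modern Python 3 rather than the interpreter shipped with CentOS 5.4.

```python
# Report driver name and version per NIC port (interface names are guesses).
import subprocess

INTERFACES = ["eth0", "eth1", "eth2", "eth3", "eth4", "eth5"]  # hypothetical

for iface in INTERFACES:
    try:
        out = subprocess.run(["ethtool", "-i", iface],
                             capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        continue  # interface not present or ethtool missing
    info = dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)
    print("%s: driver=%s version=%s" % (iface, info.get("driver"), info.get("version")))
```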

Comments

  • Parak - Tuesday, March 9, 2010 - link

    The per-port prices of 10Gbe are still $ludicrous; you're not going to be able to connect an entire vmware farm plus storage at a "reasonable" price. I'd suggest looking at infiniband:

    Pros:

    40Gb/s theoretical - about 25Gb/s maximum out of single stream ip traffic, or 2.5x faster than 10Gbe.
    Per switch port costs about 3x-4x less than that of 10Gbe, and comparable per adapter port costs.
    Latency even lower than 10Gbe.
    Able to do remote direct memory access for specialized protocols (google helps here).
    Fully supported under your major operating systems, including ESX4.

    Cons:

    Hefty learning curve. Expect to delve into mailing lists and obscure documentation, although just the "basic" IP functionality is easy enough to get started with.

    10Gbe has the familiarity concept going for it, but it is just not cost-effective enough yet, whereas infiniband just seems to get cheaper, faster, and lately, a lot more user friendly. Just something to consider next time :D
  • has407 - Monday, March 8, 2010 - link

    Thanks. Good first-order test and summary. A few more details and tests would be great, and I look forward to more on this subject...

    1. It would be interesting to see what happens when the number of VM's exceeds the number of VMDQ's provided by the interface. E.g., 20-30 VM's with 16 VMDQ's... does it fall on its face? If yes, that has significant implications for hardware selection and VM/hardware placement.

    2. Would be interesting to see if the Supermicro/Intel NIC can actually drive both ports at close to an aggregate 20Gbs.

    3. What were the specific test parameters used (MTU, readers/writers, etc)? I ask because those throughput numbers seem a bit low for the non-virtual test (wouldn't have been surprised 2-3 years ago) and very small changes can have very large effects with 10Gbe.

    4. I assume most of the tests were primarily unidirectional? Would be interesting to see performance under full-duplex load.

    > "In general, we would advise going with link aggregation of quad-port gigabit Ethernet ports in native mode (Linux, Windows) for non-virtualized servers."

    10x 1Gbe links != 1x 10Gbe link. Before making such decisions, people need to understand how link aggregation works and its limitations.

    > "10Gbit is no longer limited to the happy few but is a viable backbone technology."

    I'd say it has been for some time, as vendors who staked their lives on FC or Infiniband have discovered over the last couple years much to their chagrin (at least outside of niche markets). Consolidation using 10Gbe has been happening for a while.
  • tokath - Tuesday, March 9, 2010 - link

    "2. Would be interesting to see if the Supermicro/Intel NIC can actually drive both ports at close to an aggregate 20Gbs. "

    At best, since it's a PCIe 1.1 x8 card, it would be about 12Gbps per direction, for a total aggregate throughput of about 24Gbps of bi-directional traffic.

    The PCIe 2.0 x8 dual port 10Gb NICs can push line rate on both ports.
  • somedude1234 - Wednesday, March 10, 2010 - link

    "At best since it's a PCIe 1.1 x8 would be about 12Gbps per direction for a total aggregate throughput of about 24Gbps bi-directional traffic."

    How are you figuring 12 Gbps max? PCIe 1.x can push 250 MBps per lane (in each direction). A x8 connection should max out around 2,000 MBps, which sounds just about right for a dual 10 GbE card.
  • mlambert - Monday, March 8, 2010 - link

    This is a great article and I hope to see more like it.
  • krazyderek - Monday, March 8, 2010 - link

    In the opening statements it basically boils down to file servers being the biggest bandwidth hogs, so I'd like to see an SMB and enterprise review of how exactly you could saturate these connections, comparing the 4x 1Gb ports to your 10Gb cards in real-world usage. Everyone uses Chariot to show theoretical numbers, but I'd like to see real-world examples.

    What kind of raid arrays, or SSD's and CPU's are required on both the server AND CLIENT side of these cards to really utilize that much bandwidth?

    Other than a scenario such as 4 or 5 clients all writing large sequential files to a file server at the same time, I'm having trouble seeing the need for a 10Gb connection; even at that level you'd be limited by hard disk performance on a 4- or maybe even 8-disk RAID array unless you're using 15k drives in RAID 0.

    I guess I'd like to see the other half of this "affordable 10Gb" explained for SMB and how best to use it, when it's usable, and what is required beyond the server's NIC to use it.

    Continuing the above example, if the 4 or 5 clients were reading off a server instead of writing, you begin to be limited by client CPU and HD write speeds; in this scenario, what upgrades are required on the client side to best make use of the 10Gb server?

    Hope this doesn't sound too newb.
  • dilidolo - Monday, March 8, 2010 - link

    I agree with you.

    The biggest benefit for 10Gb is not bandwidth, it's port consolidation, thus reducing total cost.

    Then it comes down to how much IO the storage subsystem can provide. If the storage system can only provide 500MB/s, how can a 10Gb NIC help?

    I also don't understand why anyone wants to run a file server as a VM and connect it to a NAS to store the actual data. A NAS is designed for this already; why add another layer?
  • JohanAnandtech - Monday, March 8, 2010 - link

    File server access is - as far as I have seen - not that random. In our case it is used to stream images (OS + desktop apps), software installations, etc.

    So in most cases you have relatively few users that are downloading hundreds of MB. Why would you not consolidate that file server? It uses very little CPU power (compared to the webservers) most of the time, and it can use the power of your SAN pretty well as it accesses the disks sequentially. Why would you need a separate NAS?

    Once your NAS is integrated in your virtualized platform, you can get the benefit of HA, live migration etc.

  • dilidolo - Monday, March 8, 2010 - link

    For most people, their storage for the virtualized platform is NAS-based (NFS/iSCSI). I still put iSCSI under NAS as it's an add-on to NAS. Most NAS devices support multiple protocols - NFS, CIFS, iSCSI, etc.

    If you don't have a proper NAS device, that's a different story, but if you do, why waste resources on the virtual host to duplicate the features your NAS already provides?
  • MGSsancho - Tuesday, March 9, 2010 - link

    Only thing I can think of at the moment is that your SAN is overburdened and you want to move portions of it into your VM to give your SAN more resources to do other things. As mentioned, streaming system images can be put on a cheap/simple NAS or VM where you allow your SAN with all its features to do what you paid for it to do. Seems like a quick fix to free up your SAN temporarily; however, it is rare to see any IT shop set things up ideally. There are always various constraints.
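
On the PCIe bandwidth question raised in the comments above: both figures quoted are defensible depending on which overhead you count. A PCIe 1.1 x8 slot delivers about 2000 MB/s per direction after 8b/10b encoding, and roughly 12-13 Gbit/s of that is usable once packet (TLP) overhead is subtracted. The calculation below makes the arithmetic explicit; the ~80% protocol-efficiency figure is an assumption, as real efficiency depends on maximum payload size and traffic pattern.

```python
# Back-of-the-envelope PCIe 1.1 x8 throughput. The ~80% protocol-efficiency
# figure is an assumption; actual TLP efficiency depends on payload size.
LANES = 8
GT_PER_LANE = 2.5      # PCIe 1.x signalling rate per lane (GT/s)
ENCODING = 8 / 10      # 8b/10b line coding overhead
TLP_EFFICIENCY = 0.80  # assumed packet/protocol overhead

raw_gbps = LANES * GT_PER_LANE * ENCODING   # 16 Gbit/s per direction (~2000 MB/s)
usable_gbps = raw_gbps * TLP_EFFICIENCY     # ~12.8 Gbit/s per direction

print("raw line rate: %.1f Gbit/s per direction (~%.0f MB/s)" % (raw_gbps, raw_gbps * 1000 / 8))
print("usable after TLP overhead: ~%.1f Gbit/s per direction" % usable_gbps)
```

In other words, the 2000 MB/s figure is the raw post-encoding rate, while the ~12 Gbps figure is closer to what the card can actually push over the slot, which is still well short of an aggregate 20 Gbit/s for two fully loaded ports.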
