The Asus Z9PE-D8 WS has two native GbE ports, but with twelve VMs trying to stress the NAS under test, sharing two GbE links was never going to be a suitable solution. The testbed used in earlier reviews had an Intel ESA-I340 quad-port GbE network adapter. Based on the Intel 82580 gigabit controller, it uses a PCIe 2.0 x4 connection, which is enough to provide maximum bidirectional throughput on all four GbE ports.
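
As a quick sanity check on that bandwidth claim, the numbers work out comfortably. The short sketch below uses the commonly quoted per-lane and per-port rates (our own back-of-the-envelope figures, not datasheet numbers for this particular card) and ignores protocol overhead on both the PCIe and Ethernet sides.

```python
# Rough check: can a PCIe 2.0 x4 link feed four GbE ports at full duplex?
# Figures are the usual published rates; real-world throughput is lower
# once PCIe and Ethernet protocol overheads are accounted for.

PCIE2_LANE_MBPS = 500   # PCIe 2.0: 5 GT/s with 8b/10b encoding ~= 500 MB/s per lane, per direction
LANES = 4
GBE_PORT_MBPS = 125     # 1 Gb/s = 125 MB/s per direction
PORTS = 4

pcie_per_direction = PCIE2_LANE_MBPS * LANES   # 2000 MB/s each way
gbe_per_direction = GBE_PORT_MBPS * PORTS      # 500 MB/s each way

print(f"PCIe 2.0 x4 : {pcie_per_direction} MB/s per direction")
print(f"4x GbE      : {gbe_per_direction} MB/s per direction")
print(f"Headroom    : {pcie_per_direction / gbe_per_direction:.1f}x")
```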

The Z9PE-D8 WS has 7 PCIe slots, and we decided to fill three of them with Intel ESA-I340 cards, providing a total of twelve physical GbE ports, one for each VM. The ESA-I340 has a full-height form factor, but also comes with a low-profile bracket. One of the interesting aspects of the I340 server adapter is VMDq (Virtual Machine Device Queue). It reduces I/O overhead in the hypervisor (and lowers CPU usage) by maintaining hardware queues in which incoming packets are sorted by destination, so that the hypervisor can simply forward each queue's contents to the appropriate VM. Even though we had the VMDq feature enabled in the settings for all the adapters, it probably had negligible effect on CPU usage in our workloads because each VM reserved one port exclusively for its own use. The host machine itself connected to the network through one of the native GbE ports on the motherboard.
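
To make the queueing idea concrete, here is a toy software model of what VMDq does in hardware: incoming frames are binned by destination so the hypervisor can hand each bin straight to its VM instead of inspecting every packet itself. The MAC addresses and VM names are invented for illustration; this is a conceptual sketch, not how Hyper-V or the 82580 is actually programmed.

```python
from collections import defaultdict

# Toy model of the VMDq concept (illustration only; the real sorting is done
# by the NIC hardware). Each destination MAC maps to a per-VM queue, so the
# hypervisor forwards whole queues instead of classifying individual packets.

MAC_TO_VM = {
    "00:1b:21:aa:00:01": "vm01",   # hypothetical MAC/VM pairs
    "00:1b:21:aa:00:02": "vm02",
    # ... one entry per VM / dedicated GbE port
}

def sort_into_queues(frames):
    """frames: iterable of (dst_mac, payload) tuples."""
    queues = defaultdict(list)
    for dst_mac, payload in frames:
        vm = MAC_TO_VM.get(dst_mac, "default")   # unknown MACs go to the default queue
        queues[vm].append(payload)
    return queues

queues = sort_into_queues([
    ("00:1b:21:aa:00:01", b"frame-1"),
    ("00:1b:21:aa:00:02", b"frame-2"),
    ("00:1b:21:aa:00:01", b"frame-3"),
])
for vm, vm_frames in queues.items():
    print(vm, "->", len(vm_frames), "frames")
```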

Between the NAS testbed and the NAS under test, it is essential to have a switch capable of standing up to the stressful network traffic. Our switch of choice was one of the 24-port managed enterprise models from ZyXel, the GS2200-24. Even though it is advertised as a 24-port model, it actually has 4 extra ports with dual personalities (i.e., both copper and optical SFP).

The switch has all the necessary features of an L2 managed model, including VLANs, CoS/QoS, port trunking and bandwidth control. Port trunking / 802.3ad LACP aggregation is essential because NASes with dual GbE ports offer it as an option to increase performance. Even though a NAS might support link aggregation, it is of no use if the switch to which it is connected doesn't aggregate the connected ports.
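
It is also worth remembering that 802.3ad distributes traffic per flow, so any single client connection still tops out at one link's worth of bandwidth; the benefit shows up with multiple simultaneous clients. The sketch below illustrates a simplified layer-2 style transmit hash of the kind commonly used to pick a physical link; the exact hash policy is implementation-defined and varies between switches and NAS firmware.

```python
# Simplified illustration of how an 802.3ad bond might pick a physical link
# per flow. XORing the last octets of the MACs mimics a common "layer 2"
# hash policy; real switches/NASes may also hash on IP addresses and ports.

NUM_LINKS = 2   # e.g. a dual-GbE NAS with both ports in one LACP group

def choose_link(src_mac: str, dst_mac: str, num_links: int = NUM_LINKS) -> int:
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % num_links

# Two clients talking to the same NAS can land on different links...
print(choose_link("00:1b:21:aa:00:01", "00:11:32:55:66:77"))   # link 0
print(choose_link("00:1b:21:aa:00:02", "00:11:32:55:66:77"))   # link 1
# ...but every frame of a given client/NAS pair uses the same link, so a
# single connection never exceeds 1 Gb/s even with aggregation enabled.
```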

74 Comments


  • Tor-ErikL - Thursday, September 6, 2012

    As always, a great article and a sensible testbench which can be scaled to test everything from small setups to larger ones. Good choice!

    However, I would also like to see some type of test that is less geared towards technical performance and more towards real-world scenarios.

    So, to help out, I give you my real-world scenario:
    Family of two adults and two teenagers...

    Equipment in my house is:
    4 laptops running on the WiFi network
    1 workstation for work
    1 media center running XBMC
    1 Synology NAS

    The laptops stream music/movies from my NAS - usually, I guess, no more than two of these run at the same time.
    The media center also streams music/movies from the same NAS at the same time.
    In addition, some of the laptops browse all the family pictures which are stored on the NAS and do light file copies to and from the NAS.
    The NAS itself downloads movies/music/TV shows and does unpacking and internal file transfers.

    My guess is that for a typical home-use scenario there is not that much intensive file copying going on, usually only light transfers, mainly through either WiFi or 100 Mb links.

    I think the key factor is that there are usually multiple clients connecting and streaming different stuff - at most 4-5 clients.

    Also, as mentioned, it would be interesting to see more details about the differences between the sharing protocols like SMB/CIFS.

    Looking forward to the next chapters on your testbench :)
  • Jeff7181 - Thursday, September 6, 2012

    I'd be very curious to see tests involving deduplication. I know deduplication is found more on enterprise-class storage systems, but WHS used SIS, and FreeNAS uses ZFS, which supports deduplication.
  • _Ryan_ - Thursday, September 6, 2012

    It would be great if you guys could post results for the Drobo FS.
  • Pixelpusher6 - Thursday, September 6, 2012

    Quick correction - on the last page, under the specs for the memory, do you mean 10-10-10-30 instead of 19-10-10-30?

    I was wondering about the setup with the CPUs for this machine. If each of the 12 VMs use 1 dedicated real CPU core then what is the host OS running on? With 2 Xeon E5-2630Ls that would be 12 real CPU cores.

    I'm also curious about how hyper-threading works in a situation like this. Does each VM have 1 physical thread and 1 HT thread for a total of 2 threads per VM? Is it possible to run a VM on a single HT core without any performance degradation? If the answer is yes then I'm assuming it would be possible to scale this system up to run 24 VMs at once.
  • ganeshts - Thursday, September 6, 2012

    Thanks for the note about the typo in the CAS timings. Fixed it now.

    We took a punt on the fact that I/O generation doesn't take up much CPU. So, the host OS definitely shares CPU resources with the VMs, but the host OS handles that transparently. When I mentioned that one CPU core is dedicated to each VM, I meant that the Hyper-V settings for the VM indicated 1 vCPU instead of the allowed 2, 3 or 4 vCPUs.

    Each VM runs only 1 thread. I am still trying to figure out how to increase the VM density in the current setup. But, yes, it looks like we might be able to hit 24 VMs because the CPU requirements from the IOMeter workloads are not extreme.
  • dtgoodwin - Thursday, September 6, 2012

    Kudos on the excellent choice of hardware for power efficiency. 2 CPUs, 14 network ports, 8 sticks of RAM, and a total of 14 SSDs idling at just over 100 watts is very impressive.
  • casteve - Thursday, September 6, 2012

    Thanks for the build walkthrough, Ganesh. I was wondering why you used an 850W PSU when worst-case DC power use is in the 220W range? Instead of the $180 Silverstone Gold-rated unit, you could have gone with a lower-power 80+ Gold or Platinum PSU for less money and better efficiency at your given loads.
  • ganeshts - Thursday, September 6, 2012

    Just a hedge against future workloads :)
  • haxter - Thursday, September 6, 2012

    Guys, yank those NICs and get a dual 10GbE card in place. SOHO is 10GbE these days. What gives? How are you supposed to test a SOHO NAS with each VM so crippled?
  • extide - Thursday, September 6, 2012

    10GbE is certainly not SOHO.
