The virtualization infrastructure in our testbed is Hyper-V running on Windows Server 2008 R2. The benchmarking programs are IOMeter 1.1rc1 / Dynamo and Intel NASPT: the former is used to test multi-client performance, while the latter covers the single-client scenario. IOMeter itself runs on the host machine, while Dynamo instances run on each VM. Intel NASPT is run from one of the VMs only. Twelve guest machines were set up with similar configurations, as shown below:

2012 AnandTech NAS Testbed Virtual Machine Configuration
CPU: 1 vCPU
Memory: Static 2GB
IDE Controller 0: Physical Disk X (64GB OCZ Vertex 4 SSD)
Network Adapter 0: Local Area Connection X (ESA-I340 Port)
Network Adapter 1: Internal Network
OS: Windows 7 Ultimate x64

Each VM gets one vCPU. By default, each VM gets 2GB of RAM. However, when performing the robocopy tests to transfer a Blu-ray folder structure to and from the NAS, we increase the amount of memory in that particular VM to 24GB (since only one VM is active in this case). This enables the creation of a RAM disk to house the folder structure (10.7GB). Under all scenarios, the host OS has a 32GB RAM disk mounted at startup. The RAM disks were created and mounted with Passmark Software's free utility, OSFMount.
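
OSFMount also ships with a command-line utility alongside the GUI, which makes it possible to script the RAM disk creation. The Perl wrapper below is a minimal sketch of how the 32GB host RAM disk could be set up at startup; the install path and the flags (add a RAM disk, format it NTFS, mount it at R:) are our assumptions about OSFMount's CLI syntax and may need adjusting for the installed version.

```perl
#!/usr/bin/perl
# Minimal sketch: create and mount a 32GB NTFS RAM disk at R: using
# OSFMount's command-line utility. Path and flags are assumptions and
# may differ between OSFMount versions.
use strict;
use warnings;

my $osfmount = 'C:\\Program Files\\OSFMount\\OSFMount.com';   # assumed install path
my @args = ('-a', '-t', 'vm', '-s', '32G', '-m', 'R:', '-o', 'format:ntfs');

system($osfmount, @args) == 0
    or die "OSFMount invocation failed: $?";
```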

Two network interfaces are set up for each VM. While one of them connects to the NAS through one of the physical LAN ports provided by the Intel ESA-I340 and gets its IP address through DHCP in the 192.168.1.x subnet, the other is an internal network between the VMs and the host machine. With the host taking the static IP 10.0.0.1, each VM’s internal network interface is set up with a static IP of the form 10.0.0.x, with x between 2 and 13. The 32GB RAM disk mounted on the host is shared over this internal network for the VMs and the host to exchange information on the fly.
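
From inside a VM, the host's shared RAM disk is simply a network share reachable over the internal network. A minimal sketch (the share name and drive letter are illustrative, not the ones actually used):

```perl
# Sketch: map the host's shared RAM disk inside a VM over the internal
# network. The share name 'ramdisk' and drive letter Z: are illustrative.
use strict;
use warnings;

system('net', 'use', 'Z:', '\\\\10.0.0.1\\ramdisk') == 0
    or die "Could not map the host RAM disk share: $?";
```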

The guest OS on each of the VMs is Windows 7 Ultimate x64. The intention of the build is to determine how the performance of the NAS under test degrades when multiple clients begin to access it. This degradation might manifest as increased response time or decreased available bandwidth. Both can be measured using IOMeter. While IOMeter is the controlling program installed on the host, each of the VMs runs the Dynamo workload generator component. Dynamo and IOMeter communicate over the internal network, so this control traffic has no effect on the benchmark traffic to the NAS.
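
On each VM, Dynamo is pointed at the IOMeter instance on the host using the internal network addresses. The sketch below shows the idea; the -i/-m switches are Dynamo's standard options for the IOMeter host and the manager's own address, though the exact invocation used in the testbed may differ.

```perl
# Sketch: start Dynamo on a VM so that it registers with IOMeter on the
# host over the internal 10.0.0.x network rather than the NAS-facing LAN.
use strict;
use warnings;

my $iometer_host = '10.0.0.1';   # host's internal-network IP
my $manager_ip   = '10.0.0.2';   # this VM's internal-network IP (10.0.0.2 - 10.0.0.13)

system('dynamo', '-i', $iometer_host, '-m', $manager_ip) == 0
    or die "Dynamo exited abnormally: $?";
```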

Four distinct workloads corresponding to the storage performance evaluation metrics outlined here were set up to run on each machine. In the first pass, only the first VM runs the workload and reports the results. In the second pass, the first two VMs run the workload and report back, and so on, until we conclude the IOMeter benchmark runs with all twelve VMs running workloads simultaneously and reporting results. Fortunately, all the synchronization aspects are handled by IOMeter itself.
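
One way to drive the twelve passes from the host is IOMeter's batch mode, feeding it a pass-specific ICF file and a results file; /c and /r are IOMeter's standard switches for this, while the file names below are placeholders rather than the actual ones used in the testbed.

```perl
# Sketch of the host-side pass loop: pass N uses the ICF file that
# assigns workloads to VMs 1..N. File names are illustrative.
use strict;
use warnings;

for my $pass (1 .. 12) {
    my $icf    = "nas_pass_${pass}vm.icf";
    my $result = "results_pass_${pass}.csv";
    system('IOmeter.exe', '/c', $icf, '/r', $result) == 0
        or warn "Pass $pass did not complete cleanly\n";
}
```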

Manually running the Dynamo process on each VM and restarting it after the completion of each pass is definitely cumbersome. To make things a little easier, we installed SSH servers on all the VMs. Bitvise SSH Server was our software of choice for its ease of use and configurability. After installing Strawberry Perl (on the host as well as all the VMs), we developed a small script to SSH from the host into each of the VMs in order, mount the NAS share and run Dynamo multiple times in sequence. The ICF files used in each of the twelve passes are available for download here.
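
The script itself is not listed in the article, but the sketch below captures the approach under a couple of assumptions: the Net::SSH2 CPAN module is used for the SSH transport to the Bitvise servers, password authentication is enabled, and the NAS share path, credentials and remote command are illustrative.

```perl
# Sketch: SSH from the host into each VM over the internal network,
# map the NAS share and start Dynamo. Module choice, credentials and
# share path are assumptions; the real script runs Dynamo multiple
# times in sequence per VM.
use strict;
use warnings;
use Net::SSH2;

my $nas_share = '\\\\192.168.1.100\\iometer';   # hypothetical NAS share
my ($user, $pass) = ('nasbench', 'secret');     # hypothetical credentials

for my $vm (2 .. 13) {                          # internal IPs 10.0.0.2 - 10.0.0.13
    my $ssh = Net::SSH2->new();
    $ssh->connect("10.0.0.$vm") or die "Cannot reach 10.0.0.$vm";
    $ssh->auth_password($user, $pass) or die "Authentication failed for 10.0.0.$vm";

    my $chan = $ssh->channel();
    $chan->exec("net use Y: $nas_share && dynamo -i 10.0.0.1 -m 10.0.0.$vm");

    my $buf;
    while ($chan->read($buf, 4096)) {
        print $buf;                              # echo remote output, wait for completion
    }
    $chan->close();
    $ssh->disconnect();
}
```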

In our review of the Western Digital Red hard drive, we used this testbed to fill up the NAS to varying levels. For this purpose, some files and folders were copied onto the RAM disk in the host, which was shared over the 10.0.0.x internal network. This shared disk was mounted on all the VMs. A Perl script then copied the contents of the mounted RAM disk to the NAS (as many times as necessary to achieve the required fill percentage). This simulates a NAS getting filled up while being subjected to activity from multiple clients.
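
A minimal sketch of such a fill script, assuming the File::Copy::Recursive module from CPAN; the paths, capacity and data set size are illustrative, and the real script derives the iteration count from the NAS capacity and the requested fill percentage.

```perl
# Sketch: repeatedly copy the RAM disk contents to the NAS share until
# the target fill percentage is reached. Paths and sizes are illustrative.
use strict;
use warnings;
use File::Copy::Recursive qw(dircopy);

my $src          = 'Z:\\fill_data';   # contents shared from the host RAM disk
my $dst_root     = 'Y:\\';            # NAS share mapped on this VM
my $nas_capacity = 4000;              # GB, illustrative
my $copy_size    = 28;                # GB occupied by one copy of the data set
my $target_fill  = 60;                # percent

my $copies = int(($nas_capacity * $target_fill / 100) / $copy_size);
for my $i (1 .. $copies) {
    dircopy($src, "${dst_root}fill_$i") or die "Copy $i failed: $!";
}
```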

In the next two sections, we will take a look at the sort of results that this testbed is able to provide us in terms of evaluating NAS performance.

Comments

  • Tor-ErikL - Thursday, September 6, 2012 - link

    As always a great article and a sensible testbench which can be scaled to test everything from small setups to larger setups. good choice!

    However I would also like some type of test that is less geared towards technical performance and more towards real world scenarios.

    So to help out, I give you my real world scenario:
    Family of two adults and two teenagers...

    Equipment in my house is:
    4 laptops running on wifi network
    1 workstation for work
    1 mediacenter running XBMC
    1 Synology NAS

    laptops stream music/movies from my NAS - usually I guess no more than two of these run at the same time
    MediaCenter also streams music/movies from the same NAS at the same time
    In addition some of the laptops browse all the family pictures which are stored on the NAS and do light file copies to and from the NAS.
    The NAS itself downloads movies/music/tvshows and does unpacking and internal file transfers

    My guess is that for a typical home use scenario there is not that much intensive file copying going on, usually only light transfers, mainly through either wifi or 100Mb links

    I think the key factor is that there are usually multiple clients connecting and streaming different stuff - at most 4-5 clients.

    Also, as mentioned, more details about the differences between sharing protocols like SMB/CIFS would be interesting to see.

    Looking forward to the next chapters in your testbench :)
  • Jeff7181 - Thursday, September 6, 2012 - link

    I'd be very curious to see tests involving deduplication. I know deduplication is found more on enterprise-class type storage systems, but WHS used SIS, and FreeNAS uses ZFS, which supports deduplication.
  • _Ryan_ - Thursday, September 6, 2012 - link

    It would be great if you guys could post results for the Drobo FS.
  • Pixelpusher6 - Thursday, September 6, 2012 - link

    Quick Correction - On the last page under specs for the memory do you mean 10-10-10-30 instead of 19-10-10-30?

    I was wondering about the setup with the CPUs for this machine. If each of the 12 VMs use 1 dedicated real CPU core then what is the host OS running on? With 2 Xeon E5-2630Ls that would be 12 real CPU cores.

    I'm also curious about how hyper-threading works in a situation like this. Does each VM have 1 physical thread and 1 HT thread for a total of 2 threads per VM? Is it possible to run a VM on a single HT core without any performance degradation? If the answer is yes then I'm assuming it would be possible to scale this system up to run 24 VMs at once.
  • ganeshts - Thursday, September 6, 2012 - link

    Thanks for the note about the typo in the CAS timings. Fixed it now.

    We took a punt on the fact that I/O generation doesn't take up much CPU. So, the host OS definitely shares CPU resources with the VMs, but the host OS handles that transparently. When I mentioned that one CPU core is dedicated to each VM, I meant that the Hyper-V settings for the VM indicated 1 vCPU instead of the allowed 2, 3 or 4 vCPUs.

    Each VM runs only 1 thread. I am still trying to figure out how to increase the VM density in the current setup. But, yes, it looks like we might be able to hit 24 VMs because the CPU requirements from the IOMeter workloads are not extreme.
  • dtgoodwin - Thursday, September 6, 2012 - link

    Kudos on excellent choice of hardware for power efficiency. 2 CPUs, 14 network ports, 8 sticks of RAM, and a total of 14 SSDs idling at just over 100 watts is very impressive.
  • casteve - Thursday, September 6, 2012 - link

    Thanks for the build walkthrough, Ganesh. I was wondering why you used an 850W PSU when worst case DC power use is in the 220W range? Instead of the $180 Silverstone Gold rated unit, you could have gone with a lower power 80+ Gold or Platinum PSU for less $'s and better efficiency at your given loads.
  • ganeshts - Thursday, September 6, 2012 - link

    Just a hedge against future workloads :)
  • haxter - Thursday, September 6, 2012 - link

    Guys, yank those NICs and get a dual 10GbE card in place. SOHO is 10GbE these days. What gives? How are you supposed to test a SOHO NAS with each VM so crippled?
  • extide - Thursday, September 6, 2012 - link

    10GBe is certainly not SOHO.
