The virtualization infrastructure in our testbed is Hyper-V running on Windows Server 2008 R2. The benchmarking programs are IOMeter 1.1rc1 / Dynamo and Intel NASPT: the former is used to test multi-client performance, while the latter covers the single-client scenario. IOMeter itself runs on the host machine, while a Dynamo instance runs on each VM. Intel NASPT is run from one of the VMs only. Twelve guest machines were set up with similar configurations, as shown below:

2012 AnandTech NAS Testbed Virtual Machine Configuration
  CPU:                1 vCPU
  Memory:             Static 2GB
  IDE Controller 0:   Physical Disk X (64GB OCZ Vertex 4 SSD)
  Network Adapter 0:  Local Area Connection X (ESA-I340 Port)
  Network Adapter 1:  Internal Network
  OS:                 Windows 7 Ultimate x64

Each VM gets one vCPU. By default, each VM gets 2GB of RAM. However, when performing the robocopy tests to transfer a Blu-ray folder structure to and from the NAS, we increase the amount of memory in that particular VM to 24GB (since only one VM is active in this case). This enables the creation of a RAM disk to house the folder structure (10.7GB). Under all scenarios, the host OS has a 32GB RAM disk mounted at startup. The RAM disks were created and mounted with PassMark Software’s free utility, OSFMount.
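To give a concrete feel for that robocopy step, here is a minimal Perl sketch that shells out to robocopy and times the write and read passes. The drive letters, NAS address and share name are placeholders for illustration, not the testbed's actual values.

    # Hypothetical robocopy timing sketch; R: is the RAM disk holding the
    # Blu-ray folder structure and the NAS path is a placeholder.
    use strict;
    use warnings;
    use Time::HiRes qw(time);

    my $ramdisk = 'R:\BR';
    my $nas     = '\\\\192.168.1.100\\share\\BR';

    for my $pass ([$ramdisk, $nas, 'write to NAS'], [$nas, $ramdisk, 'read from NAS']) {
        my ($src, $dst, $label) = @$pass;
        my $start = time;
        # /E copies subfolders (including empty ones); /NP /NFL /NDL keep the log quiet
        system('robocopy', $src, $dst, '/E', '/NP', '/NFL', '/NDL');
        printf "%s: %.1f seconds\n", $label, time - $start;
        # In practice the destination is cleared between passes so the second
        # run actually transfers data instead of skipping existing files.
    }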

Two network interfaces are set up for each VM. While one of them connects to the NAS through one of the physical LAN ports provided by the Intel ESA-I340 and gets its IP address through DHCP in the 192.168.1.x subnet, the other is an internal network between the VMs and the host machine. With the host taking the static IP 10.0.0.1, each VM’s internal network interface is set up with a static IP of the form 10.0.0.x, with x between 2 and 13. The 32GB RAM disk mounted on the host is shared over this internal network for the VMs and the host to exchange information on the fly.
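For reference, the static addresses on the internal interfaces can be assigned with netsh; the short Perl-driven example below illustrates the idea, with the connection name "Internal Network" being an assumption rather than the name used in the actual VMs.

    # Hypothetical example: assigning a VM's internal static IP via netsh.
    # The connection name is assumed; the index runs from 2 to 13 across the VMs.
    use strict;
    use warnings;

    my $vm_index = 2;
    my $ip = "10.0.0.$vm_index";
    system('netsh', 'interface', 'ip', 'set', 'address',
           'Internal Network', 'static', $ip, '255.255.255.0');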

The guest OS on each of the VMs is Windows 7 Ultimate x64. The intention of the build is to determine how the performance of the NAS under test degrades when multiple clients begin to access it. This degradation might show up as increased response times or a decrease in available bandwidth, and both can be measured using IOMeter. While IOMeter is the controlling program installed on the host, each of the VMs runs the Dynamo workload generator component. Dynamo and IOMeter communicate over the internal network so that this control traffic does not interfere with the benchmark traffic to the NAS.
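On each guest, kicking off the workload generator amounts to pointing Dynamo at the controlling IOMeter instance over the internal network. The hedged example below shows what such a launch might look like; the exact switches used in the testbed are not documented here, so treat them as an assumption.

    # Hypothetical Dynamo launch on one guest: -i names the IOMeter host and
    # -m this manager's own address (flag usage assumed).
    my $iometer_host = '10.0.0.1';   # host machine on the internal network
    my $manager_ip   = '10.0.0.2';   # this VM's internal address
    system('dynamo', '-i', $iometer_host, '-m', $manager_ip);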

Four distinct workloads, corresponding to the storage performance evaluation metrics outlined here, were set up to run on each machine. In the first pass, only the first VM runs the workload and reports the results. In the second pass, the first two VMs run the workload and report back, and so on, until we conclude the IOMeter benchmark runs with all twelve VMs running workloads simultaneously and reporting results. Fortunately, all the synchronization aspects are handled by IOMeter itself.

Manually starting the Dynamo process on each VM and restarting it after the completion of each pass is a cumbersome process. To make things a little easier, we installed SSH servers on all the VMs. Bitvise SSH Server was our software of choice for its ease of use and configurability. After installing Strawberry Perl (on the host as well as all the VMs), we developed a small script to SSH from the host into each of the VMs in order, mount the NAS share and run Dynamo multiple times in sequence. The ICF files used in each of the twelve passes are available for download here.
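A minimal sketch of such an orchestration script is shown below. It assumes plink.exe (PuTTY's command-line SSH client) on the host and uses hypothetical credentials, share paths and drive letters; the actual script and its ICF handling differ.

    # Hypothetical orchestration sketch: SSH into each active VM, mount the
    # NAS share and launch Dynamo, once per IOMeter pass.
    use strict;
    use warnings;

    my @vm_ips = map { "10.0.0.$_" } 2 .. 13;      # internal IPs of the 12 VMs

    for my $pass (1 .. 12) {
        for my $ip ( @vm_ips[0 .. $pass - 1] ) {   # pass N uses the first N VMs
            my $remote = 'net use Z: \\\\192.168.1.100\\share & '
                       . "dynamo -i 10.0.0.1 -m $ip";
            # system(1, ...) on Win32 Perl spawns without waiting, so all the
            # Dynamo instances for this pass come up in parallel.
            system(1, 'plink', '-ssh', '-l', 'user', '-pw', 'password',
                   $ip, $remote);
        }
        # IOMeter is then run on the host with the pass-specific ICF file and
        # collects results before the next pass starts (omitted here).
    }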

In our review of the Western Digital Red hard drive, we used this testbed to fill up the NAS to varying levels. For this purpose, some files and folders were copied onto the RAM disk in the host, which was shared over the 10.0.0.x internal network and mounted on all the VMs. A Perl script then copied the contents of the mounted RAM disk to the NAS (as many times as necessary to reach the required fill percentage). This simulates a NAS getting filled up by activity from multiple clients.
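A sketch of what such a fill script might look like is given below; the share paths, the target percentage and the Win32::DriveInfo capacity query are all assumptions for illustration, not the exact code used in that review.

    # Hypothetical fill-script sketch: keep copying the sample data set from
    # the shared RAM disk to the NAS until the target fill level is reached.
    use strict;
    use warnings;
    use File::Copy::Recursive qw(dircopy);
    use Win32::DriveInfo;                 # assumed way of querying capacity

    my $src        = 'Z:\filler';         # host RAM disk share mapped on the VM
    my $dst        = 'Y:';                # NAS volume mapped on the VM
    my $target_pct = 60;                  # desired fill level

    my $copy = 0;
    while (1) {
        my ($total, $free) = (Win32::DriveInfo::DriveSpace('Y'))[5, 6];
        last if 100 * ($total - $free) / $total >= $target_pct;
        dircopy($src, sprintf('%s\fill_%03d', $dst, $copy++))
            or die "copy failed: $!";
    }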

In the next two sections, we will take a look at the results this testbed is able to provide when evaluating NAS performance.

Comments

  • ganeshts - Thursday, September 6, 2012

    Thanks for unearthing that one... Fixed now.
  • ypsylon - Thursday, September 6, 2012

    14 SSDs. I know it is only to simulate separate clients, but to be honest this whole test is ultimately meaningless. No reasonable business (not talking about the 'man with a laptop' kind of company) will entrust crucial data to SSDs (in particular non-industry-standard SSDs). Those disks are far too unreliable and HDDs trounce them in that category every time. Whether you like it or not, HDDs are still here and I'm absolutely certain that they will outlive SSDs by a fair margin. Running a business myself, and thank you very much, HDDs are the only choice: RAID 10, 6 or 60 depending on the job. Bloody SSDs, hate those to the core (tested). Good for laptops or for geeks who bench systems 24/7, not for serious work.
  • ypsylon - Thursday, September 6, 2012

    Dang, 12 not 14, ha ha.
  • mtoma - Thursday, September 6, 2012

    If you love the reliability of HDDs so much, I must ask you: what SSD brand has failed you? Intel? Samsung? You know, there are statistics that show Intel and Samsung SSDs are much more reliable in 24/7 use than many enterprise HDDs. I mean, on paper, enterprise HDDs look great, but in reality they fail more than they should (in a large RAID array, vibration is a main concern). After all, the same basic technology applies to regular HDDs. On top of that, some (if not all) server manufacturers put refurbished HDDs in new servers (I have seen IBM doing that and I was terrified). Perhaps this is not a widespread practice, but it is truly terrifying.
    So, pardon me if I say: to hell with regular HDDs. Buy enterprise-grade SSDs; you get the same 5-year warranty.
  • extide - Thursday, September 6, 2012

    Dude, you missed the point ENTIRELY: the machine they built is to TEST NASes. They DID NOT BUILD A NAS.
  • Wardrop - Saturday, September 8, 2012

    I can't work out whether this guy is trolling or not. A very provocative post without really any detail.
  • AmdInside - Thursday, September 6, 2012

    Isn't Win7 x64 Ultimate a little too much for a VM? Would be nice to see videos.
  • ganeshts - Thursday, September 6, 2012

    We wanted an OS which would support both IOMeter and Intel NASPT. Yes, we could have gone with Windows XP, but the Win 7 installer USB drives were on the top of the heap :)
  • AmdInside - Thursday, September 6, 2012

    Thanks
  • zzing123 - Thursday, September 6, 2012

    Hi Ganesh - Thanks for taking my post a few articles back to heart regarding the NAS performance when fully loaded, as it begins to provide some really meaningful results.

    I have to agree with some of the other posters' comments about the workload though. Playing a movie on one, copying on another, running a VM from a third and working on docs through an SMB share on a fourth would probably be a more meaningful workload in a prosumer's home.

    In light of this, might it be an idea to add a new benchmark to AnandTech's Storage Bench that measures all these factors?

    In terms of your setup, there's a balance to be struck. I really like your concept of using 12 VMs to replicate a realistic environment in the way you do. However, when an office has 12 clients, they're probably using a proper file server or multiple NAS units. 3-4 clients is probably the most typical setup in a SOHO/home environment.

    10GbE testing is missing, and a lot of NAS units are beginning to ship with 10GbE. With switches like the Cisco SG500X-24 also supporting 10GbE and slowly becoming more affordable, 10GbE is slowly but surely becoming more relevant. 1 SSD and 1 GbE connection isn't going to saturate it - 10 will, and that is certainly meaningful in a multi-user context, but this is AnandTech. What about absolute performance?

    How about adding a 13th VM that lashes together all 12 SSDs and aggregates all 12 I340 links to provide a beast of RAIDed SSDs and 12x GbE connectivity (the 2 extra connections should smoke out network adapters that aren't performing to spec as well).
