The virtualization infrastructure in our testbed involved running Hyper-V on Windows Server 2008 R2. The benchmarking programs are IOMeter 1.1rc1 / Dynamo and Intel NASPT. While the former is used to test multi-client performance, the latter is used for the single-client scenario. IOMeter itself runs on the host machine, while a Dynamo instance runs on each VM. Intel NASPT is run from one of the VMs only. Twelve guest machines were set up with identical configurations, as shown below:

2012 AnandTech NAS Testbed Virtual Machine Configuration
CPU: 1 vCPU
Memory: Static 2GB
IDE Controller 0: Physical Disk X (64GB OCZ Vertex 4 SSD)
Network Adapter 0: Local Area Connection X (ESA-I340 Port)
Network Adapter 1: Internal Network
OS: Windows 7 Ultimate x64

Each VM gets one vCPU and, by default, 2GB of RAM. However, when performing the robocopy tests to transfer a Blu-ray folder structure to and from the NAS, we chose to increase the memory of that particular VM to 24GB (since only one VM is active in this case). This enables the creation of a RAM disk large enough to house the folder structure (10.7GB). Under all scenarios, the host OS has a 32GB RAM disk mounted at startup. The RAM disks were created and mounted with Passmark Software's free utility, OSFMount.

Two network interfaces are set up for each VM. One connects to the NAS through one of the physical LAN ports provided by the Intel ESA-I340 and gets its IP address through DHCP in the 192.168.1.x subnet; the other is on an internal network between the VMs and the host machine. With the host taking the static IP 10.0.0.1, each VM's internal network interface is set up with a static IP of the form 10.0.0.x, with x between 2 and 13. The 32GB RAM disk mounted on the host is shared over this internal network for the VMs and the host to exchange information on the fly.
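The internal addressing scheme is simple enough to express as a lookup; a hypothetical Python sketch (the function name is illustrative, the mapping is exactly as described above):

```python
# Internal-network addressing: the host is 10.0.0.1, and VMs 1-12
# take 10.0.0.2 through 10.0.0.13. Illustrative sketch only; the
# testbed simply assigns these addresses statically.

def internal_ip(vm_index):
    """Static internal-network IP for VM number vm_index (1-12)."""
    if not 1 <= vm_index <= 12:
        raise ValueError("the testbed has twelve VMs")
    return f"10.0.0.{vm_index + 1}"
```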

The guest OS on each of the VMs is Windows 7 Ultimate x64. The intention of the build is to determine how the performance of the NAS under test degrades when multiple clients begin to access it. This degradation might be in terms of increased response time or a decrease in available bandwidth. Both of these can be measured using IOMeter. While IOMeter is the controlling program installed on the host, each of the VMs runs the Dynamo workload generator component. Dynamo and IOMeter communicate over the internal network so that the control traffic has no effect on the benchmark runs over the NAS-facing network.

Four distinct workloads corresponding to the storage performance evaluation metrics outlined here were set up to run on each machine. In the first pass, only the first VM runs the workload and reports the results. In the second pass, the first two VMs run the workload and report back, and so on, until we conclude the IOMeter benchmark runs with all twelve VMs running workloads simultaneously and reporting results. Fortunately, all the synchronization aspects are handled by IOMeter itself.
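The pass structure amounts to a triangular schedule; a hypothetical Python sketch of it (IOMeter's ICF files drive the real runs, so this is purely illustrative):

```python
# Pass k exercises VMs 1 through k; twelve passes in total.

def build_schedule(num_vms=12):
    """Return the list of passes; each pass lists the active VM indices."""
    return [list(range(1, k + 1)) for k in range(1, num_vms + 1)]

schedule = build_schedule()
# schedule[0] is the single-client pass; schedule[11] covers all twelve VMs.
```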

Manually running the Dynamo process on each VM and restarting it after the completion of each pass would have been cumbersome. To make things a little easier, we installed SSH servers on all the VMs. Bitvise SSH Server was our software of choice for its ease of use and configurability. After installing Strawberry Perl (on the host as well as all the VMs), we developed a small script to SSH from the host into each of the VMs in order, mount the NAS share and run Dynamo multiple times in sequence. The ICF files used in each of the twelve passes are available for download here.
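The actual automation was a Perl script driving Bitvise SSH; as a rough illustration of the idea, a Python sketch of the per-pass orchestration (host names, the `bench` user and the `run_dynamo.bat` remote command are hypothetical placeholders, not the testbed's real names):

```python
# Hypothetical sketch of the per-pass orchestration. The real script was
# written in Perl; credentials and the remote Dynamo command line here
# are illustrative placeholders.
import subprocess

def ssh_command(vm_index, remote_cmd):
    """Build the ssh invocation for one VM over the internal network."""
    host = f"10.0.0.{vm_index + 1}"   # VM 1 -> 10.0.0.2, ..., VM 12 -> 10.0.0.13
    return ["ssh", f"bench@{host}", remote_cmd]

def run_pass(active_vms):
    """Launch Dynamo on each active VM and wait for the pass to finish."""
    procs = [subprocess.Popen(ssh_command(vm, "run_dynamo.bat"))
             for vm in active_vms]
    for p in procs:
        p.wait()                       # the pass completes when every VM is done
```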

In our review of the Western Digital Red hard drive, we used this testbed to fill the NAS to varying levels. For this purpose, some files and folders were copied onto the host's RAM disk, which was shared over the 10.0.0.x internal network and mounted on all the VMs. A Perl script then copied the contents of the mounted RAM disk to the NAS drive as many times as necessary to reach the required fill percentage. This simulates a NAS being filled up through activity from multiple clients.
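The copy count behind a given fill percentage is simple arithmetic; a hedged Python sketch (the 4TB capacity and 60% target below are illustrative examples, not figures from the review):

```python
# How many copies of a seed folder set are needed to reach a target fill level.
import math

def copies_needed(capacity_bytes, fill_fraction, seed_bytes):
    """Number of times the seed set must be copied to reach the target fill."""
    return math.ceil(capacity_bytes * fill_fraction / seed_bytes)

# Example: filling 60% of a hypothetical 4TB volume with a ~10.7GB seed set.
GB = 1024 ** 3
n = copies_needed(4096 * GB, 0.60, int(10.7 * GB))   # -> 230 copies
```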

In the next two sections, we will take a look at the sort of results that this testbed is able to provide us in terms of evaluating NAS performance.


74 Comments


  • dertechie - Wednesday, September 5, 2012 - link

    That is a helluva test bench.

    I'd love to see what a HP N40L Microserver does with 4 disks in it if you throw that at it (use the on-motherboard USB port for the OS). It's certainly not a plug-and-play solution like most NAS boxes, but assuming the performance is there it should be a far more flexible one for the money if you throw a *nix based OS on it.
  • bsd228 - Wednesday, September 5, 2012 - link

    I've taken advantage of the 5th internal port of the N36L to add an SSD that is used by ZFS for both read and write caching. Strictly speaking, mirrored write caches are advised, but it's connected to a UPS to eliminate much of that risk.

    I think HP has given us the perfect platform for low power, high performance with flexibility.
  • extide - Thursday, September 6, 2012 - link

    Cache? or L2ARC?

    Mirrored Cache drives are NOT suggested for ZFS, but Mirrored L2ARC devices are.
  • coder543 - Wednesday, September 5, 2012 - link

    running Windows Server.........
  • ganeshts - Wednesday, September 5, 2012 - link

    What alternatives do you have in mind?

We needed a platform which was well supported by the motherboard. To tell the truth, I found Hyper-V and the virtualization infrastructure to be really good and easy to use compared to VMware's offerings.
  • ender8282 - Wednesday, September 5, 2012 - link

    I assume coder543 was going for a Linux based host, and possibly Linux based clients as well. If you had gone with Linux you wouldn't have needed extra software for SSH or the ram disk. It even looks like IOMeter is supported for Linux. Had you gone that route you likely could have automated the whole task so that it was just a matter of typing go on the host and coming back hours later to collect the results. OTOH most of your audience is probably more likely to be using Windows clients so it probably makes more sense to provide information clearly relevant to the average reader.

    I found the article interesting. The one thing that I'd be curious about is whether or not there were any major performance differences using Samba/CIFS type shares vs NFS, or a mixture of the two.

    I'd love to see more Linux coverage in general, but I respect that you know your audience and write the articles that they generally want to read.

    Great Job keep it up!
  • Ratman6161 - Thursday, September 6, 2012 - link

It should run on that platform just great. On the other hand, when all is said and done, as nice as this setup is, to me it is basically a full blown server/virtualization platform; not really a "NAS" at all. I would typically think of a NAS as being a dedicated storage device - possibly used as an iSCSI target with the brains of the operation living elsewhere.
  • ganeshts - Thursday, September 6, 2012 - link

    This is a testbed for evaluating NAS units, not a NAS. Not sure why readers are getting an impression that this is a NAS by itself.
  • bsd228 - Wednesday, September 5, 2012 - link

    Ganesh- I think this test bed sets up very well for testing the $500-1000 4 bay type NAS devices we've been seeing of late that could actually serve a small office. However, I'm less sure that it delivers meaningful data to the home crowd. Like with your SSD tests, I see a place for a "light" load versus the heavy. I think testing against 4 VMs with, for sake of example, the following load types would work:
    1- 2 VMs streaming video - 1 DVD, 1 H.264 HDTV - are there any interruptions?
2- 1 VM streaming audio off a mt-daapd (or actual iTunes since you're using Windows as the server) - again, are there any dropoffs?
    3- same VM as #2 is also doing content creation - like importing 1000 RAW images into Lightroom using this storage space
    4- last VM is copying large files (or small) to the storage server.

The Thecus 4800 should handle this with ease, but there are many cheaper solutions out there that may or may not meet this level of need. I got so tired of poorly performing consumer units that 4 years ago I switched to an AMD x2 4800 running Solaris, and more recently to the HP N36L and N40L. At $300 plus $60 for 8 gigs of ECC I think this is a better value than the Thecus for those who can run Solaris or even Windows Home Server. You're not reliant on the release of modules to support a particular service.

    Also, it seems that all of these benchmarks are based on SMB transfers. It's worth checking to see if nfs and iscsi performance (when made available by the NAS) shows different numbers. In the past, it certainly did, especially on the consumer devices where NFS smoked SMB1. But perhaps this is a moot point with SMB2/windows 7 where it seems like the NIC or the hard drives are the limiting factors, not the transfer protocol.
  • Rick83 - Thursday, September 6, 2012 - link

I agree, test the different protocols provided by the devices.
    iSCSI, SMB, NFS as well as the media streaming protocols, FTP and whatever else it offers.
    If encrypted transfers are offered, test those as well (eg. sshfs / scp).

    Additionally, have a look at one of the cluster-ssh solutions, that allows simultaneous connections/commands to all machines.
