Feedback from readers indicates that the single-client Intel NASPT benchmarking results presented in our NAS reviews tell only part of the story. How does a NAS behave under real SMB conditions, i.e., with multiple clients accessing it simultaneously? What is the average response time? We believe these are valid concerns for units with 4-bay and higher configurations. With that in mind, we have started revamping our NAS testbed.

Our intermediate testbed configuration is provided in the table below:

NAS Benchmarking Testbed Setup [Q2 2012]
Processor: Intel Core i7-3770K - 4C/8T - 3.50 GHz, 8 MB Cache
Motherboard: Asus P8H77-M Pro
OS Hard Drive: Seagate Barracuda XT 2 TB
Secondary Drive: Kingston SSDNow 128 GB (offline in host OS)
Memory: G.SKILL ECO Series 4 GB (2 x 2 GB) DDR3 SDRAM 1333 (PC3 10666) F3-10666CL7D-4GBECO CAS 7-7-7-21
PCI-E Slot: Quad-Port GbE Intel ESA-I340
Case: Antec VERIS Fusion Remote Max
Power Supply: Antec TruePower New TP-550 550W
Host Operating System: Windows Server 2008 R2 Enterprise

Two virtual machines were set up using Hyper-V with the following configurations:

Guest OS 1: Windows 7 Ultimate x64
Processor: Single physical core of the Intel Core i7-3770K
OS Hard Drive: VHD file on the Seagate Barracuda XT 2 TB
Secondary Hard Drive: Kingston SSDNow 128 GB
Memory: 1 GB

Guest OS 2: CentOS 6.2 x86_64
Processor: Single physical core of the Intel Core i7-3770K
OS Hard Drive: VHD file on the Seagate Barracuda XT 2 TB
Secondary Hard Drive: Kingston SSDNow 128 GB
Memory: 1 GB

The use of VMs as NAS clients allows us to test both Samba and NFS performance from a single host machine. Intel NASPT runs on Windows (and has to be restricted to 2 GB of RAM in order to avoid caching effects), while IOMeter / Dynamo can be used to measure performance under Linux.

The Kingston SSDNow 128 GB SSD from the earlier testbed has been reused here. The disk is taken offline in the host OS and made available to the Hyper-V VMs as a physical drive. Note that we don't do any teaming on the Intel ESA-I340 in this testbed. Each VM gets its own physical Ethernet port on the ESA-I340, while the host OS uses the motherboard's built-in GbE port. All the Ethernet ports are connected to a ZyXEL GS2200-24 switch.

To verify that virtualization itself did not skew the results by a wide margin, we first ran the Intel NASPT benchmarks on the old testbed and compared the numbers with those obtained on the Windows 7 VM. The results differed by no more than ±3-4 MBps, with the deviation being less than 1 MBps for tests such as Content Creation. We did not observe any systematic effect, such as the physical machine consistently outperforming the VM.
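A sanity check of this kind boils down to comparing per-test deltas and looking for a consistent sign. The short sketch below illustrates the idea; the throughput numbers are placeholders, not our measured results.

```python
# Hypothetical NASPT throughput numbers in MBps; placeholders, not our measured results.
physical = {"HD Video Play": 78.2, "Content Creation": 11.5, "Dir Copy To NAS": 14.9}
vm       = {"HD Video Play": 75.1, "Content Creation": 11.1, "Dir Copy To NAS": 17.6}

deltas = {test: vm[test] - physical[test] for test in physical}
for test, delta in deltas.items():
    print(f"{test}: {delta:+.1f} MBps")

# A systematic bias would give every delta the same sign; mixed signs mean
# the VM is not consistently slower (or faster) than the physical machine.
systematic = all(d < 0 for d in deltas.values()) or all(d > 0 for d in deltas.values())
print("systematic bias:", systematic)
```

With mixed positive and negative deltas, as here, there is no evidence of a consistent virtualization penalty.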

To measure performance under Linux, Dynamo was run on the Linux VM and connected to an IOMeter instance running on the host OS. Four tests were run to determine the characteristics of the NAS as a storage system for the client. To completely rule out caching effects, we used a special build of IOMeter that opens NFS shares in O_DIRECT access mode.
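Our IOMeter build aside, the underlying mechanism is just the O_DIRECT open flag, which tells the kernel to bypass the page cache. The sketch below (plain Python, not IOMeter code) shows the two things that make O_DIRECT awkward in practice: the flag is Linux-only, and the I/O buffer must be page-aligned, which an anonymous mmap guarantees. The fallback path exists because some filesystems (e.g. tmpfs) reject the flag.

```python
import mmap
import os
import tempfile

BLOCK = 4096  # O_DIRECT transfers must be aligned to the device block size

def read_uncached(path, size=BLOCK):
    """Read `size` bytes, bypassing the page cache where O_DIRECT is supported."""
    flags = os.O_RDONLY
    if hasattr(os, "O_DIRECT"):       # Linux-only; absent on Windows / macOS
        flags |= os.O_DIRECT
    try:
        fd = os.open(path, flags)
    except OSError:
        # Some filesystems (e.g. tmpfs) reject O_DIRECT at open time.
        fd = os.open(path, os.O_RDONLY)
    try:
        buf = mmap.mmap(-1, size)     # anonymous mmap => page-aligned buffer
        n = os.readv(fd, [buf])       # os.read() would use an unaligned buffer
        return bytes(buf[:n])
    finally:
        os.close(fd)

# Demo: write one block, then read it back around the cache.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * BLOCK)
data = read_uncached(f.name)
os.unlink(f.name)
print(len(data))  # 4096
```

Without O_DIRECT (or an equivalent such as NASPT's 2 GB RAM restriction), a second run of any read test would largely measure the client's RAM rather than the NAS.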

The robocopy / rsync benchmarks (transferring a 10.7 GB folder structure, a backup of the HQV 2.0 Benchmark Blu-ray, between the NAS and the internal SSD) were also run in both VMs.
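The metric these copy tests report is simply bytes moved over wall-clock time. A minimal sketch of that computation follows; it uses a local copy and a throwaway 1 MB tree for illustration, whereas the real test moves the 10.7 GB folder over CIFS or NFS.

```python
import os
import shutil
import tempfile
import time

def copy_throughput_mbps(src, dst):
    """Copy a directory tree and return average throughput in MBps."""
    start = time.perf_counter()
    shutil.copytree(src, dst)
    elapsed = time.perf_counter() - start
    # Sum the sizes of everything that landed at the destination.
    total = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _dirs, names in os.walk(dst)
        for name in names
    )
    return total / (1024 * 1024) / elapsed

# Demo with a small stand-in tree; the real benchmark copies ~10.7 GB.
base = tempfile.mkdtemp()
src = os.path.join(base, "src")
os.makedirs(os.path.join(src, "STREAM"))
with open(os.path.join(src, "STREAM", "00000.m2ts"), "wb") as f:
    f.write(os.urandom(1024 * 1024))  # 1 MB stand-in for a Blu-ray stream file
print(f"{copy_throughput_mbps(src, os.path.join(base, 'dst')):.1f} MBps")
shutil.rmtree(base)
```

Averaging over one large transfer like this smooths out per-file overheads, which is why the robocopy / rsync numbers can differ noticeably from small-block IOMeter results.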

Comments

  • DukeRobillard22 - Tuesday, May 29, 2012 - link

    The question I always have about a NAS, and which is hard to find out, is "what filesystem does it use?" Like, when its power supply dies, can I pull one of the mirrored disks out, plug it into a SATA port on my Linux box, and get at the data? While it's true that the disks themselves are probably the most likely thing to fail, they're not the only thing.

    Currently, I use an old PC running Fedora with software RAID, just so I can do that when some piece of hardware lets out the magic smoke.
  • KLC - Tuesday, May 29, 2012 - link

    Every time I read a NAS review I'm struck by how expensive they are. More than 2 years ago I bought an Acer Windows Home Server box. It has 4 hot-swappable drive bays, an Atom processor with 1 GB of memory and Windows Home Server V1. With one 1 TB drive it cost me $350 on sale, regular price was $399. Two years later and I see systems with less capability than that one yet they are much more expensive. Why do NAS systems defy Moore's law of more computing capability for less money over time?
  • EddieBoy - Wednesday, May 30, 2012 - link

    I keep thinking that I need something to replace my aging Windows Home Server setup. This looks like it might do the trick.

    But now I am concerned about the Seagate acquisition and whether that might affect their quality and customer support.

    Any thoughts on how the acquisition might affect this company?

  • Zak - Sunday, June 03, 2012 - link

    Do these overheat and fry their electronics like most LaCie enclosures?
  • klassobanieras - Tuesday, June 12, 2012 - link

    As the owner of a 4-disk ReadyNAS NV I always felt quite smug about my data until the box itself went bad. This taught me to ask certain awkward questions:
    - What if the box fails? Do I need to buy another identical box to get my data off my disks or will (e.g.) a Linux machine understand them ok?
    - Is it susceptible to the RAID write-hole? Do I need a UPS?
    - What kind of data-integrity does it provide, relative to the state-of-the-art (ZFS, btrfs et al)?

    Respectfully, I'd suggest that if you're going to seriously test NASes you need to (a) repeatedly yank the power-cord in the middle of metadata-heavy writes, (b) try getting your data off the disks without the use of the NAS itself, (c) see how it deals with a flaky drive and (d) test for data integrity, not just filesystem integrity.

    Finally, NASes should be judged in the context of what you can get from an el-cheapo PC running FreeNAS with ZFS, which IMHO puts most consumer NAS boxes to shame.
