Feedback from readers indicates that the single-client Intel NASPT benchmarking results presented in our NAS reviews tell only part of the story. How does the NAS behave under real SMB conditions, i.e., with multiple clients accessing it simultaneously? What is the average response time? We believe these are valid concerns for units in 4-bay and higher configurations. With that in mind, we have started revamping our NAS testbed.

Our intermediate testbed configuration is provided in the table below:

NAS Benchmarking Testbed Setup [ Q2 2012 ]
Processor: Intel Core i7-3770K (4C/8T, 3.50 GHz, 8 MB cache)
Motherboard: Asus P8H77-M Pro
OS Hard Drive: Seagate Barracuda XT 2 TB
Secondary Drive: Kingston SSDNow 128 GB (offline in host OS)
Memory: G.SKILL ECO Series 4 GB (2 x 2 GB) DDR3-1333 (PC3-10666) F3-10666CL7D-4GBECO, CAS 7-7-7-21
PCIe Slot: Intel ESA-I340 quad-port GbE adapter
Case: Antec VERIS Fusion Remote Max
Power Supply: Antec TruePower New TP-550 550W
Host Operating System: Windows Server 2008 R2 Enterprise

Two virtual machines were set up using Hyper-V with the following configurations:

Windows 7 Ultimate x64 (Guest OS)
Processor: single physical core of the Intel Core i7-3770K
OS Hard Drive: VHD file on the Seagate Barracuda XT 2 TB
Secondary Hard Drive: Kingston SSDNow 128 GB
Memory: 1 GB

CentOS 6.2 x86_64 (Guest OS)
Processor: single physical core of the Intel Core i7-3770K
OS Hard Drive: VHD file on the Seagate Barracuda XT 2 TB
Secondary Hard Drive: Kingston SSDNow 128 GB
Memory: 1 GB

Using VMs as NAS clients allows us to test both Samba and NFS performance from a single host machine. Intel NASPT runs on Windows (and has to be restricted to 2 GB of RAM in order to avoid caching effects), while IOMeter / Dynamo can be used to measure performance under Linux.

The Kingston SSDNow 128 GB SSD from the earlier testbed has been reused here. The disk is taken offline in the host OS and made available to the Hyper-V VMs as a physical drive. Note that we don't do any teaming on the Intel ESA-I340 in this testbed: each VM gets its own physical Ethernet port on the ESA-I340, and the host OS uses the motherboard's built-in GbE port. All the Ethernet ports are connected to a ZyXEL GS2200-24 switch.

To verify that virtualization was not skewing the results by a wide margin, we first ran the Intel NASPT benchmarks on the old testbed and compared the numbers with those obtained on the Windows 7 VM. The results differed by at most ±3-4 MBps, with the deviation under 1 MBps for tests such as Content Creation. We observed no systematic effects, such as the physical machine consistently outperforming the VM.

For performance measurements under Linux, Dynamo was run on the Linux VM and connected to an IOMeter instance running on the host OS. Four tests were run to determine the characteristics of the NAS as a storage system for the client. To completely rule out caching effects, we used a special build of IOMeter with O_DIRECT access mode for NFS shares.
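For readers curious about what O_DIRECT buys us, the sketch below illustrates the idea in Python. This is not our IOMeter build, just a minimal illustration: the path, block size, and transfer size are placeholders, and the O_DIRECT flag is Linux-only.

```python
import mmap
import os
import time

def read_throughput(path, use_direct=True, block_size=1 << 20, max_bytes=1 << 26):
    """Sequentially read up to max_bytes from path, returning (bytes_read, MBps).

    With use_direct=True the file is opened with O_DIRECT, so reads bypass
    the client-side page cache -- the same idea as the special IOMeter build.
    O_DIRECT requires page-aligned buffers; an anonymous mmap provides one.
    """
    flags = os.O_RDONLY | (os.O_DIRECT if use_direct else 0)
    buf = mmap.mmap(-1, block_size)          # anonymous mmap is page-aligned
    fd = os.open(path, flags)
    total = 0
    start = time.perf_counter()
    try:
        while total < max_bytes:
            n = os.readv(fd, [buf])          # read into the aligned buffer
            if n == 0:                       # end of file
                break
            total += n
    finally:
        os.close(fd)
        buf.close()
    elapsed = time.perf_counter() - start
    return total, total / elapsed / 1e6
```

On an NFS mount, repeating a cached read is typically far faster than the first pass; with O_DIRECT, both passes reflect actual network and disk throughput, which is what we want to measure.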

The robocopy / rsync benchmarks (transferring a 10.7 GB folder structure, a backup of the HQV 2.0 Benchmark Blu-ray, between the NAS and the internal SSD in both directions) were also run in both VMs.
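As a rough illustration of what these runs measure, a timed folder-tree copy can be sketched as follows. This is illustrative only: shutil stands in for robocopy / rsync, and the source and destination paths are placeholders.

```python
import pathlib
import shutil
import time

def timed_tree_copy(src, dst):
    """Copy a directory tree from src to dst, returning (total_MB, MBps).

    Mimics the robocopy / rsync benchmark: a large folder structure is
    transferred in one pass, and average throughput is computed from the
    total payload size and the wall-clock time of the copy.
    """
    total = sum(p.stat().st_size
                for p in pathlib.Path(src).rglob("*") if p.is_file())
    start = time.perf_counter()
    shutil.copytree(src, dst)                # dst must not already exist
    elapsed = time.perf_counter() - start
    return total / 1e6, total / 1e6 / elapsed
```

In the actual benchmark, src or dst would be a mapped network share on the NAS and the other end the internal SSD, so the figure reflects sustained transfer speed over the network rather than local disk speed.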

15 Comments


  • zzing123 - Monday, May 28, 2012 - link

    Apparently a lot of these SOHO NASes begin to have problems when they fill up, due both to using the inner tracks of the HDD platters and to the CPU overhead from software RAID. Rather than benchmarking absolute performance when new, can you begin to see what performance is like with an 85% full drive after a tortuous series of production IO? The reason is that a lot of people are increasingly using these NASes for iSCSI, and this doesn't help matters.

    See here for more info: http://www.servethehome.com/cost-nas-boxes-perform...

    Furthermore, while technologies such as bcache (http://www.phoronix.com/scan.php?page=news_item&am... and BTRFS are nearing kernel inclusion, and an OpenIndiana-based embedded OS (like EON) could even provide ZFS, I see very little sign that the NAS manufacturers are even considering these advanced filesystems and SSD tiering, except for Drobo, who are wildly overpriced and underperforming.
    Reply
  • ganeshts - Monday, May 28, 2012 - link

    Thanks for the note. We will keep this in mind for future NAS reviews.

    In fact, I tried to do something similar to expose QNAP's kernel problem [ http://forum.qnap.com/viewtopic.php?f=189&t=51... ], but left that effort hanging once QNAP owned up to the problem. Maybe it is time to work more on that aspect :)
    Reply
  • guste - Monday, May 28, 2012 - link

    Ganesh, thanks for the great review. I was wondering: next time, could you pick colours for the graphs that aren't so similar? Reply
  • JarredWalton - Monday, May 28, 2012 - link

    How's that? Reply
  • guste - Monday, May 28, 2012 - link

    Cheers, Jarred. Thanks kindly. Reply
  • ggathagan - Monday, May 28, 2012 - link

    It would be interesting to see if your list of desired features are present on the LaCie "Professional" products that use NAS OS 2.

    It may be that the focus for their non-"professional" devices is ease of use, as opposed to full features.

    I think the review blurb LaCie uses on their web page for the 2big summarizes their target:
    “...5/5 – this really is a well made, cool looking NAS that can do pretty much everything you need it to do. My only real problem with it is that I have to give it back!”

    Like Apple, LaCie has always focused as much effort on the aesthetics of their products as they have the functionality. Also like Apple, I would expect that mindset to extend to how much of the inner workings of the OS are exposed to the user.

    Math nitpick from the unpacking page:
    "On the rear side, we have four square slots behind which the fan's exhaust pipe sits"

    I see six.
    Reply
  • GrizzledYoungMan - Monday, May 28, 2012 - link

    Some of my clients are those sorts of people (ie, Lacie customers). And man, it's crazy.

    They've all suffered a huge identity crisis in the last few years because Apple so clearly doesn't give a shit about its professional users anymore, abandoning FCP and eventually the desktop. Reflexively they want to keep buying Macs because hey, that's what 'creative' people do (never mind that the best pros I've met don't give a shit what type of computer they use). But logically they are running out of reasons to.

    I predict mass suicides.
    Reply
  • GrizzledYoungMan - Monday, May 28, 2012 - link

    I don't know if it's too pricey to make sense for your audience, but you all may want to check out Open-E's DSS V6 NAS software platform.

    It uses a heavily modified version of FreeBSD (I believe) and runs on a really wide variety of hardware, and provides nearly all of the failover, security and management features of those atomic powered high end enterprise NAS appliances for a fraction of the price (ie, thousands instead of tens of thousands).

    I've installed a bunch of these things for clients ranging from SOHO (with heavy storage needs, like video) to SMB all the way up to legit mid-tier enterprise work. They take a bit more knowledge to install than, say, Drobo, but it's the kind of stuff that anyone who works with gray-box appliances routinely will be well versed in.

    Coming from things like Windows Storage Server, Drobo, etc., the performance is pretty amazing; you really feel like you're getting the most out of the hardware. With basic hardware (a modern low-power Xeon mobo, an LSI SAS RAID controller populated with 7200 rpm enterprise SATA drives), I routinely see wire speed on transfers from NAS to client machines over gig-e. In the small handful of installations I've done with 10 GbE present, shit gets crazy.

    Most importantly, I've never seen a client lose data due to trouble with the software, and support from the company is incredible, to the point where they will write unique small patches for specific clients, regardless of size. Between the two, it feels rock solid in a way that many NAS and SAN systems simply don't.
    Reply
  • secretmanofagent - Monday, May 28, 2012 - link

    I can't help but see the turret. If they made the blue light red and slapped an Aperture Science logo on the side, they'd get the geeks to swarm all over it. Reply
  • sleepeeg3 - Tuesday, May 29, 2012 - link

    Probably the last product before they are swallowed by Seagate. Reply
