Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate the NFS and CIFS performance of the NAS when accessed from a Linux client, with IOzone as the benchmark. To standardize testing across multiple NAS units, the CIFS and NFS shares were mounted at startup with the following /etc/fstab entries:

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_CIFS_MOUNT cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_NFS_MOUNT nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
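
The entries can also be activated without a reboot; a minimal sketch using the placeholder paths above (run as root):

mkdir -p /PATH_TO_LOCAL_CIFS_MOUNT /PATH_TO_LOCAL_NFS_MOUNT
mount -a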

The following IOzone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv

IOzone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side, as is evident in some of the graphs in the gallery below.
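
Where client-side caching needs to be ruled out entirely, IOzone can also be run with direct I/O. A hedged variant of the command above (the -I flag requests O_DIRECT; support varies with the filesystem and mount options, and the output file names here are only illustrative):

iozone -aczRI -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_DIRECT_EXCEL_BIN.xls > <NAS_NAME>_CIFS_DIRECT_CSV.csv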

Readers interested in the hard numbers can refer to the raw CSV output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_NFS_MOUNT -f /PATH_TO_LOCAL_NFS_MOUNT/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; referring to the actual CSV outputs linked above makes the affected entries obvious.

QNAP TS-853 Pro - Linux Client Performance (MBps)

IOzone Test        CIFS    NFS
Init Write           78     79
Re-Write             83     81
Read                 45    115
Re-Read              48    114
Random Read          29     64
Random Write         81     78
Backward Read        28     51
Record Re-Write   1648*  1653*
Stride Read          44    108
File Write           83     80
File Re-Write        82     81
File Read            32     93
File Re-Read         33     94

*: Benchmark number skewed due to caching effects
Comments

  • lorribot - Monday, December 29, 2014

    Two things strike me. $210 for 8GB of RAM - how can anyone justify that? Even Apple aren't that expensive.
    RAID 5, really? With 4TB SATA disks, if you are going to bother with redundancy then RAID 6 please. From painful experience, RAID 5 no longer cuts the mustard for protection, given SATA's poor data verification and the huge rebuild time on a 4TB-based array. I really wouldn't bother; if your data is that important then you need to be backing up the changes or using a proper storage system.
    Pro NAS boxes like these are overpriced for what they offer, which in reality is not a lot. As for running VMs off of it, I personally wouldn't bother.
    Halve the price and offer some form of asynchronous replication and you may just be on to something.
    As it is, one of HP's micro servers with a bunch of disks in it would offer better value.
  • mhaubr2 - Monday, December 29, 2014

    Seriously not trolling here - trying to better understand. Coming from the original Windows Home Server and its drive-pooling concept has me spoiled. I'm now using WHS2011 and Drive Bender, and it seems like the way to go. With pooled drives I can expand capacity easily using mix-and-match drives of different brands, sizes and vintages. This seems far less risky than using 3 or more identical drives in a RAID-5 or 6 array. I don't have to worry about getting a bad batch of drives or having a second (or third) drive fail on rebuild. This is how I see it, but I know there are plenty of folks out there who are proponents of RAID-x. I'm looking to build a new media server, so why should I consider a RAID setup over drive pooling?
  • PEJUman - Monday, December 29, 2014

    I actually have the same thought process as you, but my mindset was set up around a single family's file server demands, where a single drive with duplication would be sufficient in terms of performance/reliability. RAID arrays allow much higher theoretical performance than Drive Bender's, not to mention better-than-N/2 efficiency for single-disk failure tolerance.

    I personally like Drive Bender's solution for my needs, but would not use it for business-oriented needs: 100% uptime, high performance and a multi-disk failure tolerant setup.
  • DanNeely - Tuesday, December 30, 2014

    Between long rebuild times and the risk of a URE bringing down the array, RAID10 (or its equivalents) has largely replaced RAID5/6 in larger arrays and SANs.
  • DanNeely - Tuesday, December 30, 2014

    FWIW I'm running WHS2011 but with DrivePool instead. Quite happy with it so far, but it's only 16 months until end of life; and with the WHS series seemingly dead as well, I've been paying closer attention to the rest of the NAS world hoping to find a suitable replacement. So far without much luck.

    ZFS seems like it's the closest option; but unless I've missed something (or newer features have been added since the blogs etc. that I've read), options for expanding are limited to swapping out all the drives one at a time for larger ones, resilvering each time and only getting more usable space after all the drives have been replaced; or adding a minimum of two drives (in a RAID1 analog) as a separate sub-array (sketched below).

    Aside from Drobo, which has recovery issues due to its proprietary FS (no option to pull the drives and stick them into a normal PC to get data off if it goes down) and is reported to slow down severely as it fills to near capacity, I'm not aware of anything else on the market that would allow creating and expanding a mirrored storage pool out of mismatched disks the way WHSv1 did, or WHS2011 does with third-party disk management addons.
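
    A minimal sketch of those two ZFS expansion paths, assuming a pool named "tank" (pool and device names are hypothetical):

    # Path 1: swap each drive for a larger one, resilvering in between;
    # the extra space only shows up after the last drive has been replaced.
    zpool set autoexpand=on tank
    zpool replace tank old_disk1 new_disk1
    # wait for the resilver to finish (check with: zpool status tank),
    # then repeat for each remaining drive

    # Path 2: grow the pool by adding a new two-drive mirror vdev.
    zpool add tank mirror new_diskA new_diskB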
  • Brett Howse - Tuesday, December 30, 2014

    If you are happy with WHS 2011 (that's what I run too), you may want to check out Storage Spaces in Windows 8/8.1 and Server 2012/2012 R2.
    http://technet.microsoft.com/en-us/library/hh83173...

    It's like WHS v1's Drive Extender but done right. You can do mirror or parity to one or more drives, as well as mix and match the drives, including SSDs for different speed tiers. Might be worth your time to check out.

    Because this is all available on Windows 8.1, you can do it for a low cost compared to buying Windows Server. What you'd lose, though (and this is why I haven't moved off WHS yet), is the amazing full-device backup that WHS offers. This is only available in Windows Server Essentials as far as I know, which is a big licensing fee compared to what WHS used to retail for.
  • Gigaplex - Wednesday, December 31, 2014

    It's not done right. If you have a parity pool and want to add one more drive later, well, you can't. If you started with 3 drives, the only way to expand is to add 3 drives at a time.
  • jabber - Tuesday, December 30, 2014

    Why do folks keep bleating on about RAID5? It's been classed as obsolete for nearly 5 years.

    Move on folks.
  • fackamato - Friday, January 2, 2015

    Because it's still applicable for small drives, e.g. SSDs or sub-2TB models.
  • chocosmith - Tuesday, December 30, 2014

    I have the TS-453 Pro. As a NAS it's great, but I also got it for the HDMI output so I could kill two birds with one stone and use it as a media box.
    Unfortunately there is a huge amount of video tearing, and the power supply fan is too loud for it to hang near the TV. Overall, if I were doing it again I'd simply get a Celeron chip and a small case and build it myself; I'd also probably use Windows.

    Also, as others noted with the RAID setup: after failing a RAID 1 during a rebuild, I now simply use no RAID. One disk can flood a 1Gb LAN, so speed isn't an issue.
    Instead I just have the two disks, one shared and the other not. At 2am every morning I copy the changed files to the other (sketched below); this also gives me some "oops, I deleted something" breathing space. I don't need critical RAID.
    My primary is an SSD; it's also used for torrents and other chatty stuff.
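
    A minimal sketch of that kind of nightly one-way copy, assuming rsync and a system crontab are available (paths hypothetical):

    # /etc/crontab entry: at 02:00 copy new/changed files from the shared
    # disk to the backup disk; no --delete, so files removed by accident
    # remain on the backup as a safety net.
    0 2 * * * root rsync -a /mnt/shared/ /mnt/backup/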
