Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. IOZone was chosen as the benchmark for this purpose. In order to standardize the testing across multiple NAS units, the CIFS and NFS shares are mounted at startup with the following /etc/fstab entries.

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
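
With the entries in place, the shares can be brought up and checked without a reboot. A minimal sanity check along these lines (not part of the original procedure; the mount folder is the same placeholder used above) would be:

sudo mount -a                          # mount everything listed in /etc/fstab
mount | grep -E 'type (cifs|nfs)'      # confirm both shares are attached with the expected options
df -hT /PATH_TO_LOCAL_MOUNT_FOLDER     # verify the reported capacity matches the NAS share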

The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
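
For reference, going by the IOZone documentation, the flags used in this command break down roughly as follows:

# -a          full automatic mode, covering a range of file and record sizes
# -c          include close() in the timing measurements
# -z          used with -a to also test the small record sizes for large files
# -R          generate an Excel-style report (written to stdout, redirected to the CSV file here)
# -g 2097152  cap the maximum file size at 2097152 kB (2 GB)
# -U <dir>    unmount and remount this mount point between tests to clear the client-side cache
# -f <file>   temporary test file on the share under test
# -b <xls>    write the binary Excel-compatible output to this file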

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
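
Where a cache-free data point is wanted, the client-side page cache can be dropped between runs, or IOZone can be asked to bypass it altogether. A rough sketch, not part of the test procedure used here (the mount path is the placeholder from above, and -I only helps where the filesystem honours O_DIRECT):

sync && echo 3 | sudo tee /proc/sys/vm/drop_caches    # flush dirty pages and drop the client's page cache before a run
iozone -aczR -I -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile    # -I requests O_DIRECT so file I/O bypasses the client cache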

Readers interested in the hard numbers can refer to the CSV program output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; a look at the actual CSV outputs linked above makes the affected entries obvious.

QNAP TS-853 Pro - Linux Client Performance (MBps)

IOZone Test         CIFS     NFS
Init Write            78      79
Re-Write              83      81
Read                  45     115
Re-Read               48     114
Random Read           29      64
Random Write          81      78
Backward Read         28      51
Record Re-Write    1648*   1653*
Stride Read           44     108
File Write            83      80
File Re-Write         82      81
File Read             32      93
File Re-Read          33      94

*: Benchmark number skewed due to caching effect
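
The per-test averages in the table can be reproduced from the redirected report output. A rough sketch, assuming the usual IOZone -R layout (a quoted "... report" heading per test, followed by one row of kB/s results per file size) and the CIFS output file named above:

awk '/report/ { test = $0; next }
     /^"/     { for (i = 2; i <= NF; i++) { sum[test] += $i; n[test]++ } }
     END      { for (t in sum) printf "%-28s %7.0f MBps\n", t, sum[t] / n[t] / 1024 }' <NAS_NAME>_CIFS_CSV.csv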

Comments

  • ap90033 - Wednesday, December 31, 2014 - link

    RAID is not a REPLACEMENT for BACKUP and BACKUP is not a REPLACEMENT for RAID.... RAID 5 can be perfectly fine... Especially if you have it backed up. ;)
  • shodanshok - Wednesday, December 31, 2014 - link

    I think you should consider raid10: recovery is much faster (the system "only" needs to copy the content of one disk to another) and the URE-imposed threat is way lower.

    Moreover, remember that large RAIDZ arrays have the IOPS of a single disk. While you can use a large ZIL device to transform random writes into sequential ones, the moment you hit the platters the low IOPS performance can bite you.

    For reference: https://blogs.oracle.com/roch/entry/when_to_and_no...
  • shodanshok - Wednesday, December 31, 2014 - link

    I agree.

    The only thing to remember when using a large RAIDZ system is that, by design, RAIDZ arrays have the IOPS of a single disk, no matter how many disks you throw at it (throughput will increase linearly, though). For increased IOPS capability, you should construct your ZPOOL from multiple, striped RAIDZ arrays (similar to how RAID50/RAID60 work).

    For more information: https://blogs.oracle.com/roch/entry/when_to_and_no...
  • ap90033 - Friday, January 2, 2015 - link

    That is why RAID is not Backup and Backup is not RAID. ;)
  • cjs150 - Wednesday, January 7, 2015 - link

    Totally agree. As a home user, RAID 5 on a 4-bay NAS unit is fine, but I have had it fall over twice in 4 yrs, once when a disk failed and a second time when a disk worked loose (probably my fault). The failure was picked up, the disk replaced and the RAID rebuilt. Once you have 5+ discs, RAID 5 is too risky for me.
  • jwcalla - Monday, December 29, 2014 - link

    Just doing some research and it's impossible to find out if this has ECC RAM or not, which is usually a good indication that it doesn't. (Which is kind of surprising for the price.)

    I don't know why they even bother making storage systems w/o ECC RAM. It's like saying, "Hey, let's set up this empty fire extinguisher here in the kitchen... you know... just in case."
  • Brett Howse - Monday, December 29, 2014 - link

    The J1900 doesn't support ECC:
    http://ark.intel.com/products/78867/Intel-Celeron-...
  • icrf - Monday, December 29, 2014 - link

    I thought the whole "ECC required for a reliable file system" was really only a thing for ZFS, and even then, only barely, with dangers generally over-stated.
  • shodanshok - Wednesday, December 31, 2014 - link

    It's not over-stated: any filesystem that proactively scrubs the disk/array subsystem (BTRFS and ZFS, at the moment) _needs_ ECC memory.

    While you can ignore this fact on a client system (where the value of the corrupted data is probably low), on a NAS or multi-user storage system ECC is almost mandatory.

    This is the very same reason why hardware RAID cards have ECC memory: when they scrub the disks, any memory-related corruption can wreak havoc on array (and data) integrity.

    Regards.
  • creed3020 - Monday, December 29, 2014 - link

    I hope that Synology is working on something similar to the QvM solution here. The day I started my Synology NAS was the day I shut down my Windows Server. I would, however, still love to have an always-on Windows machine for the use cases that my NAS cannot handle or that would be onerous to set up and get running.
