Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. We chose IOZone as the benchmark for these tests. To standardize the testing across multiple NAS units, we mount the CIFS and NFS shares at startup with the following /etc/fstab entries.

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
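The mount points referenced above must already exist, since the fstab entries do not create them. A minimal preparation sketch (using the same placeholder paths as the entries above) looks like this:

# Create the local mount point and mount everything listed in /etc/fstab
mkdir -p /PATH_TO_LOCAL_MOUNT_FOLDER
mount -a
# Verify that both shares came up with the expected options
mount | grep -E 'cifs|nfs'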

The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
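The commands below were not part of our test runs, but for readers reproducing the numbers, one common way to limit client-side caching is to flush the page cache between runs or to request direct I/O (whether O_DIRECT is honored depends on the filesystem and mount options):

# Flush the Linux page cache between runs (as root)
sync
echo 3 > /proc/sys/vm/drop_caches
# Or ask IOZone for direct I/O where the filesystem supports it
iozone -aczRI -g 2097152 -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile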

Readers interested in the hard numbers can refer to the CSV program output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects. Referring to the actual CSV outputs linked above makes it obvious which entries are affected; a sketch following the table shows how such averages can be derived from the CSV output.

Seagate NAS Pro 4-bay - Linux Client Performance (MBps)

IOZone Test        CIFS    NFS
Init Write           66     64
Re-Write             67     70
Read                 31    117
Re-Read              31    121
Random Read          19     55
Random Write         62     70
Backward Read        18     43
Record Re-Write     771*  1173*
Stride Read          28    101
File Write           64     74
File Re-Write        67     76
File Read            22     89
File Re-Read         22     90

*: Benchmark number skewed due to caching effects
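As a rough illustration of how such averages can be derived (this is a sketch, not the exact processing used for the table above; it assumes the usual IOZone report layout of a quoted report title, a record-size header row, and one row per file size), the redirected output can be reduced with a small awk one-liner:

# Average the "Writer report" section of the IOZone output and convert kB/s to MBps
awk '/Writer report/ {in_rep=1; next}
     /report/        {in_rep=0}
     in_rep && $2 ~ /^[0-9]/ {for (i = 2; i <= NF; i++) {sum += $i; n++}}
     END {if (n) printf "Writer average: %.0f MBps\n", sum / (n * 1024)}' <NAS_NAME>_CIFS_CSV.csv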

Comments

  • StickyIcky - Wednesday, August 27, 2014 - link

    I still don't know if I'm sold on hardware RAID for these types of storage solutions. What if this thing dies in 5 years? Am I going hunting on eBay for an old unit, hoping it can get my data back? It's too much data not to think that far ahead.
  • ganeshts - Wednesday, August 27, 2014 - link

    This is not hardware RAID. It is mdadm-based, i.e., software RAID on Linux. If the unit dies, you can always connect the drives to a PC to recover the data (similar to what we did with a Synology NAS last week).
  • StickyIcky - Wednesday, August 27, 2014 - link

    That is certainly a different story. Thanks so much Ganesh! Great work as always.
  • Samus - Wednesday, August 27, 2014 - link

    RAID is to protect against a disk failure (usually just one disk). If your entire unit or multiple disks fail, you'd need to recover from a backup no matter what the solution (unless you want to spend $700+ per disk on RAID data recovery).

    Always have a backup. Just pick up a 5TB external for <$200 (or two for $400 and put them in a dual-bay JBOD for 10TB) and plug it into the NAS USB port. All modern NAS devices have a USB port for backup.
  • Death666Angel - Wednesday, August 27, 2014 - link

    Hardware-RAID cards or mainboards with hardware RAID (do they still exist?) can fail, and in contrast to software RAID they make it harder to retrieve the data. But that has been the topic of hundreds of pages of heated debate on home server enthusiast websites, so this isn't the place to have that discussion. :D
  • Gigaplex - Wednesday, August 27, 2014 - link

    If it's just the controller that's busted, you do not normally need to recover from backup.
  • creed3020 - Wednesday, August 27, 2014 - link

    Overall I am left very impressed with this NAS. It just makes me wonder where Synology is, because they are really missing products with this Rangeley SoC. I really do like DSM, but I have to wonder what the future holds when I need to upgrade from my DS 212j. If anything, I hope that this increased competition benefits us consumers.
  • Oyster - Wednesday, August 27, 2014 - link

    Ganesh, somewhat off-topic, but can you share what your scheduled tasks for SMART tests look like? I run a rapid test every night and a complete test on a weekly basis on my QNAP. I wonder if the SMART polls stress the disks? Maybe you can throw some light on this.
  • Laststop311 - Thursday, August 28, 2014 - link

    I have used a lot of hard drives in my day. I have personally sworn off Seagate as a brand. I had 5, yes count 'em, 5 Seagate hard drives fail within 18 months of each other. I RMA'd them and the new ones they sent me continued to fail one after the other. I'm currently using 4x 1TB WD Red drives in my NAS but I am running out of space. I want to make one big upgrade to 4x 6TB drives. Are the HGST He6 helium-filled drives all they are cracked up to be? Do they really run noticeably quieter and cooler? Are they worth spending 420 dollars each on?
  • AntonyIndia - Friday, August 29, 2014 - link

    Having only one NAS system, with any number of disks or RAID, is not going to rescue you in case of trouble, as the weakest link is the shared board / power supply / fan in that box. Better to buy two identical single-disk NAS boxes, back A up onto B once a day (or week), and keep B in another room, offline and even without power most of the time.
