Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following parameters were used to mount the NFS and Samba shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER
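
As a purely illustrative example (the IP address, share names, mount points and username below are hypothetical, not the actual test configuration), the filled-in commands would look like this; depending on the share's security settings, mount.cifs may also need credentials passed via -o:

mount -t nfs 192.168.1.100:/volume1/media /mnt/nas_nfs

mount -t cifs //192.168.1.100/media /mnt/nas_cifs -o username=admin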

Note that these are slightly different from what we used in our previous NAS reviews. We have also shifted from IOMeter to IOZone for evaluating performance under Linux. The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
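
For reference, the switches in this command map to the following iozone options (per the standard iozone documentation):

-a            automatic mode: sweep a range of file sizes and record sizes
-c            include close() in the timing, so writes are flushed out to the NAS
-z            with -a, test all record sizes, including the small ones
-R            generate an Excel-compatible report
-g 2097152    cap the maximum file size at 2 GB (the value is in KB)
-U <mount>    unmount and remount the share between tests to defeat client-side caching
-f <file>     temporary test file on the mounted share
-b <file>     write binary Excel output to the named file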

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
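
The -U remount in the command above is iozone's built-in way of flushing the client-side cache between tests; an alternative (not necessarily used in this review) is to drop the Linux page cache manually between runs:

sync
echo 3 > /proc/sys/vm/drop_caches    # run as root, e.g. via: sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'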

Readers interested in the hard numbers can refer to the CSV program output here. These numbers will gain relevance as we benchmark more NAS units with similar configurations.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

Some scenarios exhibit client caching effects, and these are evident in the gallery below.

The IOZone CSV output can be found here for those interested in the exact numbers.

Comments

  • Gigaplex - Saturday, December 28, 2013 - link

    No, you recover from backup. RAID is there to increase availability in the enterprise; it is not a substitute for a backup.
  • P_Dub_S - Thursday, December 26, 2013 - link

    Please read that 3rd link and tell me if RAID 5 makes any sense with today's drive sizes and costs.
  • Gunbuster - Thursday, December 26, 2013 - link

    Re: that 3rd link. Who calls it resilvering? Sounds like what a crusty old unix sysadmin with no current hardware knowledge would call it.
  • P_Dub_S - Thursday, December 26, 2013 - link

    Whatever the name, it doesn't really matter; it's the numbers that count, and with today's multi-TB drive sizes RAID 5 makes zero sense.
  • Kheb - Saturday, December 28, 2013 - link

    No it doesn't. Not at all. First, you are taking into account only huge arrays used to store data and not to run applications (so basically only mechanical SATA). Second, you are completely ignoring costs (RAID 5 or RAID 6 vs RAID 10). Third, you are assuming the RAID 5 itself is not backed up or paired with some sort of software/hardware redundancy or tiering at lower levels (see SANs).

    So while I can agree that THEORETICALLY having RAID 10 everywhere would indeed be safer, the costs (HDDs + enclosures + controllers + backplanes) mean that this, and this time for real, makes zero sense.
  • Ammaross - Thursday, December 26, 2013 - link

    "Resilvering" is the ZFS term for rebuilding data on a volume. It's very much a current term still, but it does give us an insight into the current bias of the author, who apparently favors ZFS for his storage until something he proposes as better is golden.
  • hydromike - Thursday, December 26, 2013 - link

    How many times have you had to rebuild a RAID5 in your lifetime? I have, over 100 times, across more than 10 major HARDWARE RAID vendors.

    "And when you go to rebuild that huge RAID 5 array and another disk fails your screwed."

    The other drive failing is a very small possibility in the enterprise environment I was talking about, because of enterprise-grade drives vs consumer ones. That is why most either take the RAID offline for a much faster rebuild or let it rebuild online. Besides, during an online rebuild the RAID is still functional, just degraded.

    Also, my point is that lots of us still have hardware that is 2-5 years old and still just working. The newest arrays I have set up of late are 20 to 150 TB in size, and we went with FreeNAS with ZFS, which puts all others to shame. NetApp storage appliance rebuild times are quite fast: 6-12 hours for 40 TB LUNs. It all depends upon the redundancy that you need. Saying that RAID 5 needs to die is asinine. What if the data you are storing is all available in the public domain, but having a local copy speeds up access? Rebuilding a degraded LUN is faster than retrieving all of the data from the public domain again. There are many use cases for each RAID level; just because one level does not fit YOUR uses does not mean it needs to die!
  • P_Dub_S - Thursday, December 26, 2013 - link

    So if you were to buy this NAS for a new implementation, would you even consider throwing 10-12 disks in it and building a RAID 5 array? Just asking. Even in your own post you state that you use FreeNAS with ZFS for your new arrays. RAID 5 is the dodo here; let it go extinct.
  • Ammaross - Thursday, December 26, 2013 - link

    For all you know, he's running ZFS using RAIDZ1 (essentially RAID 5). Also, if you're saying RAID 5 needs to die, must one then assume you also think RAID 0 is beyond worthless, since it has NO redundancy? Obviously, you can (hopefully) cite the use cases for RAID 0. Your bias just prevents you from seeing the usefulness of RAID 5.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    It does happen though. I've had to rebuild 2 servers this year alone because of multiple drive failures. One server had 3 drives fail. But that's because of neglect; we engineers only have so much time, especially with the introduction of lean manufacturing.

    RAID 5 + a global spare, though, is usually a pretty safe bet if it's a critical app server. Otherwise RAID 5 is perfectly fine.
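
For context on the rebuild-failure argument running through this thread, a rough worked example (the 4 x 4 TB array and the URE specs below are illustrative assumptions, not figures taken from the comments): a consumer-class URE spec of 1 error per 10^14 bits read is roughly 1 error per 12.5 TB read (10^14 / 8 = 1.25 x 10^13 bytes). Rebuilding a 4 x 4 TB RAID 5 array after one drive failure requires reading about 3 x 4 TB = 12 TB from the surviving drives, so the expected number of UREs is 12 / 12.5 = 0.96 and the chance of hitting at least one is about 1 - e^(-0.96), or roughly 62%. With enterprise-class 10^15 drives the same rebuild drops to about 1 - e^(-0.096), or roughly 9%, which is why the enterprise-vs-consumer drive distinction matters in this argument.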
