Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following parameters were used to mount the NFS and Samba shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER
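As a concrete illustration, the templates above map to invocations along the following lines. The IP address, share names, mount points, and credentials below are hypothetical, and a CIFS mount will usually also need credential options (or guest access) depending on how the share is configured:

mount -t nfs 192.168.1.100:/volume1/media /mnt/nas_nfs

mount -t cifs //192.168.1.100/media /mnt/nas_cifs -o username=nasuser,password=naspass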

Note that these are slightly different from the mount parameters we used in our previous NAS reviews. We have also shifted from IOMeter to IOZone for evaluating performance under Linux. The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
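For reference, per the standard IOZone documentation: -a runs the automatic sweep of file sizes and record lengths, -c includes close() in the timing, -z (together with -a) forces small record sizes to be tested with large files as well, -R produces an Excel-compatible report, -g 2097152 caps the maximum test file size at 2GB (the value is in KB), -U unmounts and remounts the share before each test so the client's cache does not already hold the file under test, -f specifies the temporary test file, and -b names the binary Excel output file. A filled-in invocation, with a hypothetical mount point and NAS name, would look like:

iozone -aczR -g 2097152 -U /mnt/nas_cifs -f /mnt/nas_cifs/testfile -b example_NAS_CIFS_EXCEL_BIN.xls > example_NAS_CIFS_CSV.csv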

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
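The -U remount between tests clears the client's cache of the file under test, but results for file sizes that fit comfortably in the client's RAM are still dominated by the page cache. For readers repeating these runs who want tighter control, the Linux page cache can also be dropped manually between runs; a minimal sketch, assuming root privileges:

sync
echo 3 > /proc/sys/vm/drop_caches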

Readers interested in the hard numbers can refer to the raw CSV output here. These numbers will gain more relevance as we benchmark additional NAS units in similar configurations.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

Some scenarios exhibit client caching effects, and these are evident in the gallery below.

The IOZone CSV output can be found here for those interested in the exact numbers.

Comments

  • iAPX - Thursday, December 26, 2013 - link

    2000+ MB/s ethernet interface (2x10Gb/s), 10 hard-drives able to deliver at least 500MB/s EACH (grand total of 5000MB/s), Xeon quad-core CPU, and tested with ONE client, it delivers less than 120MB/s?!?
    That's what I expect from a USB 3 2.5" external hard-drive, not a SAN of this price, it's totally deceptive!
  • Ammaross - Thursday, December 26, 2013 - link

    Actually, 120MB/s is remarkably exactly what I would expect from a fully-saturated 1Gbps link (120MB/s * 8 bits = 960Mbps). Odd how that works out.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    That's because the PC only has a gigabit NIC. That's actually what you should expect.
  • BrentfromZulu - Thursday, December 26, 2013 - link

    For the few who know, I am the Brent that brought up Raid 5 on the Mike Tech Show (saying how it is not the way to go in any case)

    Raid 10 is the performance king, Raid 1 is great for cheap redundancy, and Raid 10, or OBR10, should be what everyone uses in big sets. If you need all the disk capacity, use Raid 6 instead of Raid 5, because with Raid 5, if another drive fails during a rebuild, you lose everything. Raid 6 is better because you can lose a drive during the rebuild. Rebuilding is a scary process with Raid 5, but with Raid 1 or 10, it is literally copying data from 1 disk to another.

    Raid 1 and Raid 10 FTW!
  • xdrol - Thursday, December 26, 2013 - link

    From the drives' perspective, rebuilding a RAID 5 array is exactly the same as rebuilding a RAID 1 or 10 array: read the whole disk(s) (or to be more exact, the sectors with data) once, and write the whole target disk once. It is only different for the controller. I fail to see why one is scarier than the other.

    If your drive fails while rebuilding a RAID 1 array, you are exactly as screwed. The only reason R5 is worse here is that you have n-1 disks unprotected while rebuilding, not just one, giving you approximately (negligibly smaller than) n-1 times the chance of data loss.
  • BrentfromZulu - Friday, December 27, 2013 - link

    Rebuilding a Raid 5 requires reading data from all of the other disks, whereas Raid 10 requires reading data from 1 other drive. Raid 1 rebuilds are not complex, nor are Raid 10 rebuilds. Raid 5/6 rebuilding is complex, requires activity from other disks, and because of the complexity has a higher chance of failure.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    You take a big hit on performance with RAID 6.
  • Ajaxnz - Thursday, December 26, 2013 - link

    I've got one of these with 3 extra shelves of disks and 1TB of SSD cache.
    There's a limit of 3 shelves in a single volume, but 120TB (3 shelves of 12 4TB disks, RAID 5 on each shelf) with the SSD cache performs pretty well.
    For reference, NFS performance is substantially better than CIFS or iSCSI.

    It copes fine with the 150 virtual machines that support a 20 person development team.

    So much cheaper than a NetApp or similar - but I haven't had a chance to test the multi-NAS failover - to see if you truly get enterprise-quality resilience.
  • jasonelmore - Friday, December 27, 2013 - link

    well at least half a dozen morons got schooled on the different types of RAID arrays. gg, always glad to see the experts put the "less informed" (okay i'm getting nicer) ppl in their place.
  • Marquis42 - Friday, December 27, 2013 - link

    I'd be interested in knowing greater detail on the link aggregation setup. There's no mention of the load balancing configuration in particular. The reason I ask is because it's probably *not* a good idea to bond 1Gbps links with 10Gbps links in the same bundle unless you have access to more advanced algorithms (and even then I wouldn't recommend it). The likelihood of limiting a single stream to ~1Gbps is fairly good, and may limit overall throughput depending on the number of clients. It's even possible (though admittedly statistically unlikely) that you could limit the entirety of the system's network performance to saturating a single 1Gbe connection.
