Single Client Performance - CIFS, NFS and iSCSI

The single client CIFS performance of the Synology DS1812+ was evaluated on the Windows platform using Intel NASPT and our standard robocopy benchmark. This was run from one of the virtual machines in our NAS testbed. All data for the robocopy benchmark on the client side was placed on a RAM disk (created using OSFMount) to ensure that shortcomings of the client's storage system wouldn't affect the benchmark results.
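As a rough sketch of the robocopy half of that setup (assuming the OSFMount RAM disk is mounted at R: and using placeholder share and folder names rather than the testbed's actual ones):

:: Copy the benchmark data set from the RAM disk to the mapped CIFS share
:: /E includes subfolders; /NP, /NFL and /NDL suppress per-file console output
robocopy R:\testdata \\NAS_IP\share\testdata /E /NP /NFL /NDL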

Synology DS1812+ CIFS Performance - Windows

We created a 250 GB iSCSI target and mapped it as a local disk on the Windows VM. The same benchmarks were run, and the results are presented below.
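As an illustration of the client side of that setup, Windows' built-in iscsicli tool can discover and log in to the target from the command line (the portal IP and target IQN below are placeholders; Synology targets use the iqn.2000-01.com.synology prefix):

iscsicli QAddTargetPortal 192.168.1.100
iscsicli ListTargets
iscsicli QLoginTarget iqn.2000-01.com.synology:DS1812.Target-1

Once logged in, the target shows up as a raw local disk that has to be brought online and formatted in Disk Management before the benchmarks can run against it.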

Synology DS1812+ iSCSI Performance - Windows

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following parameters were used to mount the NFS and Samba shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o async,hard,proto=tcp,noac

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o directio

Note that we have not tried to optimize NFS performance for the NAS. To keep things as uniform as possible, we simply look at comparative performance based on the default mount options combined with the above (noac and directio are set to avoid caching effects on the client side). The scripts available here were run using IOMeter. The graph below presents the various results.
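IOMeter drives Linux clients through its dynamo workload generator; the invocation below is an assumed, illustrative setup (the binary path and IPs are placeholders) in which dynamo on the CentOS VM connects back to an IOMeter instance coordinating the run from a Windows machine:

# dynamo generates the workload locally and reports back to the IOMeter host
/opt/iometer/dynamo -i WINDOWS_IOMETER_IP -m LINUX_CLIENT_IP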

Synology DS1812+ Linux Performance

Comments

  • MadMan007 - Thursday, June 13, 2013

    AES-NI support would be a great addition for a real SMB NAS... even SMBs should be concerned with security. What are the chances NAS manufacturers will come out with devices based on AMD Kabini? AMD does a lot less feature segmentation in its chips, and Kabini has AES-NI, so it seems like a better solution until Intel matches that with Atom (low-TDP Haswells will be too expensive).
  • JDG1980 - Thursday, June 13, 2013

    The two features I look for in off-the-shelf NASes are ECC RAM and the ZFS file system. Unfortunately, it seems that none so far have seen fit to include them.
  • pwr4wrd - Friday, June 14, 2013

    I completely agree with you. Even for home/SOHO use, what good is a NAS unit if you don't have data integrity?
  • Samus - Saturday, June 15, 2013

    This will change with the Atom family supporting ECC. I don't know of any real advantages ZFS has over ext4 for home/SOHO use.
  • phoenix_rizzen - Monday, June 17, 2013

    Instantaneous, near-unlimited snapshots; end-to-end checksums; integrated RAID features without requiring a RAID controller; integrated volume management; storage pooling; etc., etc., etc.

    Once you get beyond one hard drive, using ZFS (or another pooled storage system) is heavenly. There's just no comparison to ext*+LVM. [A minimal zpool/zfs sketch appears at the end of this thread.]
  • Jeff7181 - Thursday, June 13, 2013

    I wonder how multi-user performance would scale if it had a 10 Gbps uplink to a 1 Gbps access switch. Maybe I'm out of touch with arrays of this size, but those numbers seem low for an 8-disk array. Maybe it has to do with the Atom CPU? Maybe the RAID controller has no cache? Honestly I'd be highly disappointed if I spent $1000 on the chassis and another $1000-2000 on hard drives and could barely reach 1 Gbps under the best circumstances.
  • DigitalFreak - Thursday, June 13, 2013

    There is no RAID controller. The SATA ports come either off the Intel chipset or, more likely, off a third-party controller.
  • SirGCal - Thursday, June 13, 2013

    Try again. They used 8x WD4000FYYZ drives, which run $410 each if you get a deal on them, and upwards of $500 at a brick-and-mortar store. At $400 each, that's $3,200 just for the enterprise-class drives used in this test. Most people aren't going to use them.
  • Gigaplex - Thursday, June 13, 2013

    That just backs up their point even more. Spending $1k-2k instead isn't likely to get you faster drives.
  • SirGCal - Friday, June 14, 2013

    No, you missed my other point... An 8-drive RAID 5 is a failure waiting to happen, enterprise class or not. When a drive fails, you have to rebuild the array, and those 38+ hours (when all the drives are old, worn, and getting their tails kicked in by sustained heavy reads) are exactly when another one is most likely to fail. Then you lose the entire array and all of your data. That was the point I was trying to make. [A back-of-envelope calculation follows below.]
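For a sense of scale on the rebuild risk SirGCal describes: rebuilding a degraded 8 x 4 TB RAID 5 array means reading roughly 28 TB from the seven surviving drives. At a consumer-class unrecoverable read error rate of 1 per 10^14 bits, the expected number of read errors during the rebuild is about (28 x 8 x 10^12) / 10^14 ≈ 2.2, so the rebuild is more likely than not to hit at least one. Enterprise drives like the WD4000FYYZ are rated an order of magnitude better (1 per 10^15 bits), which brings the figure down to roughly 0.22: much better, but not negligible.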

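To make phoenix_rizzen's pooling and snapshot points concrete, a minimal ZFS sketch (the pool name, dataset, and device paths are placeholders for illustration, not a configuration for this unit):

# Create a double-parity pool across four hypothetical disks
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Datasets carve space out of the pool on demand, with no partition juggling
zfs create tank/media
# Snapshots are near-instantaneous and initially consume no extra space
zfs snapshot tank/media@before-reorg
# Roll the dataset back if the reorganization goes wrong
zfs rollback tank/media@before-reorg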