Single Client Performance - CIFS & NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. We chose IOzone as the benchmark for this case. To standardize testing across multiple NAS units, we mount the CIFS and NFS shares at startup with the following /etc/fstab entries.

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
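
Before committing these to /etc/fstab, the equivalent mounts can be tested manually. A minimal sketch, using the same placeholder paths and a subset of the options above:

# One-off CIFS mount with guest credentials
mount -t cifs //<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o rw,username=guest,password=

# One-off NFSv3 mount over TCP with 32 KB transfer sizes
mount -t nfs -o rw,vers=3,proto=tcp,rsize=32768,wsize=32768,hard,timeo=600 <NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER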

The following iozone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
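
For reference, a brief gloss of the switches used, per the iozone documentation:

# -a  automatic mode: iterate over a range of file and record sizes
# -c  include close() in the timing
# -z  used with -a to test all record sizes, even for small files
# -R  generate an Excel-compatible report
# -g  maximum file size for automatic mode, in KB (2097152 KB = 2 GB)
# -U  unmount and remount the given mountpoint between tests to defeat client-side caching
# -f  path of the temporary test file
# -b  write binary Excel output to the named file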

IOzone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
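
Where a result looks cache-inflated, one way to double-check it (not part of the standardized methodology above) is to flush the client's page cache between runs, or to re-run iozone with direct I/O via its -I switch:

# Flush the Linux page cache, dentries and inodes before a re-run (as root)
sync
echo 3 > /proc/sys/vm/drop_caches

# -I requests O_DIRECT, bypassing the client's buffer cache where the filesystem supports it
iozone -aczRI -g 2097152 -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile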

Readers interested in the hard numbers can refer to the CSV program output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; a glance at the CSV outputs linked above makes the affected entries obvious.

Synology DS1815+ - Linux Client Performance (MBps)

IOzone Test         CIFS    NFS
Init Write            85     79
Re-Write              83     79
Read                  49    115
Re-Read               51    114
Random Read           30     66
Random Write          80     78
Backward Read         30     53
Record Re-Write       50   1681*
Stride Read           46    116
File Write            83     79
File Re-Write         83     80
File Read             33     94
File Re-Read          36     95

*: Benchmark number skewed by client-side caching
Comments

  • DigitalFreak - Wednesday, November 19, 2014 - link

    Not everyone is poor like you.
  • chaos215bar2 - Wednesday, November 19, 2014 - link

    I see a lot of comments like this, and I can only imagine you're assuming that:
    1) The NAS is only being used as a file server with the most basic setup.
    2) Updates are not an issue.

    I agree that building a custom NAS box is a fun project and can save a lot of money. However, not everyone wants to deal with the complications that can arise from setting up multiple services and keeping them up-to-date.

    Say you want an email server. To install and fully configure Synology's Mail Station takes no more than 10 minutes. If you want webmail to go with it, just install a second package. There's almost zero setup required. Sure, you'll have more options on a generic Linux installation, but setting up a fully functional and securely configured email system takes quite a lot of research if you're just doing it one time.

    Of course, all of that time spent properly configuring your custom-built server is worthless if you don't keep it up to date. As of DSM 5.1, Synology will automatically install either all updates or just security updates, and you know that the updated components have been tested and work together. I have never had a problem with a service going down due to a Synology update. With full Linux distributions, not so much. Most of the time updates work fine, but I would never trust something as critical as my primary email server to automatic updates.
  • shodanshok - Friday, November 21, 2014 - link

    Hi,
    while I agree on the simplicity argument (installing postfix, dovecot and roundcube surely requires some time), Red Hat and CentOS distros are very good from an update standpoint. I have had very few problems with the many servers (100+) I have administered over the past years, even with automatic updates enabled. Moreover, with the right yum plugin you can install security updates only, if you want.
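
    On CentOS/RHEL 6, the plugin route looks roughly like this (a minimal sketch using the stock security plugin):

    # install the security plugin, then restrict updates to security errata
    yum install yum-plugin-security
    yum --security check-update   # list pending security updates
    yum --security update         # apply security updates only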

    Nowadays, with a strong backup strategy, I feel confident enough to enable yum auto-update on all servers except the ones used as hypervisors (I had a single hypervisor with auto-update enabled for testing purposes, and it ran without a problem anyway).
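
    Auto-update on EL6 is typically just the yum-cron package; a minimal sketch:

    # nightly unattended updates via yum-cron
    yum install yum-cron
    chkconfig yum-cron on
    service yum-cron start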

    Sadly, with Debian and Ubuntu LTS distros I have had more problems with updates, but perhaps that is just an unfortunate coincidence...

    Regards,
  • shodanshok - Tuesday, November 18, 2014 - link

    I agree with people saying that units like this are primarily targeted at users who want a clean and simple "off-the-shelf" experience. With units like the one reviewed, you basically insert the disks, power on the device and follow one or two wizards.

    That said, a custom-built NAS has a vastly better performance/price ratio. One of our customers bought a PowerEdge R515 (6-core Piledriver @ 3.1+ GHz) with 16 GB of ECC RAM, a PERC H700 RAID card with 512 MB of NVRAM cache and a 3-year on-site warranty. Total price: about 1600 euros (+ VAT).

    He then installed 8x 2 TB WD RE drives, and I configured them as an 11+ TB RAID6 volume with thin LVM volumes and an XFS filesystem. It serves both as a backup server (deduplicated via hard links and rsync) and as big storage for non-critical things (e.g. personal files). A rough sketch of that layout follows.
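
    (Device names, sizes and paths below are illustrative, not the actual configuration:)

    # the PERC H700 exposes the RAID6 array as a single block device, e.g. /dev/sdb
    pvcreate /dev/sdb
    vgcreate vg_data /dev/sdb

    # carve out a thin pool, then a thin volume formatted with XFS
    lvcreate --size 10T --thinpool pool0 vg_data
    lvcreate --virtualsize 11T --thin vg_data/pool0 --name lv_backup
    mkfs.xfs /dev/vg_data/lv_backup

    # hard-link-deduplicated backups: unchanged files are linked against the previous run
    rsync -a --link-dest=/backup/2014-11-17 user@host:/data/ /backup/2014-11-18/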

    Our customer is VERY satisfied of how it works, but hey - face the reality: a skilled people did all the setup work for him, and (obviously) he paid us...
  • Beany2013 - Thursday, November 27, 2014 - link

    This is about the most sensible comment on this entire review.

    If his budget were halved and he couldn't necessarily afford regular support from you (or at least wanted his hourly callout charge lowered), I'm guessing you'd be more tempted to push him in the direction of a device like this, though?

    (I've been there, done that, and swapped out more than a few Windows SBS/standard+exchange boxes for Syno units over the last few years for this very reason, natch - the Windows license costs themselves pretty much pay for one of these)
  • eximius - Thursday, December 11, 2014 - link

    I have to agree with these previous two comments.

    I have an 1813+ sitting next to my heavily modded (aka needed to use a Dremel) case with 15 hot-swap disks (currently Linux + btrfs + samba & NFS). I have and use (and love) both. There are use cases for both, but I would certainly not hand my custom solution over to someone random and expect it to just work. I have automated updates and reboots (all hail "if [ -f /var/run/reboot-required ]" - see the sketch below), but occasionally something does not work right. No normal person is going to be able to figure that out in a reasonable amount of time.
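
    That check, fleshed out into the kind of cron-driven script alluded to (a sketch assuming the Debian/Ubuntu convention, where apt creates /var/run/reboot-required):

    #!/bin/sh
    # run from cron after the nightly update window
    if [ -f /var/run/reboot-required ]; then
        # give logged-in users a few minutes' warning, then restart
        /sbin/shutdown -r +5 "Rebooting to finish applying updates"
    fi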

    Also 30 minutes to install and configure it yourself is total BS. I have saltstack and automated PXE installs at home and 30 minutes is still stretching it for me for a full stack install and configure. Linux + Samba + backup + updates + RAID and/or mdadm and/or zfs and/or btrfs installed and *configured* in 30 minutes is beyond optimistic, even for technical people. $800 does not cover 8 hours of my time, so ya, I recommend Synology for certain (mostly home/SOHO) scenarios.

    I can expect my 70 year old dad to be able to keep his synology up to date, but not a Linux or BSD distro. That is just a ridiculous thing to expect.
  • DustinT - Tuesday, November 18, 2014 - link

    Ganesh, thanks for the thoughtful review. I am very interested in seeing how SSD caching affects performance. Take two drives out, replace them with 240 GB SSDs and retest. Synology is putting a lot of emphasis on SSD caching, and I will be making my buying decision largely based on that aspect alone.
  • eximius - Thursday, December 11, 2014 - link

    It depends on your use case. For large sequential transfers, SSDs are not going to help you very much, since a couple of spinning-metal drives can easily saturate a gigabit link. If you need a lot more IOPS then an SSD cache will help you out, but only so much, since again the limit is Gigabit Ethernet (16000 IOPS or so).

    *note* this applies to gigabit links; devices connected via 10+ Gigabit Ethernet or InfiniBand can see an improvement with SSDs.
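
    (Rough arithmetic behind that ~16000 IOPS figure, assuming 8 KB operations and ignoring protocol overhead: 1 Gb/s is about 125 MB/s of payload, and 125 MB/s ÷ 8 KB per operation ≈ 16,000 IOPS.)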
  • mervincm - Tuesday, November 18, 2014 - link

    Yes, please, test the read and write cache effect. In my 1813+ (4 GB) on DSM 5.0 I installed a two-disk read/write SSD cache. Strangely, streaming performance dropped, and since my use case is highly dependent on streaming, I removed the cache. I wonder now if things are better with 5.1 or with the read-only cache.
  • eximius - Thursday, December 11, 2014 - link

    First, your bottleneck is the gigabit LAN. A couple of spinning-rust drives can easily saturate a gigabit link, so an SSD cache is not going to accelerate a streaming (aka sequential read) operation over Gigabit Ethernet. If you need more IOPS then an SSD cache will help (Gigabit Ethernet tops out somewhere around 16000 IOPS), though at the cost of reduced throughput.

    IOPS and throughput are at opposite ends of the spectrum; an increase in one means a decrease in the other. If your use case is sequential reads and writes, don't bother with the SSDs. On DAS (direct attached storage) you can improve both IOPS and throughput with an SSD cache, since it takes a whole lot of platters to equal the performance of a single 850 Pro SSD.

    Also note that this problem has nothing to do with Synology, you have the same constraints even if you had 24+ thread CPU(s) and 128 GB+ RAM with <insert favourite redundancy technology here>. Gigabit ethernet is slow, period.
