Single Client Performance - CIFS, NFS and iSCSI

The single-client CIFS performance of the Synology DS1812+ was evaluated on the Windows platform using Intel NASPT and our standard robocopy benchmark. This was run from one of the virtual machines in our NAS testbed. All data for the robocopy benchmark on the client side was placed in a RAM disk (created using OSFMount) to ensure that the client's storage subsystem didn't affect the benchmark results.
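For reference, the write and read passes of such a robocopy run boil down to commands along these lines. Note that the drive letters, share name, and thread count below are illustrative placeholders, not the exact parameters of our test harness:

```shell
# Sketch only: R: is the OSFMount RAM disk, Z: is the mapped CIFS share on
# the NAS. Mapping the share without saved credentials:
net use Z: \\NAS_IP\benchmark_share /persistent:no

# Write pass: copy the test folder from the RAM disk to the NAS.
# /E copies subfolders (including empty ones), /MT:8 uses 8 copy threads,
# /NDL /NFL /NJH suppress per-file and per-directory logging overhead.
robocopy R:\testdata Z:\testdata /E /MT:8 /NDL /NFL /NJH

# Read pass: pull the same data back from the NAS into the RAM disk.
robocopy Z:\testdata R:\readback /E /MT:8 /NDL /NFL /NJH
```

Keeping the client side entirely in RAM means the measured throughput reflects the network path and the NAS, not the client's disks.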

Synology DS1812+ CIFS Performance - Windows

We created a 250 GB iSCSI target and mapped it on the Windows VM. The same benchmarks were run and the results are presented below.
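As a sketch of the client-side setup, Windows' built-in iscsicli tool can attach such a target from an elevated command prompt. The portal address and IQN below are placeholders; the actual IQN is whatever the NAS's iSCSI target configuration reports:

```shell
# Sketch only: register the NAS as an iSCSI target portal, then list the
# targets it exposes.
iscsicli QAddTargetPortal NAS_IP
iscsicli ListTargets

# Log in to the target. Windows then surfaces it as a raw disk that can be
# brought online, initialized, and formatted in Disk Management.
iscsicli QLoginTarget iqn.2000-01.com.synology:example-target
```

Once formatted, the target behaves like a local volume, which is why the same NASPT and robocopy benchmarks can be run against it unchanged.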

Synology DS1812+ iSCSI Performance - Windows

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following parameters were used to mount the NFS and Samba shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o async,hard,proto=tcp,noac

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o directio

Note that we have not tried to optimize NFS performance for the NAS. To keep things as uniform as possible, we simply look at comparative performance using the default mount options combined with those above (which are chosen to avoid caching effects on the client side). The scripts available here were run in IOMeter. The graph below presents the various results.

Synology DS1812+ Linux Performance


93 Comments


  • SirGCal - Friday, June 14, 2013 - link

    UPDATE: After looking carefully over these screenshots, I think their review might be SERIOUSLY lacking... I see a RAID 6 option in the setup for the box, but it's greyed out, probably because they didn't have any drives in it at the time; my guess is you need 4-5 drives MINIMUM to enable it. But with this many drives, testing only RAID 5 is honestly a bit stupid. It should have been tested in RAID 6, and in that configuration the unit might actually be a more attractive option, if it is capable and performs.

    But then again, RAID 5 is generally faster than RAID 6 due to the added calculations for the extra parity... and its RAID 5 performance was pretty weak unless I'm reading the numbers wrong. That is, if RAID 6 is actually usable on this device and not just an option in the software that is disabled altogether. I would have thought this review would have tested that mode, since that is what an 8-drive setup should be built for.
  • ganeshts - Friday, June 14, 2013 - link

    The benchmarks were done with all 8 bays filled with WD RE drives in RAID 5.

    The screenshots show that we can have disk groups. So, for example, you could allocate 4 disks to one disk group and run a RAID 5 volume on it. Then, the other 4 disks could be in another group and you could run a RAID 6 volume in that group.

    What is the problem with performance that you are seeing? These Atom-based NAS units basically saturate the network link (accounting for overheads). Remember, two teamed links give 2 Gbps in this case, which translates to a theoretical maximum of 250 MBps. Accounting for overhead, I see units saturate between 210 and 230 MBps, and I have never had any unit go above that unless I am teaming 4 ports or more (as you can see in our QNAP TS-EC1279U-RP review).

    I will take your feedback about RAID-6 evaluation into consideration in the next round of benchmarks.
  • Jeff7181 - Monday, June 17, 2013 - link

    How is single client, 1.5 MB/s throughput at about 100 ms latency "stellar?" That sounds absolutely abysmal to me. I'm curious to know how you set up IOMeter... I'd like to repeat the test on my own box and see how it fares.
  • mitchdbx - Saturday, June 15, 2013 - link

    There comes a time in your life when you just want things to work without the hassle of them breaking every time you turn around. I OWN the 5-bay unit (for over a year now) and can say that the UX is wonderful on these. You can configure them to let you know when something goes wrong (send email, beep, send SMS, etc.) so you can fix the issue. Please look at the product before you conclude that they are only "dumb" boxes. You can run Plex and many other media servers in addition to DNS, DHCP, and a web server with PHP and various CMS installs, plus photo management, surveillance, etc.

    On another note, an inexperienced individual commented that an issue will arise when a drive fails and the array must rebuild. If you are using quality drives and constantly spinning the drives, the chance of a two-drive failure is very low. As anyone with years of experience with computers knows, keep the drives spinning and things will be fine; it is when you shut down and start up that issues come into play.
  • mitchdbx - Saturday, June 15, 2013 - link

    More FYI about the RAID levels....
    http://forum.synology.com/wiki/index.php/What_is_S...
  • Micke O - Monday, June 17, 2013 - link

    Synology aren't using some "nonstandard RAID" with SHR. They are using mdadm.

    This is how to restore an array on a standard PC using Linux if your DiskStation were to fail:

    http://www.synology.com/support/faq_show.php?lang=...

    I'd say that's even better than using some H/W RAID controller. Good luck replacing one of those with anything other than an identical controller with the very same firmware, etc.
  • Insomniator - Thursday, June 13, 2013 - link

    Wow great timing! Been looking for a NAS with huge storage capabilities to transfer data offsite. Haven't seen many around... Buffalo Terastation looks good but I haven't seen reviews for those or any other modern NAS systems. Thanks for the review!
  • SirGCal - Thursday, June 13, 2013 - link

    Did I miss it, or does it not support RAID 6? RAID 5, ESPECIALLY with large drives, is just asking for failure. I personally have one 8-drive array and am building my 2nd now. The first uses 2TB drives, the new one 4TB drives. Both are RAID 6. The old one is 12TB, the new one will be 24TB. Yeah, you lose 2 drives of usable space, but that creates 3-drive failure protection. Or basically, when a drive fails and you're rebuilding, you have protection from another drive failing. Cause THAT is what will happen...

    But I didn't see anything in the whole thing about RAID 6 at all. I would NEVER build an 8-drive system with RAID 5, especially not with consumer-grade hardware... It's just not worth it for a large array without RAID 6...
  • Gigaplex - Thursday, June 13, 2013 - link

    No, it only creates 2-drive failure protection. Lose 3 drives in RAID6, and you're toast.
  • SirGCal - Friday, June 14, 2013 - link

    3-drive failure, as in it takes 3 to kill the array... The point is you can be repairing one, and if another one fails, you're not dead yet... as you would be with RAID 5...
