Single Client Performance - CIFS, NFS and iSCSI

The single client CIFS performance of the Synology DS1812+ was evaluated on the Windows platform using Intel NASPT and our standard robocopy benchmark. This was run from one of the virtual machines in our NAS testbed. All data for the robocopy benchmark on the client side was placed on a RAM disk (created using OSFMount) to ensure that shortcomings of the client's storage system wouldn't affect the benchmark results.
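For reference, the robocopy run amounts to copying the test folder from the RAM disk to the mapped CIFS share and back; the paths and flags below are illustrative rather than the exact ones used in the benchmark:

robocopy R:\testdata \\NAS_IP\SHARE\testdata /E /NP /NFL /NDL

robocopy \\NAS_IP\SHARE\testdata R:\testdata_read /E /NP /NFL /NDL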

Synology DS1812+ CIFS Performance - Windows

We created a 250 GB iSCSI target and mapped it on the Windows VM. The same benchmarks were run and the results are presented below.
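For readers wanting to replicate the setup, the target can be mapped from the Windows command line with the built-in iSCSI initiator; the portal address and IQN below are placeholders rather than the exact ones from our configuration:

iscsicli QAddTargetPortal NAS_IP

iscsicli ListTargets

iscsicli QLoginTarget iqn.2000-01.com.synology:DS1812.Target-1

Once logged in, the target shows up as a raw disk that can be initialized and formatted from Disk Management before the benchmarks are run.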

Synology DS1812+ iSCSI Performance - Windows

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following parameters were used to mount the NFS and Samba shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o async,hard,proto=tcp,noac

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o directio

Note that we have not tried to optimize NFS performance for the NAS. In order to keep things as uniform as possible, we will just look at comparative performance based on the default mounting options combined with the above (which are chosen to avoid caching effects on the client side). The scripts available here were run using IOMeter. The graph below presents the various results.

Synology DS1812+ Linux Performance


93 Comments


  • saiyan - Sunday, June 16, 2013 - link

    A single hard drive is also a failure waiting to happen, enterprise class or not. When a drive does fail, you don't even get the benefit of 24/7 uptime provided by RAID-5 even when the array is degraded. You don't even have the chance to rebuild your RAID array.

    Seriously, RAID is NOT a backup.
  • SirGCal - Monday, June 17, 2013 - link

    I don't think anyone here ever claimed it was... If they did, I missed it. It's all about keeping data during a repair. Drives won't last forever and 38 hours is a long time to beat on the array to rebuild. On old drives, odds of a second failure go up drastically.
  • Duckhunt2 - Saturday, February 15, 2014 - link

    You building something yourself and someone else buying it ain't the best comparison. You have to set up so many things. Time is money. Who has time to do that?
  • SirGCal - Thursday, June 13, 2013 - link

    Sorry, can't edit comments... But ya, performance on this is weak. One of mine, which cost the same empty but supports RAID 6, can sustain much faster transfers, including 400M writes, 600M reads, etc., and that's using 5400 RPM consumer grade drives... 700/900M using performance based hardware or more. Mine is a media share server only needing to serve the house, so 4-6 pure HD sources (all legal, sorry, I do not agree with piracy) at the same time is plenty and this is way more than enough. But this is actually the 'slowest' way I could build it... I went for green since I didn't need any speed in this setup... speed in a real RAID is very easy. Writing is a bit slower, especially in RAID 6 due to the complicated error bit calculations... Reading is butter.
  • santiagoanders - Friday, June 14, 2013 - link

    You have a 10G network to run media sharing? Overkill much?
  • SirGCal - Friday, June 14, 2013 - link

    For short distance, Cat-6 works fine. My whole house is wired Cat-6 for < $800 minus the electrician who was also a friend of mine. So complain all ya like... Just cause you wanna sit there and do wi-fi isn't my fault.
  • santiagoanders - Monday, June 17, 2013 - link

    And how much did you pay for the 10Gbe adapters and switch?
  • Guspaz - Thursday, June 13, 2013 - link

    Is it just me, or is the price of this thing not listed anywhere in the article? Benchmarks are meaningless without a price to give them context.
  • DigitalFreak - Thursday, June 13, 2013 - link

    The 1812+ runs around $999, and the 1813+ is $1099.
  • SirGCal - Friday, June 14, 2013 - link

    To me, that's just too much. I can build the core box itself, FAR more powerful, albeit a bit larger, BUT capable of far more than just sitting there. Can serve as a Subsonic or Plex server, MEDIA stream, media extender server to Xbox, etc. Even do its own data workload (handbrake/etc. while running OS X or Windows or even Linux. Anything I choose.). It doesn't have to be a dummy box. And I have two of these running 24/7 and they use VERY little power while doing file server duties. If I load up the CPU to do other tasks, then they'll obviously load up a bit more but...

    Anyhow, I can make, right now, say an A6 5400K (3.6G dual-core APU) with 16G 1866 CAS10, a Seasonic 620 modular, a Fractal Design insulated (silent) tower to hold 8 fast-swappable bays and a boot drive, an A75 USB3 board, AND the Areca ARC-1223 6G RAID 6 card (SAS cards break down to control SATA drives, for those thinking about that...) all for $944.94 right now. And that comes with one gigabit NIC already. Add more if ya want, or more whatever... That's the point. Plus these cases are dead silent. I even have the one with windows and you can't hear anything from them. They are a bit more expensive and you could save $50 going with cheaper options though, but I was being frivolous. Here's a screenshot of one I just did for a core for a small one at work: http://www.sirgcal.com/images/misc/raid6coreexampl...

    * The whole point is: I don't understand these 'boxes'. They use nonstandard RAID, for one: Synology RAID. Which also means if it fails you can't put the drives on a regular RAID controller to retrieve your data. At least that's how they used to be. Perhaps not anymore.

    * But their price is SO high it doesn't make sense. You can build one yourself, better capabilities all the way around in every way, cheaper. And if you ONLY want raid 5, you can knock about $300 off the price tag. Raid 6 is the bulk of that cost... But honestly IMHO necessary with those sizes, and that many drives in the array...

    If you actually have no clue how to build a PC, perhaps... But find your neighborhood nerd to help ya. Still, without RAID 6, these just don't serve a purpose. Get two smaller arrays instead, 4 drives or less for RAID 5. Can these even do hot-spares? At least that would be something... It would be a live drive waiting to take over in case of a failure. Not quite RAID 6, but sorta kinda a bit more helpful, at least for safety. They didn't mention it.
