Multi-Client Performance - CIFS

We put the Synology DS1812+ through IOMeter tests with a CIFS share being accessed from up to 25 VMs simultaneously. The following four graphs show the total available bandwidth and the average response time while the unit is subjected to different types of workloads. IOMeter also reports various other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures. Some of the random access results don't fit in the graphs below; the scales were deliberately left unaltered to make comparison against other NAS units (which do fit in the scale) easier. Readers interested in the actual values can refer to our evaluation metrics table available here.
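
For readers curious how per-client numbers roll up into the totals graphed here, the sketch below shows one way such an aggregation could be done. This is a minimal illustration, not our actual tooling: the results directory, file layout, and column names ("MBps", "AvgResponseTimeMs") are hypothetical, and real IOMeter CSV exports are laid out differently.

```python
# Roll up per-client IOMeter summaries into the two numbers graphed
# here: total bandwidth across all clients, and the mean of the
# per-client average response times. File layout and column names
# are hypothetical.
import csv
import glob

total_mbps = 0.0
response_times_ms = []
for path in sorted(glob.glob("results/client_*.csv")):  # one CSV per VM
    with open(path, newline="") as f:
        row = next(csv.DictReader(f))  # single summary row per client
        total_mbps += float(row["MBps"])
        response_times_ms.append(float(row["AvgResponseTimeMs"]))

avg_rt = sum(response_times_ms) / len(response_times_ms)
print(f"Total bandwidth: {total_mbps:.1f} MB/s")
print(f"Average response time: {avg_rt:.2f} ms")
```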

Bandwidth and response times can be compared against NAS units from other vendors based on the same platform (Atom D2700). One thing to keep in mind when analyzing the results is that the LaCie 5big Pro is a 5-bay unit and the Thecus N4800 is a 4-bay unit, while the DS1812+ is an 8-bay unit. Sequential performance doesn't seem to reach that of the competitors, but the DS1812+ is stellar in the real-life tests and random accesses (again, remember that the units have different numbers of hard drives being accessed during the test).

93 Comments

  • saiyan - Sunday, June 16, 2013 - link

    A single hard drive is also a failure waiting to happen, enterprise class or not. When a drive does fail, you don't even get the benefit of the 24/7 uptime that RAID-5 provides while its array is degraded, and you don't get the chance to rebuild the array at all.

    Seriously, RAID is NOT a backup.
  • SirGCal - Monday, June 17, 2013 - link

    I don't think anyone here ever claimed it was... If they did, I missed it. It's all about keeping data alive during a repair. Drives won't last forever, and 38 hours is a long time to beat on the array during a rebuild. On old drives, the odds of a second failure go up drastically (a rough sketch of the math is below).
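
    Back-of-the-envelope math on that risk: the dominant second-failure mode during a rebuild is an unrecoverable read error (URE) on one of the surviving drives, since the rebuild must read every one of them end to end. A minimal sketch, assuming the commonly quoted consumer-drive spec of one URE per 1e14 bits read (actual drives vary widely):

    ```python
    # Rough probability of hitting at least one unrecoverable read
    # error (URE) while rebuilding a degraded RAID-5 array. The 1e-14
    # URE-per-bit figure is the usual consumer-drive spec sheet number;
    # enterprise drives are often rated at 1e-15.

    def rebuild_ure_probability(drive_tb: float, surviving_drives: int,
                                ure_per_bit: float = 1e-14) -> float:
        """P(at least one URE) while reading every surviving drive fully."""
        bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
        return 1 - (1 - ure_per_bit) ** bits_read

    # Example: an 8-bay RAID-5 of 3 TB drives loses one drive, so the
    # rebuild reads the 7 survivors end to end.
    print(f"{rebuild_ure_probability(3, 7):.0%}")  # roughly 81%
    ```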
  • Duckhunt2 - Saturday, February 15, 2014 - link

    You building something yourself and someone else buying it ain't the best comparison. You have to set up so many things, and time is money. Who has time to do that?
  • SirGCal - Thursday, June 13, 2013 - link

    Sorry, can't edit comments... But ya, performance on this is weak. One of mine, which cost the same empty but supports RAID 6, sustains much faster transfers, including 400 MB/s writes and 600 MB/s reads, and that's using 5400 RPM consumer grade drives... 700/900 MB/s or more using performance oriented hardware. Mine is a media share server only needing to serve the house, so 4-6 pure HD sources (all legal, sorry, I do not agree with piracy) at the same time is plenty, and this is way more than enough. But this is actually the 'slowest' way I could build it... I went green since I didn't need any speed in this setup... speed in a real RAID is very easy. Writing is a bit slower, especially in RAID 6 due to the complicated parity calculations (a sketch of those calculations is below)... Reading is butter.
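
    For the curious, those calculations are RAID 6's two parity blocks per stripe: P is a plain XOR (same as RAID 5), while Q is a Reed-Solomon syndrome over GF(2^8). A minimal sketch of the standard computation, assuming the 0x11D field polynomial used by common software implementations such as Linux md; real controllers do this in hardware:

    ```python
    # RAID 6 stripe parity: P = D0 ^ D1 ^ ... ^ D(n-1) and
    # Q = g^0*D0 ^ g^1*D1 ^ ... ^ g^(n-1)*D(n-1), with the
    # multiplications done in GF(2^8) using generator g = 2.

    def gf_mul(a: int, b: int) -> int:
        """Multiply two bytes in GF(2^8) mod x^8+x^4+x^3+x^2+1 (0x11D)."""
        result = 0
        for _ in range(8):
            if b & 1:
                result ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11D
        return result

    def pq_parity(data_blocks: list[bytes]) -> tuple[bytes, bytes]:
        """Compute the P and Q parity blocks for one stripe."""
        size = len(data_blocks[0])
        p, q = bytearray(size), bytearray(size)
        coeff = 1  # g^i, starting at g^0 = 1
        for block in data_blocks:
            for j in range(size):
                p[j] ^= block[j]
                q[j] ^= gf_mul(coeff, block[j])
            coeff = gf_mul(coeff, 2)  # advance to the next power of g
        return bytes(p), bytes(q)

    # Six data blocks per stripe, as on an 8-drive RAID 6 array: every
    # write must update both P and Q, hence the write penalty.
    stripe = [bytes([d] * 8) for d in range(1, 7)]
    p, q = pq_parity(stripe)
    print(p.hex(), q.hex())
    ```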
  • santiagoanders - Friday, June 14, 2013 - link

    You have a 10G network to run media sharing? Overkill much?
  • SirGCal - Friday, June 14, 2013 - link

    For short distances, Cat-6 works fine. My whole house is wired with Cat-6 for < $800, minus the electrician, who was also a friend of mine. So complain all ya like... Just 'cause you wanna sit there and use Wi-Fi isn't my fault.
  • santiagoanders - Monday, June 17, 2013 - link

    And how much did you pay for the 10GbE adapters and switch?
  • Guspaz - Thursday, June 13, 2013 - link

    Is it just me, or is the price of this thing not listed anywhere in the article? Benchmarks are meaningless without a price to give them context.
  • DigitalFreak - Thursday, June 13, 2013 - link

    The 1812+ runs around $999, and the 1813+ is $1099.
  • SirGCal - Friday, June 14, 2013 - link

    To me, that's just too much. I can build the core box itself, FAR more powerful, albeit a bit larger, BUT capable of far more than just sitting there. It can serve as a Subsonic or Plex server, a media streamer, a media extender server for an Xbox, etc. It can even do its own data workloads (Handbrake, etc., while running OS X or Windows or even Linux. Anything I choose.). It doesn't have to be a dummy box. And I have two of these running 24/7, and they use VERY little power while doing file server duties. If I load up the CPU to do other tasks, then they'll obviously draw a bit more but...

    Anyhow, I can spec, right now, an A6-5400K (3.6 GHz dual-core APU) with 16 GB of DDR3-1866 CAS 10, a Seasonic 620 modular PSU, a Fractal Design insulated (silent) tower that holds 8 hot-swappable bays and a boot drive, an A75 USB 3.0 board, AND the Areca ARC-1223 6 Gb/s RAID 6 card (SAS cards break out to control SATA drives, for those wondering about that...), all for $944.94 right now. And that comes with one gigabit NIC already. Add more if ya want, or more of whatever... That's the point. Plus these cases are dead silent. I even have the one with a window and you can't hear anything from them. They are a bit more expensive, and you could save $50 going with cheaper options, but I was being frivolous. Here's a screenshot of one I just did as a core for a small one at work: http://www.sirgcal.com/images/misc/raid6coreexampl...

    * The whole point is: I don't understand these 'boxes'. For one, they use non-standard RAID (Synology's own hybrid RAID), which means that if the unit fails, you can't attach the disks to a regular RAID controller to retrieve your data. At least that's how they used to be. Perhaps not anymore.

    * But their price is SO high it doesn't make sense. You can build one yourself, with better capabilities all the way around, cheaper. And if you ONLY want RAID 5, you can knock about $300 off the price tag; the RAID 6 card is the bulk of that cost... but honestly, IMHO, it's necessary with drives of those sizes and that many of them in the array...

    If you actually have no clue how to build a PC, perhaps... But find your neighborhood nerd to help ya. Still, without RAID 6, these just don't serve a purpose. Get two smaller arrays instead: 4 drives or fewer for RAID 5. Can these even do hot spares? At least that would be something... a live drive waiting to take over in case of a failure. Not quite RAID 6, but sorta kinda a bit more helpful, at least for safety. The review didn't mention it.
