Multi-Client Performance - CIFS

We put the Synology DS1812+ through some IOMeter tests with a CIFS share accessed by up to 25 VMs simultaneously. The following four graphs show the total available bandwidth and the average response time while the unit was subjected to different types of workloads through IOMeter. IOMeter also reports various other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures. Some of the random access benchmarks don't fit in the graphs below; the scales were left unaltered to make comparison easier against the other NAS units (which do fit in the scale). Readers interested in the actual values can refer to our evaluation metrics table available here.
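For readers reproducing this kind of test, the per-client numbers roll up in a straightforward way: bandwidth figures are summed across the clients, while response times are averaged. A minimal Python sketch of the aggregation, assuming each VM's run has been exported to its own CSV (the file layout and column names here are illustrative assumptions, not IOMeter's actual export format):

    # Roll up per-VM IOMeter results into the two graphed metrics.
    # File layout and column names are illustrative assumptions.
    import csv
    import glob

    total_mbps = 0.0
    response_times = []

    for path in glob.glob("results/vm_*.csv"):  # one exported CSV per VM
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                total_mbps += float(row["MBps"])
                response_times.append(float(row["Avg_Response_ms"]))

    print(f"Total bandwidth: {total_mbps:.1f} MBps")
    print(f"Average response time: {sum(response_times) / len(response_times):.2f} ms")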

Bandwidth and response times can be compared against NAS units from other vendors based on the same platform (Atom D2700). One thing to keep in mind when analyzing the results above is that the LaCie 5big Pro is a 5-bay unit and the Thecus N4800 is a 4-bay unit, while the DS1812+ is an 8-bay unit. Sequential performance doesn't seem to reach that of the competitors, but the DS1812+ is stellar in the real-life tests and random accesses (again, remember that the units have a different number of hard drives being accessed during the test).
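Since the three units have different drive counts, one quick sanity check when reading the graphs is to normalize throughput per drive bay. A hypothetical sketch (the MBps values are placeholders, not our measured results):

    # Normalize throughput by drive count so units with different bay
    # counts can be compared on a per-drive basis. Values are placeholders.
    units = {
        "Synology DS1812+": {"drives": 8, "mbps": 100.0},
        "LaCie 5big Pro":   {"drives": 5, "mbps": 100.0},
        "Thecus N4800":     {"drives": 4, "mbps": 100.0},
    }

    for name, u in units.items():
        print(f"{name}: {u['mbps'] / u['drives']:.1f} MBps per drive")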

Comments

  • MadMan007 - Thursday, June 13, 2013 - link

    AES-NI support would be a great addition for a real SMB NAS... even SMBs should be concerned with security. What are the chances NAS manufacturers will come out with devices based on AMD Kabini? AMD does a lot less feature segmentation in their chips, and Kabini has AES-NI, so it seems like a better solution until Intel matches that with Atoms (low-TDP Haswells will be too expensive).
  • JDG1980 - Thursday, June 13, 2013 - link

    The two features I look for in off-the-shelf NASes are ECC RAM and the ZFS file system. Unfortunately, it seems that none so far has seen fit to include them.
  • pwr4wrd - Friday, June 14, 2013 - link

    I completely agree with you. Even for home/SOHO use, what good is a NAS unit if you don't have data integrity?
  • Samus - Saturday, June 15, 2013 - link

    This will change with the Atom family supporting ECC. I don't know of any real advantages ZFS has over ext4 for home/SOHO use.
  • phoenix_rizzen - Monday, June 17, 2013 - link

    Instantaneous, near-unlimited snapshots, end-to-end checksums, integrated RAID features without requiring RAID controllers, integrated volume management, storage pooling, etc., etc.

    Once you get beyond one hard drive, using ZFS (or another pooled storage system) is heavenly. There's just no comparison to ext*+LVM.
  • Jeff7181 - Thursday, June 13, 2013 - link

    I wonder how multi-user performance would scale if it had a 10 Gbps uplink to a 1 Gbps access switch. Maybe I'm out of touch with arrays of this size, but those numbers seem low for an 8-disk array. Maybe it has to do with the Atom CPU? Maybe the RAID controller has no cache? Honestly I'd be highly disappointed if I spent $1000 on the chassis and another $1000-2000 on hard drives and could barely reach 1 Gbps under the best circumstances.
  • DigitalFreak - Thursday, June 13, 2013 - link

    There is no RAID controller. The SATA ports hang either off the Intel chipset's embedded ports or, more likely, off a third-party controller.
  • SirGCal - Thursday, June 13, 2013 - link

    Try again. They used 8x WD4000FYYZ, which run $410 each... if you get a deal on them. Upwards of $500 if you go to a brick-and-mortar store. At $400 each, that's $3200 just for the enterprise-class drives used in this test. Most people aren't going to use them.
  • Gigaplex - Thursday, June 13, 2013 - link

    That just backs up their point even more. Spending $1k-2k instead isn't likely to get you faster drives.
  • SirGCal - Friday, June 14, 2013 - link

    No, you missed my other point... An 8-drive RAID 5 is a failure waiting to happen, enterprise class or not. When a drive does fail, you'll have to rebuild the array. Those 38+ hours are the MOST likely time for another drive to fail (it's also when all the drives are old, worn, and getting their tails kicked in doing massively hard reads). Then you lose the entire array and all of your data. That was the point I was trying to make.
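A few of the points raised in the comments are easy to make concrete. On MadMan007's AES-NI point: on Linux, hardware AES support shows up as the aes flag in /proc/cpuinfo, so checking whether a given box has it takes a few lines. A minimal sketch:

    # Check whether the host CPU advertises AES-NI (Linux only).
    # Atom D2700-class NAS CPUs will not list the "aes" flag.
    with open("/proc/cpuinfo") as f:
        flags = set()
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())

    print("AES-NI supported" if "aes" in flags else "AES-NI not supported")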
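On the ZFS sub-thread: the end-to-end checksum argument is easiest to see in miniature. The toy sketch below illustrates the concept only; ZFS does this per block, inside the filesystem, with no application involvement:

    # Toy illustration of end-to-end checksumming: store a checksum at
    # write time, verify it at read time, so silent corruption is detected
    # instead of being handed back to the application.
    import hashlib

    def write_block(data: bytes) -> tuple[bytes, str]:
        return data, hashlib.sha256(data).hexdigest()

    def read_block(data: bytes, checksum: str) -> bytes:
        if hashlib.sha256(data).hexdigest() != checksum:
            raise IOError("checksum mismatch: silent corruption detected")
        return data

    block, digest = write_block(b"important data")
    read_block(block, digest)                  # passes
    try:
        read_block(b"importent data", digest)  # simulate a bit flip on disk
    except IOError as e:
        print(e)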
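On Jeff7181's question about the numbers seeming low: for any single client, the gigabit link, not the 8-disk array, is the ceiling. Rough arithmetic (the ~90% efficiency factor is a rule of thumb for CIFS over TCP, not a measurement):

    # A single 1 Gbps link caps per-client throughput regardless of how
    # fast the array behind it is.
    raw_mbps = 1000 / 8                # 1 Gbps = 125 MBps raw
    practical = raw_mbps * 0.9         # rough CIFS/TCP overhead allowance
    print(f"Raw ceiling: {raw_mbps:.0f} MBps, practical CIFS: ~{practical:.0f} MBps")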
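Finally, SirGCal's rebuild-risk argument can be put into numbers with the usual unrecoverable-read-error (URE) model. A back-of-the-envelope sketch, assuming datasheet-style URE ratings and independent errors (real-world failure modes are messier):

    # Odds of hitting at least one URE while rebuilding an 8-drive RAID 5
    # of 4 TB disks: the rebuild must read all 7 surviving drives end to end.
    import math

    bits_to_read = 7 * 4e12 * 8        # 7 surviving drives x 4 TB x 8 bits

    for label, ure_rate in (("consumer (1 per 1e14 bits)", 1e14),
                            ("enterprise (1 per 1e15 bits)", 1e15)):
        # P(at least one URE) = 1 - (1 - 1/rate)^bits, via Poisson approximation
        p = -math.expm1(-bits_to_read / ure_rate)
        print(f"{label}: {p:.0%} chance of a URE during the rebuild")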
