Testbed Setup and Testing Methodology

The testbed for the Synology DS1812+ review is the SMB / SOHO NAS testbed we built last year. Performance evaluation is done under both single- and multiple-client scenarios. In all cases, the two network ports are teamed with 802.3ad dynamic link aggregation. Our rackmount NAS reviews typically use SSDs, but for desktop form factor units (typically those based on ARM / PowerPC SoCs or the Atom series) we use hard drives. Even though our review unit came bundled with 1 TB Seagate drives, we chose to go with the Western Digital RE (WD4000FYYZ) drives used in our other NAS reviews. This keeps benchmark figures consistent across different NAS units.
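One practical consequence of 802.3ad teaming is worth spelling out: traffic is distributed per flow, not per packet, so a single client connection never exceeds one physical link's speed. The toy sketch below (our own illustration, not Synology's or any switch vendor's actual hash) shows why a single flow always lands on the same link while multiple clients can spread across both.

```python
# Toy illustration (not a real LACP implementation): 802.3ad distributes
# traffic per flow. A hash of the flow's addresses selects one physical
# link, so a single client connection is capped at one 1 GbE port's
# speed even with two ports teamed.

def choose_link(src_mac: str, dst_mac: str, num_links: int = 2) -> int:
    """Pick a link index from a simple XOR hash of the last MAC octets."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % num_links

# One client <-> NAS flow always lands on the same link...
flow_a = choose_link("00:11:22:33:44:01", "00:11:22:33:44:10")
assert flow_a == choose_link("00:11:22:33:44:01", "00:11:22:33:44:10")

# ...while different clients may be spread across links, which is why
# only aggregate (multi-client) throughput can exceed 1 Gbps.
links = {choose_link(f"00:11:22:33:44:{i:02x}", "00:11:22:33:44:10")
         for i in range(8)}
print(sorted(links))  # both links see traffic across 8 clients
```

Real switches and NICs offer several hash policies (MAC, IP, or port based), but all share this per-flow property.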

AnandTech NAS Testbed Configuration
Motherboard Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU 2 x Intel Xeon E5-2630L
Coolers 2 x Dynatron R17
Memory G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30
OS Drive OCZ Technology Vertex 4 128GB
Secondary Drive OCZ Technology Vertex 4 128GB
Tertiary Drive OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND)
Other Drives 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS)
Network Cards 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis SilverStoneTek Raven RV03
PSU SilverStoneTek Strider Plus Gold Evolution 850W
OS Windows Server 2008 R2
Network Switch Netgear ProSafe GSM7352S-200


In order to evaluate single client performance, we booted up one VM in our testbed and ran Intel NASPT on the CIFS share in the NAS. iSCSI support was evaluated in a similar manner, with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOMeter benchmarks in Linux. To evaluate multiple client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how performance changed with the number of clients and the access pattern. Without further digression, let us move on to the performance numbers.
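The multi-client methodology boils down to running the same workload from N concurrent clients and watching the aggregate. The sketch below illustrates the idea with local threads and a temp file instead of IOMeter in VMs against a CIFS share; the function names and the 1 MiB sequential access pattern are our own assumptions, not the testbed's configuration.

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch of the multi-client measurement idea: each "client"
# reads the same shared file and reports its throughput, and we watch
# how the aggregate changes with client count. (The real testbed uses
# IOMeter in separate VMs against a CIFS share; this local-threads
# version is only an illustration.)

CHUNK = 1 << 20  # 1 MiB requests, similar to a sequential access pattern

def client_read(path: str) -> float:
    """Read the whole file in CHUNK-sized requests; return MB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

def run_benchmark(path: str, num_clients: int) -> list[float]:
    """Run num_clients concurrent readers; return per-client MB/s."""
    with ThreadPoolExecutor(max_workers=num_clients) as pool:
        return list(pool.map(client_read, [path] * num_clients))

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(8 * CHUNK))  # 8 MiB test file

for n in (1, 2, 4):
    rates = run_benchmark(tmp.name, n)
    print(f"{n} client(s): {sum(rates):.0f} MB/s aggregate")
os.unlink(tmp.name)
```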

Comments

  • MadMan007 - Thursday, June 13, 2013 - link

    AES-NI support would be a great addition for a real SMB NAS...even SMBs should be concerned with security. What are the chances NAS manufacturers will come out with devices based on AMD Kabini? AMD does a lot less feature segmentation in their chips, and Kabini has AES-NI, so it seems like a better solution until Intel matches that with Atoms (low TDP Haswells will be too expensive).
  • JDG1980 - Thursday, June 13, 2013 - link

    The two features I look for in off-the-shelf NASes are ECC RAM and the ZFS file system. Unfortunately, it seems that none so far has seen fit to include them.
  • pwr4wrd - Friday, June 14, 2013 - link

    I completely agree with you. Even for home/SOHO use, what good is a NAS unit if you don't have data integrity?
  • Samus - Saturday, June 15, 2013 - link

    This will change with the Atom family supporting ECC. I don't know of any real advantages ZFS has over ext4 for home/SOHO use.
  • phoenix_rizzen - Monday, June 17, 2013 - link

    Instantaneous near-unlimited number of snapshots, end-to-end checksums, integrated raid features without requiring RAID controllers, integrated volume management, storage pooling, etc, etc, etc.

    Once you get beyond one hard drive, using ZFS (or another pooled storage system) is heavenly. There's just no comparison to ext*+LVM.
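[Editor's note: the end-to-end checksum feature mentioned above is worth a concrete illustration. The toy class below is our own sketch of the concept, not ZFS code: a checksum is stored with each block on write and re-verified on every read, so silent corruption is detected rather than silently returned, which plain ext4 cannot do.]

```python
import hashlib

# Toy illustration (not ZFS code) of the end-to-end checksum idea: a
# checksum is stored alongside each block on write and re-verified on
# every read, so silent corruption ("bit rot") is detected instead of
# being handed back to the application.

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}  # block_id -> (data, checksum)

    def write(self, block_id: int, data: bytes) -> None:
        self.blocks[block_id] = (data, hashlib.sha256(data).digest())

    def read(self, block_id: int) -> bytes:
        data, checksum = self.blocks[block_id]
        if hashlib.sha256(data).digest() != checksum:
            # ZFS would self-heal from a redundant copy here; this toy
            # version just reports the corruption.
            raise IOError(f"checksum mismatch in block {block_id}")
        return data

store = ChecksummedStore()
store.write(0, b"important data")
assert store.read(0) == b"important data"

# Simulate on-disk corruption that a non-checksumming filesystem
# would never notice.
data, checksum = store.blocks[0]
store.blocks[0] = (b"importOnt data", checksum)
try:
    store.read(0)
except IOError as e:
    print(e)  # checksum mismatch in block 0
```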
  • Jeff7181 - Thursday, June 13, 2013 - link

    I wonder how multi-user performance would scale if it had a 10 Gbps uplink to a 1 Gbps access switch. Maybe I'm out of touch with arrays of this size, but those numbers seem low for an 8-disk array. Maybe it has to do with the Atom CPU? Maybe the RAID controller has no cache? Honestly I'd be highly disappointed if I spent $1000 on the chassis and another $1000-2000 on hard drives and could barely reach 1 Gbps under the best circumstances.
  • DigitalFreak - Thursday, June 13, 2013 - link

    There is no RAID controller. The SATA ports are either off of the Intel embedded ports, or more likely off of a 3rd party controller.
  • SirGCal - Thursday, June 13, 2013 - link

    Try again: they used 8 x WD4000FYYZ, which run $410 each if you get a deal on them, and upwards of $500 at a brick-and-mortar store. At $400 each, that's $3,200 just for the enterprise-class drives used in this test. Most people aren't going to use them.
  • Gigaplex - Thursday, June 13, 2013 - link

    That just backs up their point even more. Spending $1k-2k instead isn't likely to get you faster drives.
  • SirGCal - Friday, June 14, 2013 - link

    No, you missed my other point... An 8-drive RAID 5 is a failure waiting to happen, enterprise class or not. When a drive does fail, you'll have to rebuild it. During those 38+ hours, that is the MOST likely time (when all the drives are old, worn, and getting their tails kicked in doing massively hard reads) that another one is going to fail. Then you lose the entire array and all of your data. That was the point I was trying to make.
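[Editor's note: the rebuild risk described above can be estimated with spec-sheet math. The figures below use the commonly quoted unrecoverable read error (URE) ratings of 1 per 10^14 bits for consumer drives and 1 per 10^15 for enterprise drives such as the WD RE; real-world rates differ, so treat this as a back-of-envelope sketch, not measured data.]

```python
# Back-of-envelope estimate (spec-sheet math, not measured data) of the
# rebuild risk: rebuilding a failed drive in an 8 x 4 TB RAID 5 forces
# a full read of the 7 surviving drives, and a single unrecoverable
# read error (URE) during that read kills the rebuild.

DRIVE_TB = 4
SURVIVORS = 7
bits_read = SURVIVORS * DRIVE_TB * 1e12 * 8  # total bits read in rebuild

for ure_rate, label in [(1e-14, "consumer spec (1 per 1e14 bits)"),
                        (1e-15, "enterprise spec (1 per 1e15 bits)")]:
    # P(at least one URE) = 1 - (1 - rate)^bits_read,
    # treating each bit read as an independent trial.
    p_fail = 1 - (1 - ure_rate) ** bits_read
    print(f"{label}: {p_fail:.0%} chance of a URE during rebuild")
```

Even at the enterprise rating the chance of hitting a URE over a 28 TB rebuild read is non-trivial, which is why RAID 6 (or ZFS raidz2) is usually recommended at this array size.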
