Testbed Setup and Testing Methodology

The testbed for the Synology DS1812+ is the SMB / SOHO NAS testbed we built last year. Performance is evaluated under both single and multiple client scenarios. In all cases, the two network ports are teamed with 802.3ad dynamic link aggregation. Our rackmount NAS reviews typically use SSDs, but for desktop form factor units (typically those based on ARM / PowerPC SoCs or the Atom series) we use hard drives. Even though our review unit came bundled with 1 TB Seagate drives, we chose to go with the Western Digital RE (WD4000FYYZ) drives that have been used in our other NAS reviews. This keeps benchmark figures consistent across different NAS units.

AnandTech NAS Testbed Configuration
Motherboard: Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8 x 8 GB) CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128 GB
Secondary Drive: OCZ Technology Vertex 4 128 GB
Tertiary Drive: OCZ RevoDrive Hybrid (1 TB HDD + 100 GB NAND)
Other Drives: 12 x OCZ Technology Vertex 4 64 GB (offline in the host OS)
Network Cards: 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our NAS testbed:

In order to evaluate single client performance, we booted up one VM in our testbed and ran Intel NASPT on the CIFS share in the NAS. iSCSI support was evaluated in a similar manner, with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOMeter benchmarks in Linux. To evaluate multiple client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how performance changed with the number of clients and the access pattern. Without further digression, let us move on to the performance numbers.
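The multi-client idea above can be illustrated with a small sketch: several workers stream the same file concurrently while we record how aggregate throughput changes with the worker count. This is purely a hypothetical stand-in for the IOMeter harness, not the actual test tool; the file sizes, chunk size, and function names are all made up for illustration.

```python
# Hypothetical sketch of multi-client throughput measurement: several
# "clients" (threads) stream a shared file concurrently, and we report
# aggregate MB/s for each client count. NOT the actual test harness.
import os
import tempfile
import threading
import time

CHUNK = 1 << 20          # 1 MiB reads, roughly a sequential pattern
FILE_SIZE = 64 * CHUNK   # small stand-in for a test file on a share

def make_test_file() -> str:
    # Create a throwaway file filled with random data.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(FILE_SIZE))
    return path

def client(path: str, totals: list, idx: int) -> None:
    # Each "client" streams the whole file once, like one VM's worker.
    read = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            read += len(chunk)
    totals[idx] = read

def aggregate_mb_s(path: str, n_clients: int) -> float:
    # Run n_clients concurrent readers and return combined throughput.
    totals = [0] * n_clients
    threads = [threading.Thread(target=client, args=(path, totals, i))
               for i in range(n_clients)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return sum(totals) / elapsed / 1e6

if __name__ == "__main__":
    path = make_test_file()
    try:
        for n in (1, 2, 4):
            print(f"{n} clients: {aggregate_mb_s(path, n):.0f} MB/s")
    finally:
        os.remove(path)
```

Against a real NAS the file would live on the mounted CIFS share and be far larger than the client-side cache, so the numbers here say nothing about any particular unit.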


93 Comments


  • SirGCal - Friday, June 14, 2013 - link

    Ohh, and you could do it with ZFS, I just like RAID and am more familiar with it than with ZFS
  • SirGCal - Friday, June 14, 2013 - link

    Or you're talking ZFS compression over RAID? I was thinking about something completely different... haven't slept in 36 hours... Twins teething... fun... sorry. But that should work fine on any of these RAID cards.
  • Peroxyde - Friday, June 14, 2013 - link

    @SirGCal Thank you for all the info you gave. Coincidentally, I have decided to go with the Fractal Define R4 for silence, exactly as you stated. Regarding ZFS, I think this article might be of your interest, in particular the section "What ZFS Gives You that Controllers Can't"

    http://constantin.glez.de/blog/2010/01/home-server...
  • SirGCal - Friday, June 14, 2013 - link

    I have two of those cases myself. Three in the office. It's so quiet. Love it. Mine has windows too. Still very silent and cool with 8 drives running 24/7 (add more fans).

    As for RAID-Z, that article only compares it to RAID 5. While I agree it's better in that case (much is), they don't compare it to RAID 6, where I think its performance and failover won't keep up. But I'm not familiar with this particular method, so I'd have to play with it and run comparisons to know for sure. To be honest, I am not a RAID 5 fan at all now that arrays have grown beyond the 4 TB range in overall size. In those cases, this would likely be my choice.
  • JDG1980 - Friday, June 14, 2013 - link

    The appropriate comparison would be RAID-Z vs RAID-5, and RAID-Z2 vs RAID-6. In each case, ZFS wins if you're dedicating the same amount of space to parity data.
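The space side of that comparison is easy to sketch. The snippet below is a rough back-of-the-envelope calculation only: it ignores ZFS metadata, padding, and variable stripe width, and assumes equal-size disks; the function name and the 4 TB disk size are illustrative assumptions, not figures from the discussion.

```python
# Back-of-the-envelope capacity math for the RAID-5/RAID-Z vs
# RAID-6/RAID-Z2 comparison. Simplified: ignores ZFS metadata and
# padding overhead, and assumes equal-size disks.
def usable_tb(n_disks: int, parity: int, disk_tb: float = 4.0) -> float:
    """Usable capacity when `parity` disks' worth of space holds parity."""
    if n_disks <= parity:
        raise ValueError("need more disks than parity devices")
    return (n_disks - parity) * disk_tb

# An 8-bay unit populated with 4 TB drives:
print(usable_tb(8, 1))  # RAID 5 / RAID-Z  -> 28.0 TB usable
print(usable_tb(8, 2))  # RAID 6 / RAID-Z2 -> 24.0 TB usable
print(usable_tb(8, 3))  # RAID-Z3          -> 20.0 TB usable
```

With the parity budget held equal like this, the raw capacity is the same either way; the ZFS advantages being discussed (checksumming, self-healing, no write hole) are on top of that.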
  • SirGCal - Sunday, June 16, 2013 - link

    I'll check out RAID-Z2. My only immediate pause would be moving it to another RAID card after a card failure... That is something worth considering if you run a large array. But other than that, when I get ready to build this next array, I will run some tests if possible.
  • danbi - Monday, June 17, 2013 - link

    You could also look at raidz3 which is triple parity.

    ZFS works fine for a small number of disks, but it really shines with larger numbers. Avoid "RAID controllers" as much as possible -- a "simple" HBA is a way better choice, performance-wise.
  • Hakker9nl - Friday, June 14, 2013 - link

    God, I'm glad I built a ZFS server. This thing is expensive, slow, and more power hungry than my system.
    For reference, I built mine for a third of the price. It reaches 300 MB/s+ speeds internally, is limited externally by the 1 Gbit port, and uses 60 watts when resilvering.
  • SirGCal - Friday, June 14, 2013 - link

    EXACTLY my point above. Thanks for helping me illustrate it. I tend to be long winded trying to explain things completely...
  • t-rexky - Friday, June 14, 2013 - link

    A word of caution for Mac users. I researched a NAS "to death" before purchasing the DS1512+ about six months ago. I have a large number of computer systems including vintage Unix based machines, OS X, Linux and Windows. SAMBA and NFS appear to work reasonably well with the Synology DSM, but there is a fundamental issue with AFP support that remains uncorrected in the latest DSM 4.2 build - the support for Unix style file permissions is broken and DSM overrides the OS X permissions with default values.

    Synology did improve the behaviour in DSM 4.2 and at least the execute bit can now be correctly set on the remote mounts, but the read and write permissions still do not work. I was extremely disappointed to find such a fundamental issue with a system that is advertised as fully OS X compatible and also widely recommended for Mac users.

    For anyone interested in more details, here is the full story: http://forum.synology.com/enu/viewtopic.php?f=64&a...
