Testbed Setup and Testing Methodology

Our rackmount NAS testbed uses the same infrastructure and methodology as the tower form factor units we have reviewed. Performance is evaluated under both single-client and multi-client scenarios. In the multi-client scenario, we run tests with all available network ports teamed using 802.3ad dynamic link aggregation. For these tests, we use the SMB / SOHO NAS testbed described earlier. The Synology RS10613xs+ is the first 10 GbE-equipped NAS we have evaluated, and special mention must be made of the Netgear ProSafe GSM7352S-200 in our setup: it provided the infrastructure necessary to properly evaluate the unit's capabilities.
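One detail worth keeping in mind when reading the multi-client numbers: 802.3ad teaming balances traffic per flow, not per packet. Each client conversation is hashed onto a single physical link, so one client cannot exceed a single port's throughput, while many concurrent clients can saturate the whole team. The Python sketch below illustrates the idea; the XOR-over-MAC-bytes hash and the addresses are assumptions for illustration only, as real switches offer a variety of layer 2/3/4 hash policies.

```python
# Illustrative sketch of 802.3ad (LACP) flow distribution: each flow is
# hashed onto exactly one physical link, so per-client throughput is capped
# at a single link while the aggregate can use all of them.
# The XOR-over-MAC-bytes hash is an assumption; real hash inputs vary by
# switch vendor and configured mode.
from collections import Counter

def lacp_link(src_mac: str, dst_mac: str, num_links: int = 4) -> int:
    """Pick the physical link for a flow: hash(src, dst) modulo link count."""
    src = bytes(int(b, 16) for b in src_mac.split(":"))
    dst = bytes(int(b, 16) for b in dst_mac.split(":"))
    h = 0
    for s, d in zip(src, dst):
        h ^= s ^ d
    return h % num_links

nas_mac = "00:11:32:aa:bb:01"                                 # hypothetical team MAC
client_macs = [f"00:1b:21:00:00:{i:02x}" for i in range(25)]  # 25 hypothetical test VMs

print(Counter(lacp_link(c, nas_mac) for c in client_macs))
# Each client maps to one link; only the aggregate spreads across the team.
```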

AnandTech NAS Testbed Configuration
Motherboard        Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU                2 x Intel Xeon E5-2630L
Coolers            2 x Dynatron R17
Memory             G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8 x 8GB) CAS 10-10-10-30
OS Drive           OCZ Technology Vertex 4 128GB
Secondary Drive    OCZ Technology Vertex 4 128GB
Tertiary Drive     OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND)
Other Drives       12 x OCZ Technology Vertex 4 64GB (offline in the host OS)
Network Cards      6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis            SilverStoneTek Raven RV03
PSU                SilverStoneTek Strider Plus Gold Evolution 850W
OS                 Windows Server 2008 R2
Network Switch     Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our NAS testbed and rackmount NAS evaluation:

Supermicro was gracious enough to loan us their mini rack (CSE-RACK14U). An interesting aspect of the mini rack is that its height matches that of a standard workplace desk (30.64"). This allowed us to use our existing NAS testbed (tower form factor) and power measurement unit easily alongside the rackmount components (the NAS under test, the Netgear ProSafe switch, etc.).

We have been using Western Digital 4TB RE (WD4000FYYZ) disks as test hard drives for NAS reviews. As we saw in previous reviews, RAID rebuilds with such drives take days to complete. With a large number of bays, filling the unit with hard disks would have been very cumbersome; in addition, hard disks simply don't bring out the performance potential of rackmount units. Therefore, the Synology RS10613xs+ was evaluated with a RAID-5 volume built from twelve OCZ Vector 120 GB SSDs. Tests were also run using the Intel SSD 520 240 GB drives that Synology supplied along with the review unit; however, to keep benchmark results consistent across different NAS units, the results we present are those obtained with the OCZ Vector SSDs.
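As a quick illustration of why SSDs make a twelve-bay RAID-5 evaluation practical, the capacity and best-case rebuild arithmetic works out as below. This is a minimal sketch: the sustained transfer rates are assumed round figures, not measured values, and real rebuilds under load run far slower (hence the multi-day rebuilds seen with the 4TB drives).

```python
# RAID 5 capacity and best-case rebuild-time arithmetic for a 12-bay array.
# Throughput figures are assumed round numbers for illustration, not
# measured values; real rebuilds run far slower under load.

def raid5(drives: int, drive_gb: float, rate_mb_s: float):
    usable_gb = (drives - 1) * drive_gb             # one drive's worth goes to parity
    rebuild_h = drive_gb * 1000 / rate_mb_s / 3600  # time to rewrite one member, best case
    return usable_gb, rebuild_h

for name, gb, rate in (
    ("12 x OCZ Vector 120 GB SSD", 120, 400),
    ("12 x WD RE 4 TB HDD", 4000, 120),
):
    usable, hours = raid5(12, gb, rate)
    print(f"{name}: {usable / 1000:.2f} TB usable, >= {hours:.2f} h to rewrite one member")
```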

In order to evaluate single client performance, we booted up one VM in our testbed and ran Intel NASPT on a CIFS share on the NAS. iSCSI support was evaluated in a similar manner, with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOMeter benchmarks in Linux. To evaluate multiple client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how performance varied with the number of clients and the access pattern; a minimal sketch of how such per-client figures roll up follows.
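The roll-up below is a sketch under assumed names: the directory layout (results/clients_NN/vmMM_<pattern>.csv) and the "MBps" column are illustrative inventions, as IOMeter's actual result exports are shaped differently.

```python
# Sketch: aggregate per-VM IOMeter throughput into per-client-count totals.
# The directory layout (results/clients_NN/vmMM_<pattern>.csv) and the
# "MBps" column are assumptions for illustration; real IOMeter exports differ.
import csv
from collections import defaultdict
from pathlib import Path

totals = defaultdict(float)  # (client count, access pattern) -> aggregate MB/s

for f in Path("results").glob("clients_*/vm*_*.csv"):
    n_clients = int(f.parent.name.split("_")[1])   # e.g. clients_05 -> 5
    pattern = f.stem.split("_", 1)[1]              # e.g. vm03_sequential_read
    with f.open() as fh:
        for row in csv.DictReader(fh):
            totals[(n_clients, pattern)] += float(row["MBps"])

for (n, pattern), mbps in sorted(totals.items()):
    print(f"{pattern:>20}  {n:2d} clients  {mbps:8.1f} MB/s aggregate")
```

Without further digression, let us move on to the performance numbers.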

51 Comments

  • Gigaplex - Saturday, December 28, 2013 - link

    No, you recover from backup. RAID increases availability in the enterprise; it is not a substitute for a backup.
  • P_Dub_S - Thursday, December 26, 2013 - link

    Please read that 3rd link and tell me if RAID 5 makes any sense with today's drive sizes and costs.
  • Gunbuster - Thursday, December 26, 2013 - link

    Re: that 3rd link. Who calls it resilvering? Sounds like what a crusty old Unix sysadmin with no current hardware knowledge would call it.
  • P_Dub_S - Thursday, December 26, 2013 - link

    Whatever the name, it doesn't really matter; it's the numbers that count, and at today's multi-TB drive sizes RAID 5 makes zero sense.
  • Kheb - Saturday, December 28, 2013 - link

    No it doesn't. Not at all. First, you are taking into account only huge arrays used to store data, not to run applications (so basically only mechanical SATA). Second, you are completely ignoring costs (RAID 5 or RAID 6 vs RAID 10). Third, you are assuming the RAID 5 itself is not backed up or covered by some sort of software/hardware redundancy or tiering at lower levels (see SANs).

    So while I can agree that THEORETICALLY having RAID 10 everywhere would indeed be safer, the costs (HDDs + enclosures + controllers + backplanes) mean it makes, and this time for real, zero sense.
  • Ammaross - Thursday, December 26, 2013 - link

    "Resilvering" is the ZFS term for rebuilding data on a volume. It's very much a current term still, but it does give us an insight into the current bias of the author, who apparently favors ZFS for his storage until something he proposes as better is golden.
  • hydromike - Thursday, December 26, 2013 - link

    How many times have you had to rebuild a RAID 5 in your lifetime? I have over 100 times, across more than 10 major HARDWARE RAID vendors.

    "And when you go to rebuild that huge RAID 5 array and another disk fails your screwed."

    The other drive failing is a very small possibility in the enterprise environment I was talking about, because of enterprise-grade drives vs consumer ones. That is why most take the RAID offline for a much faster rebuild. Besides, during an online rebuild the RAID is still functional, just degraded.

    Also, my point is that lots of us still have hardware that is 2-5 years old and still just working. The newest arrays I have set up of late are 20 to 150 TB in size, and we went with FreeNAS with ZFS, which puts all others to shame. NetApp storage appliance rebuild times are quite fast: 6-12 hours for 40TB LUNs. It all depends upon the redundancy that you need. Saying that RAID 5 needs to die is asinine. What if the data you are storing is all available in the public domain, but having a local copy speeds up data access? Rebuilding a degraded LUN is faster than retrieving all of the data from the public domain again. There are many use cases for each RAID level; just because one level does not fit YOUR uses does not mean it needs to die!
  • P_Dub_S - Thursday, December 26, 2013 - link

    So if you were to buy this NAS for a new implementation, would you even consider throwing 10-12 disks in it and building a RAID 5 array? Just asking. Even in your own post you state that you use FreeNAS with ZFS for your new arrays. RAID 5 is the dodo here; let it go extinct.
  • Ammaross - Thursday, December 26, 2013 - link

    For all you know, he's running ZFS using raidz1 (essentially RAID 5). Also, if you say RAID 5 needs to die, one must then assume you also think RAID 0 is beyond worthless, since it has NO redundancy? Obviously, you can (hopefully) cite the use cases for RAID 0. Your bias just prevents you from seeing the usefulness of RAID 5.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    It does happen though. I've had to rebuild 2 servers this year alone because of multiple drive failures. One server had 3 drives fail, but that's because of neglect. Us engineers only have so much time, especially with the introduction of lean manufacturing.

    RAID 5 + a global spare is usually a pretty safe bet if it's a critical app server. Otherwise RAID 5 is perfectly fine.
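The "numbers" being argued over above are the odds of hitting an unrecoverable read error (URE) while re-reading every surviving drive during a RAID 5 rebuild. A minimal sketch of that arithmetic in Python, using the spec-sheet URE rates commonly quoted (about 1 in 10^14 bits for consumer drives, 1 in 10^15 for enterprise drives); the drive counts and sizes mirror a hypothetical 12-bay array of 4 TB disks versus the SSD configuration used in this review:

```python
# Probability of at least one unrecoverable read error (URE) while
# re-reading an entire degraded array during a RAID 5 rebuild.
# URE rates are typical spec-sheet figures: ~1e-14 per bit (consumer),
# ~1e-15 per bit (enterprise); actual rates vary by drive model.

def rebuild_ure_probability(data_read_tb: float, ure_per_bit: float) -> float:
    bits = data_read_tb * 1e12 * 8            # vendor (decimal) TB -> bits
    return 1 - (1 - ure_per_bit) ** bits

# 12-bay array of 4 TB drives: a rebuild re-reads the 11 surviving members.
surviving_tb = 11 * 4
for label, rate in (("consumer 1e-14", 1e-14), ("enterprise 1e-15", 1e-15)):
    p = rebuild_ure_probability(surviving_tb, rate)
    print(f"{label}: ~{p:.0%} chance of a URE during rebuild")

# vs. twelve 120 GB SSDs (the review configuration): far less data to re-read.
print(f"SSD array: ~{rebuild_ure_probability(11 * 0.12, 1e-15):.1%}")
```

Under these assumptions, a full 12-bay rebuild on 4 TB consumer drives is more likely than not to hit a URE, while the small SSD array used in this review re-reads so little data that the risk is negligible.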
