Testbed Setup and Testing Methodology

Our rackmount NAS testbed uses the same infrastructure and methodology as the tower form factor units we have reviewed. Performance is evaluated under both single-client and multiple-client scenarios. In the multiple-client scenario, we run tests with all available network ports teamed using 802.3ad dynamic link aggregation. For these tests, we use the SMB / SOHO NAS testbed described earlier. Since this is the first 10 GbE-equipped NAS we have evaluated, special mention must be made of the Netgear ProSafe GSM7352S-200 in our setup: it provided the infrastructure necessary to properly evaluate the capabilities of the Synology RS10613xs+.

AnandTech NAS Testbed Configuration

Motherboard: Asus Z9PE-D8 WS (dual LGA 2011, SSI-EEB)
CPUs: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8 x 8GB), CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128GB
Secondary Drive: OCZ Technology Vertex 4 128GB
Tertiary Drive: OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND)
Other Drives: 12 x OCZ Technology Vertex 4 64GB (offline in the host OS)
Network Cards: 6 x Intel ESA I-340 quad-port GbE network adapters
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our NAS testbed and our rackmount NAS evaluation:

Supermicro was gracious enough to loan us their mini rack (CSE-RACK14U). An interesting aspect of the mini rack is that its height matches that of a standard workplace desk (30.64"). This allowed us to use our existing NAS testbed (tower form factor) and power measurement unit easily alongside the rackmount components (the NAS under test, the Netgear ProSafe switch, etc.).

We have been using Western Digital 4TB RE (WD4000FYYZ) disks as test hard drives for NAS reviews. As we saw in our previous reviews, RAID rebuilds with these drives take days to complete. With the large number of bays in this unit, populating it with hard disks would have been very cumbersome. In addition, hard disks simply don't bring out the performance potential of rackmount units. Therefore, the Synology RS10613xs+ was evaluated with a RAID-5 volume built from twelve OCZ Vector 120 GB SSDs. Tests were also run using the Intel SSD 520 240 GB drives that Synology supplied along with the review unit. However, to keep benchmark results consistent across different NAS units, the results we present are those obtained using the OCZ Vector SSDs.
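
To put rough numbers on the rebuild-time point above, here is a quick back-of-the-envelope Python sketch; the sustained rebuild rates are assumptions for illustration, not figures measured in this review:

```python
# Best-case single-pass rebuild time = drive capacity / sustained rebuild rate.
# The rates below are illustrative assumptions; real rebuilds happen under load
# and include parity computation, which is what stretches HDD arrays into days.
def rebuild_hours(capacity_gb: float, rate_mb_s: float) -> float:
    return capacity_gb * 1000 / rate_mb_s / 3600

print(f"120GB SSD @ 300 MB/s: {rebuild_hours(120, 300):.2f} h")   # ~0.1 h
print(f"4TB HDD   @ 100 MB/s: {rebuild_hours(4000, 100):.1f} h")  # ~11 h best case
```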

In order to evaluate single client performance, we booted up one VM in our testbed and ran Intel NASPT on a CIFS share on the NAS. iSCSI support was evaluated in a similar manner with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOMeter benchmarks in Linux. To evaluate multiple client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how the performance changed with the number of clients and the access pattern. Without further digression, let us move on to the performance numbers.
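
For reference, the multiple-client numbers are essentially aggregates of per-VM IOMeter output. Below is a hypothetical Python sketch of that reduction; the directory layout, file naming and the "MBps" column are assumptions for the example, not AnandTech's actual tooling:

```python
import csv
from collections import defaultdict
from pathlib import Path

# Hypothetical layout: one summary CSV per VM per run, e.g.
# results/cifs_8clients_vm03.csv, holding the throughput IOMeter reported there.
def aggregate_throughput(results_dir: str) -> dict[int, float]:
    totals: dict[int, float] = defaultdict(float)
    for path in Path(results_dir).glob("cifs_*clients_vm*.csv"):
        clients = int(path.stem.split("_")[1].removesuffix("clients"))
        with path.open(newline="") as fh:
            row = next(csv.DictReader(fh))        # assume one summary row per file
            totals[clients] += float(row["MBps"])
    return dict(sorted(totals.items()))

if __name__ == "__main__":
    for clients, mbps in aggregate_throughput("results").items():
        print(f"{clients:2d} clients: {mbps:7.1f} MB/s aggregate")
```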

Comments

  • iAPX - Thursday, December 26, 2013 - link

    2000+ MB/s Ethernet interface (2x10Gb/s), 10 hard drives able to deliver at least 500MB/s EACH (grand total of 5000MB/s), Xeon quad-core CPU, and tested with ONE client, it delivers less than 120MB/s?!?
    That's what I expect from a USB 3 2.5" external hard drive, not a SAN at this price. It's totally deceptive!
  • Ammaross - Thursday, December 26, 2013 - link

    Actually, 120MB/s is almost exactly what I would expect from a fully saturated 1Gbps link (120MB/s * 8 bits = 960Mbps). Odd how that works out.
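
A quick back-of-the-envelope Python sketch of that gigabit arithmetic (standard Ethernet/TCP framing numbers, nothing measured from the review) shows why ~110-120MB/s is the practical ceiling for a single GbE link:

```python
# What a single, fully saturated 1 GbE link can carry.
LINK_BPS = 1_000_000_000                 # 1 Gb/s line rate
raw_mb_s = LINK_BPS / 8 / 1e6            # 125 MB/s before any overhead

# Per 1500-byte frame: 1460 B of TCP payload ride on 1538 B of wire time
# (Ethernet header + FCS + preamble + inter-frame gap + IP + TCP headers).
payload, on_wire = 1500 - 20 - 20, 1500 + 14 + 4 + 20
print(f"raw:       {raw_mb_s:.0f} MB/s")                      # 125 MB/s
print(f"realistic: {raw_mb_s * payload / on_wire:.0f} MB/s")  # ~119 MB/s
```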
  • xxsk8er101xx - Friday, December 27, 2013 - link

    That's because the PC only has a gigabit NIC. That's actually what you should expect.
  • BrentfromZulu - Thursday, December 26, 2013 - link

    For the few who know, I am the Brent that brought up Raid 5 on the Mike Tech Show (saying how it is not the way to go in any case)

    Raid 10 is the performance king, Raid 1 is great for cheap redundancy, and Raid 10, or OBR10, should be what everyone uses in big sets. If you need all the disk capacity, use Raid 6 instead of Raid 5, because with Raid 5, if a drive fails during a rebuild, you lose everything. Raid 6 is better because you can lose a drive during the rebuild. Rebuilding is a scary process with Raid 5, but with Raid 1 or 10 it is literally copying data from 1 disk to another.

    Raid 1 and Raid 10 FTW!
  • xdrol - Thursday, December 26, 2013 - link

    From the drives' perspective, rebuilding a RAID 5 array is exactly the same as rebuilding a RAID 1 or 10 array: read the whole disk(s) (or, to be more exact, the sectors with data) once, and write the whole target disk once. It is only different for the controller. I fail to see why one is scarier than the other.

    If your drive fails while rebuilding a RAID 1 array, you are exactly as screwed. The only reason R5 is worse here is that you have n-1 disks unprotected while rebuilding, not just one, giving you approximately (i.e., negligibly smaller than) n-1 times the data loss chance.
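
A small illustrative Python sketch of that point (a toy model with an assumed per-disk failure probability, not data from the review): during a rebuild, RAID 1/10 only needs the one mirror partner to survive, while RAID 5 needs all n-1 remaining disks to survive:

```python
# Toy model: each surviving disk independently fails during the rebuild
# window with probability p. The values are assumptions for illustration.
def raid1_or_10_loss(p: float) -> float:
    return p                          # only the failed disk's mirror partner is critical

def raid5_loss(p: float, n: int) -> float:
    return 1 - (1 - p) ** (n - 1)     # any second failure among n-1 disks is fatal

p, n = 0.01, 12                       # 1% per-disk chance, 12-bay array
print(f"RAID 1/10: {raid1_or_10_loss(p):.2%}")   # 1.00%
print(f"RAID 5:    {raid5_loss(p, n):.2%}")      # ~10.47%, roughly (n-1) times higher
```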
  • BrentfromZulu - Friday, December 27, 2013 - link

    Rebuilding a Raid 5 requires reading data from all of the other disks, whereas Raid 10 requires reading data from just 1 other drive. Raid 1 rebuilds are not complex, nor are Raid 10 rebuilds. Raid 5/6 rebuilding is complex, requires activity from all the other disks, and because of that complexity has a higher chance of failure.
  • xxsk8er101xx - Friday, December 27, 2013 - link

    You take a big hit on performance with RAID 6.
  • Ajaxnz - Thursday, December 26, 2013 - link

    I've got one of these with 3 extra shelves of disks and 1TB of SSD cache.
    There's a limit of 3 shelves in a single volume, but 120TB (3 shelves of 12 4TB disks, raid5 on each shelf) with the SSD cache performs pretty well.
    For reference, NFS performance is substantially better than CIFS or iSCSI.

    It copes fine with the 150 virtual machines that support a 20-person development team.

    So much cheaper than a NetApp or similar - but I haven't had a chance to test the multi-NAS failover to see if you truly get enterprise-quality resilience.
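
For anyone checking the capacity figure, a quick Python sketch of the arithmetic (my own working, with the decimal-versus-binary conversion as one possible explanation for the round 120 number):

```python
# 3 shelves x 12 x 4TB drives, RAID 5 per shelf (one disk of parity per shelf).
SHELVES, BAYS, DISK_TB = 3, 12, 4

usable_tb = SHELVES * (BAYS - 1) * DISK_TB     # 132 TB in decimal (vendor) units
usable_tib = usable_tb * 1e12 / 2**40          # what the OS reports in binary units

print(f"usable: {usable_tb} TB decimal = {usable_tib:.0f} TiB")   # 132 TB ≈ 120 TiB
```

(Three shelves with one hot spare each, i.e. 3 x 10 x 4TB, would also land on exactly 120TB, so either reading fits the setup described.)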
  • jasonelmore - Friday, December 27, 2013 - link

    well at least half a dozen morons got schooled on the different types of RAID arrays. gg, always glad to see the experts put the "less informed" (okay i'm getting nicer) ppl in their place.
  • Marquis42 - Friday, December 27, 2013 - link

    I'd be interested in knowing greater detail on the link aggregation setup. There's no mention of the load balancing configuration in particular. The reason I ask is because it's probably *not* a good idea to bond 1Gbps links with 10Gbps links in the same bundle unless you have access to more advanced algorithms (and even then I wouldn't recommend it). The likelihood of limiting a single stream to ~1Gbps is fairly good, and may limit overall throughput depending on the number of clients. It's even possible (though admittedly statistically unlikely) that you could limit the entirety of the system's network performance to saturating a single 1Gbe connection.
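
To illustrate that last point, here is a toy Python model of an 802.3ad-style layer-3+4 transmit hash (not Netgear's or Synology's actual algorithm): every packet of a given flow hashes to the same member link, so a single stream can never exceed that one link's rate, and in a mixed bundle that link may well be a 1GbE port:

```python
# Toy model of an LACP transmit hash: one flow always maps to one member link.
import zlib

MEMBERS = ["10GbE-1", "10GbE-2", "1GbE-1", "1GbE-2", "1GbE-3", "1GbE-4"]

def pick_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return MEMBERS[zlib.crc32(key) % len(MEMBERS)]

# The same client/server/port tuple always lands on the same link, so this
# single CIFS stream is capped at that link's line rate.
print(pick_member("192.168.1.10", "192.168.1.50", 50123, 445))
print(pick_member("192.168.1.10", "192.168.1.50", 50123, 445))  # identical result
```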
