Testbed Setup and Testing Methodology

Our rackmount NAS testbed uses the same infrastructure and methodology as our tower form factor units. Performance evaluation is done under both single and multiple client scenarios. In the multiple client scenario, we run tests in two configurations: two ports teamed (with the second pair acting as a backup), and all four ports teamed using 802.3ad dynamic link aggregation. For these tests, we use the SMB / SOHO NAS testbed described earlier.
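As a hedged sketch of what a four-port 802.3ad team like the one above might look like on a Linux client, the following iproute2 commands create an LACP bond. The interface names (eth0 through eth3) and the address are assumptions, and the corresponding switch ports (on a managed switch such as the testbed's Netgear GSM7352S) must also be configured for LACP:

```shell
# Create a bond interface in 802.3ad (LACP) mode -- a sketch, not the
# review's actual configuration (the testbed runs Windows Server 2008 R2).
ip link add bond0 type bond mode 802.3ad

# Enslave the four physical ports (hypothetical names) to the bond.
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth3 down && ip link set eth3 master bond0

# Bring up the aggregate and give it an address (placeholder subnet).
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Note that 802.3ad distributes flows, not packets, across the links: a single client connection is still limited to one GbE link's throughput, which is why multi-client testing is needed to exercise the full aggregate.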

AnandTech NAS Testbed Configuration
Motherboard: Asus Z9PE-D8 WS (dual LGA2011, SSI-EEB)
CPU: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8 x 8 GB) CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128 GB
Secondary Drive: OCZ Technology Vertex 4 128 GB
Tertiary Drive: OCZ RevoDrive Hybrid (1 TB HDD + 100 GB NAND)
Other Drives: 12 x OCZ Technology Vertex 4 64 GB (offline in the host OS)
Network Cards: 6 x Intel ESA I-340 quad-GbE port network adapters
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850 W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our NAS testbed:

Our testing environment also required some updates for the evaluation of rackmount units. Supermicro was gracious enough to loan us their mini rack (CSE-RACK14U). An interesting aspect of the mini rack is that its height matches that of a standard workplace desk (30.64"). This allowed us to use our existing NAS testbed (tower form factor) and power measurement unit easily alongside the rackmount components (the NAS under test, the Netgear ProSafe switch, etc.).

We have been using Western Digital 4TB RE (WD4000FYYZ) disks as test hard drives for NAS reviews. As we saw in previous reviews, RAID rebuilds take days to complete; with a large number of bays, using hard disks would have been very cumbersome. In addition, hard disks simply don't bring out the performance potential of rackmount units. Therefore, evaluation of the QNAP TS-EC1279U-RP was done by setting up a RAID-5 volume with twelve OCZ Vector 120 GB SSDs. Various shares and iSCSI LUNs were configured in this 1285 GB volume.
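The volume size follows from RAID-5 arithmetic: one drive's worth of capacity goes to parity, so usable space is (N - 1) drives. A minimal sketch of the calculation:

```python
# Sketch of RAID-5 usable-capacity arithmetic (the function name is ours,
# not from any particular RAID tool).
def raid5_usable_gb(num_drives: int, drive_gb: float) -> float:
    """Approximate usable capacity in GB for a RAID-5 array:
    one drive's worth of space is consumed by distributed parity."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_gb

# Twelve 120 GB SSDs: 11 x 120 = 1320 GB of raw usable capacity.
print(raid5_usable_gb(12, 120))  # 1320
```

Decimal-GB vs. binary-GiB accounting and filesystem overhead bring the raw 1320 GB figure down to roughly the 1285 GB volume reported above.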

Thank You!

We thank the following companies for helping us out with our rackmount NAS evaluation:

In order to evaluate single client performance, we booted up one VM in our testbed and ran Intel NASPT on a CIFS share in the NAS. iSCSI support was evaluated in a similar manner, with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOMeter benchmarks in Linux. For evaluation of multiple client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how performance changed with the number of clients and the access pattern. Without further digression, let us move on to the performance numbers.
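For readers unfamiliar with mapping an iSCSI LUN like the 250 GB one above, the open-iscsi command sequence below shows the equivalent steps on a Linux initiator (the review's VMs use the Windows initiator). The target IP and IQN are placeholders, not the testbed's actual values; this fragment requires a live target and is a sketch only:

```shell
# Discover targets exported by the NAS (placeholder IP).
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to a discovered target (placeholder IQN).
iscsiadm -m node \
    -T iqn.2004-04.com.qnap:ts-ec1279u:iscsi.test \
    -p 192.168.1.50 --login

# The LUN then appears as a local block device (e.g. /dev/sdb),
# ready to be partitioned, formatted, and benchmarked like a local disk.
```

This is the key difference from CIFS/NFS testing: iSCSI presents block storage, so the client's own filesystem sits on top of the LUN.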

Comments (23)

  • Jeff7181 - Tuesday, April 30, 2013 - link

EMC, Hitachi and NetApp provide enterprise class NAS and SAN arrays. Neither this nor any other QNAP product is anywhere near that level.
  • Walkeer - Thursday, May 9, 2013 - link

agreed, plus a NAS is not really enterprise anyway, since those are SANs
  • davegraham - Tuesday, April 30, 2013 - link

    Ganesh,

    having worked in the storage industry (and now working for an enterprise and carrier networking company doing data center architecture and design) QNAP, Drobo, et al. aren't names that carry any weight for enterprise-class storage. The systems I deal with (for example, EMC Symmetrix VMAX 40K) are considered "enterprise class" storage systems (99.999% uptime, SSD caching and tiering, finely tuned atomic memory and storage access, multiple active processing storage engines/directors, fibre channel/FCoE/iSCSI front ends, extensive API command/control sets, replication [local & remote], snapshotting/cloning, etc.). As Jeff7181 notes below, these stand alone in a class by themselves.

    cheers,

    D
  • Walkeer - Thursday, May 9, 2013 - link

    agreed, this is a SOHO toy...
  • jaziniho - Wednesday, May 1, 2013 - link

    Unless this comes in a model with dual controllers (not just dual PSUs), then it's squarely in the SMB rather than enterprise space.

    Support for SAS as well as SATA disks would also be high on list of potential requirements for enterprise. With RAID rebuild times on large drives so long, you need disks with decent reliability to give you more confidence in making it through the rebuild.
  • aloginame - Saturday, May 11, 2013 - link

I agree that this QNAP is not really an "Enterprise" or "High-End" NAS solution; however, I have to disagree when it is compared to something like the EMC Symmetrix VMAX 40K, for those are really SAN solutions and not NAS.
  • golemite - Monday, April 29, 2013 - link

    Hi Ganesh, any chance of getting reviews of lower end rackmount NAS systems like the Synology RS812/812+?
  • ganeshts - Wednesday, May 1, 2013 - link

We have the Synology RS10613xs+ in the pipeline, but it costs approximately twice as much as the TS-EC1279U-RP and caters to users who require more performance / features.
  • mmayrand - Tuesday, April 30, 2013 - link

So, you spend $3500 for the box plus 12 SSDs (not free) and you get 1/3 of the effective bandwidth of a single SSD plugged into a $300 PC. Is there a point to these NAS boxes?
  • davegraham - Tuesday, April 30, 2013 - link

    Mmayrand,

the concept behind a NAS box is shareable storage across N-number of users in a SoHo or SMB environment. at that point, it makes more sense to have a common pool of storage that can be "protected" (remember, RAID is NOT backup) and utilized more efficiently, than a scattered or siloed collection of independent disks in laptops or desktops.

    it also is a basic requirement for most virtualization (the concept of shared storage) solutions to maintain high availability and portability for virtual machines within a cluster. As a standalone box, you're right, you can hit better performance #'s because you're just straddling a PCIe bus vs. ethernet. however, change the venue and you're looking at a more ideal solution.

    D
