Testbed Setup and Testing Methodology

Our rackmount NAS testbed uses the same infrastructure and methodology as our reviews of tower form factor units. Performance is evaluated under both single-client and multiple-client scenarios. In the multiple-client scenario, we run tests with two ports teamed (and the second pair configured as a backup) as well as with all four ports teamed using 802.3ad dynamic link aggregation. For these tests, we use the SMB / SOHO NAS testbed described earlier.
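
For readers keeping track of the numbers, here is a back-of-the-envelope sketch (purely illustrative arithmetic, not part of the review's tooling) of the raw bandwidth ceilings the teamed configurations impose on the multi-client tests:

```python
# Illustrative arithmetic only: raw GbE ceilings for the teamed configurations.
# Actual CIFS / iSCSI throughput lands below these figures because of
# Ethernet, IP, TCP and protocol overheads.

GBE_RAW_MB_PER_S = 1000 / 8  # 1 Gbps expressed as 125 MB/s (decimal) per link

for teamed_links in (1, 2, 4):
    ceiling = teamed_links * GBE_RAW_MB_PER_S
    print(f"{teamed_links} x GbE teamed: {ceiling:.0f} MB/s raw ceiling")
```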

AnandTech NAS Testbed Configuration
Motherboard: Asus Z9PE-D8 WS (Dual LGA2011, SSI-EEB)
CPU: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8 x 8GB) CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128GB
Secondary Drive: OCZ Technology Vertex 4 128GB
Tertiary Drive: OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND)
Other Drives: 12 x OCZ Technology Vertex 4 64GB (offline in the host OS)
Network Cards: 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our NAS testbed:

Our testing environment also required some updates for the evaluation of rackmount units. Supermicro was gracious enough to loan us their mini rack (CSE-RACK14U). An interesting aspect of the mini rack is that its height matches that of a standard workplace desk (30.64"). This allowed us to easily use our existing NAS testbed (tower form factor) and power measurement unit alongside the rackmount components (the NAS under test, the Netgear ProSafe switch, etc.).

We have been using Western Digital 4TB RE (WD4000FYYZ) disks as the test hard drives for our NAS reviews. As we saw in previous reviews, RAID rebuilds with these disks take days to complete. With twelve bays to fill, using hard disks would have been very cumbersome, and hard disks simply don't bring out the performance potential of rackmount units. Therefore, we evaluated the QNAP TS-EC1279U-RP by setting up a RAID-5 volume with twelve OCZ Vector 120 GB SSDs. Various shares and iSCSI LUNs were configured in this 1285 GB volume.
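
As a quick sanity check on that volume size (a hypothetical calculation on our part, not a figure taken from the NAS itself), RAID-5 across n drives exposes roughly (n - 1) drives' worth of capacity:

```python
# Hypothetical sanity check: nominal RAID-5 capacity for twelve 120 GB SSDs.
# RAID-5 reserves one drive's worth of capacity for distributed parity.

drives = 12
drive_size_gb = 120  # vendor-rated (decimal) gigabytes per SSD

raid5_nominal_gb = (drives - 1) * drive_size_gb
print(f"Nominal RAID-5 capacity: {raid5_nominal_gb} GB")  # 1320 GB

# The ~1285 GB volume reported by the NAS is a bit lower, presumably due to
# reserved space and filesystem / volume-manager overhead.
```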

Thank You!

We thank the following companies for helping us out with our rackmount NAS evaluation:

In order to evaluate single-client performance, we booted up one VM in our testbed and ran Intel NASPT on the CIFS share in the NAS. iSCSI support was evaluated in a similar manner with a 250 GB iSCSI LUN mapped on the VM. For NFS, we ran IOMeter benchmarks in Linux. To evaluate multiple-client performance, we accessed a CIFS share from multiple VMs simultaneously using IOMeter and gathered data on how the performance changed with the number of clients and the access pattern. Without further digression, let us move on to the performance numbers.
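
Below is a minimal sketch of how per-client results of this kind can be rolled up into the aggregate-throughput-versus-client-count numbers shown in the multi-client graphs (the data layout and values are hypothetical placeholders, not our actual IOMeter exports):

```python
# Minimal sketch: summing per-client IOMeter throughput (MB/s) into the
# aggregate figures plotted against client count. The dictionary below is a
# hypothetical stand-in for exported IOMeter results, with placeholder values.

results = {
    ("100% sequential reads", 1): [100.0],
    ("100% sequential reads", 5): [20.0, 19.5, 21.0, 20.5, 19.0],
}

for (pattern, clients), per_client_mbps in sorted(results.items()):
    total = sum(per_client_mbps)
    print(f"{pattern}, {clients} client(s): {total:.1f} MB/s aggregate")
```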

Comments

  • Evadman - Wednesday, May 1, 2013 - link

    Why is this so expensive for the performance, and why is single client performance so bad? Granted, I deal with actual enterprise class SAN devices from EMC and the like, but even my ~4 year old personal server can beat this box. My crappy home server has 20 rotational Hitachi GST 3 TB Deskstar (0S03230) disks in RAID 60, an E5200 CPU and an Adaptec 52445 running on MS Server 2008, not even close to being decent for enterprise level. Besides the disks, it cost under a grand and will max out a quadlinked 4gbps connection with one client; I don't need to add 3 or 4 as your graphs show that this box needs. There is no excuse for a 20 rotational disk device to beat this 12 disk SSD NAS/SAN before hitting the network limit. I should get a dozen SSDs and a 10 gig switch and see what my crappy box can do just for kicks. *makes notes to see if a spare switch can be found in the office*
  • ganeshts - Wednesday, May 1, 2013 - link

    The single client performance is for a single client with a 1 GbE link (so it can't max out a 4GbE link obviously). Client machines usually have only a single GbE port.

    Our multi-client graphs show performance with multiple clients and indicate where the network link bandwidth on the NAS side becomes the limitation.
  • Evadman - Thursday, May 2, 2013 - link

    I must be misreading the graphs being presented then. This real world graph: http://images.anandtech.com/doci/6922/qnap_ts1279u... shows 5 clients, each at ~20MB/s for a total of 80 MB/s. Theoretical maximum is 125 MB/s. Adding the control data to the payload of the frame, you should have about 97.5% data. So it looks like it takes more than 5 clients to get to the 1gbps limit. On the single client CIFS graph here: http://images.anandtech.com/graphs/graph6922/54437... only 2 of the performance benchmarks appear network limited at 123 MB/s. Office Productivity is low at 25-28 MB/s, as that is probably what a small business is going to be doing the most of. Is this a client/CIFS issue and not a NAS/SAN issue?
