Testbed Setup and Testing Methodology

NAS performance under both single and multiple client scenarios is evaluated using the SMB / SOHO NAS testbed we described earlier. Tower / desktop form factor NAS units are usually tested with Western Digital RE drives (WD4000FYYZ). However, the presence of 10GbE on the ReadyNAS 716 meant that SSDs had to be used to bring out the maximum possible performance. Therefore, the Netgear RN716X was evaluated with a RAID-5 volume built from six OCZ Vector 4 120 GB SSDs.
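As a quick back-of-the-envelope check on why SSDs (rather than the usual RE hard drives) are needed to exercise the 10GbE links, the short sketch below works out the usable capacity of the six-drive RAID-5 volume and compares the 10GbE payload ceiling against a rough sequential ceiling for the array. The per-drive throughput and protocol-efficiency figures are illustrative assumptions, not measured numbers from this review.

```python
# Back-of-the-envelope numbers for the six-SSD RAID-5 volume and a 10GbE link.
# Per-drive throughput and payload efficiency are assumptions for illustration.

NUM_DRIVES = 6
DRIVE_CAPACITY_GB = 120        # per SSD
PER_DRIVE_SEQ_MBPS = 450       # assumed sequential throughput per SSD (MB/s)
LINK_GBPS = 10                 # 10GbE line rate
PAYLOAD_EFFICIENCY = 0.94      # assumed Ethernet/IP/TCP overhead at a 1500-byte MTU

raid5_capacity_gb = (NUM_DRIVES - 1) * DRIVE_CAPACITY_GB   # one drive's worth of parity
array_seq_mbps = (NUM_DRIVES - 1) * PER_DRIVE_SEQ_MBPS     # conservative striped-read ceiling
link_payload_mbps = LINK_GBPS * 1000 / 8 * PAYLOAD_EFFICIENCY

print(f"RAID-5 usable capacity  : {raid5_capacity_gb} GB")         # 600 GB
print(f"Array sequential ceiling: ~{array_seq_mbps} MB/s")         # ~2250 MB/s
print(f"10GbE payload ceiling   : ~{link_payload_mbps:.0f} MB/s")  # ~1175 MB/s
# A single GbE link, by contrast, tops out around 1000 / 8 * 0.94 ≈ 117 MB/s,
# which is why GbE-limited transfers cluster just under wire speed.
```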

AnandTech NAS Testbed Configuration

Motherboard: Asus Z9PE-D8 WS (dual LGA2011, SSI-EEB)
CPU: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8 x 8GB) CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128GB
Secondary Drive: OCZ Technology Vertex 4 128GB
Tertiary Drive: OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND)
Other Drives: 12 x OCZ Technology Vertex 4 64GB (offline in the host OS)
Network Cards: 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our NAS testbed:

Netgear XS712T

Our primary testbed switch, the GSM7352S, doesn't support 10GBase-T; its 10GbE ports are SFP+ and require direct-attach copper cables. We could have used SFP+ to 10GBase-T converters, but, given the growing popularity of 10GBase-T, a dedicated 10GBase-T switch made more sense. Netgear came forward with the XS712T, a 12-port 10GBase-T switch. The unit also has two SFP+ copper ports for stacking / uplinking.

In our testbed, the SFP+ ports on the GSM7352S and the XS712T are link aggregated and connected to each other. The GSM7352S acts as the DHCP server and provides an IP address to the XS712T. The 10GBase-T ports of the NAS are also connected to the XS712T (which acts as a DHCP relay), and they obtain IP addresses in the same subnet as the virtual machines connected to the ports of the GSM7352S. For teaming purposes, link trap and STP mode were enabled, the mode was set to 802.3ad dynamic link aggregation, and the hash mode was set to 'Src/Dest MAC, VLAN, EType, Incoming Port'.
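To make the link aggregation behaviour concrete, here is a minimal sketch of how an 802.3ad-style transmit hash chooses a member link for each frame. The actual hash implemented in the Netgear firmware is not documented here; the XOR-fold below and the MAC addresses in the example are assumptions purely for illustration.

```python
# Illustrative 802.3ad-style transmit hash over the fields named in the
# switch's hash mode ('Src/Dest MAC, VLAN, EType, Incoming Port').
# The real Netgear hash may differ; this is only a sketch of the idea.

def lag_member(src_mac: str, dst_mac: str, vlan: int, ethertype: int,
               ingress_port: int, num_links: int) -> int:
    """Return the index of the LAG member link a frame would egress on."""
    def mac_to_int(mac: str) -> int:
        return int(mac.replace(":", ""), 16)

    key = mac_to_int(src_mac) ^ mac_to_int(dst_mac) ^ vlan ^ ethertype ^ ingress_port
    # Every frame of a given conversation hashes to the same member link,
    # so one flow never gets more than a single link's bandwidth -- which is
    # why a lone GbE client tops out at ~1 Gbps even against a LAG.
    return key % num_links

# Hypothetical addresses: two clients talking to the same NAS MAC can land
# on different member links, spreading aggregate (multi-client) load.
nas_mac = "a0:21:b7:00:00:01"
print(lag_member("00:1b:21:aa:00:01", nas_mac, vlan=1, ethertype=0x0800,
                 ingress_port=3, num_links=2))
print(lag_member("00:1b:21:aa:00:02", nas_mac, vlan=1, ethertype=0x0800,
                 ingress_port=4, num_links=2))
```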

Thank You!

  •   Thanks to Netgear for sponsoring the XS712T for use in our 10GBase-T NAS reviews.
Comments

  • Runiteshark - Wednesday, January 1, 2014 - link

    Some of the tests are multi-client CIFS. Look at the throughput he's getting on a single client. I'm pushing 180MBps over CIFS and 200MBps over NFS, LAGging dual 1G links to a single client. The host pushing this data is a 72-bay Supermicro chassis with dual E5-2697v2s, 256GB of RAM, 72 Seagate 5900rpm NAS drives, 4x Samsung 840 Pro 512GB SSDs, 3 LSI 2308 controllers, and a single Intel X520-T2 dual-port 10G NIC hooked up to an Extreme X670V over twinax with a frame size of 9216. Typical files are medium-sized, roughly 150MB each, copied with 48 threads of rsync.

    One thing I didn't see in the testbed was the configuration of jumbo frames, which definitely changes the characteristics of single-client throughput. I'm not sure if you can run large jumbo frames on the Netgear switch.

    If I needed 10G, which I don't because the disks/proc in the microserver couldn't push much more, I could toss in a dual-port 10G Intel adapter for roughly $450.
  • imsabbel - Thursday, January 2, 2014 - link

    That's because his single-client tests only use a single 1 Gbit connection on the client side. I know, it's stupid, but the fact that ALL transfer tests are literally limited to something like 995 Mbit/s should have given you a clue that AnandTech does strange things with their testing.
  • Runiteshark - Friday, January 3, 2014 - link

    I didn't even see that! What the hell was the point of the test then?
  • Gigaplex - Wednesday, January 1, 2014 - link

    Am I reading this correctly? You used 1GbE, not 10GbE adapters on the test bed? I'd like to see single client speeds using 10GbE.
  • ZeDestructor - Wednesday, January 1, 2014 - link

    6 quad-port NICs + 1 onboard NIC, so 25 GbE ports split across 25 VMs.

    As for single-client speeds, it should be possible to get those using LAGs, and it's a point worth mentioning; it would be easily possible even with the current setup, although I would like to see some Intel X540 cards in use myself...
  • BMNify - Thursday, January 2, 2014 - link

    Hmm, am I missing something here?
    You only use 6 x Intel ESA I-340 Quad-GbE Port Network Adapter

    as in, only using 4 "1GbE" ports and NO actual "10GbE" card to max out the end-to-end connection?

    Don't get me wrong, it's nice to finally get a commercial SOHO-type unit that actually has 10GbE as standard, after decades of nothing but antiquated 1GbE cards, at a reasonable price, but you also NEED that new extra 10GbE card to put in your PC alongside those 10GbE routers/switches, so this 3K NAS is way too expensive for the SOHO masses today, alas.
  • ganeshts - Thursday, January 2, 2014 - link

    6x quad ports = 24 1-GbE ports + one onboard 1GbE = 25 GbE in total.
  • BMNify - Thursday, January 2, 2014 - link

    Oh right, so it's 25 "1GbE" ports and NO actual "10GbE" card to max out the end-to-end connection.
  • BMNify - Thursday, January 2, 2014 - link

    It still seems very odd to have a 24-thread, dual-socket (6-core/12-thread per CPU) test bench with a 10GbE router/switch and this 3K NAS with a dual "10GbE" card that could be bonded together at both ends, and yet AT just tests the kit to the 1GbE port bottleneck and doesn't even install another dual "10GbE" card in the PC end. Then try, for instance, starting several concurrent ffmpeg jobs upscaling and encoding high-profile/bitrate 1080p content to UHD over iSCSI etc. to the "10GbE" NAS, to max out all 12 cores / 24 threads of SIMD, or other options, and actually push that exclusive "10GbE" connection rather than any old combination of antiquated "1GbE" cards.
  • hoboville - Thursday, January 2, 2014 - link

    I hate sounding like a naysayer, but these boxes are so expensive. You can build a system with similar specs for much less with FreeNAS and ZFS (as other commenters have noted). Supermicro makes some great boards, and with the number of case options you get when you DIY, expandability is very much an option if you need it further down the road. Then again, a lot of the cost comes from 10Gbit NICs, which cost a lot.
