Introduction and Setup Impressions

Netgear launched the six-bay ReadyNAS 716 10-GbE desktop NAS in November. To our knowledge, this is the first off-the-shelf NAS in a desktop tower form factor to come with built-in support for 10GBase-T. With an Intel Xeon CPU under the hood, the unit packs quite a punch within a small form factor. 10-GbE solutions have so far remained in the realm of rackmount units, but Netgear, along with QNAP (with its TS-x70 series), aims to bring them to desktop form factors as well. While the QNAP TS-x70's 10-GbE capabilities are optional (the end user has to install a PCIe add-on card for that purpose), the ReadyNAS 716 comes with a two-port 10GBase-T NIC installed.

Netgear's storage lineup has been in a bit of flux over the last few years. While the x86-based versions have been quite stable, the SOHO / home consumer lineup had to shift from SPARC-based units to ARM-based ones. We covered the reboot of the ReadyNAS OS for ARM-based units in our review of the ReadyNAS NV+ v2. Despite Netgear's promise to keep adding features and make the new ReadyNAS OS better to use, here we are, two years later, looking at yet another reboot of the ReadyNAS OS, this one aiming to unify product features across both ARM- and x86-based units. We have already reviewed the ReadyNAS 312, the entry-level x86-based NAS in Netgear's 2013 lineup. The ReadyNAS 716 (RN716X) is Netgear's flagship in the 2013 desktop form factor category. Based on the Intel Xeon E3-1265L V2 processor, the unit runs the completely revamped ReadyNAS OS 6.

In terms of redesign, ReadyNAS OS 6.0 is Netgear's most ambitious effort yet. Unlike other NAS vendors, who opt for the safety of the proven EXT3 / EXT4 file systems for internal volumes, Netgear has opted for the cutting-edge btrfs. The benefits of btrfs over EXT3 / EXT4 are numerous: checksumming for data integrity, built-in snapshotting, continuous defragmentation, online data compression, scrubbing with error correction, and built-in storage pools that render a separate LVM layer unnecessary. Compared to ZFS, the memory requirements are more manageable, but ZFS's deduplication support is much better. However, there is an impression amongst some IT folks that btrfs is not stable enough for deployment in production environments. Netgear indicates that btrfs is part of Oracle's production kernel and that the same code and kernel are currently being used in ReadyNAS OS 6.x.
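As a concrete illustration of two of those features, the sketch below shows how copy-on-write snapshots and data scrubbing are exposed through the standard btrfs-progs command line on any Linux system. The mount point and subvolume names are hypothetical, and this is generic btrfs usage, not code taken from ReadyNAS OS.

```python
# Minimal sketch of btrfs snapshotting and scrubbing, driven from Python.
# Assumes a btrfs volume mounted at /data (hypothetical path) and the
# standard btrfs-progs CLI; run as root. Not ReadyNAS-specific code.
import subprocess
from datetime import datetime

VOLUME = "/data"  # hypothetical btrfs mount point

def snapshot(subvol: str) -> str:
    """Create a read-only snapshot of a subvolume (near-instant, copy-on-write)."""
    dest = f"{subvol}-snap-{datetime.now():%Y%m%d-%H%M%S}"
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", subvol, dest],
                   check=True)
    return dest

def start_scrub(mount: str) -> None:
    """Start a scrub: btrfs re-reads all data, verifies checksums, and
    repairs from redundant copies where the RAID level allows it."""
    subprocess.run(["btrfs", "scrub", "start", mount], check=True)

if __name__ == "__main__":
    print("snapshot created at", snapshot(f"{VOLUME}/documents"))
    start_scrub(VOLUME)
```

Because snapshots are copy-on-write, they complete almost instantly and consume space only as the live data diverges from the snapshot, which is what makes frequent, scheduled snapshots practical on a unit like this.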

The specifications of the RN716X are provided below:

Netgear ReadyNAS 716 (RN716X) Specifications
Processor: Intel Xeon E3-1265L V2 (4C/8T, 2.5 GHz)
RAM: 16 GB DDR3 ECC
Drive Bays: 6x 3.5"/2.5" SATA 6 Gbps
Network Links: 2x 1 GbE + 2x 10GBase-T (via add-on PCIe card)
USB Ports: 1x USB 2.0 + 2x USB 3.0
eSATA Ports: 3x (compatible with EDA500 expansion chassis)
Maximum Capacity: 6 bays + 15 bays via three EDA500 units (21 bays total)
VGA / Console / HDMI: HDMI out
Full Specifications Link: Netgear RN716X Specifications (PDF)
Suggested Retail Pricing: US $3000

The ReadyNAS 716 runs a Linux kernel (v3.0.93). Other interesting aspects of the system can be gathered after obtaining SSH access to the unit.
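For those who want to poke around similarly, the sketch below uses the third-party paramiko library to run a few read-only commands over SSH. The host address and credentials are placeholders, not ReadyNAS defaults, and SSH must first be enabled on the unit.

```python
# Sketch: gathering system details from a NAS over SSH using the
# third-party paramiko library (pip install paramiko). The host address
# and credentials below are placeholders, not ReadyNAS defaults.
import paramiko

HOST, USER, PASSWORD = "192.168.1.50", "root", "password"  # placeholders

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

for cmd in ("uname -a",                                   # kernel version
            "grep 'model name' /proc/cpuinfo | head -1",  # CPU model
            "cat /proc/mdstat"):                          # md RAID status
    _, stdout, _ = client.exec_command(cmd)
    print(f"$ {cmd}\n{stdout.read().decode().strip()}\n")

client.close()
```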

The RN716X has a built-in 200 W PSU. While enterprise rackmount units with similar platforms have redundant PSUs, the form factor of the RN716X precludes that feature. The motherboard has two native GbE ports, while the 10GBase-T ports are provided by an add-on PCIe card. After connection to the network, the unit could be discovered by RAIDar (an offline discovery tool that has been around since the first-generation ReadyNAS units). In addition, Netgear has also incorporated cloud discovery using the ReadyCLOUD service. I had faced issues starting out with previously used disks while evaluating the RN312, but this was not a problem with the RN716X. The front panel has an LCM display as well as a touch-sensitive interface to navigate the options on the display.

We have already touched upon the various features of ReadyNAS OS 6.x in our ReadyNAS 312 review. The snapshotting mechanism used by Netgear is quite advantageous in the market segment that the RN716X targets. The only surprise in our setup process was that the Flex-RAID configuration (which allows users to manually control the RAID level) didn't allow a RAID-1 to RAID-5 migration when adding a third disk to an existing two-disk volume. However, switching to X-RAID enabled this option without data loss, and the volume could be converted back to Flex-RAID after the migration completed.
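For readers curious what such a migration involves at a lower level: on a generic Linux box, an online RAID-1 to RAID-5 conversion is handled by mdadm's grow mode. The sketch below walks through those steps; the device names are assumptions for illustration, and we have not inspected how X-RAID orchestrates this internally.

```python
# Sketch of an online RAID-1 -> RAID-5 migration with Linux mdadm, the
# generic equivalent of what X-RAID automates. Device names (/dev/md0,
# /dev/sdc1) are assumptions; this is NOT lifted from ReadyNAS internals.
# Run as root, and only with backups in hand.
import subprocess

def run(*args: str) -> None:
    print("#", " ".join(args))
    subprocess.run(args, check=True)

# 1. Convert the two-disk mirror in place to a two-disk RAID-5 layout.
run("mdadm", "--grow", "/dev/md0", "--level=5")
# 2. Add the third disk to the array as a spare.
run("mdadm", "--manage", "/dev/md0", "--add", "/dev/sdc1")
# 3. Reshape across all three members; data is restriped online.
run("mdadm", "--grow", "/dev/md0", "--raid-devices=3")
# Progress can be watched in /proc/mdstat while the reshape runs.
```

The reshape proceeds while the volume stays mounted and accessible, which is why the X-RAID migration can happen without data loss, albeit at reduced performance until it finishes.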

In the rest of the review, we will cover the testbed setup and testing methodology, with a focus on the updates we made to enable testing of 10GBase-T equipment. This is followed by benchmarks in single- and multi-client modes. For single-client scenarios, we have both Windows and Linux benchmarks with CIFS and NFS shares, along with some performance numbers with encryption enabled. In the final section, power consumption numbers and RAID rebuild times will be covered, along with some closing notes.

Comments

  • Runiteshark - Wednesday, January 1, 2014 - link

    Some tests being multi-client CIFS. Look at the throughput he's getting on a single client. I'm pushing 180MBps over CIFS and 200MBps through NFS, LAGging dual 1G links to a single client. The host pushing this data is a 72-bay Supermicro chassis w/ dual E5-2697v2's, 256GB of RAM, 72 Seagate 5900rpm NAS drives, 4x Samsung 840 Pro 512GB SSDs, 3 LSI 2308 controllers, and a single Intel X520-T2 dual 10G NIC hooked up to an Extreme X670V over twinax with a frame size of 9216. Typical files are medium-sized at roughly 150MB each, copied with 48 threads of rsync.

    One thing that I didn't see in the test bed was the configuration of jumbo frames, which definitely changes the characteristics of single-client throughput. I'm not sure if you can run large jumbo frames on the Netgear switch.

    If I needed 10G, which I don't because the disks/proc in the MicroServer couldn't push much more, I could toss in a dual 10G Intel adapter for roughly $450.
  • imsabbel - Thursday, January 2, 2014 - link

    That's because his single-client tests only use a single 1 GBit connection on the client side. I know, it's stupid, but the fact that ALL transfer tests are literally limited to something like 995Mbit/s should have given you a clue that Anandtech does strange things with their testing.
  • Runiteshark - Friday, January 3, 2014 - link

    I didn't even see that! What the hell was the point of the test then?
  • Gigaplex - Wednesday, January 1, 2014 - link

    Am I reading this correctly? You used 1GbE, not 10GbE adapters on the test bed? I'd like to see single client speeds using 10GbE.
  • ZeDestructor - Wednesday, January 1, 2014 - link

    6 quad-port NICs + 1 on-board NIC, so 25 gigabit ports split over 25 VMs.

    As for single-client speeds, it should be possible to get that using LAGs, and it's a worthy point to mention; easily possible even with the current setup, although I would like to see some Intel X540 cards in use myself...
  • BMNify - Thursday, January 2, 2014 - link

    hmm, am I missing something here?
    You only use 6 x Intel ESA I-340 Quad-GbE Port Network Adapters,

    as in, only using 4 "1GbE" ports and NO actual "10GbE" card to max out the end-to-end connection?

    Don't get me wrong, it's nice to finally get a commercial SOHO-type unit that actually has 10GbE as standard, after decades of nothing but antiquated 1GbE cards at reasonable prices, but you also NEED an extra 10GbE card to put in your PC alongside a 10GbE router/switch, so this $3K NAS is way too expensive for the SOHO masses today, alas.
  • ganeshts - Thursday, January 2, 2014 - link

    6x quad ports = 24 1-GbE ports + one onboard 1GbE = 25 GbE in total.
  • BMNify - Thursday, January 2, 2014 - link

    Oh right, so it's 25 "1GbE" ports and NO actual "10GbE" card to max out the end-to-end connection.
  • BMNify - Thursday, January 2, 2014 - link

    It still seems very odd to have a collection of 24 threads over a dual-socket, 6-core/12-thread test bench with a 10GbE router/switch and this $3K NAS with a dual "10GbE" card that could be bonded together at both ends, and yet AT just tests the kit to the 1GbE port bottleneck, and doesn't even install another dual "10GbE" card in the PC end. Then try, for instance, starting several concurrent ffmpeg jobs upscaling and encoding high-profile/bitrate 1080p content to UHD over iSCSI etc. to the "10GbE" NAS, to max out all 12 cores/24 threads of SIMD, or other options to try and push that exclusive "10GbE" connection, rather than any old combination of antiquated "1GbE" cards.
  • hoboville - Thursday, January 2, 2014 - link

    I hate sounding like a naysayer, but these boxes are so expensive. You can build a system with similar specs for much less under FreeNAS and ZFS (as other commenters have noted). Supermicro makes some great boards, and with the number of case options you get when you DIY, expandability is very much an option if you need it further down the road. Then again, a lot of the cost comes from 10Gbit NICs, which cost a lot.
