Introduction and Setup Impressions

Netgear launched the 6-bay ReadyNAS 716 10-GbE desktop NAS in November. To our knowledge, this is the first off-the-shelf NAS in a desktop tower form factor to come with built-in support for 10GBase-T. With an Intel Xeon CPU under the hood, the unit packs quite a punch in a small form factor. 10-GbE solutions have so far remained in the realm of rackmount units, but Netgear, along with QNAP (with its TS-x70 series), aims to bring them to desktop form factors as well. While the QNAP TS-x70's 10-GbE capability is optional (the end user has to install a PCIe add-on card for that purpose), the ReadyNAS 716 comes with a two-port 10GBase-T NIC pre-installed.

Netgear's storage lineup has been in a bit of flux over the last few years. While the x86-based versions have been quite stable, the SOHO / home consumer lineup had to shift from SPARC-based units to ARM-based ones. We covered the reboot of the ReadyNAS OS for ARM-based units in the review of the ReadyNAS NV+ v2. Despite Netgear's promise to work on adding features and making the new ReadyNAS OS better to use, here we are, two years later, looking at yet another reboot of the ReadyNAS OS. This reboot aims to unify the product features across both ARM and x86-based units. We already reviewed the ReadyNAS 312, which happens to be the entry-level x86-based NAS in Netgear's 2013 lineup. The ReadyNAS 716 (RN716X) is Netgear's flagship in the 2013 desktop form factor category. Based on the Intel Xeon E3-1265L processor, the unit runs the completely revamped ReadyNAS OS 6.

In terms of redesign, ReadyNAS OS 6 is the most ambitious yet. Unlike other NAS vendors, who opt for the safety of the proven EXT3 / EXT4 file systems for internal volumes, Netgear has opted for the cutting-edge btrfs. The benefits of btrfs over EXT3 / EXT4 are numerous: checksumming for data integrity, built-in snapshotting, continuous defragmentation, online data compression, scrubbing with error correction, and built-in storage pools that make a separate LVM layer unnecessary. Compared to ZFS, the memory requirements are more manageable, but ZFS's deduplication support is much better. However, there is an impression among some IT folks that btrfs is not stable enough for deployment in production environments. Netgear points out that btrfs is part of Oracle's production kernel, and the same code and kernel are currently used in ReadyNAS OS 6.x.
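To illustrate what the checksumming buys you (this is a conceptual sketch of ours, not Netgear's or btrfs's actual implementation), the idea is simply to store a checksum alongside every data block at write time and recompute it on every read, so silent corruption is detected instead of returned to the application:

```python
import zlib

BLOCK_SIZE = 4096  # btrfs checksums data at block granularity (4 KiB by default)

def write_block(store, index, data):
    # Persist the data together with a CRC32 of its contents.
    store[index] = (data, zlib.crc32(data))

def read_block(store, index):
    # Recompute the checksum on every read; a mismatch means silent corruption.
    data, stored_crc = store[index]
    if zlib.crc32(data) != stored_crc:
        raise IOError(f"checksum mismatch in block {index}")
    return data

store = {}
write_block(store, 0, b"hello nas")
assert read_block(store, 0) == b"hello nas"

# Simulate bit rot: flip a byte behind the "filesystem's" back.
store[0] = (b"hellx nas", store[0][1])
try:
    read_block(store, 0)
except IOError as err:
    print("detected:", err)
```

In a redundant array, the filesystem can go one step further than raising an error: on a mismatch it can fetch a good copy from another disk, which is what btrfs scrubbing with error correction does.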

The specifications of the RN716X are provided below:

Netgear ReadyNAS 716 (RN716X) Specifications
Processor: Intel Xeon E3-1265L V2 (4C/8T, 2.5 GHz)
RAM: 16 GB DDR3 ECC
Drive Bays: 6x 3.5"/2.5" SATA 6 Gbps
Network Links: 2x 1 GbE + 2x 10GBase-T (via add-on PCIe card)
USB Ports: 1x USB 2.0 + 2x USB 3.0
eSATA Ports: 3x (compatible for expansion with EDA500 units)
Maximum Capacity: 6 bays + 15 bays via three EDA500 units, for a total of 21 bays
VGA / Console / HDMI: HDMI out
Full Specifications Link: Netgear RN716X Specifications (PDF)
Suggested Retail Pricing: US $3000

The ReadyNAS 716 runs a Linux kernel (v3.0.93). Other interesting aspects of the system can be gathered after obtaining SSH access to the unit.

The RN716X has a built-in 200 W PSU. While enterprise rackmount units based on similar platforms have redundant PSUs, the form factor of the RN716X precludes that feature. The motherboard has two native GbE ports, while the 10GBase-T ports are provided by an add-on PCIe card. After connecting to the network, the unit could be discovered by RAIDar (an offline discovery tool that has been around since the first-generation ReadyNAS units). In addition, Netgear has also incorporated cloud discovery using the ReadyCLOUD service. I had faced issues trying to start out with previously used disks while evaluating the RN312, but this was not much of a problem with the RN716X. The front panel has an LCM display as well as a touch-sensitive interface to navigate the options on the display.

We have already touched upon the various features of ReadyNAS OS 6.x in our ReadyNAS 312 review. The snapshotting mechanism used by Netgear is quite advantageous in the market segment that the RN716X targets. The only surprise in our setup process was that the Flex-RAID configuration (which allows users to manually control the RAID level) didn't allow a RAID-1 to RAID-5 migration when adding a third disk to an already existing two-disk volume. However, switching to X-RAID enabled this option without data loss, and the volume could be converted back to Flex-RAID after the migration completed.
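Why such a migration is attractive comes down to capacity math. The helper below is ours, purely for back-of-the-envelope illustration, assuming equal-sized disks and textbook RAID overheads:

```python
def usable_capacity(num_disks, disk_tb, level):
    # Illustrative helper: usable space for common RAID levels,
    # assuming all disks are the same size.
    if level == "raid1":
        return disk_tb                    # mirrored: one disk's worth of space
    if level == "raid5":
        return (num_disks - 1) * disk_tb  # one disk's worth of parity
    if level == "raid6":
        return (num_disks - 2) * disk_tb  # two disks' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

# Migrating a two-disk RAID-1 to a three-disk RAID-5 (the path enabled by
# X-RAID above) doubles usable space with 4 TB disks:
print(usable_capacity(2, 4, "raid1"))  # 4
print(usable_capacity(3, 4, "raid5"))  # 8
```

Adding one 4 TB disk thus gains 4 TB of usable space while keeping single-disk fault tolerance, which is exactly what makes the in-place migration worth having.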

In the rest of the review, we will cover the testbed setup and testing methodology, focusing on the updates we made to enable testing of 10GBase-T equipment. This is followed by benchmarks in single- and multi-client modes. For single-client scenarios, we have both Windows and Linux benchmarks with CIFS and NFS shares, along with some performance numbers with encryption enabled. In the final section, we cover power consumption numbers and RAID rebuild times, along with some closing notes.

Testbed Setup and Testing Methodology
25 Comments

  • Guspaz - Wednesday, January 01, 2014

    Yikes, that's a highly questionable decision, to go with btrfs instead of ZFS as the default file system. ZFS has been in production use for seven years now, proven through widespread deployments and available on every *nix platform you can think of, while btrfs is still beta quality (without even an official stable release) and nowhere near feature-competitive with ZFS...
  • JDG1980 - Wednesday, January 01, 2014

    Agreed. This is a full-fledged Xeon PC with ECC RAM, so why not go with ZFS? It would seem to be the obvious choice for a high-quality, time-tested software RAID system.

    By the way, it would really be better if you listed the suggested retail price on the first page of reviews along with the other specs. (A quick Google search seems to indicate that the street price is $2500-$3000.)
  • Runiteshark - Wednesday, January 01, 2014

    Probably because it takes a bit more effort to get ZFS running in Linux than btrfs, but not that much. It recently went stable and has been working just fine on a 72-bay Supermicro chassis I've had in test for the past 3 months. All that being said, why didn't they just go with a BSD solution?
  • nafhan - Thursday, January 02, 2014

    Well, BTRFS has been supported as a root file system in SLE and Oracle Linux since 2012. ZFS: not available from the vendor on either (even though Solaris is owned by Oracle). That's probably it right there.
  • shodanshok - Friday, January 03, 2014

    I agree. While BTRFS is quite stable now, considering the critical role assigned to a filesystem I would go with an FS with a proven track record (and an fsck). Moreover, being a CoW filesystem, BTRFS tends to be extremely fragmentation-prone in some circumstances, basically every time a file rewrite is required, for example by a database or a virtual machine (though I think a NAS unit like this is primarily assigned an archiving role).
  • SirGCal - Wednesday, January 01, 2014

    Yup, I have two 8-disk systems myself. One runs a hardware LSI controller for RAID 6 and one uses ZFS for the same effective protection. Sure, the hardware controller is actually a tiny bit faster at hard reads, but for the $600 price tag, so what? Going forward, all of my systems are going to be ZFS. These arrays in a box are interesting until they decide to go with some other pooling system... If there is a real, comparable reason and argument for BTRFS instead of ZFS, I'd like to see it.
  • Runiteshark - Wednesday, January 01, 2014

    I tested btrfs recently with a large disk array (read: 45 4TB drives) and the performance was very poor. I ended up going with JFS and shunned XFS because it's not stable in the event of power issues.
  • shodanshok - Friday, January 03, 2014

    Hi,
    from my understanding, JFS and JFS2 have been more or less unsupported for some time now.

    What problems did you have with XFS? It is designed to handle exactly the case you describe: a lot of space spread over a lot of spinning disks. When using XFS, the only two things that can lead to data loss are:
    1. no barrier/FUA support in the disk/controller combo
    2. an application that rewrites files with truncate and does _not_ use fsync

    Case n.1 is common to all filesystems: if your disk lies about cache flushes, no filesystem can save you. The only thing that can somewhat lessen the risk is journal checksumming, which is implemented in XFS, EXT4 and BTRFS; I don't know about JFS.

    Case n.2 is really an application shortcoming, but the EXT4 and BTRFS choice here is the more sensible one: detect such corner cases and apply a work-around. Anyway, with applications that properly use fsync, XFS is rock stable.

    Regards.
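The crash-safe rewrite idiom alluded to in the comment above (write a temp file, fsync it, then atomically rename it over the original) can be sketched in a few lines of Python; the helper name and file name here are ours, purely for illustration:

```python
import os
import tempfile

def atomic_rewrite(path, data):
    # Crash-safe rewrite: write a temp file in the same directory, fsync it,
    # then atomically rename it over the original. Readers see either the old
    # contents or the new ones, never a truncated mix.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the data to stable storage
    os.replace(tmp, path)     # atomic within a single filesystem

atomic_rewrite("example.conf", b"listen = 0.0.0.0\n")
```

The temp file must live in the same directory as the target so that `os.replace` stays within one filesystem and remains a single atomic rename; applications that instead truncate the file in place and rewrite it are the "case n.2" failure mode described above.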
  • Runiteshark - Wednesday, January 01, 2014

    On one hand, I'm happy that 10G is slowly becoming more prevalent in the con/prosumer market; on the other hand, products like this make my head hurt. The performance you were able to get out of this host was nothing short of embarrassing, and could have easily been handled by a single gigabit link. I think this primarily stems from vendors still using software RAID without good quality HBAs. You can most certainly have a fantastic, high-performance software solution without a real RAID controller or even a high-end HBA, but it requires you to use Ceph or ZFS.

    The performance you are seeing out of this is actually very similar to an HP Microserver I have running FreeNAS with 2GB of RAM, LAGG'd gigabit ports, and 4x 4TB 7200rpm Seagates plus a 32GB USB3 OS drive; granted, the entire unit cost no more than $1800 and only has 4 slots instead of 6. Without a doubt, if I were going to build something bigger, I'd use a Supermicro X9DR7-TF+ (same as what I use in production, for $800), get a decent chassis and the LSI BBU, and have support for up to 16 drives with 2 10G ports on an Intel X540 chipset, which all together would still be significantly less than this solution and would obviously blow its performance out of the water.
  • hpglow - Wednesday, January 01, 2014

    Runiteshark not good at reading or converting bits to bytes? With some of the tests pushing over 600 MB/sec, a 1Gb Ethernet port would be saturated more than 4 times over, not including packet overhead. A 1Gb Ethernet port is good for only 125 MB/sec.
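The arithmetic behind that rebuttal is worth spelling out; the function name below is ours, purely for illustration:

```python
def line_rate_mb_per_s(gigabits_per_s):
    # Raw line rate in MB/s: 1 Gb/s = 1000 Mb/s = 125 MB/s, before
    # Ethernet/TCP framing overhead shaves off a few percent.
    return gigabits_per_s * 1000 / 8

print(line_rate_mb_per_s(1))         # 125.0  -> ceiling of a single GbE link
print(line_rate_mb_per_s(10))        # 1250.0 -> ceiling of one 10GBase-T port
print(600 / line_rate_mb_per_s(1))   # 4.8    -> 600 MB/s needs ~5 GbE links
```

So a test sustaining 600 MB/s is indeed nearly five times what a single gigabit link can carry, and well within what one of the RN716X's two 10GBase-T ports can handle.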
