Introduction

The SMB / SOHO / consumer NAS market is expected to grow strongly over the next few years. With PC sales declining and SSDs becoming more affordable, hard drive vendors have scrambled to make up for the shortfall and grow revenue by targeting the NAS market. Both Western Digital and Seagate have introduced hard drive models specifically catering to 1-5 bay consumer NAS units. Seagate took the lead in the capacity segment with the launch of its 4 TB NAS HDD in June 2013, and Western Digital achieved parity with yesterday's launch of the second-generation WD Red models.

The higher-end SATA DAS/NAS storage segments have been served by 4 TB models for quite some time now. The WD Re (targeting applications where durability under heavy workloads is important) has been available in a 4 TB version since September 2012, while the 4 TB WD Se (targeting applications where scalability and capacity are important) was introduced in May 2013.

The correct choice of hard drives for a NAS system is influenced by a number of factors, including expected workloads, performance requirements, and power consumption restrictions. In this review, we will discuss some of these aspects while evaluating four different hard drives targeting the NAS market:

  • Western Digital Red 4 TB [ WDC WD40EFRX-68WT0N0 ]
  • Seagate 4 TB NAS HDD [ ST4000VN000-1H4168 ]
  • Western Digital Se 4 TB [ WDC WD4000F9YZ-09N20L0 ]
  • Western Digital Re 4 TB [ WDC WD4000FYYZ-01UL1B0 ]

While the WD Red and Seagate NAS HDD compete against each other in the same market segment (consumer / SOHO NAS units with 1-5 bays), the WD Re and WD Se are positioned as complementary offerings for higher-end NAS units. Over the course of this article, we will also try to determine how the two differ.

Western Digital provided us with at least two drives each of the WD Red, WD Se and WD Re, but Seagate came forward with only one disk. Readers of our initial WD Red 3 TB review will remember that we evaluated those disks in multiple NAS units with multiple RAID configurations; unfortunately, Seagate's sampling forced us to rethink our review strategy for these NAS drives. We start off with a feature set comparison of the four drives, followed by a look at raw performance when connected directly to a SATA 6 Gbps port. A 2-bay Intel Atom-based NAS (the LenovoEMC PX2-300D) with single-bay occupancy is then used to evaluate performance in a networked environment. Power consumption numbers and other factors are addressed in the final section, with the networked configuration as a point of reference.

We used two testbeds in our evaluation, one for benchmarking the raw drive performance and the other for evaluating performance when placed in a NAS unit.

SATA Drive Benchmarking Testbed Setup
Processor: Intel Core i7-3770K (4C/8T, 3.50 GHz, 8 MB cache)
Motherboard: Asus P8H77-M Pro
OS Hard Drive: Seagate Barracuda XT 2 TB
Secondary Drives: Corsair Performance 3 Series P3-128 128 GB SSD
                  WD40EFRX / ST4000VN000 / WD4000F9YZ / WD4000FYYZ
Memory: G.Skill ECO Series 4 GB (2 x 2 GB) DDR3-1333 (PC3-10666) F3-10666CL7D-4GBECO CAS 7-7-7-21
Case: Antec VERIS Fusion Remote Max
Power Supply: Antec TruePower New TP-550 550W
Operating System: Windows 7 Ultimate x64

Our NAS testbed was built to evaluate NAS units subject to access from multiple clients (virtual machines). We ran the benchmarks presented in this review on one of the twenty-five available Windows 7 VMs.

AnandTech NAS Testbed Configuration
Motherboard: Asus Z9PE-D8 WS (dual LGA 2011, SSI-EEB)
CPU: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8 x 8 GB) CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128 GB
Secondary Drive: OCZ Technology Vertex 4 128 GB
Tertiary Drive: OCZ RevoDrive Hybrid (1 TB HDD + 100 GB NAND)
Other Drives: 12 x OCZ Technology Vertex 4 64 GB (offline in the host OS)
Network Cards: 6 x Intel ESA I-340 quad-GbE port network adapters
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

The hard drives under the scanner were placed in a single-drive configuration in the Intel Atom D525-based LenovoEMC PX2-300D. The network links of the PX2-300D were bonded in 802.3ad (LACP) mode, but that shouldn't have any bearing on the results, since we are looking at a single-client scenario over a single GbE link.
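For readers curious what 802.3ad bonding involves, the sketch below shows a Debian-style configuration for aggregating two GbE ports. It is purely illustrative, with hypothetical interface names and addressing, and is not the PX2-300D's actual configuration:

    # /etc/network/interfaces fragment (illustrative only)
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad      # LACP; the switch ports must be configured as a matching LAG
        bond-miimon 100        # check link state every 100 ms
        bond-lacp-rate fast

Note that LACP distributes individual flows across the member links rather than splitting a single stream, so one client talking over one GbE connection tops out at a single link's bandwidth regardless of the bond.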

Comments

  • Gigaplex - Wednesday, September 25, 2013 - link

    Avoid Storage Spaces from Windows. It's an unproven and slow "re-imagination" of RAID, as Microsoft likes to call it. The main selling point is the flexibility of adding more drives, but that feature doesn't work as advertised because it doesn't rebalance existing data. If you aren't going to add more drives over time, it has no benefits over conventional RAID: it is far slower and has had far less real-world testing.
  • Bob Todd - Monday, September 9, 2013 - link

    For home use I've gone from RAID 5 to pooling + snapshot parity (DriveBender and SnapRAID, respectively). It's still one big-ass pool so it's easy to manage, I can survive two disks failing simultaneously with no data loss, and even in the event of a disaster where 3+ fail simultaneously I'll only lose whatever data was on the individual disks that croaked. Storage Spaces was nice in theory, but the write speed for the parity spaces is _horrendous_, and it's still striped so I'd risk losing everything (not to mention expansion in multiples of your column size is a bitch for home use).
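For context on the SnapRAID side of that setup: snapshot parity is driven by a plain-text configuration file, and running with two parity files is what provides the two-disk fault tolerance mentioned above. A minimal sketch with hypothetical paths (directive names have varied a little across SnapRAID versions):

    # snapraid.conf (illustrative only; paths are hypothetical)
    parity   /mnt/parity1/snapraid.parity
    2-parity /mnt/parity2/snapraid.2-parity
    content  /var/snapraid/snapraid.content
    disk d1  /mnt/disk1/
    disk d2  /mnt/disk2/
    disk d3  /mnt/disk3/

Parity is only brought up to date when 'snapraid sync' is run, which is what makes this snapshot parity rather than the real-time parity of conventional RAID 5/6.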
  • coolviper777 - Tuesday, October 1, 2013 - link

    If you have a good hardware RAID card with BBU and onboard memory, and decent drives, then I think RAID 5 works just fine for home use.

    I currently have a RAID 5 array on a 3ware 9650SE RAID card, consisting of 4 x 1.5 TB WD Black drives. The card has battery backup and onboard memory. My RAID 5 array works beautifully for my home use. I ran into an issue with a drive going bad; I was able to get a replacement, and the rebuild worked well. There's an automatic volume scan once a week, and I saw it fix a few errors quite a while ago, but nothing very recent.

    I get tremendous speed out of my RAID 5 and even boot my Windows 7 OS from a partition on it. I'll probably move that to an SSD eventually, but they're still expensive at the capacity I need for the C: drive.

    My biggest problem with RAID 1 is that it's hugely wasteful in terms of disk space, and it can be slower than a single drive. I can understand that RAID 5 might give issues for mission-critical stuff. However, for home use, if you combine true hardware RAID 5 with backups of important files, I think it's a great solution in terms of reliability and performance.
  • tjoynt - Wednesday, September 4, 2013 - link

    ++ this. At work we *always* use RAID 6: nowadays, single-drive redundancy is a disaster just waiting to happen.
  • brshoemak - Wednesday, September 4, 2013 - link

    "First off, error checking should in general be done by the RAID system, not by the drive electronic."

    The "should in general" port is where the crux of the issue lies. A RAID controller SHOULD takeover the error-correcting functions if the drive itself is having a problem - but it doesn't do it exclusively, it lets the drives have a first go at it. A non-ERC/TLER/CCTL drive will keep working on the problem for too long and not pass the reigns to the RAID controller as it should.

    Also, RAID 1 is the most basic RAID level in terms of complexity, and I wouldn't have any qualms about running consumer drives in a consumer setting - as long as I had backups. But deal with any RAID level beyond RAID 1 (RAID 10/6), especially those that require parity data, and you could be in for a world of hurt if you use consumer drives.
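For drives that support SCT Error Recovery Control (the mechanism behind the TLER/ERC/CCTL marketing names), smartmontools can query and set the give-up timeout. A sketch assuming a Linux host; the device node is hypothetical:

    # Query the current SCT ERC read/write timeouts
    smartctl -l scterc /dev/sdX

    # Set both timeouts to 7 seconds (the unit is tenths of a second) so the
    # drive reports the error quickly and lets the RAID layer take over
    smartctl -l scterc,70,70 /dev/sdX

Many desktop drives either refuse the command or revert the setting on a power cycle, which is a large part of what the NAS-specific models in this review are selling.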
  • Egg - Wednesday, September 4, 2013 - link

    No. Hard drives have, for a very, very long time, included their own error-checking and correcting codes to deal with small errors. Ever heard of bad blocks?

    RAID 1 exists more to deal with catastrophic failures of entire drives.
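The drive-level remapping Egg describes is visible in the SMART attributes. For instance, on a Linux host (hypothetical device node):

    # Sectors the drive has remapped, and sectors it is waiting to retest,
    # are the visible residue of its internal error checking and correction
    smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'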
  • tjoynt - Wednesday, September 4, 2013 - link

    RAID systems can't do error checking at that level because they don't have access to it: only the drive electronics do.
    The problems with recovering RAID arrays don't usually show up with RAID 1 arrays but with RAID 5 arrays, because you have a LOT more drives to read.
    I swore off consumer-level RAID 5 when my personal array (on Intel Matrix RAID :P) dropped two drives and refused to rebuild with them even though they were still perfectly functional.
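The risk tjoynt ran into grows quickly with array size, and it is easy to put rough numbers on it. A back-of-the-envelope sketch in Python, assuming the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 1e14 bits read (enterprise models typically quote 1e15):

    # Rough probability of hitting at least one unrecoverable read error
    # while rebuilding a degraded RAID 5 array: every surviving drive must
    # be read end to end to reconstruct the failed member.
    def rebuild_ure_probability(num_drives, capacity_tb, ure_per_bits=1e14):
        bits_read = (num_drives - 1) * capacity_tb * 1e12 * 8  # decimal TB -> bits
        return 1 - (1 - 1 / ure_per_bits) ** bits_read

    # Four 4 TB drives with one failed: the three survivors are read in full.
    print(f"{rebuild_ure_probability(4, 4):.0%}")  # roughly 62%

Under these assumptions, even a modest 4 x 4 TB RAID 5 rebuild has better-than-even odds of tripping over a URE, which is why several commenters here lean toward RAID 6 or parity schemes that confine the damage.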
  • Rick83 - Thursday, September 5, 2013 - link

    Just fix it by hand; it's not that difficult. Of course, with pseudo-hardware RAID you're buggered, as getting the required access to the disk and forcing partial rebuilds isn't easily possible.

    I've had a second disk drop out on me once, and I don't recall exactly how I ended up fixing it, but it was definitely possible. I probably just let the drive "repair" the unreadable sectors by writing 512 bytes of rubbish to the relevant locations, tanked the loss of those few bytes, and then rebuilt to the redundancy disk.
    So yes, there probably was some data loss, but bad sectors aren't the end of the world.

    And by using surface scans you can make the RAID drop drives with bad sectors at the first sign of an issue, then resync and be done with it. A 3-6 drive RAID 5 is perfectly okay if you only have moderate availability requirements. For high availability, RAID 6 or RAID 10 arrays with 6-12 disks are a better choice.
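With Linux mdadm (as opposed to the pseudo-hardware RAID Rick83 is describing), the by-hand recovery looks roughly like this. Device names are hypothetical, and a current backup is assumed before forcing anything:

    # Force assembly of an array whose members were dropped after transient
    # errors; mdadm picks the components with the freshest event counts
    mdadm --stop /dev/md0
    mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # Re-add the kicked member (or a replacement) and let the array resync
    mdadm --manage /dev/md0 --add /dev/sde1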
  • mooninite - Thursday, September 5, 2013 - link

    Intel chipsets do not offer hardware RAID. The RAID you see is purely software: the Intel BIOS just formats your hard drive with Intel's IMSM (Intel Matrix Storage Manager) metadata, and the operating system has to interpret that format and do all the RAID parity/stripe calculations. Think of it like a file system.

    Calling Intel's RAID "hardware" or "pseudo-hardware" is a misconception I'd like to see die. :)
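On Linux, the software nature of the format is easy to see: the md driver handles Intel's metadata directly, and mdadm can report on it (device node hypothetical):

    # Show what the platform's Intel RAID option ROM / firmware supports
    mdadm --detail-platform

    # Examine a member disk; arrays created by the Intel BIOS report their
    # metadata type as "imsm"
    mdadm --examine /dev/sdX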
  • mcfaul - Tuesday, September 10, 2013 - link

    "First off, error checking should in general be done by the RAID system, not by the drive electronic. "

    You need to keep in mind how drives work. They are split into 512-byte/4K sectors, and each sector has a significant chunk of ECC at the end, so all drives are continually doing both error checking and error recovery on every single read.

    Plus, if it is possible to recover from an error quickly, it is obviously advantageous for the drive to do so, as there may not be a second copy of the data available (e.g., when rebuilding a RAID 1 or RAID 5 array).
