Miscellaneous Aspects and Final Words

In the process of reviewing the Western Digital Red 6 TB drives, we did face one hiccup. Our QNAP testbed NAS finished resyncing a RAID-5 volume built with three of those drives, but then reported an I/O error on one of them.

We were a bit surprised (in all our experience with hard drive review units, we had never had one fail that quickly). To investigate the issue, we ran the SMART diagnostics as well as a short self-test from within the NAS UI. Even though both came back clean, the NAS still refused to accept the disk back into the RAID volume. Fortunately, we had a spare drive on hand to rebuild the volume. Putting the 'failed' drive in a PC didn't reveal any problems either. We are chalking this up to compatibility issues, though it is strange that the rebuilt volume with the same disks completed our benchmarking without any problems. In any case, we would advise prospective buyers to ensure that their NAS is on the compatibility list for the drive before moving forward with the purchase.
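
For readers who want to run a similar check on a drive pulled from a NAS, the sketch below shows one way to query SMART health and kick off a short self-test from a Linux PC using smartmontools. This is only an illustration of the kind of manual check we performed, not a tool from the review; the /dev/sdb device path is an assumption, and smartctl typically needs root privileges.

```python
# Minimal sketch: query SMART health/attributes and start a short self-test
# via smartmontools. Assumes smartctl is installed and run with root privileges;
# the device path below is a placeholder for the drive under test.
import subprocess

DEVICE = "/dev/sdb"  # hypothetical device node; adjust for your system

def smartctl(*args):
    """Run smartctl with the given arguments and return its text output."""
    result = subprocess.run(["smartctl", *args, DEVICE],
                            capture_output=True, text=True)
    return result.stdout

print(smartctl("-H"))      # overall health assessment (PASSED / FAILED)
print(smartctl("-A"))      # raw SMART attributes (reallocated sectors, etc.)
smartctl("-t", "short")    # start a short self-test (takes a couple of minutes)
# Once the test finishes, inspect the result with: smartctl -l selftest /dev/sdb
```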

RAID Resync and Power Consumption

The other aspect of interest when it comes to hard drives and NAS units is the RAID rebuild / resync time and the associated power consumption. The following table presents the relevant numbers for resyncing a RAID-5 volume involving the respective drives.

QNAP TS-EC1279U-SAS-RP RAID-5 Volume Resync
Disk Model                                   | Duration    | Avg. Power
Western Digital Red 6 TB                     | 14h 27m 52s | 90.48 W
Seagate Enterprise Capacity 3.5" HDD v4 6 TB | 10h 24m 22s | 105.42 W
HGST Ultrastar He6 6 TB                      | 12h 34m 20s | 95.36 W
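
Since the three drives finish the resync at different rates, the total energy consumed per rebuild is arguably more telling than the average power draw. A quick back-of-the-envelope calculation from the table above (a sketch of the arithmetic only, using nothing beyond the numbers already presented):

```python
# Energy per RAID-5 resync, computed from the table above:
# energy (Wh) = average power (W) x duration (h).
resync = {
    "Western Digital Red 6 TB":                      ("14:27:52", 90.48),
    "Seagate Enterprise Capacity 3.5\" HDD v4 6 TB": ("10:24:22", 105.42),
    "HGST Ultrastar He6 6 TB":                       ("12:34:20", 95.36),
}

for drive, (duration, avg_power_w) in resync.items():
    h, m, s = (int(x) for x in duration.split(":"))
    print(f"{drive}: {avg_power_w * (h + m / 60 + s / 3600):.0f} Wh")
```

This works out to roughly 1309 Wh for the WD Red, 1097 Wh for the Seagate and 1199 Wh for the HGST: the Seagate's shorter resync more than offsets its higher average power draw.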

Update: We also have power consumption numbers under different access scenarios. In each case, three of the drives under consideration are configured in a RAID-5 volume in the NAS, and the access mode is exercised by running the corresponding IOMeter trace from 25 clients simultaneously.

QNAP TS-EC1279U-SAS-RP RAID-5 Power Consumption
Workload                          | WD Red 6 TB | Seagate Enterprise Capacity 3.5" HDD v4 6 TB | HGST Ultrastar He6 6 TB
Idle                              | 79.34 W     | 87.16 W                                      | 84.98 W
Max. Throughput (100% Reads)      | 93.90 W     | 107.22 W                                     | 97.58 W
Real Life (60% Random, 65% Reads) | 84.04 W     | 109.25 W                                     | 94.03 W
Max. Throughput (50% Reads)       | 96.74 W     | 112.82 W                                     | 99.25 W
Random 8 KB (70% Reads)           | 85.22 W     | 105.65 W                                     | 91.47 W

As expected, the Seagate Enterprise Capacity 3.5" HDD v4 consumes the most power, while the He6 fares much better thanks to its HelioSeal technology despite spinning at the same rotational speed. The WD Red, on the other hand, wins the power efficiency battle as expected, which is good news for home consumers who value that over pure performance.
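
To put those idle numbers in perspective, the sketch below annualizes the idle power delta between the three RAID-5 configurations. The electricity rate is a placeholder assumption (not a figure from this review), so treat the dollar amounts as purely illustrative.

```python
# Annualized cost of the idle power delta between the three-drive RAID-5 configs,
# using the idle figures from the table above. The electricity rate is a
# placeholder assumption; substitute your own.
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.12  # hypothetical rate

idle_w = {
    "WD Red 6 TB": 79.34,
    "Seagate Enterprise Capacity 3.5\" HDD v4 6 TB": 87.16,
    "HGST Ultrastar He6 6 TB": 84.98,
}

baseline = min(idle_w.values())  # the WD Red configuration
for drive, watts in idle_w.items():
    extra_kwh = (watts - baseline) * HOURS_PER_YEAR / 1000
    print(f"{drive}: +{extra_kwh:.1f} kWh/yr (~${extra_kwh * RATE_USD_PER_KWH:.2f}/yr)")
```

At the assumed rate, idling the Seagate configuration costs on the order of eight dollars a year more than the WD Red one, which is modest for a single NAS but adds up across many drives running 24x7.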

Concluding Remarks

We have taken a look at three different 6 TB drives, but it is hard to recommend any particular one as the clear-cut choice unless the target application is known. The interesting aspect here is that the three drives have largely non-overlapping use-cases. For home consumers who are interested in stashing their media collection / smartphone-captured photos and videos, and who expect only four or five clients to access the NAS simultaneously, the lower power consumption as well as the price of the WD Red 6 TB is hard to ignore. Users looking for absolute performance, as well as those who need multiple iSCSI LUNs for virtual machines and similar applications, will find the Seagate Enterprise Capacity v4 6 TB a good choice. The HGST Ultrastar He6 is based on cutting-edge technological advancements, and hence carries a premium. However, the TCO aspect turns out to be in its favour, particularly when multiple drives running 24x7 are needed. It offers the best balance of power consumption, price and performance.
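
The TCO argument can be made concrete with a simple model that adds the purchase price to the electricity consumed over the drive's service life. The sketch below is illustrative only: the function parameters, the example prices and the electricity rate are placeholder assumptions, not figures from this review.

```python
# Illustrative-only TCO model for a drive running 24x7.
# All inputs in the example calls are hypothetical placeholders.
def tco_usd(price_usd, avg_power_w, years=3.0, rate_usd_per_kwh=0.12):
    """Purchase price plus electricity cost over the service life."""
    energy_kwh = avg_power_w * 24 * 365 * years / 1000
    return price_usd + energy_kwh * rate_usd_per_kwh

# Hypothetical comparison: a cheaper drive drawing 2 W more than a pricier,
# more efficient one narrows the upfront price gap over three years of 24x7 use.
print(tco_usd(price_usd=300, avg_power_w=7.5))  # ~323.7
print(tco_usd(price_usd=330, avg_power_w=5.5))  # ~347.3
```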

Comments

  • sleewok - Monday, July 21, 2014

    Based on my experience with the WD Red drives, I'm not surprised you had one fail that quickly. I have a 5-disk (2 TB Red) RAID-6 setup with my Synology DiskStation. I had 2 drives fail within a week and another one within a month. WD replaced them all under warranty. A 4th drive seemingly failed, but then appeared to fix itself (I may have run a disk check). I simply can't recommend WD Red if you want a reliable setup.
  • Zan Lynx - Monday, July 21, 2014

    If we're sharing anecdotal evidence, I have two 2TB Reds in a small home server and they've been great. I run a full btrfs scrub every week and never find any errors.

    Infant mortality is a common issue with electronics. In the past I had two Seagate 15K SCSI drives that both failed in the first week. Does that mean Seagate sucks?
  • icrf - Monday, July 21, 2014

    I had a lot of trouble with 3 TB Green drives: 2 or 3 early failures in an array of 5 or 6, and one that didn't fail outright but silently corrupted data (ZFS was good for letting me know about that). Once all the failures were replaced under warranty, they all did fine.

    So I guess test a lot, keep on top of it for the first few months or year, and make use of their pretty painless RMA process. WD isn't flawless, but I'd still use them.
  • Anonymous Blowhard - Monday, July 21, 2014

    >early failure
    >Green drives

    Completely unsurprised here, I've had nothing but bad luck with any of those "intelligent power saving" drives that like to park their heads if you aren't constantly hammering them with I/O.

    Big ZFS fan here as well, make sure you're on ECC RAM though as I've seen way too many people without it.
  • icrf - Monday, July 21, 2014

    I'm building a new array and will use Red drives, but I'm thinking of going btrfs instead of zfs. I'll still use ECC RAM. Did on the old file server.
  • spazoid - Monday, July 21, 2014

    Please stop this "ZFS needs ECC RAM" nonsense. ZFS has no particular need for ECC RAM that every other filer doesn't have as well.
  • Anonymous Blowhard - Monday, July 21, 2014

    I have no intention of arguing with yet another person who's totally wrong about this.
  • extide - Monday, July 21, 2014

    You are both partially right, but the fact is that non-ECC RAM on ANY file server can cause corruption. ZFS does a little bit more "processing" on the data (checksums, optional compression, etc.), which MIGHT expose you to more issues due to bit flips in memory, but still, if you are getting frequent memory errors, you should be replacing the bad stick; good memory does not really have frequent bit errors (unless you live in a nuclear power station or something!)

    FWIW, I have a ZFS machine with a 7 TB array, and run scrubs at least once a month, preferably twice. I have had it up and running in its current state for over 2 years and have NEVER seen even a SINGLE checksum error according to zpool status. I am NOT using ECC RAM.

    In a home environment, I would suggest ECC RAM, but in a lot of cases people are re-using old equipment, often with a desktop-class CPU that won't support ECC, which means moving to ECC RAM might require replacing a lot of other stuff as well, and thus cost quite a bit of money. If you are buying new hardware, you might as well go with an ECC-capable setup, as the cost isn't really much more. For a business/enterprise setup, yes, I would say you should always run ECC, and not only on your ZFS servers but on all of them. However, most of the people here are not going to be talking about using ZFS in an enterprise environment, at least the people who aren't using ECC!

    tl;dr: Non-ECC is FINE for home use. You should always have a backup anyway, though. ZFS by itself is not a backup, unless you have your data duplicated across two volumes.
  • alpha754293 - Monday, July 21, 2014

    The biggest problem I had with ZFS is its total lack of data recovery tools. If your array bites the dust (two non-rotating drives) on a striped zpool, you're pretty much hosed, since the array won't start up. You can't just do a bit read to recover/salvage whatever data is still on the platters of the remaining drives, and there's nothing that tells me you can clone a drive in its entirety (including its UUID) in order to "fool" the system into thinking it's EXACTLY the same drive (when it has actually been replaced) so that you can spin up the array/zpool again and begin the data extraction process.

    For that reason, ZFS was dumped back in favor of NTFS (because if an NTFS array goes down, I can still bit-read the drives, and salvage the data that's left on the platters). And I HAD a Premium Support Subscription from Sun (back when it was still Sun), and even they TOLD me that they don't have ANY data recovery tools like that. They couldn't tell me the procedure for cloning the dead drives (including their UUIDs) either.

    Btrfs was also ruled out for the same technical reasons. (Zero data recovery tools available should things go REALLY far south.)
  • name99 - Tuesday, July 22, 2014

    "because if an NTFS array goes down, I can still bit-read the drives, and salvage the data that's left on the platters"
    Are you serious? Extracting info from a bag of random sectors was a reasonable thing to do on a 1.44 MB floppy disk; it is insane to imagine you can do this with 6 TB or 18 TB or whatever of data.
    That's like me giving you a house full of extremely finely shredded and then mixed-up paper, and you imagining you can reconstruct a million useful documents from it.
