Feature Set Comparison

Enterprise hard drives come with features such as real-time linear and rotational vibration correction, dual-stage actuators to improve head positional accuracy, multi-axis shock sensors to detect and compensate for shock events, and dynamic fly-height technology to increase data access reliability. For the WD Red units, Western Digital incorporates some features in firmware under the NASware moniker. We have already covered these features in our previous Red reviews. These hard drives also expose some of their interesting firmware aspects through their SATA controller.

A high-level overview of the various supported SATA features is provided by HD Tune Pro 5.50.

The HGST Ultrastar He6 supports almost all features, the exceptions being TRIM (this is obviously not an SSD) and Automatic Acoustic Management (a way to manage sound levels by adjusting the seek velocity of the heads). The Seagate Enterprise Capacity drive omits the host protected area and device configuration overlay features, as well as Advanced Power Management (APM). APM's absence means that the head parking interval can't be set through ATA commands by the NAS OS. Device Configuration Overlay allows the hard drive to report modified drive parameters to the host; its absence is not a big concern for most applications. Coming to the WD Red, we find it is quite similar to the Ultrastar He6 in terms of feature support, except for the absence of APM.
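
On drives that do implement APM, the head-parking behavior can be queried and adjusted from the host over the standard ATA command set. The following is a minimal sketch, assuming a Linux host with the hdparm utility and a hypothetical /dev/sdb device node; on drives without APM (such as the Red and the Seagate unit here), hdparm simply reports the feature as unsupported.

```python
#!/usr/bin/env python3
"""Minimal sketch: query/set the ATA Advanced Power Management (APM) level
via hdparm on a Linux host. The /dev/sdb device node is hypothetical; on
drives without APM, hdparm reports the feature as not supported."""
import subprocess

DEVICE = "/dev/sdb"  # hypothetical device node; adjust for your system

def get_apm(device: str) -> str:
    # 'hdparm -B <dev>' prints the current APM level (1-254),
    # or 'not supported' if the drive lacks the feature set.
    result = subprocess.run(["hdparm", "-B", device],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def set_apm(device: str, level: int) -> None:
    # Levels 1-127 permit aggressive power saving (head parking/spin-down);
    # 128-254 do not; 255 disables APM on drives that allow it.
    subprocess.run(["hdparm", "-B", str(level), device], check=True)

if __name__ == "__main__":
    print(get_apm(DEVICE))
    # Example: discourage aggressive head parking on an APM-capable drive.
    # set_apm(DEVICE, 254)
```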

We get a better idea of the supported features using FinalWire's AIDA64 system report. The table below summarizes the extra information generated by AIDA64 (that is not already provided by HD Tune Pro).

Supported Features

Feature                        | WD Red              | Seagate Enterprise Capacity v4 | HGST Ultrastar He6
DMA Setup Auto-Activate        | Supported, Disabled | Supported, Disabled            | Supported, Disabled
Extended Power Conditions      | Not Supported       | Supported, Enabled             | Supported, Enabled
Free-Fall Control              | Not Supported       | Not Supported                  | Not Supported
General Purpose Logging        | Supported, Enabled  | Supported, Enabled             | Supported, Enabled
In-Order Data Delivery         | Not Supported       | Not Supported                  | Supported, Disabled
NCQ Priority Information       | Supported           | Not Supported                  | Supported
Phy Event Counters             | Supported           | Supported                      | Supported
Release Interrupt              | Not Supported       | Not Supported                  | Not Supported
Sense Data Reporting           | Not Supported       | Supported, Disabled            | Supported, Disabled
Software Settings Preservation | Supported, Enabled  | Supported, Enabled             | Supported, Enabled
Streaming                      | Supported, Disabled | Not Supported                  | Supported, Enabled
Tagged Command Queuing         | Not Supported       | Not Supported                  | Not Supported

A few aspects of the above table are worth calling out. While the two enterprise drives support the Extended Power Conditions (EPC) extensions for fine-grained power management, the Red lineup doesn't. NCQ priority information lets the host tag individual queued commands as high priority, which can help in complex workload environments. While WD and HGST support it on their drives, Seagate seems to believe it is unnecessary. The NCQ streaming feature enables isochronous data transfers for multimedia streams while also improving the performance of lower-priority transfers. This could be very useful for media server and video editing use cases. The Seagate enterprise drive doesn't support it, and, surprisingly, the Red ships with it disabled by default.
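
For readers who want to check these feature bits on their own drives, the same information is available from the drive's IDENTIFY DEVICE data. Below is a minimal sketch that scans the output of hdparm -I on a Linux host; the exact feature strings and the /dev/sdb device node are assumptions and may vary between hdparm versions.

```python
#!/usr/bin/env python3
"""Minimal sketch: scan 'hdparm -I' output for the SATA feature sets
discussed above. The feature strings below are assumptions based on typical
hdparm output (they mirror what HD Tune/AIDA64 report) and may differ
between hdparm versions; /dev/sdb is a hypothetical device node."""
import subprocess

FEATURES = [
    "NCQ priority information",
    "DMA Setup Auto-Activate",
    "Phy event counters",
    "Software settings preservation",
    "Extended Power Conditions",
]

def identify(device: str) -> str:
    # 'hdparm -I' dumps the drive's IDENTIFY DEVICE data, including a
    # Commands/features section listing the supported/enabled feature sets.
    result = subprocess.run(["hdparm", "-I", device],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    info = identify("/dev/sdb").lower()
    for feature in FEATURES:
        state = "reported" if feature.lower() in info else "not reported"
        print(f"{feature}: {state}")
```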

Comments

  • sleewok - Monday, July 21, 2014

    Based on my experience with the WD Red drives, I'm not surprised you had one fail that quickly. I have a 5-disk (2TB Red) RAID6 setup with my Synology Diskstation. I had 2 drives fail within a week and another one within a month. WD replaced them all under warranty. I had a 4th drive seemingly fail, but it seemed to fix itself (I may have run a disk check). I simply can't recommend WD Red if you want a reliable setup.
  • Zan Lynx - Monday, July 21, 2014

    If we're sharing anecdotal evidence, I have two 2TB Reds in a small home server and they've been great. I run a full btrfs scrub every week and never find any errors.

    Infant mortality is a common issue with electronics. In the past I had two Seagate 15K SCSI drives that both failed in the first week. Does that mean Seagate sucks?
  • icrf - Monday, July 21, 2014

    I had a lot of trouble with 3 TB Green drives: 2 or 3 early failures in an array of 5 or 6, and one that didn't fail but silently corrupted data (ZFS was good for letting me know about that). Once all the failures were replaced under warranty, they all did fine.

    So I guess test a lot, keep on top of it for the first few months or year, and make use of their pretty painless RMA process. WD isn't flawless, but I'd still use them.
  • Anonymous Blowhard - Monday, July 21, 2014

    >early failure
    >Green drives

    Completely unsurprised here. I've had nothing but bad luck with any of those "intelligent power saving" drives that like to park their heads if you aren't constantly hammering them with I/O.

    Big ZFS fan here as well; make sure you're on ECC RAM, though, as I've seen way too many people without it.
  • icrf - Monday, July 21, 2014

    I'm building a new array and will use Red drives, but I'm thinking of going btrfs instead of zfs. I'll still use ECC RAM. Did on the old file server.
  • spazoid - Monday, July 21, 2014

    Please stop this "ZFS needs ECC RAM" nonsense. ZFS does not have any particular need for ECC RAM that every other filer doesn't have.
  • Anonymous Blowhard - Monday, July 21, 2014

    I have no intention of arguing with yet another person who's totally wrong about this.
  • extide - Monday, July 21, 2014

    You both are partially right, but the fact is that non-ECC RAM on ANY file server can cause corruption. ZFS does a little bit more "processing" on the data (checksums, optional compression, etc.) which MIGHT expose you to more issues due to bit flips in memory, but still, if you are getting frequent memory errors, you should be replacing the bad stick; good memory does not really have frequent bit errors (unless you live in a nuclear power station or something!)

    FWIW, I have a ZFS machine with a 7TB array, and run scrubs at least once a month, preferably twice. I have had it up and running in its current state for over 2 years and have NEVER seen even a SINGLE checksum error according to zpool status. I am NOT using ECC RAM.

    In a home environment, I would suggest ECC RAM, but in a lot of cases people are re-using old equipment, and many times it is a desktop-class CPU which won't support ECC, which means moving to ECC RAM might require replacing a lot of other stuff as well, and thus cost quite a bit of money. Now, if you are buying new stuff, you might as well go with an ECC-capable setup as the costs aren't really much more, but that only applies if you are buying all new hardware. For a business/enterprise setup, yes, I would say you should always run ECC, and not only on your ZFS servers, but all of them. However, most of the people on here are not going to be talking about using ZFS in an enterprise environment, at least the people who aren't using ECC!

    tl;dr -- Non-ECC is FINE for home use. You should always have a backup anyway, though. ZFS by itself is not a backup, unless you have your data duplicated across two volumes.
  • alpha754293 - Monday, July 21, 2014

    The biggest problem I had with ZFS is its total lack of data recovery tools. If your array bites the dust (two non-rotating drives) on a striped zpool, you're pretty much hosed. (The array won't start up.) You can't just do a bit read to recover/salvage whatever data is still on the magnetic platters of the remaining drives, and there's nothing that tells me you can clone a dead drive in its entirety (including its UUID) to "fool" the system into thinking it's the EXACT same drive (when it's actually been replaced) so that you can spin up the array/zpool again and begin the data extraction process.

    For that reason, ZFS was dumped back in favor of NTFS (because if an NTFS array goes down, I can still bit-read the drives and salvage the data that's left on the platters). And I HAD a Premium Support Subscription from Sun (back when it was still Sun), and even they TOLD me that they don't have ANY data recovery tools like that. And they couldn't tell me the procedure for cloning the dead drives either (including their UUIDs).

    Btrfs was also ruled out for the same technical reasons. (Zero data recovery tools available should things go REALLY far south.)
  • name99 - Tuesday, July 22, 2014

    "because if an NTFS array goes down, I can still bit-read the drives, and salvage the data that's left on the platters"
    Are you serious? Extracting info from a bag of random sectors was a reasonable thing to do from a 1.44MB floppy disk; it is insane to imagine you can do this from 6 TB or 18 TB or whatever of data.
    That's like me giving you a house full of extremely finely shredded and then mixed-up paper, and you imagining you can reconstruct a million useful documents from it.
