Feature Set Comparison

Enterprise hard drives come with features such as real-time linear and rotational vibration correction, dual actuators to improve head positioning accuracy, multi-axis shock sensors to detect and compensate for shock events, and dynamic fly-height technology to improve data access reliability. These hard drives also expose some of their interesting firmware aspects through their SATA controller, but, before looking into those, let us compare the specifications of the ten drives being considered today. Even though data for all ten drives is available, the comparison table can show only two drives side-by-side at a time for usability reasons; the specifications below are those of the WD4001FFSX.

Comparative HDD Specifications
Model Number: WD4001FFSX
Interface: SATA 6 Gbps
Sector Size / AF: 512E
Rotational Speed: 7200 RPM
Cache: 64 MB
Rated Load / Unload Cycles: 600 K
Non-Recoverable Read Errors / Bits Read: < 1 in 10^14
MTBF: 1 M hours
Rated Workload: ~180 TB/yr
Operating Temperature Range: 5°C to 60°C
Acoustics (Seek Average): 34 dBA
Physical Parameters: 14.7 x 10.16 x 2.61 cm, 750 g
Warranty: 5 years
Price (in USD, as-on-date): $260
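
To put the non-recoverable read error rating into perspective, the short calculation below is a back-of-the-envelope sketch (not a vendor figure) of the expected number of unrecoverable read errors when a full 4 TB drive rated at < 1 error in 10^14 bits is read end to end.

```python
# Back-of-the-envelope URE estimate for a 4 TB drive rated at < 1 error per 1e14 bits read.
# Capacity and error rate are taken from the specification table above; the result is
# only an upper-bound estimate, not a measured value.

CAPACITY_BYTES = 4e12      # 4 TB (decimal), as marketed
URE_RATE = 1 / 1e14        # rated upper bound: < 1 error per 10^14 bits read

bits_read = CAPACITY_BYTES * 8
expected_errors = bits_read * URE_RATE

print(f"Bits read in one full-drive pass: {bits_read:.2e}")
print(f"Expected UREs (upper bound):      {expected_errors:.2f}")
# ~0.32 expected errors per full read of the drive - one reason URE ratings matter
# for RAID rebuilds, which must read every surviving disk end to end.
```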

A high-level overview of the various supported SATA features is provided by HD Tune Pro.

A brief description of some of the SATA features is provided below:

  • S.M.A.R.T: Most readers are familiar with the SMART (Self-Monitoring, Analysis and Reporting Technology) feature, which provides drive parameters that can serve as reliability indicators. Some of these include the reallocated sector count (an indication of bad blocks), spin retry count (an indication of problems with the spindle motor), command timeout (an indication of issues with the power supply or data cable), temperature, and power-on hours.
  • 48-bit Address: The first ATA standard specified 24 bits for the logical block address (sector), which was later updated to 28 bits. Using 28 bits, one could address up to 137.4 GB (2^28 * 2^9 bytes), which capped SATA drive sizes. In 2003, an update to the standard allowed 48 bits for the LBA to get past this limit (a short worked example follows this list). No modern SATA drive comes without support for 48-bit addresses.
  • Read Look-Ahead: Drives supporting this feature keep reading ahead even after the current command is completed. The data is transferred to the buffer for faster response to the host in the case of sequential accesses.
  • Write Cache: This feature is pretty much self-explanatory, with data being stored in the buffer prior to being committed to the platters. There is a risk of data loss if power is lost before the commit, so the feature can be disabled by the end user.
  • Host Protected Area (HPA): Drives supporting this feature have some sectors hidden from the OS. It is usually used by manufacturers to store recovery data, but users can also 'hide' data by allocating sectors to the HPA.
  • Device Configuration Overlay (DCO): Drives supporting this feature can report modified drive parameters (such as a reduced capacity or a trimmed-down feature set) to the host.
  • Security Mode: Drives supporting this feature can help protect themselves from unauthorized accesses or the setting of new passwords (by freezing such functions). Readers might have encountered frozen security settings on SSDs while trying to secure erase them.
  • Automatic Acoustic Management: AAM was declared obsolete in the 2010 ATA standards revision. On supported disks, it enables reduction of the noise arising from fast seeks of the heads. In general, configuring the AAM value to something low results in a quiet but slow disk, while a high value results in a loud but fast disk.
  • Power Management: Support for this feature enables drives to follow specific power management state transitions via commands from the host. Supported modes include IDLE, SLEEP and STANDBY.
  • Advanced Power Management (APM): This feature allows setting a value that controls disk spin-down behavior as well as head-parking frequency. Some disks have proprietary commands for achieving this functionality (for example, the WDIDLE tool from Western Digital can be used with the Green drives).
  • Interface Power Management: Drives supporting this feature allow for fine-tuning of power consumption by being aware of various interface power modes such as PHY Ready, Partial and Slumber (in decreasing order of power consumption). Transitions from a higher power mode to a lower one usually happen after some period of inactivity. They can be either host-initiated (HIPM) or device-initiated (DIPM). Note that these refer to the SATA interface and not the device itself. As such, they are complementary to the power management feature mentioned earlier.
  • Power-up in Standby: This SATA feature allows drives to be powered up into the Standby state to minimize inrush current at power-up and allow the host to sequence the spin-up of devices. This is particularly useful for NAS units and RAID environments. Desktop drives usually come with this feature disabled, but there are jumper settings on the drive to enable controlled spin-up via the ATA-standard spin-up command. For drives targeting NAS units, Power-up in Standby is enabled by default.
  • SCT Tables: The SMART Command Transport (SCT) tables feature extends the SMART protocol and provides additional information about the drive when requested by the host.
  • Native Command Queuing (NCQ): This is an extension to the SATA protocol that allows drives to reorder received commands for more efficient operation.
  • TRIM: This is a well known feature for readers familiar with SSDs. It is not relevant to any of the drives being discussed today.
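
Tying back to the 48-bit address bullet above, here is a small worked example (plain arithmetic, no drive access involved) showing why 28-bit LBAs cap a drive at roughly 137.4 GB with 512-byte sectors, while 48-bit LBAs push the ceiling far beyond any current capacity.

```python
# Maximum addressable capacity for a given LBA width with 512-byte logical sectors.
SECTOR_BYTES = 512

def max_capacity_bytes(lba_bits: int) -> int:
    """Number of addressable bytes with lba_bits-wide logical block addresses."""
    return (2 ** lba_bits) * SECTOR_BYTES

for bits in (28, 48):
    cap = max_capacity_bytes(bits)
    print(f"{bits}-bit LBA: {cap / 1e9:,.1f} GB ({cap / 1e12:,.1f} TB)")

# 28-bit LBA: 137.4 GB (0.1 TB)              <- the pre-2003 limit mentioned above
# 48-bit LBA: 144,115,188.1 GB (144,115.2 TB), i.e. roughly 128 PiB
```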

We get a better idea of the supported features using FinalWire's AIDA64 system report. The table below summarizes the extra information generated by AIDA64 (that is not already provided by HD Tune Pro).

Comparative HDD Features
DMA Setup Auto-Activate: Supported, Disabled
Extended Power Conditions: Supported, Disabled
Free-Fall Control: Not Supported
General Purpose Logging: Supported, Enabled
In-Order Data Delivery: Not Supported
NCQ Priority Information: Supported
Phy Event Counters: Supported
Release Interrupt: Not Supported
Sense Data Reporting: Not Supported
Software Settings Preservation: Supported, Enabled
Streaming: Supported, Disabled
Tagged Command Queuing: Not Supported
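
For readers on Linux without HD Tune Pro or AIDA64, much of the same feature and SMART information can be pulled from the command line. The snippet below is a minimal sketch, assuming smartmontools and hdparm are installed; /dev/sdX is a hypothetical placeholder for the actual drive, and root privileges are required.

```python
#!/usr/bin/env python3
# Minimal sketch: query drive feature/SMART details comparable to the tables above.
# Assumes smartmontools and hdparm are installed; replace the placeholder /dev/sdX
# with the actual device and run as root.
import subprocess
from pathlib import Path

DEVICE = "/dev/sdX"  # hypothetical placeholder device path

def run(cmd):
    """Run a command and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Identify info: model, firmware, SATA version, sector sizes, rotation rate
print(run(["smartctl", "-i", DEVICE]))

# SMART attribute table: reallocated sectors, spin retries, temperature, power-on hours...
print(run(["smartctl", "-A", DEVICE]))

# Current write-cache (-W), APM (-B) and AAM (-M) settings as reported by the drive
print(run(["hdparm", "-W", "-B", "-M", DEVICE]))

# Host-side SATA interface power management policy (relates to HIPM/DIPM); typical
# values are max_performance, medium_power and min_power.
for policy in Path("/sys/class/scsi_host").glob("host*/link_power_management_policy"):
    print(policy, "=>", policy.read_text().strip())
```

On drives that support it, smartctl -x additionally dumps the SCT status and the SATA Phy event counters listed in the features table.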
62 Comments

  • sin_tax - Thursday, August 21, 2014 - link

    Transcoding != streaming. Most all NAS boxes are underpowered when it comes to transcoding.
  • jaden24 - Friday, August 8, 2014 - link

    Correction.

    WD Red Pro: Non-Recoverable Read Errors / Bits Read 10^15
  • Per Hansson - Friday, August 8, 2014 - link

    I'm not qualified to say for certain, but I think it's just marketing
    bullshit to make the numbers look better when in fact they are the same?

    Non-Recoverable Read Errors per Bits Read:
    <1 in 10^14 WD Red
    <10 in 10^15 WD Red Pro
    <10 in 10^15 WD Se

    Just for reference the Seagate Constellation ES.3, Hitachi Ultrastar He6 &
    7K4000 are all rated for: 1 in 10^15
  • shodanshok - Friday, August 8, 2014 - link

    Hi, look at the specs carefully: WD claims <10 errors per 10^15 bits read.
    Previously, it was <1 per 10^14.

    In other words, they increased the exponent (14 vs 15), but the value is (more or less) the same!
  • jaden24 - Friday, August 8, 2014 - link

    Nice catch. I just went and looked up all of them.

    RE = 10 in 10^16
    Red Pro = 10 in 10^15
    SE = 10 in 10^15
    Black = 1 in 10^14
    Red = 1 in 10^14
    Green = 1 in 10^14

    It looks like they switched to this marketing on the RE and SE. The terms are there in black and white, but it is a deviation from the previous measurement scheme, and can only be construed as deceptive in my book. I love WD, but this pisses me off.
  • isa - Friday, August 8, 2014 - link

    For my home/home office use, the most important aspect by far for me is reliability/failure rates, mainly because I don't want to invest in more than 2 drives or go beyond RAID 1. I realize the most robust reliability info is based on several years of statistics in the field, but is there any kind of accelerated life test that AnandTech can do or get access to that has been proven to be a fairly reliable indicator of differences in reliability/failure rates across the models? I'm aware of the manufacturer specs, but I don't regard those as objective or measured apples-to-apples across manufacturers.

    If historical reliability data is fairly consistent across multiple drives in a manufacturer's line, perhaps at least provide that as a proxy for predicted actual reliability. Thanks for considering any of this!
  • NonSequitor - Friday, August 8, 2014 - link

    If you're really concerned about reliability, you need double-parity rather than RAID-1. In a double-parity setup the system knows which drive is returning bad data; with RAID-1 it doesn't know which drive is right. Of course, either way you should have a robust backup infrastructure in place if your data matters.
  • shodanshok - Friday, August 8, 2014 - link

    Mmm... it depends.

    If you are speaking about silent data corruption (i.e. read errors that aren't detected by the built-in ECC), RAIDZ2 and _some_ RAID6 implementations should be able to isolate the wrong data. However, this requires that both parity blocks (P and Q) be recalculated for every read, hammering the disk subsystem and the controller's CPU. RAIDZ2, by design, does precisely this (N disks in a single RAIDZ array give you the same IOPS as a single disk), but many RAID6 implementations simply don't, for performance reasons (Linux MDRAID, for example, doesn't).

    If you are referring to UREs, even a single-parity scheme such as RAID5 is sufficient under non-degraded conditions. The true problem is the degraded scenario: in that case, on most hardware RAID implementations, a single URE will kill your entire array (note: MDRAID behaves differently and lets you recover _even_ from this scenario, albeit with some corrupted data and in a different manner depending on its version). Here, a double-parity scheme is a big advantage.

    On the other hand, while mirroring is not 100% free from URE risk, it only needs to re-read the contents of a _single_ disk, not the entire array. In other words, it is less exposed to the URE problem than RAID5 simply because it has to read less data to recover from a failure (though with current capacities, RAID6 is even less exposed to this kind of error).

    Regards.
  • NonSequitor - Friday, August 8, 2014 - link

    MDRAID will recheck all of the parity during a scrub. That's really enough to catch silent data corruption before your backups go stale. It's not perfect but is a good balance between safety and performance. The problem with RAID5 compared to RAID6 is that in a URE situation the RAID5 will still silently produce garbage as it has no way to know if the data is correct. RAID6 is at least capable of spotting those read errors.
  • shodanshok - Friday, August 8, 2014 - link

    I think you are confusing UREs with silent data corruption. A URE in a non-degraded RAID 5 will not produce any data corruption; it will simply trigger a stripe reconstruction.

    A URE in a degraded RAID 5 will not produce any data corruption either, but it will result in a "dead" or faulty array.

    Regular scrubs will prevent unexpected UREs, but if a drive suddenly starts returning garbage, even regular scrubs cannot do anything.

    In theory, RAID6 can identify which drive is returning garbage because it has two different parity blocks. However, as stated above, that kind of check is usually avoided due to the severe performance penalty it implies.

    RAIDZ2 takes the safe path and does a full check on every read, but as a result its performance is quite low.

    Regards.
