6 TB Face-Off: The Contenders

Before getting into the performance evaluation, let us take a look at the special aspects and compare the specifications of the three drives being considered today.

Western Digital Red 6 TB

The 6 TB Red's claim to fame is undoubtedly its areal density. While Seagate opted for a six-platter design for its 6 TB drives, Western Digital has managed to cram 1.2 TB into each platter and deliver a 6 TB drive with the traditional five-platter design. Costs are also kept reasonable thanks to the use of traditional PMR (perpendicular magnetic recording) in these drives.

The 6 TB drive has a suggested retail price of $299, making it the cheapest of the three drives that we are considering today.

Seagate Enterprise Capacity 3.5 HDD v4 6 TB

Seagate was the first to utilize PMR to deliver a 6 TB enterprise drive earlier this year, achieving the capacity through the use of six platters (compared to the maximum of five that most hard drives traditionally use). A downside of the six-platter design is that the center screw locations on either side had to be shifted, rendering some drive caddies unable to hold the drives properly. However, we had no such issues when using the QNAP rackmount's drive caddy with the Seagate drive.

Seagate claims best-in-class performance, and we will be verifying those claims in the course of this review. Pricing ranges from around $450 on Amazon (third-party seller) to $560 on Newegg.

HGST Ultrastar He6 6 TB

The HGST Ultrastar He6 is undoubtedly the most technologically advanced drive that we are evaluating today. There are two main patented innovations behind the Ultrastar He6: HelioSeal and 7Stac. The former refers to the placement of the platters in a hermetically sealed enclosure filled with helium instead of air. The latter refers to the packaging of seven platters in the same 1"-high form factor as traditional 3.5" drives.

With traditional designs, we have seen a maximum of six platters in a standard 3.5" drive. The additional platter is made possible in helium-filled drives because the absence of air shear reduces flutter and allows for thinner platters. The motor power needed to achieve the same rotational speed is also reduced, thereby lowering total power dissipation. The hermetically sealed nature of the drives also allows for immersion cooling solutions (placement of the drives in a non-conducting liquid), something not possible with traditional hard drives due to the presence of a breather port.
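
Dividing the 6 TB capacity by the platter counts mentioned above gives the per-platter capacity each design implies (simple arithmetic from the stated platter counts; the He6 figure is rounded):

$$ \frac{6\,\text{TB}}{5} = 1.2\ \text{TB/platter (WD Red)}, \qquad \frac{6\,\text{TB}}{6} = 1.0\ \text{TB/platter (Seagate)}, \qquad \frac{6\,\text{TB}}{7} \approx 0.86\ \text{TB/platter (He6)} $$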

The TCO (total cost of ownership) is bound to be much lower for the Ultrastar He6 than for other 6 TB drives when large-scale datacenter applications are considered (due to lower power consumption, cooling costs, etc.). The main issue, from the perspective of SOHO / home consumers, is the absence of a tier-one e-tailer carrying these drives. We do see third-party sellers on Amazon supplying them for around $470.
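
To make the TCO argument concrete, here is a minimal back-of-the-envelope sketch in Python. The wattages and the electricity rate below are placeholder assumptions for illustration only (the rated power figures belong in the datasheets); the drive prices are the street prices quoted in this article.

```python
# Minimal TCO sketch: acquisition cost plus energy cost over the deployment
# period. All power figures and the electricity rate are ASSUMED values.

KWH_RATE = 0.10              # assumed electricity cost in $/kWh
DEPLOYMENT_YEARS = 5
HOURS = DEPLOYMENT_YEARS * 365 * 24

def tco(drive_price: float, avg_watts: float, num_drives: int) -> float:
    """Drive cost plus energy cost over the deployment period."""
    energy_kwh = avg_watts / 1000.0 * HOURS * num_drives
    return drive_price * num_drives + energy_kwh * KWH_RATE

# Hypothetical 1000-drive deployment with placeholder wattages:
print(tco(drive_price=470, avg_watts=5.0, num_drives=1000))  # helium-filled drive
print(tco(drive_price=450, avg_watts=8.0, num_drives=1000))  # conventional air-filled drive
```

Even with these toy numbers, the energy line item narrows the purchase-price gap, and cooling costs (which scale with dissipated power) push further in the helium drive's favor at datacenter scale.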

Specifications

The various characteristics / paper specifications of the drives under consideration are summarized in the table below.

6 TB NAS Hard Drive Face-Off Contenders

| Specification | WD Red | Seagate Enterprise Capacity 3.5" HDD v4 | HGST Ultrastar He6 |
|---|---|---|---|
| Model Number | WD60EFRX | ST6000NM0024 | HUS726060ALA640 |
| Interface | SATA 6 Gbps | SATA 6 Gbps | SATA 6 Gbps |
| Advanced Format (AF) | Yes | Yes | No (512n) |
| Rotational Speed | IntelliPower (5400 rpm) | 7200 rpm | 7200 rpm |
| Cache | 64 MB | 128 MB | 64 MB |
| Rated Load / Unload Cycles | 300K | 600K | 600K |
| Non-Recoverable Read Errors / Bits Read | 1 per 10^14 | 1 per 10^15 | 1 per 10^15 |
| MTBF | 1 M hours | 1.4 M hours | 2 M hours |
| Rated Workload | ~120 - 150 TB/yr | < 550 TB/yr | < 550 TB/yr |
| Operating Temperature Range | 0 - 70 °C | 5 - 60 °C | 5 - 60 °C |
| Physical Dimensions / Weight | 101.85 x 147 x 26.1 mm / 680 g | 101.85 x 147 x 26.1 mm / 780 g | 101.6 x 147 x 26.1 mm / 640 g |
| Warranty | 3 years | 5 years | 5 years |

Most of the interesting aspects of the table relate to the non-enterprise nature of the WD Red. Two that stand out in particular are the multi-segmented 128 MB cache in the Seagate drive and the HGST He6's lower weight despite having more platters than the other two drives.
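
Two of the reliability numbers in the table are easier to appreciate with a quick back-of-the-envelope computation. The sketch below treats the non-recoverable read error rating as an independent per-bit error probability and the MTBF as an exponential failure model; both are simplifications, not the vendors' methodology, but they show why the enterprise ratings matter for full-drive reads (e.g. RAID rebuilds).

```python
import math

DRIVE_BITS = 6e12 * 8  # reading a 6 TB drive end to end, in bits

def p_at_least_one_ure(per_bit_rate: float) -> float:
    """Probability of at least one non-recoverable read error over a
    full-drive read: 1 - (1 - p)^n, computed stably via log1p/expm1."""
    return -math.expm1(DRIVE_BITS * math.log1p(-per_bit_rate))

print(p_at_least_one_ure(1e-14))  # ~0.38 at the WD Red's 1-per-10^14 rating
print(p_at_least_one_ure(1e-15))  # ~0.05 at the enterprise 1-per-10^15 rating

def implied_afr(mtbf_hours: float) -> float:
    """Annualized failure rate a given MTBF implies under an exponential model."""
    return 8760.0 / mtbf_hours

print(implied_afr(1.0e6))  # ~0.0088 -> roughly 0.88% of drives per year
print(implied_afr(1.4e6))  # ~0.0063 -> roughly 0.63% per year
print(implied_afr(2.0e6))  # ~0.0044 -> roughly 0.44% per year
```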

Comments

  • brettinator - Friday, March 18, 2016 - link

    I realize this is years old, but I did indeed use raw I/O on a fried 10TB RAID 6 volume to recover copious amounts of source code.
  • andychow - Monday, November 24, 2014 - link

    @extide, you've just shown that you don't understand how it works. You're NEVER going to have checksum errors if your data is being corrupted by your RAM. That's why you need ECC RAM, so errors don't happen "up there".

    You might have tons of corrupted files, you just don't know it. 4 GB of RAM has a 96% chance of having a bit error in three days without ECC RAM.
  • alpha754293 - Monday, July 21, 2014 - link

    Yeah... while the official docs say you "need" ECC, the truth is you really don't. It's nice, and it'll help mitigate bit-flip errors and the like, but by that point you're already passing PBs of data through the array/zpool before anything is even noticeable. Part of that has to do with the fact that ZFS does block-by-block checksumming, which, given how people typically run their systems, will probably reduce your errors even further, though you might be talking about a third of what's already an INCREDIBLY small percentage.

    A system will NEVER complain if you have ECC RAM with ECC enabled (my servers have ECC RAM, but I've always disabled ECC in the BIOS), but it also isn't going to refuse to start up if you have ECC RAM with ECC disabled.

    And so far, I haven't seen ANY discernible evidence that ECC is an absolute must when running ZFS. You can SAY that I am wrong, but you will also need to back that statement up with evidence/data.
  • AlmaFather - Monday, July 28, 2014 - link

    Some information:

    http://forums.freenas.org/index.php?threads/ecc-vs...
  • Samus - Monday, July 21, 2014 - link

    The problem with power-saving "green" style drives is that the APM is too aggressive. Even Seagate, which doesn't actively manufacture a "green" drive at the hardware level, uses firmware that sets aggressive APM values in many low-end and external versions of its drives, including the Barracuda XT.

    This is a completely unacceptable practice because the drives are effectively self-destructing. Most consumer drives are rated at 250,000 load/unload cycles, and I've racked up 90,000 cycles in a matter of MONTHS on drives with heavy I/O (seeding torrents, SQL databases, Exchange servers, etc.).

    hdparm is a tool with which you can send commands to a drive and disable APM (by setting the value to 255), overriding the firmware setting. At least until the next power cycle...
  • name99 - Tuesday, July 22, 2014 - link

    I don't know if this is the ONLY problem.
    My most recent drive (a USB3 Seagate 5TB) consistently exhibited a strange failure mode where it frequently seemed to disconnect from my Mac. Acting on a hunch, I disabled the OS X Energy Saver "Put hard disks to sleep when possible" setting, and the problem went away. (And energy usage hasn't gone up, because the Seagate drive puts itself to sleep anyway.)

    Now you're welcome to read this as "Apple sux, obviously they screwed up" if you like. I'd disagree with that interpretation, given that I've connected dozens of different disks from different vendors to different Macs and have never seen this before. What I think is happening is that Seagate is not handling a race condition well: something like "Seagate starts to power down, halfway through it gets a command from OS X to power down, and it mishandles this command and puts itself into some sort of comatose mode that requires power cycling".

    I appreciate that disk firmware is hard to write, and that power management is tough. Even so, it's hard not to get angry at what seems like pretty obvious incompetence in the code, coupled with an obviously undemanding test regime.
  • jay401 - Tuesday, July 22, 2014 - link

    > Completely unsurprised here, I've had nothing but bad luck with any of those "intelligent power saving" drives that like to park their heads if you aren't constantly hammering them with I/O.

    I fixed that the day I bought mine with the wdidle utility. No more excessive head parking, no more excessive wear. I've had three 2TB Greens and two 3TB Greens with no issues so far (thankfully). I'm currently running a pair of 4TB Reds, and have not seen any excessive head parking showing up in the SMART data with those.
  • chekk - Monday, July 21, 2014 - link

    Yes, I just test all new drives thoroughly for a month or so before trusting them. My anecdotal evidence across about 50 drives is that they are either DOA, fail in the first month, or last for years. But hey, YMMV.
  • icrf - Monday, July 21, 2014 - link

    My anecdotal experience is about the same, but I'd extend the early-death window a few more months. I don't know that I've gone through 50 drives, but I've definitely seen a couple dozen, and that's the pattern. A one-year warranty is a bit short for comfort, but I don't know that I care much about 5 years over 3.
  • Guspaz - Tuesday, July 22, 2014 - link

    I've had a bunch of 2TB greens in a ZFS server (15 of them) for years and none of them have failed. I expected them to fail, and I designed the setup to tolerate two to four of them failing without data loss, but... nothing.
