6 TB NAS Drives: WD Red, Seagate Enterprise Capacity and HGST Ultrastar He6 Face-Off
by Ganesh T S on July 21, 2014 11:00 AM EST
6 TB Face-Off: The Contenders
Prior to getting into the performance evaluation, we will take a look at the distinguishing features and compare the specifications of the three drives being considered today.
Western Digital Red 6 TB
The 6 TB Red's claim to fame is undoubtedly its areal density. While Seagate went in for a six-platter design for its 6 TB drives, Western Digital has managed to cram 1.2 TB onto each platter and deliver a 6 TB drive with the traditional five-platter design. Costs are also kept reasonable thanks to the use of traditional PMR (perpendicular magnetic recording) in these drives.
The 6 TB drive has a suggested retail price of $299, making it the cheapest of the three drives that we are considering today.
Seagate Enterprise Capacity 3.5 HDD v4 6 TB
Seagate was the first to utilize PMR to deliver a 6 TB enterprise drive earlier this year. They achieved this through the use of six platters (compared to the maximum of five that most hard drives traditionally use). A downside of using six platters is that the center screw locations on either side got shifted, rendering some drive caddies unable to hold the drives properly. However, we had no such issues when using the QNAP rackmount's drive caddy with the Seagate drive.
Seagate claims best in class performance, and we will be verifying those claims in the course of this review. Pricing ranges from around $450 on Amazon (third party seller) to $560 on Newegg.
HGST Ultrastar He6 6 TB
The HGST Ultrastar He6 is undoubtedly the most technologically advanced drive that we are evaluating today. There are two main patented innovations behind the Ultrastar He6: HelioSeal and 7Stac. The former refers to the placement of the platters in a hermetically sealed enclosure filled with helium instead of air. The latter refers to the packaging of seven platters into the same 1"-high form factor as traditional 3.5" drives.
With traditional designs, we have seen a maximum of six platters in a standard 3.5" drive. The additional platter is made possible in helium-filled drives because the absence of air shear reduces flutter and allows for thinner platters. The motor power needed to achieve the same rotational speed is also reduced, thereby lowering total power dissipation. The hermetically sealed nature of the drives also allows for immersion cooling solutions (placement of the drives in a non-conducting liquid). This is not possible in traditional hard drives due to the presence of a breather port.
The TCO (total cost of ownership) is bound to be much lower for the Ultrastar He6 compared to other 6 TB drives when large scale datacenter applications are considered (due to lower power consumption, cooling costs etc.). The main issue, from the perspective of the SOHOs / home consumers, is the absence of a tier-one e-tailer carrying these drives. We do see third party sellers on Amazon supplying these drives for around $470.
The various characteristics / paper specifications of the drives under consideration are available in the table below.
**6 TB NAS Hard Drive Face-Off Contenders**

| | WD Red | Seagate Enterprise Capacity 3.5" HDD v4 | HGST Ultrastar He6 |
|---|---|---|---|
| Interface | SATA 6 Gbps | SATA 6 Gbps | SATA 6 Gbps |
| Advanced Format (AF) | Yes | Yes | No (512n) |
| Rotational Speed | IntelliPower (5400 rpm) | 7200 rpm | 7200 rpm |
| Cache | 64 MB | 128 MB | 64 MB |
| Rated Load / Unload Cycles | 300K | 600K | 600K |
| Non-Recoverable Read Errors / Bits Read | 1 per 10^14 | 1 per 10^15 | 1 per 10^15 |
| Rated Workload | ~120 - 150 TB/yr | < 550 TB/yr | < 550 TB/yr |
| Operating Temperature Range | 0 - 70 C | 5 - 60 C | 5 - 60 C |
| Physical Dimensions | 101.85 mm x 147 mm x 26.1 mm / 680 g | 101.85 mm x 147 mm x 26.1 mm / 780 g | 101.6 mm x 147 mm x 26.1 mm / 640 g |
| Warranty | 3 years | 5 years | 5 years |
The interesting aspects are highlighted above. Most of these are related to the non-enterprise nature of the WD Red. However, two aspects that stand out are the multi-segmented 128 MB cache in the Seagate drive and the HGST He6 drive's lower weight despite having more platters than the other two drives.
Comments
jabber - Tuesday, July 22, 2014
Quality of HDDs is plummeting. The mech drive makers have lost interest; they know the writing is on the wall. Five years ago it was rare to see an HDD fail at less than 6 months old, but now I regularly get in drives less than 6-12 months old with bad sectors or failed mechanics.
I personally don't risk using any drives over a terabyte for my own data.
asmian - Tuesday, July 22, 2014
You're not seriously suggesting that WD RE drives are the same as Reds/Blacks or whatever colour but with a minor firmware change, are you? If they weren't significantly better build quality to back up the published numbers, I'm sure we'd have seen a court case by now, and the market for them would have dried up long ago.
On the subject of my rebuild failure calculation, I wonder whether that is exactly what happened to the failing drive in the article: an unrecoverable bit read error during an array rebuild, making the NAS software flag the drive as failed or failing, even though the drive subsequently appears to perform/test OK. Nothing to do with compatibility, just the verification of their unsuitability for use in arrays due to their size increasing the risk of bit read errors occurring at critical moments.
NonSequitor - Tuesday, July 22, 2014
It's more likely that they are binned than that they are manufactured differently. Think of it this way: you manufacture a thousand 4TB drives, then you take the 100 with the lowest power draw and vibration. Those are now RE drives. Then the rest become Reds.
Regarding the anecdotes of users with several grouped early failures: I tend to blame some of that on low-dollar Internet shopping, and some of it on people working on hard tables. It takes very little mishandling to physically damage a hard drive, and even if the failure isn't immediate, a flat spot in a bearing will eventually lead to serious failure.
Iketh - Tuesday, July 22, 2014
LOL no
m0du1us - Friday, July 25, 2014
@NonSequitor This is exactly how enterprise drives are chosen, as well as using custom firmware.
LoneWolf15 - Friday, July 25, 2014
Aren't most of our drives fluid-dynamic bearing rather than ball bearing these days?
asmian - Wednesday, July 23, 2014
Just in case anyone is still denying the inadvisability of using these 6TB consumer-class Red drives in a home NAS, or any RAID array that's not ZFS, here's the maths:
6TB is approx 0.5 x 10^14 bits. That means if you read the entire disk (as you have to do to rebuild a parity or mirrored array from the data held on all the remaining array disks) then there's a 50% chance of a disk read error for a consumer-class disk with 1 in 10^14 unrecoverable read error rate (check the maker's specs). Conversely, that means there's a 50% chance that there WON'T be a read error.
Let's say you have a nice 24TB RAID6 array with 6 of these 6TB Red drives - four for data, two parity. RAID6, so good redundancy, right? Must be safe! One of your disks dies. You still have a parity (or two, if it was a data disk that died) spare, so surely you're fine? Unfortunately, the chance of rebuilding the array without ANY of the disks suffering an unrecoverable read error is: 50% (for the first disk) x 50% (for the second) x 50% (for the third) x 50% (for the fourth) x 50% (for the fifth). Yes, that's ** 3.125% ** chance of rebuilding safely. Most RAID controllers will barf and stop the rebuild on the first error from a disk and declare it failed for the array. Would you go to Vegas to play those odds of success?
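The commenter's model (error probability proportional to bits read, one failed disk, five error-free full reads needed) can be sketched in a few lines of Python. The exact per-disk figure is 52%, which the comment rounds down to 50%; the function name and structure here are illustrative, not from any real tool:

```python
# Linear-in-bits model from the comment: P(disk read error) = bits read / URE rate.
def p_disk_ok(capacity_tb: float, ure_rate: float) -> float:
    bits = capacity_tb * 1e12 * 8      # vendor TB = 10^12 bytes
    return max(0.0, 1.0 - bits / ure_rate)

# Rebuilding a degraded 6-disk RAID6 of 6 TB drives (URE 1 in 10^14)
# requires error-free full reads of the 5 surviving disks.
p_rebuild = p_disk_ok(6, 1e14) ** 5
print(f"{p_rebuild:.4f}")  # ~0.0380; rounding each disk to 50% gives the comment's 3.125%
```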
If those 6TB disks had been Enterprise-class drives (say WD RE, or the HGST and Seagates reviewed here) specifically designed and marketed for 24/7 array use, they have a 1 in 10^15 unrecoverable error rate, an order of magnitude better. How does the maths look now? Each disk now has a 5% chance of erroring during the array rebuild, or a 95% chance of not. So the rebuild success probability is 95% x 95% x 95% x 95% x 95% - that's about 77.4% FOR THE SAME SIZE OF DISKS.
Note that this success/failure probability is NOT PROPORTIONAL to the size of the disk and the URE rate - it is a POWER function that squares, then cubes, etc. given the number of disks remaining in the array. That means that using smaller disks than these 6TB monsters is significant to the health of the array, and so is using disks with much better URE figures than consumer-class drives, to an enormous extent as shown by the probability figure above.
For instance, suppose you'd used an eight-disk RAID6 of 6TB Red drives to get the same 24TB array in the first example. Very roughly your non-error probability per disk full read is now 65%, so the probability of no read errors over a 7-disk rebuild is roughly 5%. Better than 3%, but not by much. However, all other things being equal, using far smaller disks (but more of them) to build the same size of array IS intrinsically safer for your data.
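Under the same linear-in-bits model, the three configurations discussed above (six 6 TB consumer disks, six 6 TB enterprise disks, and the eight-disk array, taken as 4 TB drives per the follow-up correction) can be compared side by side. Exact values differ slightly from the comment's rounded per-disk figures; the helper below is a sketch, not a real calculator:

```python
def rebuild_ok(n_disks: int, capacity_tb: float, ure_rate: float) -> float:
    """P(rebuild reads all n-1 surviving disks without a URE), linear-in-bits model."""
    bits = capacity_tb * 1e12 * 8
    p_disk = max(0.0, 1.0 - bits / ure_rate)
    return p_disk ** (n_disks - 1)

for label, n, cap, ure in [
    ("6 x 6TB consumer   (1 in 10^14)", 6, 6, 1e14),
    ("6 x 6TB enterprise (1 in 10^15)", 6, 6, 1e15),
    ("8 x 4TB consumer   (1 in 10^14)", 8, 4, 1e14),
]:
    print(f"{label}: {rebuild_ok(n, cap, ure):6.1%}")
# roughly 3.8%, 78.2% and 6.7% respectively (vs. the comment's 3.125%, 77.4% and ~5%)
```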
Before anyone rushes to say none of this is significant compared to the chance of a drive mechanically failing in other ways, sure, that's an ADDITIONAL risk of array failure to add to the pretty shocking probabilities above. Bottom line, consumer-class drives are intrinsically UNSAFE for your data at these bloated multi-terabyte sizes, however much you think you're saving by buying the biggest available, since the build quality has not increased in step with the technology cramming the bits into smaller spaces.
asmian - Wednesday, July 23, 2014
Apologies for proofing error: "For instance, suppose you'd used an eight-disk RAID6 of 6TB Red drives" - obviously I meant 4TB drives.
KAlmquist - Wednesday, July 23, 2014
"6TB is approx 0.5 x 10^14 bits. That means if you read the entire disk (as you have to do to rebuild a parity or mirrored array from the data held on all the remaining array disks) then there's a 50% chance of a disk read error for a consumer-class disk with 1 in 10^14 unrecoverable read error rate (check the maker's specs)."
What you are overlooking is that even though each sector contains 4096 bytes, or 32768 bits, it doesn't follow that to read the contents of the entire disk you have to read the contents of each sector 32768 times. To the contrary, to read the entire disk, you only have to read each sector once.
Taking that into account, we can recalculate the numbers. A 6 TB (5.457 TiB) drive contains 1,464,843,750 sectors. If the probability of an unrecoverable read error is 1 in 10^14, and the probability of a read error in one sector is independent of the probability of a read error in any other sector, then the probability of getting a read error at some point when reading the entire disk is 0.00146%. I suspect that the probability of getting a read error in one sector is probably not independent of the probability of getting a read error in any other sector, meaning that the 0.00146% figure is too high. But sticking with that figure, it gives us a 99.99268% probability of rebuilding safely.
I don't know of anyone who would dispute that the correct way for a RAID card to handle an unrecoverable read error is to calculate the data that should have been read, try to write it to the disk, and remove the disk from the array if the write fails. (This assumes that the data can be computed from data on the other disks, as is the case in your example of rebuilding a RAID 6 array after one disk has been replaced.) Presumably a lot of RAID card vendors assume that unrecoverable read errors are rare enough that the benefits of doing this right, rather than just assuming that the write will fail without trying, are too small to be worth the cost.
asmian - Wednesday, July 23, 2014
That makes sense IF (and I don't know whether it is) the URE rate is independent of the number of bits being read. If you read a sector you are reading a LOT of bits. You are suggesting that you would get 1 single URE event on average in every 10^14 sectors read, not in every 10^14 BITS read... which is a pretty big assumption and not what the spec seems to state. I'm admittedly suggesting the opposite extreme, where the chance of a URE is proportional to the number of bits being read (which seems more logical to me). Since you raise this possibility, I suspect the truth is likely somewhere in the middle, but I don't know enough about how UREs are calculated to make a judgement. Hopefully someone else can weigh in and shed some light on this.
Ganesh has said that previous reviews of the Red drives mention they are masking the UREs by using a trick: "the drive hopes to tackle the URE issue by silently failing / returning dummy data instead of forcing the rebuild to fail (this is supposed to keep the RAID controller happy)." That seems incredibly scary if it is throwing bad data back in rebuild situations instead of admitting it has a problem, potentially silently corrupting the array. That for me would be a total deal-breaker for any use of these Red drives in an array, yet again NOT mentioned in the review, which is apparently discussing their suitability for just that... <sigh>