Miscellaneous Factors & Final Words

Power consumption was measured by running our standard IOMeter disk performance benchmarks on a CIFS share in the LenovoEMC PX2-300D (single disk in a JBOD configuration). The following table summarizes the power consumption of the NAS unit at the wall under various operating modes.

4 TB NAS Hard Drive Face-Off: LenovoEMC PX2-300D Power Consumption

Mode                                 WD Red    Seagate NAS HDD   WD Se     WD Re
Idle                                 18.25 W   19.29 W           22.67 W   23.68 W
Max. Throughput (100% Reads)         19.51 W   20.56 W           23.54 W   24.53 W
Real Life (60% Random, 65% Reads)    19.58 W   20.60 W           23.95 W   24.49 W
Max. Throughput (50% Reads)          19.67 W   20.63 W           24.11 W   24.41 W
Random 8 KB (70% Reads)              19.07 W   20.98 W           23.54 W   23.68 W

The above numbers show that the WD Red is the most power-efficient of the considered models. This was on the cards once it was determined that the WD Red spins at 5400 rpm while the Seagate NAS HDD spins at 5900 rpm; the 7200 rpm drives carry a significant power penalty.
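
To put the spread in perspective, here is a quick back-of-envelope calculation of what the idle-power delta between the most and least efficient drives amounts to over a year of 24x7 operation. This is a sketch only: the electricity rate is an assumption, while the wattages come from the table above.

```python
# Annual energy cost of the idle-power delta between the WD Red and
# WD Re, assuming 24x7 operation. The $0.12/kWh rate is an assumption;
# the wattages come from the table above.
idle_watts = {"WD Red": 18.25, "WD Re": 23.68}
rate_usd_per_kwh = 0.12   # assumed electricity rate
hours_per_year = 24 * 365

delta_kwh = (idle_watts["WD Re"] - idle_watts["WD Red"]) * hours_per_year / 1000
print(f"{delta_kwh:.1f} kWh/yr, ~${delta_kwh * rate_usd_per_kwh:.2f}/yr per drive")
# -> 47.6 kWh/yr, ~$5.71/yr per drive
```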

Concluding Remarks

Coming to the business end of the review, both Western Digital and Seagate have put forward convincing offerings for the 1-5 bay NAS market. While the Seagate NAS HDD wins most of the performance tests, it does so at the cost of higher power consumption. Users of 1-5 bay NAS systems looking for top performance at lower price points would do well to take a look at the Seagate NAS HDD. On the other hand, if a cool-running system is the need of the hour and performance is not a major concern, the WD Red makes an excellent choice. We have also been very impressed with WD's response to various user complaints about the first-generation Red drives. Seagate's track record with the NAS HDD is short, since the drives started shipping only a couple of months ago; as they become more widespread, any compatibility issues should get ironed out and more user field reports will become public.

Sometimes, the expected workloads (> 150 TB/yr) are too heavy for consumer NAS drives to handle. Under those circumstances, the WD Se and WD Re are excellent choices: the WD Se is rated for up to 180 TB/yr and the WD Re for up to 550 TB/yr. Thanks to their higher rotational speed (7200 rpm), the enterprise-grade drives also deliver much better performance on the whole. We have been using WD Re drives for evaluating various NAS systems; the disks have gone through countless rebuilds for test purposes and are still going strong. We have no qualms about standing behind the WD Re drives for very heavy NAS workloads.
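
For readers trying to decide which tier they need, a small sketch that maps an estimated workload to the ratings above. The 600 GB/day input is a hypothetical example, and the 150 TB/yr consumer ceiling is the figure used in the text.

```python
# Map an estimated annual workload to the drive tiers discussed above.
# Ratings are taken from the review text; the 600 GB/day input below
# is a hypothetical example.
RATINGS_TB_PER_YR = {
    "WD Red / Seagate NAS HDD": 150,  # consumer NAS ceiling used above
    "WD Se": 180,
    "WD Re": 550,
}

def suitable_drives(daily_transfer_gb):
    """Return the drive tiers rated for the estimated annual workload."""
    annual_tb = daily_transfer_gb * 365 / 1000
    return [name for name, limit in RATINGS_TB_PER_YR.items()
            if annual_tb <= limit]

print(suitable_drives(600))  # ~219 TB/yr -> ['WD Re']
```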

Comments

  • dingetje - Thursday, September 5, 2013 - link

    thanks Ganesh
  • Arbie - Wednesday, September 4, 2013 - link

    Ignorant here, but I want to raise the issue. In casual research on a home NAS w/RAID, I ran across a comment that regular drives are not suitable for that service because of their threshold for flagging errors. IIRC the point was that they would wait longer to do so, and in a RAID situation that could make eventual error recovery very difficult. Drives designed for RAID use would flag errors earlier. I came away mostly with the idea that you should only build a NAS/RAID setup with drives (e.g. the WD Red series) designed for that.

    Is this so?
  • fackamato - Wednesday, September 4, 2013 - link

    Arbie, good point. You're talking about SCTERC. Some consumer drives allow you to alter that timeout, some don't (a smartctl sketch for checking and changing it appears after the comments).
  • brshoemak - Wednesday, September 4, 2013 - link

    A VERY broad and simplistic explanation is that "RAID enabled" drives will limit the amount of time they spend attempting to correct an error. The RAID controller needs to stay in constant contact with the drives to make sure the array's integrity is intact.

    A normal consumer drive will spend much more time trying to correct an internal error. During this time, the RAID controller cannot talk to the drive because it is otherwise occupied. Because the drive is no longer responding to requests from the RAID controller (as it's now doing its own thing), the controller drops the drive out of the array - which can be a very bad thing.

    Different ERC (error recovery control) methods like TLER and CCTL limit the time a drive spends trying to correct the error so it will be able to respond to requests from the RAID controller and ensure the drive isn't dropped from the array.

    Basically a RAID controller is like "yo dawg, you still there?" - With TLER/CCTL the drive's all like "yeah I'm here" so everything is cool. Without TLER the drive might just be busy fixing the toilet and takes too long to answer so the RAID controller just assumes no one is home and ditches its friend.
  • tjoynt - Wednesday, September 4, 2013 - link

    brshoemak, that was the clearest and most concise (not to mention funniest) explanation of TLER/CCTL that I've come across. For some reason, most people tend to confuse things and make it more complicated than it is.
  • ShieTar - Wednesday, September 4, 2013 - link

    I can't really follow that reasoning, maybe I am missing something. First off, error checking should in general be done by the RAID system, not by the drive electronics. Second off, you can always successfully recover the RAID after replacing one single drive. So the only way to run into a problem is not noticing damage to one drive before a second drive is also damaged. I've been using cheap drives in RAID-1 configurations for over a decade now, and while several drives have died in that period, I've never had a RAID complain about not being able to restore.
    Maybe it is only relevant on very large RAIDs seeing very heavy use? I agree, I'd love to hear somebody from AT comment on this risk.
  • DanNeely - Wednesday, September 4, 2013 - link

    "you can always successfully recover the RAID after replacing one single drive."

    This isn't true. If you get any errors during the rebuild and only had a single drive's worth of redundancy for the data being recovered, the RAID controller will mark the array as unrecoverable. Current drive capacities are high enough that RAID 5 has basically been dead in the enterprise for several years, because the risk of losing it all after a single drive failure is too high (a back-of-envelope calculation of that risk follows the comments).
  • Rick83 - Wednesday, September 4, 2013 - link

    If you have a home usage scenario though, you can schedule surface scans to run every other day; in that case this becomes essentially a non-issue. At worst you'll lose a handful of KB or so (a minimal scrub-trigger sketch follows the comments).

    And of course you have backups to cover anything going wrong on a separate array.

    Of course, going RAID 5 beyond six disks is slightly reckless, but that's still 20 TB.
    By the time you manage that kind of data, ZFS is there for you.
  • Dribble - Wednesday, September 4, 2013 - link

    My experience for home usage is that RAID 1, or no RAID at all plus regular backups, is best. RAID 5 is too complex for its own good and never seems to be as reliable or to repair the way it's meant to. Because data is spread over several disks, if it gets upset and goes wrong it's very hard to repair and you can lose everything. Also, because you think you are safe, you don't back up as often as you should, so you suffer the most.

    RAID 1 or no RAID means a single disk has a full copy of the data, so it is most likely to work if you run a disk repair program over it. No RAID also focuses the mind on backups, so if it goes, chances are you'll have a very recent backup and lose hardly any data.
  • tjoynt - Wednesday, September 4, 2013 - link

    ++ this too. If you *really* need volume sizes larger than 4 TB (the size of a single drive or RAID-1), you should bite the bullet and get a pro-class RAID-6 or RAID-10 system, or use a software solution like ZFS or Windows Server 2012 Storage Spaces (don't know how reliable that is, though). Don't mess with consumer-level striped-parity RAID: it will fail when you most need it. Even pro-class hardware fails, but it does so more gracefully, so you can usually recover your data in the end.
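
Following up on fackamato's SCTERC point above: on drives that expose SCT Error Recovery Control, the timeout can be inspected and changed with smartctl from smartmontools. A minimal sketch in Python, with a few caveats: the device path is a placeholder, root privileges are required, support varies by drive, and on many drives the setting resets on a power cycle.

```python
# Minimal sketch: query and set the SCT ERC timeouts with smartctl
# (smartmontools). Values are in tenths of a second, so 70 = 7.0 s,
# a common choice for RAID use. /dev/sda below is a placeholder.
import subprocess

def get_erc(device):
    """Return smartctl's report of the current SCT ERC read/write timeouts."""
    out = subprocess.run(["smartctl", "-l", "scterc", device],
                         capture_output=True, text=True, check=True)
    return out.stdout

def set_erc(device, deciseconds=70):
    """Set both the read and write ERC timeouts on the drive."""
    subprocess.run(["smartctl", "-l",
                    f"scterc,{deciseconds},{deciseconds}", device],
                   check=True)

if __name__ == "__main__":
    set_erc("/dev/sda")         # cap error recovery at 7 seconds
    print(get_erc("/dev/sda"))  # verify the new setting
```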
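The risk DanNeely mentions can be roughly quantified. A back-of-envelope sketch, assuming the 1-error-per-10^14-bits unrecoverable read error (URE) spec commonly quoted for consumer drives; real-world rates vary widely, so treat the output as an illustration, not a prediction.

```python
# Odds of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID 5 array, assuming an idealized constant
# URE rate of 1 per 1e14 bits (a common consumer-drive spec).
def rebuild_failure_probability(surviving_drives, tb_per_drive,
                                ure_per_bit=1e-14):
    bits_read = surviving_drives * tb_per_drive * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** bits_read

# Four-drive RAID 5 of 4 TB disks: three survivors must be read in full.
print(f"{rebuild_failure_probability(3, 4.0):.0%}")  # ~62%
```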
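And on Rick83's scheduled surface scans: Linux md exposes a scrub trigger through sysfs. A minimal sketch, where md0 is a placeholder array name and the script would typically be run as root from cron or a systemd timer.

```python
# Kick off a background consistency check ("scrub") on a Linux md
# array by writing "check" to its sync_action node.
from pathlib import Path

def start_scrub(array="md0"):
    action = Path(f"/sys/block/{array}/md/sync_action")
    if action.read_text().strip() == "idle":  # don't preempt a rebuild
        action.write_text("check\n")

start_scrub()
```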
