Performance - Networked Environment

LenovoEMC's PX2-300D (based on the Intel Atom D525) was used as the testbed for evaluating the performance of the drives in a NAS enclosure. The PX2-300D is a 2-bay NAS unit; only JBOD, RAID-0, and RAID-1 configurations are possible. Due to the lack of multiple samples of the Seagate NAS HDD, we restricted ourselves to evaluating a single-disk configuration (JBOD) over the network. A CIFS share was set up on the NAS and mapped on a Windows 7 VM. Intel NASPT / robocopy as well as IOMeter traces were run on this CIFS share.
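
For readers who want a rough, repeatable version of the robocopy test without installing NASPT, the sketch below times a large sequential write to, and read back from, the mapped share. The drive letter, file name, file size, and chunk size are all placeholders to adjust for your own setup, and client-side caching can inflate the read number, so treat this as a crude approximation of the methodology rather than a reproduction of it.

```python
# Crude stand-in for the robocopy throughput test: time a large
# sequential write to, and read back from, the mapped CIFS share.
# Z:\, the file size, and the chunk size are placeholders to adjust.
import os
import time

SHARE_FILE = r"Z:\nas_throughput_test.bin"  # assumed drive mapping
FILE_SIZE = 4 * 1024**3                     # 4 GiB test file
CHUNK = 4 * 1024**2                         # 4 MiB per I/O call

def timed_write(path, size, chunk):
    buf = os.urandom(chunk)   # random 4 MiB buffer, reused per chunk
    start = time.time()
    with open(path, "wb") as f:
        written = 0
        while written < size:
            f.write(buf)
            written += chunk
    return size / (time.time() - start) / 1024**2

def timed_read(path, chunk):
    start = time.time()
    total = 0
    with open(path, "rb") as f:
        while data := f.read(chunk):
            total += len(data)
    return total / (time.time() - start) / 1024**2

if __name__ == "__main__":
    print(f"write: {timed_write(SHARE_FILE, FILE_SIZE, CHUNK):.1f} MiB/s")
    print(f"read:  {timed_read(SHARE_FILE, CHUNK):.1f} MiB/s")
    os.remove(SHARE_FILE)
```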

Intel NASPT / robocopy

[Benchmark graphs: 4 TB NAS Drives Face-Off - Intel NASPT / robocopy results]

As expected, the WD Se takes the lead in many of the benchmarks. However, the Seagate NAS HDD is no slouch, and actually manages to hold its own in most of them (in fact, its read performance is pretty decent in this configuration).

IOMeter

[Benchmark graphs: 4 TB NAS Drives Face-Off - IOMeter results]

IOMeter provides even more interesting results: the Seagate NAS HDD actually surpasses the WD Red in three of the four benchmark runs. That said, performance is only half the story in the 1-5 bay NAS market; power consumption is also a very important metric. How much of a penalty do we pay for the increased performance? The next section provides some answers.

Comments

  • dingetje - Thursday, September 5, 2013

    thanks Ganesh
  • Arbie - Wednesday, September 4, 2013

    Ignorant here, but I want to raise the issue. In casual research on a home NAS w/RAID I ran across a comment that regular drives are not suitable for that service because of their threshold for flagging errors. IIRC the point was that they would wait longer to do so, and in a RAID situation that could make eventual error recovery very difficult. Drives designed for RAID use would flag errors earlier. I came away mostly with the idea that you should only build a NAS / RAID setup with drives (e.g. the WD Red series) designed for that.

    Is this so?
  • fackamato - Wednesday, September 4, 2013

    Arbie, good point. You're talking about SCT ERC. Some consumer drives allow you to alter that timeout, some don't (a smartctl sketch for this follows the comments).
  • brshoemak - Wednesday, September 4, 2013

    A VERY broad and simplistic explanation is that "RAID enabled" drives will limit the amount of time they spend attempting to correct an error. The RAID controller needs to stay in constant contact with the drives to make sure the array's integrity is intact.

    A normal consumer drive will spend much more time trying to correct an internal error. During this time, the RAID controller cannot talk to the drive because it is otherwise occupied. Because the drive is no longer responding to requests from the RAID controller (as it's now doing its own thing), the controller drops the drive out of the array - which can be a very bad thing.

    Different ERC (error recovery control) methods like TLER and CCTL limit the time a drive spends trying to correct the error so it will be able to respond to requests from the RAID controller and ensure the drive isn't dropped from the array.

    Basically a RAID controller is like "yo dawg, you still there?" - With TLER/CCTL the drive's all like "yeah I'm here" so everything is cool. Without TLER the drive might just be busy fixing the toilet and takes too long to answer so the RAID controller just assumes no one is home and ditches its friend.
  • tjoynt - Wednesday, September 4, 2013

    brshoemak, that was the clearest and most concise (not to mention funniest) explanation of TLER/CCTL that I've come across. For some reason, most people tend to confuse things and make it more complicated than it is.
  • ShieTar - Wednesday, September 4, 2013

    I can't really follow that reasoning, maybe I am missing something. First off, error checking should in general be done by the RAID system, not by the drive electronics. Second, you can always successfully recover the RAID after replacing one single drive. So the only way to run into a problem is not noticing damage to one drive before a second drive is also damaged. I've been using cheap drives in RAID-1 configurations for over a decade now, and while several drives have died in that period, I've never had a RAID complain about not being able to restore.
    Maybe it is only relevant on very large RAIDs seeing very heavy use? I agree, I'd love to hear somebody from AT comment on this risk.
  • DanNeely - Wednesday, September 4, 2013

    "you can always successfully recover the RAID after replacing one single drive."

    This isn't true. If you get any errors during the rebuild and only had a single redundancy drive for the data being recovered, the RAID controller will mark the array as unrecoverable. Current drive capacities are high enough that RAID 5 has basically been dead in the enterprise for several years, the risk of losing it all after a single drive failure being too high (see the back-of-the-envelope sketch after the comments).
  • Rick83 - Wednesday, September 4, 2013

    If you have a home usage scenario though, you can schedule surface scans to run every other day; in that case this becomes essentially a non-issue. At worst you'll lose a handful of KB or so.

    And of course you have backups to cover anything going wrong on a separate array.

    Of course, going RAID 5 beyond 6 disks is slightly reckless, but that's still 20TB.
    By the time you manage that kind of data, ZFS is there for you.
  • Dribble - Wednesday, September 4, 2013

    My experience for home usage is that RAID 1, or no RAID at all and regular backups, is best. RAID 5 is too complex for its own good and never seems to be as reliable, or to repair itself, as it's meant to. Because data is spread over several disks, if it gets upset and goes wrong it's very hard to repair and you can lose everything. Also, because you think you are safe, you don't back up as often as you should, so you suffer the most.

    RAID 1 or no RAID means a single disk has a full copy of the data, so it is most likely to work if you run a disk repair program over it. No RAID also focuses the mind on backups, so if it goes, chances are you'll have a very recent backup and will lose hardly any data.
  • tjoynt - Wednesday, September 4, 2013

    ++ this too. If you *really* need volume sizes larger than 4TB (the size of a single drive or RAID-1), you should bite the bullet and get a pro-class RAID-6 or RAID-10 system, or use a software solution like ZFS or Windows Server 2012 Storage Spaces (don't know how reliable that is though). Don't mess with consumer-level striped-parity RAID: it will fail when you most need it. Even pro-class hardware fails, but it does so more gracefully, so you can usually recover your data in the end.
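
Following up on the SCT ERC discussion in the comments above: on Linux, smartmontools can query and adjust these timeouts on drives that expose the feature. Below is a minimal sketch; smartctl must be installed, /dev/sda is a placeholder for the actual array member, and the 70-decisecond (7 s) values are just a common choice. Many desktop drives either reject the command or revert the setting on the next power cycle, so treat it as illustrative.

```python
# Query and (optionally) set a drive's SCT ERC timeouts via smartctl.
# Values are in deciseconds, so 70 means 7.0 seconds. Requires
# smartmontools and root; /dev/sda is a placeholder. Drives without
# SCT ERC support will simply report an error here.
import subprocess

DEVICE = "/dev/sda"  # placeholder; point at the actual array member

def get_scterc(device):
    """Print the drive's current read/write ERC timeouts."""
    result = subprocess.run(["smartctl", "-l", "scterc", device],
                            capture_output=True, text=True)
    print(result.stdout)

def set_scterc(device, read_ds=70, write_ds=70):
    """Cap error recovery at read_ds/write_ds deciseconds."""
    subprocess.run(
        ["smartctl", "-l", f"scterc,{read_ds},{write_ds}", device],
        check=True)

if __name__ == "__main__":
    set_scterc(DEVICE)  # must run as root
    get_scterc(DEVICE)
```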
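
DanNeely's point about RAID 5 rebuilds can be made concrete with some back-of-the-envelope arithmetic. The sketch below assumes the commonly quoted consumer-drive unrecoverable read error (URE) rate of one per 10^14 bits read and 4 TB members; real-world rates vary widely, so the numbers are illustrative rather than predictive.

```python
# Back-of-the-envelope odds of finishing a RAID 5 rebuild without
# hitting an unrecoverable read error (URE). Assumes the commonly
# quoted consumer-drive rate of one URE per 1e14 bits read; actual
# field rates vary, so treat the output as illustrative only.
import math

URE_PER_BIT = 1e-14  # assumed URE probability per bit read
DRIVE_TB = 4         # capacity of each member drive

def rebuild_success_probability(n_drives, drive_tb=DRIVE_TB):
    """A RAID 5 rebuild must read every bit of the n-1 surviving drives."""
    bits_read = (n_drives - 1) * drive_tb * 1e12 * 8
    return math.exp(-URE_PER_BIT * bits_read)  # Poisson approximation

for n in (3, 5, 8):
    p = rebuild_success_probability(n)
    print(f"{n}-drive RAID 5 of {DRIVE_TB} TB disks: "
          f"{p:.0%} chance of a clean rebuild")
```

Under those assumptions a five-drive array completes a clean rebuild barely more than one time in four, which is the arithmetic behind the "RAID 5 is dead" argument; drives rated at one URE per 10^15 bits improve the picture by an order of magnitude.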
