39 Comments

  • n0b0dykn0ws - Tuesday, July 10, 2012 - link

    I can't wait to see benchmarks, especially if there are some for a 5 drive RAID 5 configuration.
  • Souka - Tuesday, July 10, 2012 - link

    And also power consumption/heat.

    I have a 2-bay NAS running WD Blue drives in RAID 1, but they're 250GB each, so 250GB of total space.

    With more digital data to back up these days (pictures and family movies)... two 1 to 3TB drives would be nice. :)
  • brshoemak - Tuesday, July 10, 2012 - link

    This will really undercut prices of their RE lines, which is fine by me - I never really liked the 'TLER tax' on those drives. Point of reference: on Newegg the 2TB RE drives are $230 while the Red version is $140 - and hopefully prices will go down some from there. I know there are differences in the drive lines beyond that, but being designed for RAID arrays is really the only thing that matters to me.
  • Sivar - Tuesday, July 10, 2012 - link

    TLER Tax describes it well. The price bump just for the privilege of the hard drive having less aggressive error recovery seems pretty absurd.
    For most arrays I usually use consumer drives and, if one is reported as dropped, I RMA it. Let the manufacturer eat the expense for the feature they no longer allow users to disable.
  • ckevin1 - Tuesday, July 10, 2012 - link

    I'm interested in hearing how the TLER works on these. For instance, is it situational based on ATA command, or is it always on?

    I'm very tempted to buy this instead of a green for a desktop data drive -- the warranty is appealing, as are the improvements to balancing & reliability. My only concern is data loss from the time-limited error recovery. Back before I knew better, I used to use a WD RE as a standalone drive, and I had TLER-related corruption problems (which is understandable, since it was designed to have a RAID controller performing error recovery for it). The marketing materials say that Red drives are supported in a 1-bay NAS, however, which suggests to me that they shouldn't have the same issues.

    Hopefully this feature can be covered in detail with the eventual review!
  • creed3020 - Tuesday, July 10, 2012 - link

    I just bought a Synology DS212j and this product launch has me really interested. I have one WD 500GB RE4 in my NAS currently, but my second slot is empty. With this new product I am eager to see reviews focusing specifically on behaviour in RAID arrays, performance, temperature, and power draw. WD might have a real winner here. I also love how simple they keep their product branding, making it easy to understand what each product is for.
  • DigitalFreak - Tuesday, July 10, 2012 - link

    Here's a review of the Red from StorageReview.com

    http://www.storagereview.com/western_digital_red_n...
  • KonradK - Tuesday, July 10, 2012 - link

    If I understand the second paragraph correctly, the HDD can silently return bad data for the sake of uninterrupted operation.
    But how does the HDD know the purpose of the data it reads? How does it know the data belongs to a movie being watched, and not to a file (or filesystem metadata) where an error is not acceptable?
  • JasonInofuentes - Tuesday, July 10, 2012 - link

    So, that component has to be triggered by the application accessing the file, but it is a triggered event. So, a good scenario would be if you had two streams being sent to two different devices while another client is doing a restore from a back-up. The two streams use the ATA streaming command which triggers these different behaviors, including the error tolerance. The restore, though, is being done without that command, so normal error correction is in effect. If you have a single drive in the NAS then you might experience hangs on all three data transactions if the restore experiences an error that requires a few compute cycles. But if one of the video streams experiences an error that can't be resolved in three revolutions of the disk, then the bad data is allowed to slip through and no one gets slowed down.

    The frustrating thing about a lot of these standards and commands is that even if the hardware implements the solution, the software has to include it as well. And the other way around, in fact. But, have no doubt, error tolerance is a controlled variable.
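The two-tier behavior described above can be sketched roughly like this (an illustrative Python model, not WD firmware; the function names and retry counts are assumptions):

```python
# Hypothetical sketch of time-limited error recovery branching on whether
# the request arrived via an ATA streaming command. Retry limits are
# invented for illustration ("three revolutions" vs. long retries).
def read_sector(read_attempt, streaming, max_streaming_tries=3, max_normal_tries=100):
    """read_attempt() returns data on success or None on a soft error.

    Returns (data, good_flag). Streaming reads give up after a few tries
    and pass bad data through; normal reads keep retrying much longer and
    report the failure so the host (or RAID layer) can recover.
    """
    limit = max_streaming_tries if streaming else max_normal_tries
    for _ in range(limit):
        data = read_attempt()
        if data is not None:
            return data, True  # good data
    # Streaming: return filler rather than stall the stream.
    # Normal: signal the error to the host.
    return (b"\x00" * 512, False) if streaming else (None, False)
```

The point of the sketch is that the drive's error tolerance is per-request, keyed off the command used, rather than a global mode.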

    Jason
  • Guspaz - Tuesday, July 10, 2012 - link

    I'm a ZFS user, and my drives are in a raidz array. ZFS has per-block checksums, and because I've got redundancy, any failed read can quickly be recovered from (by rebuilding the data from parity and writing the block elsewhere). I would rather have the drive fail very quickly (either by passing on the bad data or presenting a read error) so that my filesystem can recover the bad data and move on.

    Chances are, if a hard disk can't read the data after the first few tries, it won't do any good to keep trying over and over again, but that's exactly what consumer drives do: they spend large amounts of time re-reading the same block over and over hoping that the read will be good. Unfortunately, the long period of time spent re-trying (I think it's something like 15+ seconds) causes many RAID systems to think the drive is completely dead (because it stops responding for a long period of time). This behaviour is unacceptable for anything used in RAID.
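The recovery path described in this comment can be mimicked in a few lines (a toy single-parity model, not actual ZFS/RAID-Z code; the layout and function names are my own):

```python
import hashlib

# Toy model of per-block checksums plus parity: a read that fails its
# checksum is rebuilt immediately from the surviving blocks and parity,
# instead of waiting on the drive's own lengthy retries.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (single parity)."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def read_with_repair(blocks, parity, checksums, bad_index):
    """Return the block list with a checksum-failing block reconstructed."""
    repaired = list(blocks)
    if hashlib.sha256(blocks[bad_index]).hexdigest() != checksums[bad_index]:
        others = [b for i, b in enumerate(blocks) if i != bad_index]
        repaired[bad_index] = xor_blocks(others + [parity])
    return repaired
```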
  • Metaluna - Tuesday, July 10, 2012 - link

    Do you know if ZFS can reconstruct a bad sector on the fly and return it to the application right away, or will it hang for the full 15+ seconds?

    It sounds like the only way Red drives can avoid the TLER issue is if you have:
    A) An app running server-side that knows when it's trying to stream video or audio
    B) The app knows how to issue the ATA streaming commands, and
    C) the OS has a driver and RAID controller that both support passing the command through to the array.

    The odds of all these conditions occurring on any ZFS-supporting OS any time soon are essentially nil :(.
  • Solandri - Tuesday, July 10, 2012 - link

    In redundant arrays like RAID-Z (like RAID 5/6) and mirrors, the filesystem just corrects the error from parity as OP said. So it's instantaneous.

    On striped arrays you have the option of specifying that a zpool should store x copies of each file. While this won't protect against a disk failure, it does protect against sector failures. And as in the redundant case, errors are detected and corrected from the alternate copy instantaneously.

    You're also supposed to scrub the ZFS drives once a week or so. That double-checks the checksums of all the files and corrects any errors that may have developed from bit rot. So in most cases any errors will be caught before the data is needed that instant.
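The copies idea can be shown with a toy self-healing read (my own sketch of the concept, not ZFS internals): a checksum mismatch on one copy is repaired from a good copy on the spot, which is also essentially what a scrub does across the whole pool.

```python
import hashlib

# Toy model of redundant copies plus checksums. A bad copy is detected by
# its checksum and rewritten from a good one, so the read returns correct
# data immediately - no drive-level retry loop involved.

def store(block):
    return {"copies": [bytearray(block), bytearray(block)],
            "sum": hashlib.sha256(block).hexdigest()}

def read_self_healing(record):
    for copy in record["copies"]:
        if hashlib.sha256(bytes(copy)).hexdigest() == record["sum"]:
            good = bytes(copy)
            for c in record["copies"]:  # "scrub": repair any bad copy
                c[:] = good
            return good
    raise IOError("all copies failed checksum")
```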
  • tuxRoller - Tuesday, July 10, 2012 - link

    I know bit rot protection is a big feature of zfs but do you know how infrequently it happens? As I recall, it has two causes: cosmic rays (which are extremely low flux, at least of the type we are talking about), and spontaneous thermodynamic field inversion (from the name alone you can guess how frequently that must happen).
    BTW, I'm not trying to slam the feature, or zfs, I just think zfs has other features worth mentioning first.
  • chadwilson - Monday, July 16, 2012 - link

    It happens more frequently than you think. When you look at the specs on drives, they have not changed much in the past decade, but density has significantly increased. A study done by Google showed that at 1TB you would see, on average, one bit flip per drive per year. Not so important for your Blu-ray collection. Significantly so for encrypted customer data :)
  • dealcorn - Saturday, August 18, 2012 - link

    " This behavior is unacceptable for anything used in RAID."

    Sometimes that is true and sometimes it is not. The issue involves conflicting parameters between the RAID controller and the drive. Software RAID such as mdadm and ZFS typically works OK with, say, a SAS/SATA controller configured in IT mode to provide plain SATA ports, because there is no conflict to be had between the drive and the controller. Hardware-based RAID solutions commonly cause the problem.

    The "OK" part means it works faster if you use the software and CPU to recover the data without waiting for the drive to sort out its issue (or not). Limiting the drive's error recovery time means better performance under error conditions with software RAID.
  • mcnabney - Tuesday, July 10, 2012 - link

    In almost every installation a NAS is going to be bottlenecked by the gigE network connection - so any speed beyond about 120MB/s is going to be complete overkill. I do like the power and connectivity benefits of the device.
    Otherwise, the Green drives, which transfer 100-110MB/s on sequential data (media files), are still a better option with a lower price point, low heat, low power consumption, and performance that best fits the limitations of a NAS.
  • JasonInofuentes - Tuesday, July 10, 2012 - link

    I agree that the NAS bottleneck is the gigE network, even if you could squeeze in dual gigE. We're waiting to see whether there's any value in the added transfer speed in managing the RAID. Certainly if you add a drive or are recovering from a failure, the added speed could help that process along. And if the speed comes free with the power benefits and NASware, then it certainly won't hurt.
  • eanazag - Tuesday, July 10, 2012 - link

    There are some NAS devices with 10GigE interfaces or upgrades. Unfortunately, getting a 10 GigE switch is not so cheap, and neither are the interface cards. It can be done if you have the money and need it.
  • mcnabney - Wednesday, July 11, 2012 - link

    A NAS isn't used for high speed. If you need high speed get a thunderbolt-connected SSD array to sit next to your computer. The purpose of a 'typical' NAS is to reliably hold data with speed as a tertiary concern. Want to attract my attention - increase the MTBF.
  • Solandri - Tuesday, July 10, 2012 - link

    Bear in mind that the advertised sequential read/write speeds are the max you'll ever see - for data written on the outermost platters. HDDs typically have about a 2:1 ratio in radius between the outermost and innermost platters, so data on the innermost platters will have sequential read/write speeds about half that.

    So when you see a green drive advertised as 110 MB/s, it will actually deliver 55-110 MB/s in real-world use. I have 110 MB/s green drives in my NAS, and the average throughput I see writing large files to them is indeed 75-80 MB/s. That's fine for me, but for other applications I can see the extra ~35 MB/s (average) these red drives offer being important for saturating gigabit.
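A back-of-envelope check of the 2:1 claim (my own model, not measured data): with constant linear bit density, both sequential speed and data-per-cylinder scale with radius, so a large file spanning the disk averages out well below the headline number.

```python
# Average throughput across the whole disk = total data / total time,
# integrating over normalized radii from the inner/outer ratio up to 1.
# Assumptions: speed ~ radius, data per cylinder ~ radius.

def avg_sequential_speed(outer_mb_s, inner_outer_ratio=0.5, steps=10000):
    total_data, total_time = 0.0, 0.0
    for i in range(steps):
        r = inner_outer_ratio + (1 - inner_outer_ratio) * (i + 0.5) / steps
        data = r                 # data per cylinder ~ radius
        speed = outer_mb_s * r   # sequential speed ~ radius
        total_data += data
        total_time += data / speed
    return total_data / total_time
```

Under these assumptions a drive rated 110 MB/s on the outer cylinders averages roughly 82-83 MB/s over its whole surface - the same ballpark as the 75-80 MB/s reported above.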
  • Solandri - Tuesday, July 10, 2012 - link

    Brain fart. Obviously I meant cylinders, not platters.
  • mcnabney - Wednesday, July 11, 2012 - link

    I wasn't talking advertised. I looked up some actual results on Anandtech. I chose the 2MB file size, which is more realistic for a storage drive. The very low numbers from random 4k transfers just don't need to be done on a NAS.
  • Sivar - Tuesday, July 10, 2012 - link

    Gigabit ethernet may be the bottleneck in home movie servers, but not so much in database or multi-user servers. Good luck getting 120MB/sec of random I/O from a 5-unit array of hard drives, especially with RAID 5.
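A rough estimate using typical 7200 RPM figures (assumed, not benchmarks) shows how far random 4K I/O falls short of gigabit speeds:

```python
# Random I/O on spinning disks is bound by seek time plus rotational
# latency, so even a 5-drive array delivers only a few MB/s of random 4K
# reads - nowhere near the ~120 MB/s a gigabit link can carry.
# Figures below (8.5 ms seek, 7200 RPM) are typical assumptions.

def random_read_mb_s(drives=5, seek_ms=8.5, rpm=7200, io_kb=4):
    rotational_ms = (60_000 / rpm) / 2       # average: half a revolution
    iops_per_drive = 1000 / (seek_ms + rotational_ms)
    return drives * iops_per_drive * io_kb / 1024
```

That works out to roughly 1.5 MB/s for five drives - and RAID 5's read-modify-write penalty makes random writes worse still.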
  • tjoynt - Tuesday, July 10, 2012 - link

    ++ this!
    If you have a single user with streaming data you'll bottleneck on the gig-E, but add a few heavy users and your platters will slow to a crawl. I've seen half-million-dollar disk arrays act slower than my original iPod because of heavy random use.
  • clarkn0va - Wednesday, July 11, 2012 - link

    It has to be said that in 2012 the owner of such a system had better be taking a serious look at solid state storage options.
  • DigitalFreak - Tuesday, July 10, 2012 - link

    At the 3TB level, the Green is only $10 less than the Red on Newegg.
  • Rick83 - Wednesday, July 11, 2012 - link

    Well, it's useful for getting quick resyncs done.
    While my RAID reads at around 200MB/s sequentially off of WD Greens, a resync runs at around 40-50 MB/s. Long resyncs mean longer periods where your system has degraded performance.
  • jwcalla - Tuesday, July 10, 2012 - link

    Hopefully they've given up on this 512-byte sector emulation stuff so we can finally use these things in ZFS builds without tearing our hair out.
  • Grebuloner - Tuesday, July 10, 2012 - link

    So with the release of a whole new line, was there any indication that they will be upping the capacities of their Blue and Black lines? They've been stuck at 1 and 2 TB for quite a while now, and if you want 3 TB (or more) of performance storage you have to go with another brand.
  • JasonInofuentes - Tuesday, July 10, 2012 - link

    No word on whether the higher capacities will be coming to Black and Blue, but we should see some of this tech filter down to the other lines - just not at the expense of the Red line. They see Red as a huge growth market, so I'd suspect the NASware features will remain limited to these drives. But the balancing mechanics should make the leap, and we could see more firmware optimizations on a per-application basis.
  • kenthaman - Tuesday, July 10, 2012 - link

    Can't wait for the full review!! I'd really like to see these drives put up against the Caviar series (Green, Blue, Black) to confirm WD's ratings. I'd also like to see how these drives compare to their AV-GP drives. I've looked at those in the past as an option for my NAS and would like to see how these two compare.

    Also, if possible could testing be done with a SANS Digital enclosure? They weren't on the tested partner list and I'd like to see if/how well these drives operate in their products.

    Cheers!
  • p05esto - Tuesday, July 10, 2012 - link

    Where are the 4TB hard drives? I've been sitting here waiting for over a year now. Jumping up to 3TB just isn't enough (from 2TB) for all my movies and stuff; I'd rather go right to 4+. Prices are high and there is little selection right now, and the feedback on the 4TB drives out there suggests they are a little unreliable as well.
  • JasonInofuentes - Tuesday, July 10, 2012 - link

    The top capacity is always a fringe case when it comes to market. The people looking for the absolute largest capacity are likely willing to pay a huge premium for the pleasure. So while their margins stay high, that's partially a result of limiting supply. And there's the risk that they make more of them and they don't move at even a reduced price, which in this period of recovery isn't something they'd want to risk. Just be glad prices are finally starting to drift towards the good old days.
  • tuxRoller - Tuesday, July 10, 2012 - link

    For all you media archivers, take a look at SnapRAID. It's not exactly RAID since it's not realtime, but it provides single or double parity protection (with the developer working on triple parity). Among its biggest advantages: each disk is its own filesystem, so even if you lose more disks than you have parity disks, you only lose the data that was on the failed disks (since there is no striping). The other nice thing is that you can use different-sized disks, and you get the sum of their capacities rather than being limited to a multiple of the smallest disk. Lastly, it's OSS :)
    Sorry, not trying to sound like an advertisement, but before I discovered SnapRAID, TLER was a concern since I didn't want to purchase new disks for the array. Frankly, an article on it might be interesting. Well, maybe an article comparing the various RAID-like solutions such as unRAID, FlexRAID and Drobo (I've never used Drobo but, IIRC, they don't use a standard RAID level), as well as SnapRAID.
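The mixed-size, no-striping idea can be sketched in a toy model (my own simplification of SnapRAID's approach, not its actual code): parity is the XOR of each offset across the data disks, with shorter disks treated as zero-padded, so disks of different sizes work and any one lost disk can be rebuilt.

```python
# Toy snapshot-parity model over disks of different sizes. Shorter disks
# contribute zeros at offsets past their end, so parity covers the
# largest disk and any single disk can be reconstructed.

def build_parity(disks):
    size = max(len(d) for d in disks)
    parity = bytearray(size)
    for d in disks:
        for i, byte in enumerate(d):
            parity[i] ^= byte
    return bytes(parity)

def recover_disk(surviving, parity, lost_size):
    """Rebuild a lost disk by XORing parity with the surviving disks."""
    out = bytearray(parity)
    for d in surviving:
        for i, byte in enumerate(d):
            out[i] ^= byte
    return bytes(out[:lost_size])
```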
  • tjoynt - Tuesday, July 10, 2012 - link

    Is there any indication that these drives will also work better in a BYO NAS? I'm specifically hoping for Areca controller support.
  • brshoemak - Tuesday, July 10, 2012 - link

    There's really no 'controller support' needed for hard drives that are designed for hardware RAID controllers. I have found that Areca HBAs are pretty flexible when it comes to drives. Again, just be sure that the drives are made for hardware RAID usage, where the drive does not spend much time performing its own error correction.
  • Rumpelstiltstein - Tuesday, July 10, 2012 - link

    Would these be better for a RAID 1 mass storage array to complement an SSD?
  • kextyn - Wednesday, July 11, 2012 - link

    Is it just me or do these drives sound a lot like the AV-GP drives? Similar prices, similar features... It seems like they're just making it easier for the masses to understand what the drives are for.
  • ypsylon - Tuesday, July 17, 2012 - link

    How do they cope in a RAID environment? I've been running "Blacks" in RAID setups for a few years now (hardware controller, TLER enabled) and never had any problems with them. I simply refused to pay for the RE versions; the price difference was prohibitive just for a TLER function enabled by default.
