RAID 1+0 / 0+1

RAID 1+0 (also written as RAID 10) and RAID 0+1 attempt to get the best of both worlds: they generally provide the best read and write performance of any RAID level, while also offering a level of redundancy that RAID 0 lacks.


Both RAID 0+1 and RAID 1+0 are considered "nested" solutions, meaning they combine RAID 0's data striping with RAID 1's mirroring. The difference between the two is that RAID 1+0 (10) creates a striped set from a series of mirrored pairs, while RAID 0+1 mirrors one striped set with a second, identical striped set.
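The grouping difference can be sketched in a few lines of Python. This is a hypothetical illustration of how the drives are organized, not code from any real RAID implementation:

```python
def raid10_layout(drives):
    """RAID 1+0: pair adjacent drives into mirrors, then stripe across the pairs."""
    return [tuple(drives[i:i + 2]) for i in range(0, len(drives), 2)]

def raid01_layout(drives):
    """RAID 0+1: split the drives into two striped sets, then mirror the sets."""
    half = len(drives) // 2
    return [tuple(drives[:half]), tuple(drives[half:])]

drives = ["d0", "d1", "d2", "d3", "d4", "d5"]
print(raid10_layout(drives))  # [('d0', 'd1'), ('d2', 'd3'), ('d4', 'd5')] -- three mirrors, striped
print(raid01_layout(drives))  # [('d0', 'd1', 'd2'), ('d3', 'd4', 'd5')] -- two stripes, mirrored
```

With four drives the two layouts hold the same data in the same places; the difference only shows up in how failures and rebuilds are handled, as described below.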

In practice, the only reason an administrator would choose either RAID 0+1 or 1+0 (10) is for extremely I/O-intensive workloads that would bottleneck a RAID 5 or RAID 6 array, and where drive cost is not a major concern. The redundancy provided is in reality quite low, although RAID 1+0 offers better fault tolerance and rebuild behavior than 0+1.

In a RAID 1+0 array, all but one drive from each RAID 1 set can fail without losing data. However, if a failed drive is not replaced, the last working drive in its mirror set becomes a single point of failure for the entire array: if that drive also fails, all data stored in the array is lost.

A RAID 0+1 array can continue to operate as long as all failed drives belong to the same striped set; if drives fail on both sides of the mirror, data on the entire array is lost. Rebuilds also differ: once a failed drive is replaced in RAID 0+1, every disk in the array must participate in rebuilding the failed striped set, whereas RAID 1+0 only has to re-mirror the lost drive, so its rebuild process is substantially faster.
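The fault-tolerance gap is easy to quantify. The sketch below is a hypothetical model (a six-drive array represented as three mirrored pairs for RAID 1+0 and two striped halves for RAID 0+1) that exhaustively checks every possible two-drive failure:

```python
from itertools import combinations

def raid10_survives(mirrors, failed):
    # RAID 1+0 survives as long as no mirrored pair loses all of its drives.
    return all(any(d not in failed for d in pair) for pair in mirrors)

def raid01_survives(sides, failed):
    # RAID 0+1 survives as long as at least one striped side is fully intact.
    return any(all(d not in failed for d in side) for side in sides)

mirrors = [(0, 1), (2, 3), (4, 5)]   # RAID 1+0: stripe across three mirrored pairs
sides = [(0, 1, 2), (3, 4, 5)]       # RAID 0+1: mirror of two striped halves

two_drive_failures = [set(f) for f in combinations(range(6), 2)]
print(sum(raid10_survives(mirrors, f) for f in two_drive_failures))  # 12 of 15 survived
print(sum(raid01_survives(sides, f) for f in two_drive_failures))    # 6 of 15 survived
```

Under this model, RAID 1+0 rides out 12 of the 15 possible two-drive failures, while RAID 0+1 survives only 6: any failure that touches both halves of the mirror takes the whole 0+1 array offline.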

Pros:
  • Best performance available of the redundant RAID levels, as the array behaves essentially like a RAID 0 array.
Cons:
  • Expensive in terms of drives.
  • Usable storage space is only half of the total drive capacity.
  • Only minimally fault tolerant.
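The capacity cost is straightforward to work out. A minimal sketch (the formulas are the standard ones for each level, but the helper itself is hypothetical):

```python
def usable_gb(n_drives, drive_gb, level):
    """Usable capacity for a few common RAID levels, in GB."""
    if level == "0":
        return n_drives * drive_gb            # striping: all capacity usable
    if level == "1":
        return drive_gb                       # mirroring: one drive's worth
    if level == "5":
        return (n_drives - 1) * drive_gb      # one drive's worth lost to parity
    if level in ("10", "0+1"):
        return (n_drives // 2) * drive_gb     # half lost to mirroring
    raise ValueError(f"unhandled RAID level: {level}")

print(usable_gb(4, 500, "10"))  # 1000 -- half of the 2000 GB of raw capacity
```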
Conclusion

In the IT world, some level of RAID is virtually guaranteed to be employed on any production server due to the relatively high failure rate of hard disks compared with most other components in the system. For end-users, though, the picture becomes far murkier. Most home computers spend large amounts of time seeking from small file to small file, and the resulting speed limitation is imposed by the physical mechanics of the drive itself (rotational speed, seek latency, and so on). These limitations are not overcome even by a top-performing RAID 0 array. The only benefits users can therefore seek from RAID are to increase overall capacity beyond a single drive, add a level of redundancy for their system, or improve large-file performance.

The attraction of RAID for users seeking a large single volume is diminishing by the day, due to the massive single-drive sizes on the market today. When a capacity-conscious user can get a full terabyte of space in a single physical package, the argument shifts to backing up that data rather than striping two drives together to present a 2TB volume.

In the case of redundancy, there is most certainly an argument for taking advantage of the RAID 1 feature found on many motherboards (and even in most operating systems). As stated previously, most users have experienced a hard drive failure at one point in their lives, and as more of our daily work shifts to a computing platform, data integrity is becoming increasingly important. More to the point, however: Should users be more worried about backing up their data to removable media on a periodic basis to protect against the accidental deletion or corruption of data, or in keeping their machine up and running when a complete failure occurs?

This type of question can only be answered by the individual user themselves, and depends on the nature of data being stored on the system. We recently provided a first look at Windows Home Server, which may prove to be a far more compelling backup solution than any form of RAID. That does require the use of an entire computer, but the user-controlled data mirroring, volume shadow copy, and the ability to support multiple systems certainly make it a viable alternative in households with multiple computers.

It also bears mention that redundant storage of data using RAID really isn't a sufficient backup strategy for most businesses, and some form of off-site storage of backups should also be considered. RAID can be useful in making sure that systems remain operational in the event of a hard drive failure, but other catastrophes -- flooding, fire, theft, etc. -- can still claim all of the data on a RAID storage device. If the data is truly important, saving periodic backups to a different medium and storing it at a separate location should be considered.

Large-file performance is likely the most compelling reason to adopt RAID in a home system. For video editing, sustained write bandwidth is an absolute must, and RAID 0 fills this need very well. Increasingly, however, hard drives are finding their way into new areas of the home - home theater PCs, PVRs, and home video archival systems are but a few of the "read-often, write-less, but always needed" systems which could benefit from a solution like RAID 5 or even the more performance-oriented RAID 5+1.

At the end of the day, anyone looking into a more elaborate storage solution owes it to themselves to consider the practical implication of the decisions they make. One size most definitely does not fit all in the world of hard drive storage and RAID, and the wrong choice can certainly be more harmful than helpful in this regard.



We would like to thank Adaptec for providing the charts utilized in our article today.
41 Comments

  • Brovane - Friday, September 7, 2007 - link

    Personally, we use RAID 0+1 at my work for our Exchange cluster, SQL cluster, and the home drives for our F&P cluster. Where RAID 0+1 is great is in a SAN environment. We have the drives mirrored between SAN DAEs, so an entire DAE could fail on our SAN and, for example, Exchange will remain up and running. Also, if you have a drive failure in one of our RAID 0+1 sets, the SAN automatically grabs the hot spare, starts rebuilding the array, pages the LAN team, and alerts Dell to ship a new drive. Of course, no matter what RAID you have set up, you should always have daily tape backups with a copy of those tapes going offsite.
  • Bladen - Friday, September 7, 2007 - link

    Might be asking a bit too much, especially in the case of RAID 5, 6, 0+1, and 1+0, but some SSD RAID performance would be nice. They would need more than 2 drives, wouldn't they?

    However, if we could see some RAID 0 figures from a pair of budget SSDs, and a pair of performance SSDs, that would be awesome.
  • tynopik - Friday, September 7, 2007 - link

    in addition to a WHS comparison i hope it covers

    1. software raid (like built into windows or linux)
    2. motherboard raid solutions (nvraid and intel matrix)
    3. low end products (highpoint and promise)
    4. high end/enterprise products
    5. more exotic raids like raid-z and raid 5ee
    6. performance of mixing raids across same disks like you can with matrix raid and some adaptecs

    and in addition to features/cost/performance i hope it really tries to test how reliable/bulletproof these solutions are

    for instance a ton of people have had problems with nvraid
    http://www.nforcershq.com/forum/image-vp511756.htm...

    what happens if you yank the power in the middle of a write?
    how easy is it to migrate an array to a different controller?
    can disks in raid1 be yanked from the array and read directly or does it put header info on the disk that makes this impossible?
  • yyrkoon - Saturday, September 8, 2007 - link

    quote:

    for instance a ton of people have had problems with nvraid


    That would be because "a ton of people are idiots". I have been using nvRAID for a couple of years without issues, and most recently I even swapped motherboards, and the array was picked right up without a hitch once the proper BIOS settings were made. I would suspect that these people who are 'having problems' are the type who expect/believe that having a RAID0 array on their system will give them another 30 frames per second in the latest first person shooter as well . . .
  • tynopik - Saturday, September 8, 2007 - link

    > I would suspect that these people who are 'having problems' are the type who expect/believe that having a RAID0 array on their system will give them another 30 frames per second in the latest first person shooter as well . . .

    the link is in the very top comment

    they were all actually using raid1 and had problems with it constantly splitting the array
  • tynopik - Friday, September 7, 2007 - link

    http://storageadvisors.adaptec.com/
    great site with lots of potential topics like:

    desktop vs raid/enterprise drives - is there a difference
    http://storageadvisors.adaptec.com/2006/11/20/desk...

    Picking the right stripe size
    http://storageadvisors.adaptec.com/2006/06/05/pick...

    Different types of RAID6
    http://storageadvisors.adaptec.com/2005/11/07/a-ta...

    other features to consider:
    handling dissimilar drives
    morph online from one RAID level to another
    easily add additional drives/capacity to an existing array
    can you change which port a drive is connected to without messing up the array?

    maybe create a big-honkin features matrix that shows which controllers are missing what?

    performance:
    - cpu hit between software raid, low-end controllers, enterprise controllers (some have reported high cpu usage with highpoint controllers even when using raid-1 which shouldn't cause much load)
    - cpu hit with different busses (PCI, PCI-X, PCIe) and different connections (firewire, sata, scsi, sas, usb)

    maybe even a corruption test. (write terabytes of data out under demanding situations and read back to ensure there was no corruption)

    But most of all I WANT A TORTURE TEST. I want these arrays pushed to their limits and beyond. What does it take to make them fail? How gracefully do they handle it?
  • tynopik - Friday, September 7, 2007 - link

    an article from the anti-raid perspective
    http://www.pugetsystems.com/articles?&id=29
  • tynopik - Saturday, September 8, 2007 - link

    another semi-anti-raid piece

    http://www.bestpricecomputers.co.uk/reviews/home-p...

    "Why? From our survey of a sample of our customers here's how it tends to happen:

    The first and foremost risk is that the RAID BIOS loses the information it stores to track the allocation of the drives. We've seen this caused by all manner of software particularly anti-virus programs. Caught in time a simple recreation of the array (see last page) resolves the problem in over 90% of the cases.

    BIOS changes, flashing the BIOS, resetting the BIOS, updating firmware etc can cause an array to fail. BIOS changes happen not just by hitting delete to enter setup. Software can make changes to the BIOS.

    Disk managers, hard disk utilities, imaging and partitioning software etc. can often confuse a RAID array."

    -------------------------

    http://storagemojo.com/?p=383

    . . . . the probability of seeing two drives in the cluster fail within one hour is four times larger under the real data . . . .

    Translation: one array drive failure means a much higher likelihood of another drive failure. The longer since the last failure, the longer to the next failure. Magic!

    (perhaps intentionally mixing the manufacturers of drives in a raid is a good idea?)

    ------------------

    http://www.lime-technology.com/

    unRAID

    -----------------

    http://www.miracleas.com/BAARF/

    an amusing little page

    -----------------

    it would also be cool if you had a failing drive that behaved erratically/intermittently/partially to test these systems

    -----------------

    if a drive fails in a raid array and you pull the wrong drive, can you stick it back in and still recover or does the controller wig out?

    ------------------

    some parts from the thread at the top that you might have missed

    http://www.nforcershq.com/forum/3-vt61937.html?pos...

    > Someone claims that the nv sata controler (or maybe raid controler) doesn't work properly with the NCQ function of new hard drives (or the tagged queing or whatever WD calls it).

    > if the drives are SATA II drives with 3 Gb/s speed and NCQ features, the NVRAID controller has known problems with these drives.

    > the first test trying to copy data from the raid to the external firewire drive resulted in not 1 but 2 drives dropping out.

    Luckily the 2 were both 1 half of the mirror meaning i could rebuild the raid. So looks like trying to use the firewire from the raid is the problem. This may stand to reason as the firewire card is via an add-on card in a PCI slot so maybe there is some weird bottleneck in the bus when doing this causing the nvraid to malfunction.

    (so like check high pci bus competition)

    http://www.nforcershq.com/forum/4-vt61937.html?sta...

    > I have read that its best to disable ncq and also read cache from all drives in the raid via the device manager. This may tie in with someone else’s post here who says the nvraid has issues with ncq drives.

    http://www.nforcershq.com/forum/image-vp591021.htm...

    NF4 + Vista + RAID1 = no NCQ?

    ------------------------------------

    RAID is dead, all hail the storage robot

    http://www.daniweb.com/blogs/printentry1399.html

    Drobo - The World's first storage robot

    http://www.datarobotics.com/

    "Drobo changes the way you think about storage. In short, it's the best solution for managing external storage needs I have used." - JupiterResearch

    "It is the iPod of mass storage" - ZDNet

    "...the most impressive multi-drive storage solution for PCs I've seen to date" - eHomeUpgrade

    sucks that it's $500 without drives and usb only though

  • Dave Robinet - Saturday, September 8, 2007 - link

    Good posts. A topic you're obviously interested in. :)

    Let me try and hit a few of the points in random order:

    - Stress/break testing is a GOOD idea, but very highly subjective. You can't GUARANTEE that you'll be writing (or reading) EXACTLY the same data under EXACTLY the same circumstances, so there's always that element of uncertainty. Even opening the same file can't guarantee that the same segments are on the same disk, so... I'll have to give some thought to that. Definitely worthwhile, though, to pursue that angle (especially in terms of looking at how array controllers recover from major issues like that).

    - Your other points pretty much all hit on a major argument: Software versus Hardware RAID (and versus proprietary hardware). I actually know an IT Director in a major (Fortune 500) company who uses software RAID exclusively, including in fairly intensive I/O applications. His argument? "I've been burned by "good" hardware too often - it lasts 7 years, I forget to replace it, and when the controller cooks, my array is done." (Make whatever argument you like about him not being on the ball enough to replace his 7 year old equipment, but I digress). I do find the majority of the decent controllers write header information in fairly documented (and retrievable) ways - look at IBM's SmartRAID series as a random example of this - so I don't see that being a hugely big deal anymore.

    You're dead on, though. *CONSUMERS* who are looking at RAID need to be very, very sure they know what they're getting themselves into.
  • tynopik - Saturday, September 8, 2007 - link

    > You can't GUARANTEE that you'll be writing (or reading) EXACTLY the same data under EXACTLY the same circumstances, so there's always that element of uncertainty

    that's true, but i don't think it's that important

    have a test where you're copying a thousand small files and yank the power in the middle
    run this test 5-10 times and see how they compare
    controller 1 never has a problem
    controller 2 required a complete rebuild 5 times

    maybe you can't exactly duplicate the circumstances, but it's enough to say controller 2 has problems

    (actually requiring a complete rebuild even once would be a serious problem)

    similarly, have a heavy read/write pattern with random data while simultaneously writing data out a pci firewire card and maybe even a usb drive and have audio playing and high network traffic (as much bus traffic and conflict as you can generate) that runs for 6 hours
    controller 1 has 0 bit errors in that 6 hours
    controller 2 has 200 bit errors in that 6 hours

    controller 2 obviously has problems even if you can't exactly duplicate it

    i think it's sufficient to merely show that a controller could corrupt your data
