RAID 5

In an effort to strike a balance between performance and data redundancy, RAID 5 not only stripes data across multiple disks but also distributes a "parity" block among those disks, which allows the array to recover from a single failed drive in the set. The parity block is staggered from one drive to the next, so that for any given stripe each drive holds either a portion of the data being read or the parity block from which that data can be reconstructed. In this way, the array gains some of the performance benefits of striping data across multiple disks while remaining online after the failure of any single disk in the array.
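To make the parity mechanism concrete, here is a minimal sketch (in Python, with made-up block values and a hypothetical four-disk set) of how XOR parity lets any one missing block be rebuilt from the survivors:

```python
from functools import reduce

# One stripe across a four-disk set: three data blocks plus their XOR parity.
data_blocks = [0x0F, 0xA5, 0x3C]
parity = reduce(lambda a, b: a ^ b, data_blocks)

# Simulate losing the second disk: XOR-ing the surviving blocks with the
# parity block yields exactly the missing data.
survivors = [data_blocks[0], data_blocks[2], parity]
assert reduce(lambda a, b: a ^ b, survivors) == data_blocks[1]
```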

[Image: RAID 5 array diagram showing distributed parity and a "hot spare" disk]

Rather than dedicating a single drive to parity, as in the less-popular RAID levels 3 and 4, RAID 5 shares parity duties across every member of the set. This allows for a more distributed drive access pattern, resulting in better write performance and more even disk wear than a dedicated parity drive provides.
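The rotation itself is simple bookkeeping. As a rough sketch (assuming a left-symmetric layout, one common choice; real controllers vary), the parity block simply steps from disk to disk with each successive stripe:

```python
NUM_DISKS = 4

def parity_disk(stripe: int) -> int:
    # Parity starts on the last disk and rotates left one disk per stripe.
    return (NUM_DISKS - 1 - stripe) % NUM_DISKS

for stripe in range(4):
    row = ["P" if disk == parity_disk(stripe) else "D"
           for disk in range(NUM_DISKS)]
    print(f"stripe {stripe}: {' '.join(row)}")
# stripe 0: D D D P
# stripe 1: D D P D
# stripe 2: D P D D
# stripe 3: P D D D
```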

In its optimal form, RAID 5 provides substantially faster read performance than a single drive or a RAID 1 configuration, but write performance suffers because most writes must also update the parity information, which for small writes means reading the old data and parity, recalculating, and writing both back. While this write performance is still faster than a single-disk configuration in most cases, the true performance benefit of a RAID 5 array is in its read ability.
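The source of that penalty is easy to see in miniature. Because XOR is its own inverse, a controller can update parity without re-reading the whole stripe, but a small write still costs two reads and two writes where a lone disk would need one write. A sketch with illustrative values:

```python
from functools import reduce

# Small-write path on a RAID 5 stripe (illustrative byte values).
stripe = [0x5A, 0x77, 0x10]                  # data blocks on three disks
parity = reduce(lambda a, b: a ^ b, stripe)  # parity block on a fourth

# Updating one block takes two reads and two writes:
old_data = stripe[1]                         # read 1: old data
new_data = 0xC3
new_parity = parity ^ old_data ^ new_data    # read 2: old parity
stripe[1] = new_data                         # write 1: new data
parity = new_parity                          # write 2: new parity

# The shortcut matches recomputing parity from the full stripe.
assert parity == reduce(lambda a, b: a ^ b, stripe)
```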

The read performance of a RAID 5 array improves as the number of disks in the array increases. Adding disks, however, also raises the odds that some disk in the array will fail, and each failure brings a period of degraded performance while the array rebuilds. More disks likewise increase the likelihood that the entire array becomes unrecoverable because a second disk fails before the first failed disk is replaced and rebuilt. In the image shown, the array contains a "hot spare" disk; in the event of a disk failure, the hot spare is brought into the array immediately to replace the failed disk.
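How quickly that risk grows is simple arithmetic. Assuming, purely for illustration, independent failures at a 3% annual rate per disk, the chance of at least one failure in the array per year is 1 - (1 - p)^n:

```python
p = 0.03                 # illustrative annual failure rate per disk
for n in (3, 5, 8, 12):  # array sizes
    print(n, round(1 - (1 - p) ** n, 3))
# 3 0.087
# 5 0.141
# 8 0.216
# 12 0.306
```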

RAID 5 finds a comfortable home in most "read often, write infrequently" server applications that require long periods of uptime, such as web servers, file/print servers, and database servers. Dedicated RAID 5 controllers that include large amounts of cache RAM can negate much of the write performance penalty, but such setups are quite a bit more costly. Note also that simplified RAID 5 controllers exist that push the parity calculations onto the host CPU, which can result in write performance lower than that of a single drive.

Pros:
  • Good usable amount of data (capacity is the sum of all but one drive in the set)
  • Fault-tolerant - can survive one disk failure without impacting users
  • Strong read performance
Cons:
  • Write performance (without a large controller cache) is substantially below that of RAID 0
  • Expensive (either in terms of controller cost or CPU usage) due to parity calculations
RAID 6

RAID 6 attempts to address RAID 5's most glaring issue: the comparatively large window during which the array sits in a vulnerable state after a disk failure.


As in RAID 5, RAID 6 staggers its parity information across multiple drives. Its major difference is that it writes two parity blocks for every stripe of data, which means the array can remain accessible to users even after sustaining two simultaneous drive failures. The advantage of a RAID 6 array over a RAID 5 array with a hot spare disk is that no rebuild is required to bring the extra disk into play in the event of a failure. In this sense, performance is more or less guaranteed after a single disk failure under RAID 6, whereas RAID 5 suffers a significant performance hit during the rebuild onto the spare.

RAID 6's parity scheme is not simply two copies of the same parity information, but two different calculations of parity over the same data. This results in much higher computing overhead than the already-intensive RAID 5 scheme, with a corresponding increase in controller/CPU usage. The second parity calculation and write also further penalize the write performance of a RAID 6 array versus a RAID 5 solution.
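A hedged sketch can make "two different calculations" concrete. Loosely modeled on the common P+Q approach (not on any specific controller; the field polynomial 0x11d, generator 2, and four-byte "stripe" are illustrative assumptions), P below is plain XOR parity while Q weights each data byte by a power of a generator in GF(2^8):

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo the polynomial 0x11d."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
    return p

def gf_pow(a: int, n: int) -> int:
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a: int) -> int:
    # a^254 = a^-1, since every nonzero element satisfies a^255 = 1.
    return gf_pow(a, 254)

# One byte from each of four data disks.
data = [0x11, 0x22, 0x33, 0x44]

# P is plain XOR parity; Q weights disk i's byte by 2^i in the field.
P = Q = 0
for i, d in enumerate(data):
    P ^= d
    Q ^= gf_mul(gf_pow(2, i), d)

# Lose two data disks (x and y) and solve the two-equation system.
x, y = 1, 3
Pxy, Qxy = P, Q
for i, d in enumerate(data):
    if i not in (x, y):
        Pxy ^= d                        # now Pxy = d_x ^ d_y
        Qxy ^= gf_mul(gf_pow(2, i), d)  # now Qxy = 2^x*d_x ^ 2^y*d_y
gx, gy = gf_pow(2, x), gf_pow(2, y)
d_x = gf_mul(Qxy ^ gf_mul(gy, Pxy), gf_inv(gx ^ gy))
d_y = Pxy ^ d_x
assert (d_x, d_y) == (data[x], data[y])
```

Because Q weights every disk differently, P and Q form two independent equations in two unknowns, which is precisely what recovering from a double failure requires.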

RAID 6 is an excellent choice both for extremely mission-critical applications and for arrays where large numbers of disks are used to improve read performance. Because of this computing overhead (and the poor write performance without dedicated hardware), RAID 6 support is typically included only in high-end, expensive controller cards.

Pros:
  • Fair usable amount of data (sum of all but two drives in the set)
  • Provides more comfortable levels of redundancy for very large array sizes (8+ disks)
  • Strong read performance
Cons:
  • Expensive (in computing power, controller cost, and additional "wasted" disks)
  • Write performance is generally very poor compared to other RAID solutions
Comments

  • alecweder - Wednesday, February 4, 2015

    The biggest issue with RAID is unrecoverable read errors.
    If you lose a drive, the RAID has to read 100% of the remaining drives, even portions that hold no data. If a read error occurs during the rebuild, the entire array will die.

    http://www.enterprisestorageforum.com/storage-mana...

    A UER on SATA of 1 in 10^14 bits read means a read failure every 12.5 terabytes. A 500 GB drive has 0.04E14 bits, so in the worst case rebuilding that drive in a five-drive RAID-5 group means transferring 0.20E14 bits. This means there is a 20% probability of an unrecoverable error during the rebuild. Enterprise-class disks are less prone to this problem:
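    The quoted figure is easy to reproduce as a back-of-the-envelope calculation (illustrative Python, using the numbers from the quote above):

    ```python
    uer = 1e-14              # SATA-class unrecoverable error rate, per bit read
    bits = 5 * 500e9 * 8     # ~0.20E14 bits transferred during the rebuild
    print(1 - (1 - uer) ** bits)  # ~0.18, roughly the 20% figure quoted
    ```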

    http://www.lucidti.com/zfs-checksums-add-reliabili...
