IDE RAID Comparison

by Matthew Witheiler on June 18, 2001 4:31 AM EST

Many of the components typically found on desktop computers today were once technologies reserved for the server and high-end workstation market. Numerous examples can be given; let's take RAM size. As recently as three years ago, any amount of RAM over 64MB was considered overkill, to say the least. It was really only in the server market that we saw 'outrageous' RAM configurations such as 256MB. Now high-end desktop computers typically ship with no less than this amount, and some manufacturers sell systems with 384MB or more. In a similar fashion, hard drive sizes once found only on servers have made their home in desktop systems, with the majority of major OEMs pushing high-end home systems with 30 or more gigabytes of space: capacities previously reserved exclusively for servers.

The truth of the matter is that it is just a matter of time before more or less all of the components found in servers find their way to the less expensive home computing market. This trickle-down effect is caused both by the decreasing cost of the technology and by the advent of new, more effective technologies. It is the combination of these two effects that allows previously costly technologies to be used in the enthusiast market.

In addition, as software evolves and its demands grow, there is a greater need for higher-end technology in the desktop and low-end workstation market. Operating systems provide a perfect example of the need for more advanced technology in the home office. Prior to the release of Windows 2000, any amount of memory in excess of 128MB was, in many cases, wasted. Windows 2000, a true 32-bit operating system, is much more apt to take advantage of as much memory as it can get its hands on. With new operating systems released nearly every two years and applications becoming increasingly demanding, the longevity of a computer often depends on which previously high-end technologies the system incorporates.

The most recent case of server technology proliferating into the mainstream market comes with RAID. Standing for Redundant Array of Independent (or Inexpensive) Disks, RAID technology was originally developed in 1987 but until very recently was utilized almost exclusively in the server market. Now a buzzword among hardware enthusiasts, RAID technology is quickly finding its way into many home and professional systems. Promising increased speed, increased reliability, increased space, and combinations of these features, it is little wonder that RAID technology is powering its way into users' systems and hearts.

Due to its recent adoption in the mainstream market, more than a few questions surround the desktop RAID market of today. What are the different RAID modes, and which one should I choose? What stripe size should I build my array with? And, perhaps most importantly, which IDE RAID controller is best? In order to remove the mystery associated with RAID technology and to help you, the potential RAID owner, choose which configuration and which RAID controller is best, today AnandTech takes an in-depth look at the RAID solutions available now, in an approach that has never been attempted before.
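
Before diving in, it helps to have a concrete picture of what a stripe size actually is. The sketch below is a simplified model of how a RAID 0 array maps a logical address to a physical disk and an offset on that disk; the 64KB stripe size, two-drive array, and the locate() helper are illustrative assumptions, not a description of any particular controller.

    # Simplified model of RAID 0 address mapping; real controllers add
    # caching, command queuing, and their own on-disk metadata.
    STRIPE_SIZE = 64 * 1024   # assumed 64KB stripe size
    NUM_DISKS = 2             # assumed two-drive array

    def locate(logical_offset):
        """Map a logical byte offset to (disk index, offset on that disk)."""
        stripe_number = logical_offset // STRIPE_SIZE
        within_stripe = logical_offset % STRIPE_SIZE
        disk = stripe_number % NUM_DISKS              # stripes rotate across the disks
        disk_offset = (stripe_number // NUM_DISKS) * STRIPE_SIZE + within_stripe
        return disk, disk_offset

    # A 128KB sequential read touches both disks, so the transfers can overlap:
    for offset in range(0, 128 * 1024, STRIPE_SIZE):
        print(offset, locate(offset))

Note that a request smaller than the stripe size lands entirely on one disk; only requests that span stripe boundaries keep both drives busy at once, which is why the choice of stripe size matters for performance.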

Comments

  • kburrows - Thursday, December 4, 2003

    Have you run any tests on any onboard RAID solutions for RAID 0 & 1? I would love to see the results posted for the new SATA RAID on the Intel 875 boards.
  • Anonymous User - Sunday, August 17, 2003

    In addressing the performance of a RAID array with different stripe sizes, you miss an important factor, namely the access time of a disk. This wait time has two main causes: first, head positioning, and second, rotational latency (the heads are over the right track, but the position where the read starts has not yet passed under the head). You may have to wait anywhere from zero to, in the worst case, a full revolution.
    Since the disks rotate independently, you can calculate that the average latency to fetch a small file is minimized when the stripe size is about one full revolution's worth of data from a disk in the array (approx. 250kB today). No other factor I know of (controller overhead, transport, ...) reduces this.
    So I think that today a minimum stripe size of 256kB should be used.
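
The ~250kB figure in the comment above is easy to sanity-check with some quick arithmetic. The spindle speed and sustained transfer rate below are assumed, circa-2001 ballpark numbers, not measurements from any particular drive:

    # Rough check: how much data passes under a head in one platter revolution?
    rpm = 7200                     # assumed spindle speed
    transfer_rate = 30e6           # assumed sustained media rate, bytes/sec

    revolution_time = 60.0 / rpm   # seconds per revolution (~8.3 ms at 7200 RPM)
    bytes_per_revolution = transfer_rate * revolution_time

    print(f"One revolution: {revolution_time * 1000:.1f} ms")
    print(f"Data per revolution: {bytes_per_revolution / 1024:.0f} KB")
    # ~244 KB, in line with the roughly 250kB stripe the comment suggests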
