Original Link: http://www.anandtech.com/show/788

IDE RAID Comparison

by Matthew Witheiler on June 18, 2001 4:31 AM EST

Many of the components that are typically found on desktop computers today were once technologies reserved for the server and high-end workstation market. Numerous examples can be given; let's take RAM size. As recently as three years ago, any RAM amount over 64MB was considered overkill, to say the least. It was really only in the server market that we saw 'outrageous' RAM configurations such as 256MB. Now we typically see high-end desktop computers come with no less than this amount, with some manufacturers selling computers with RAM sizes that stretch into the 384MB plus range. In a similar fashion, older server hard drive sizes have found their home in desktop systems, with the majority of major OEMs pushing high-end home systems with 30 or more gigabytes of space, sizes previously reserved exclusively for servers.

The truth of the matter is that it is just a matter of time before more or less all of the components found in servers eventually find their way to the less expensive home computing market. The trickle-down effect we see is caused by both the decreasing cost of the technology as well as the advent of new and more effective technologies. It is a combination of both of these effects that allows the use of previously costly technologies in the enthusiast market.

In addition, as software evolves and demands get larger, there is a greater need for higher end technology in the desktop and low end workstation market. Operating systems provide a perfect example of the need for more advanced technology in the home office. Prior to the release of Windows 2000, any amount of memory in excess of 128MB was wasted in many cases. Windows 2000, a true 32-bit operating system, is much more apt to take advantage of as much memory as it can get its hands on. With the release of new operating systems nearly every two years and applications becoming increasingly demanding, the longevity of a computer is often dependent on what previously high-end technologies the system incorporates.

The most recent case of server technology proliferating into the mainstream market comes with RAID. Standing for Redundant Array of Independent (or Inexpensive) Disks, RAID technology was originally developed in 1987 but until very recently was utilized almost exclusively in the server market. Now a buzzword among hardware enthusiasts, RAID technology is quickly finding its way into many home and professional systems. Promising increased speed, increased reliability, increased space, and combinations of these features, it is little wonder that RAID technology is powering its way into users' systems and hearts.

Due to its recent adoption in the mainstream market, more than a few questions surround the desktop RAID market of today. What are the different RAID modes and which one should I choose? What stripe size should I build my array with? And, perhaps most importantly, which IDE RAID controller is best? In order to both remove the mystery associated with RAID technology as well as help you, the potential RAID owner, choose what configuration and which RAID controller is best, today AnandTech takes an in-depth look at the RAID solutions out there now in an approach that has never been attempted before.

RAID Explained

Before we can analyze the RAID solutions out there, a proper understanding of RAID technology must first be gained. To begin, we will discuss each RAID type and how it works, an area that is often gray in the minds of many consumers.

For starters, the "A" for array and "I" for independent in RAID are important: they indicate that more than one physical hard drive must be used. The number of drives required depends on the RAID type being implemented, but each RAID setup requires at least two hard drives. Typically, it is best to use identical drives, as this ensures that all drives react to the RAID environment in the same way. It is possible to use different brands or models in a RAID array, but issues such as reduced array size and/or decreased performance may result; more on that in the individual sections.


The first, but not necessarily the most basic, RAID type is RAID 0, or striping. The main purpose of RAID 0 is to provide speed, not the fault tolerance that other RAID configurations offer. Requiring two or more physical drives, RAID 0 works in the following manner.

RAID 0 uses an algorithm to break files into smaller pieces of a user-defined size called the stripe size. Once a file is broken down into these stripes, each drive in the array receives one or more of these fragments. For example, if there are two drives in a RAID 0 array with a 64KB stripe size and the RAID controller gets a command to write a single 128KB file, the file is broken down into two 64KB stripes. Next, one of the two stripes is sent to disk 1 and the other to disk 2 simultaneously. This completes the write process.

Image courtesy of Promise Technology, Inc.

Naturally, this decreases the time required to write a file since more than one disk is working to store the information. In our example above the time associated with writing our 128KB file turns out to be the time required to write a single 64KB file, since this is what occurs simultaneously on both disks in the array.

The speed of reading a file back is also increased with a sufficiently large file. Let's use our 128KB file on a two disk RAID 0 array with a 64KB stripe size for example again. After the data is stored on both drives in the array, it can be read back by reading the two 64KB files from each drive at the same time. Thus, once again, the time required to read back our 128KB file is actually only the time required to read a single 64KB file.
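The striping scheme described above can be sketched in a few lines of code. This is a minimal illustration, not how any real controller is implemented; the function name and the in-memory "drives" are purely hypothetical.

```python
def stripe_write(data: bytes, num_drives: int, stripe_size: int):
    """Round-robin a file's stripe-size chunks across the drives in the array."""
    drives = [[] for _ in range(num_drives)]
    for n, offset in enumerate(range(0, len(data), stripe_size)):
        drives[n % num_drives].append(data[offset:offset + stripe_size])
    return drives

# The 128KB file from the example, on a two-drive array with a 64KB stripe:
layout = stripe_write(b"\x00" * (128 * 1024), num_drives=2, stripe_size=64 * 1024)
print([len(chunks) for chunks in layout])  # each drive holds exactly one 64KB stripe
```

Because each drive ends up with one 64KB chunk, both the write and the later read can proceed on the two disks in parallel, which is the whole point of RAID 0.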

In some situations, when a file is smaller than the stripe size, the file is not broken up and instead is written to the array as is. This results in no speed improvement over a non-RAID setup because the drives in the array are not working together when reading or writing.

At the same time, an extremely small stripe size makes a drive do more work than it can handle and can significantly slow down RAID 0 performance as well. For example, if we had a 1KB stripe size and a 128KB file, each drive would have to be written to 64 times to store 64 different 1KB files. This creates a bottleneck as the drive attempts to read or write a large number of times for a single file.

As we mentioned before, RAID 0 has no fault tolerance, meaning that if one drive in the array fails, the whole array is lost. There is no way to rebuild or repair the information stored on a RAID 0 array. This makes RAID 0 the RAID type most susceptible to failure, a fact that usually keeps users with sensitive data from choosing it as their RAID setup.

At the same time, however, RAID 0 is the fastest of all RAID setups. Since there is no overhead required to store extra information for fault tolerance, a two-drive RAID 0 array can theoretically perform at 2 times the speed of a single drive. Adding more drives only increases this theoretical performance, so a 6-drive RAID 0 array could perform as much as 6 times as fast as a single drive.

Using different hard drives in a RAID 0 setup can result in two problems. First off, the size of the RAID array will only be the size of the smallest drive multiplied by the number of drives in the array. This is because the controller always writes to all the drives in the array, and once one is filled no more information can be stored on the array. Secondly, the speed of a RAID 0 setup is only as fast as the slowest drive in the array. Because chunks of data are being written to the disks at the same time, if one drive is slower than the rest, the others must sit and wait for it to finish. It is for these reasons that identical drives are suggested for a RAID 0 setup.

What RAID 0 boils down to is speed and little more. The fact of the matter is that RAID 0 is not redundant at all, just fast. But for many users, this is all that is important.


Although speed can be an important aspect of computing, so can the security that comes with fault tolerance. Speed may be sacrificed but RAID 1 provides users with a whole new level of security.

RAID 1 works by writing identical sets of information to two drives in an array. When the controller is sent a 64KB file to be written to a two disk RAID 1 array, the controller sends identical copies of this 64KB file to both disks in the array. Reads are the same as on a single drive: the controller requests the file from one of the two drives.

Image courtesy of Promise Technology, Inc.

The special feature of RAID 1 is its fault tolerance. If either of the two drives in the array fails, no data is lost. If and when a drive fails, the RAID controller simply uses the information off the drive that is still available. When a new drive is added to the array to replace the failed one, a mirroring occurs in which the data from the good drive is written to the new drive to recreate the array.
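Mirroring and rebuilding are simple enough to capture in a short sketch. Again this is an illustration only; the function and the bytearray "drives" are hypothetical stand-ins for real hardware.

```python
def mirror_write(data: bytes, drives) -> None:
    """RAID 1: write an identical copy of the data to every drive in the array."""
    for drive in drives:
        drive.extend(data)

disk1, disk2 = bytearray(), bytearray()
mirror_write(b"contents of some file", [disk1, disk2])
assert disk1 == disk2  # either drive alone can serve every read

# If disk2 fails, rebuilding onto a replacement is just a full copy
# of the surviving drive.
replacement = bytearray(disk1)
assert replacement == disk2
```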

As one might suspect, RAID 1 offers very little in terms of performance. When requesting data, some RAID controllers read from whichever drive is not busy or is closer to the desired information, theoretically resulting in faster data access. When writing, on the other hand, there is some overhead compared to a single drive, as the controller must duplicate the file it is sent and then pass it along to both drives.

In a RAID 1 setup, identical drives are best in order to prevent lost space. Since the same data is being written to two drives, the size of the RAID 1 array is equal to the size of the smallest drive in the array. For example, if a 20GB drive and a 30GB drive are used in a RAID 1 setup, the array would only be 20GB with the 10 extra gigabytes on the 30GB drive going to waste. The performance difference between two drives is also an issue here, since a faster drive would have to wait for a slower drive before it could write more information.

RAID 1 is a good solution for those looking for security over speed. Although not the slowest of the common RAID types, RAID 1 can be slower than a single drive in some cases (more on that in the benchmarks). What RAID 1 does provide is a very safe environment, where failure of a single drive does not equate to any down time.


For the final popular RAID configuration, we jump to the number 5 for RAID 5. This setup is typically found in higher-end RAID cards only because it requires extra hardware to work properly.

RAID 5 requires at least 3 drives and attempts to combine the speed of striping with the reliability of mirroring. This is done by striping the data across two drives in the array at a user-defined stripe size. The third drive in the array, the one not receiving striped data, is given a parity bit. The parity bit is generated from the original file using an algorithm, producing data that, combined with what survives, can recreate the information stored on either of the drives that received the striped data.

The two drives receiving the striped data and the one receiving the parity bit are constantly changing. For example, if drives 1 and 2 receive striped data on a write and drive 3 receives a parity bit, on the next write drives 2 and 3 will receive the striped data and drive 1 will receive the parity bit. The shifting continues and eliminates the random write performance hit that comes with a dedicated drive receiving the parity information.
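The rotation described above can be modeled with a one-line formula. This sketches one common rotation order; real controllers may rotate the parity position differently, and the function name is our own.

```python
def parity_drive(row: int, num_drives: int) -> int:
    """Which drive holds the parity block for a given stripe row.

    Models a simple rotating placement; the exact order varies
    by controller implementation.
    """
    return (num_drives - 1 - row) % num_drives

for row in range(4):
    print(f"stripe row {row}: parity on drive {parity_drive(row, 3)}")
# On a 3-drive array, rows 0..3 place parity on drives 2, 1, 0, 2 in turn,
# so no single drive becomes a dedicated parity bottleneck.
```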

Image courtesy of Promise Technology, Inc.

The parity information is typically calculated on the RAID controller itself, and thus these types of controllers are called hardware RAID controllers since they require a special chip to make the parity information and decide what drive to send it to.

RAID 5 arrays are said to provide a balance between RAID 0 and RAID 1 configurations. With RAID 5, some of the features of striping are in place as well as the features of mirroring. Thanks to the parity bit, if information is lost on one of the three drives in the array, it can be rebuilt. Thanks to the striping it uses to break up the data and send it to multiple drives, aspects of speed from RAID 0 are present.

The write process works in the following manner. Let's use a 3-drive RAID 5 array with a 64KB stripe size and a 128KB file that needs to be written. First, a parity bit is created for the data that the controller card has received by performing an XOR calculation on it. Next, the 128KB file is broken into two 64KB stripes, one of which is sent to drive 1 and the other to drive 2. Finally, the parity information calculated above is written to the third drive in the array.

Now, if one of the drives in the array goes bad and our 128KB file is lost, the data can be recreated. It does not matter which drive fails: all the data is still available. If the third drive in the above example, the one that received the parity information for this write, fails, then the original data can be read off of drives 1 and 2 to recreate the parity information. If either drive 1 or drive 2 fails, the parity information stored on drive 3, combined with the data on the surviving drive, can be used to recreate the information lost on the failed drive.
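The reason either drive can be rebuilt comes down to a property of XOR: if parity = A XOR B, then A XOR parity gives back B, and B XOR parity gives back A. A minimal demonstration (the stripe contents are made up for illustration):

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

stripe1 = b"data sent to one "  # 64KB stripe on drive 1 (shortened here)
stripe2 = b"data sent to two "  # 64KB stripe on drive 2
parity = xor_blocks(stripe1, stripe2)  # parity block on drive 3

# Drive 2 fails: XOR the surviving stripe with the parity block to rebuild it.
assert xor_blocks(stripe1, parity) == stripe2
# Drive 3 (parity) fails: simply recompute parity from the two data stripes.
assert xor_blocks(stripe1, stripe2) == parity
```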

Not all is good with RAID 5, however. Due to the parity information that must be calculated and written on every write, there is overhead. This is especially apparent when changing only one piece of information on one drive in the array. During this process, not only must the changed information be written, but the parity bit must also be recreated: both drives holding the stripe blocks must be read, a new parity bit calculated, and then the new parity bit written to the third drive. This problem only grows as additional drives are added to the array.

For the same reasons mentioned in both the RAID 0 and RAID 1 discussions, it is best to use identical drives for a RAID 5 setup. Not only does this ensure speed, it also ensures that all of the array's storage capacity is utilized. The size of a RAID 5 array is equal to the size of the smallest drive times the number of drives in the array minus one (since one drive's worth of space is always devoted to parity).
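The capacity rules quoted for the three RAID levels reduce to simple arithmetic. A small helper (our own illustrative function, not part of any RAID tool):

```python
def array_capacity_gb(drive_sizes_gb, raid_level: int) -> int:
    """Usable capacity of an array, limited by the smallest drive."""
    smallest, n = min(drive_sizes_gb), len(drive_sizes_gb)
    if raid_level == 0:
        return smallest * n          # striping: all drives contribute
    if raid_level == 1:
        return smallest              # mirroring: one drive's worth of space
    if raid_level == 5:
        return smallest * (n - 1)    # one drive's worth lost to parity
    raise ValueError("unsupported RAID level")

print(array_capacity_gb([30, 30, 30], 5))  # three 30GB drives -> 60GB usable
print(array_capacity_gb([20, 30], 1))      # mismatched mirror -> 20GB usable
```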

RAID 5 does provide a good balance between speed and reliability and is a popular configuration for arrays in a variety of systems, from servers to workstations. The data security made possible with the parity bit as well as the speed and space provided by RAID 5 have many high-end system builders turning to RAID 5.

Other Configurations

All numbers between 0 and 7 have a RAID type associated with them; however, most of the other RAID types are not used often. RAID 0, RAID 1, and RAID 5 have become popular on the market because of the advantages they offer over a single drive or a series of drives.

It is important to note that there are also combinations of these setups that some cards support. Usually their name describes what they are. For example, RAID 0+1 incorporates the striping of data over two drives and the mirroring of data over another pair of drives (thus requiring at least four drives).
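A four-drive RAID 0+1 layout can be sketched by combining the two ideas already shown: stripe across one pair, then mirror the striped set onto a second pair. This is purely an illustration of the data placement, with made-up function names.

```python
def raid01_write(data: bytes, stripe_size: int):
    """Four-drive RAID 0+1: stripe over drives 0-1, mirror onto drives 2-3."""
    stripes = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    drive0, drive1 = stripes[0::2], stripes[1::2]        # the RAID 0 half
    return [drive0, drive1, list(drive0), list(drive1)]  # the mirrored half

drives = raid01_write(b"\xab" * (128 * 1024), 64 * 1024)
# Drives 2 and 3 carry exact copies of drives 0 and 1, so the array survives
# the loss of either drive in a mirrored pair while still striping for speed.
assert drives[0] == drives[2] and drives[1] == drives[3]
```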

Stripe Sizes

We suspect that many of you out there are interested in RAID for its performance advantage. Stripe sizes play a very important role in the performance of RAID arrays and thus it is critical to understand the concept of striping before we delve any further into RAID discussion.

As we mentioned before, stripes are the smaller blocks that a single file is broken into. The stripe size, or the size of these blocks, is user-definable and can range from 1KB to 1024KB or more. When data is passed to the RAID controller, it is divided by the stripe size to create one or more blocks. These blocks are then distributed among the drives in the array, leaving different pieces on different drives.

As we discussed before, the information can be written faster because it is as if the hard drive is writing a smaller file, although it is really only writing pieces of a large file. At the same time, reading the data is faster because the blocks of data can be read off of all the drives in the array at the same time, so reading back a large file may only require the reading of two smaller files on two different hard drives simultaneously.

There is quite a bit of debate surrounding what stripe size is best. Some claim that the smaller the stripe the better, because this ensures that no matter how small the original data is it will be distributed across the drives. Others claim that larger stripes are better since the drive is not always being taxed to write information.

To understand how a RAID card reacts to different stripe sizes, let's use the most drastic cases as examples. We will assume that there are two drives set up in a RAID 0 array with one of two stripe sizes: a 2KB stripe and a 1024KB stripe. To demonstrate how the stripe sizes influence the reading and writing of data, we will also use two different data sizes to be written and read: a 4KB file and an 8192KB file.

On the first RAID 0 array with a 2KB stripe size, the array is happy to receive the 4KB file. When the RAID controller receives this data, it is divided into two 2KB blocks. Next, one of the 2KB blocks is written to the first disk in the array and the second 2KB block is written to the second disk in the array. This, in theory, divides the work that a single hard drive would have to do in half, since the hard drives in the array only have to write a single 2KB file each.

When reading back, the outcome is just as pretty. If the original 4KB file is needed, both hard drives in the array move to and read a single 2KB block to reconstruct the 4KB file. Since each hard drive works independently and simultaneously, the speed of reading the 4KB file back should be the same as reading a single 2KB file back.

This pretty picture changes into a nightmare when we try to write the 8192KB file. In this case, to write the file, the RAID controller must break it into no less than 4096 blocks, each 2KB in size. From here, the RAID card must pass pairs of the blocks to the drives in the array, wait for the drives to write the information, and then send the next 2KB blocks. This process is repeated for all 4096 blocks, and the extra time required to perform the breakups, send the information in pieces, and move the drive actuator to various places on the disk all adds up to an extreme bottleneck.

Reading the information back is just as painful. To recreate the 8192KB file, the RAID controller must gather information from 2048 places on each drive. Once again, moving the hard drive head to the appropriate position that many times is quite time consuming.

Now let's move to the same array with a 1024KB stripe size. When writing a 4KB file, the RAID array in this case does essentially nothing. Since the 4KB file is smaller than the 1024KB stripe size, the RAID controller just takes the file and passes it to one of the drives in the array. The data is not split, or striped, because of the large stripe size, and therefore the performance in this instance should be identical to that of a single drive.

Reading back the file results in the same story. Since the data is only stored on one drive in our array, reading back the information from the array is just like reading back the 4KB file from a single disk.

The RAID 0 array with the 1024KB stripe size does better when it comes to the 8192KB file. Here, the 8192KB file is broken into eight blocks of 1024KB each. When writing the data, each of the two drives in the array receives 4 of the blocks, meaning that each drive only has the task of writing four 1024KB files. This increases the writing performance of the array, since the drives work together to write a small number of blocks. At the same time, reading back the file requires only four 1024KB blocks to be read from each drive, which holds a distinct advantage over reading back a single 8192KB file from one drive.
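The block counts in these four scenarios follow directly from one formula. A small sketch (illustrative only; a sub-stripe file is simply handed whole to one drive, as described above):

```python
def blocks_written(file_kb: int, stripe_kb: int, num_drives: int):
    """(total blocks, blocks per drive) for one write on a RAID 0 array."""
    if file_kb <= stripe_kb:
        return 1, 1                       # file lands whole on a single drive
    total = -(-file_kb // stripe_kb)      # ceiling division
    return total, -(-total // num_drives)

for file_kb, stripe_kb in [(4, 2), (8192, 2), (4, 1024), (8192, 1024)]:
    print(f"{file_kb}KB file, {stripe_kb}KB stripe:",
          blocks_written(file_kb, stripe_kb, 2))
# 4KB/2KB      -> 2 blocks, 1 per drive (the happy case)
# 8192KB/2KB   -> 4096 blocks, 2048 per drive (the nightmare case)
# 4KB/1024KB   -> whole file on one drive (no striping benefit)
# 8192KB/1024KB -> 8 blocks, 4 per drive (the large-stripe sweet spot)
```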

As you can see, the performance of various stripe sizes differs greatly depending on the situation. Just what stripe size should you use? For once we will actually be able to answer that question in just a few moments.

Types of RAID Cards

There is quite a bit of confusion over the different types of RAID cards on the market today. Most of the cards that end up in the hands of the consumer are IDE RAID solutions due to their low price and it is IDE RAID cards that we focus on in this article. Of the types readily available to the consumer, the cards can be broken down into two categories: software RAID and hardware RAID.

Software RAID is the RAID controller type that is found integrated onto a number of motherboards and available as a low-cost PCI card. These cards are really nothing more than a slightly specialized IDE controller. Software RAID cards appear to function as an IDE controller with the ability to stripe or mirror data, but this is not actually the case. Software RAID controllers do not actually perform the striping or mirroring calculations; rather, they call upon the CPU to perform these functions. When a file is sent to a software IDE RAID controller, the controller passes the data, along with the stripe size and other details, to the CPU, which then does the dirty work of figuring out how the file is broken up and what goes where.

Once the CPU has done its job, the information is passed back to the software RAID controller and then written to the disks in the array just as an IDE controller on the southbridge of a chipset would do. Since the CPU is actually the brains behind software RAID solutions, RAID 5 ability is typically not provided on these cards. This is because the parity calculations that RAID 5 requires (the XOR operations) are rather stressful: if they were passed to the CPU, they would require quite a bit of CPU time and power. The striping calculations, on the other hand, are fairly stress-free for the CPU. Just how much CPU power these cards require will have to wait for the benchmarks.

Hardware RAID cards bypass the CPU and instead perform the striping calculations on an integrated controller chip. The most common chip used for these calculations is the Intel i960. Because data is processed on the card, hardware RAID solutions are much more complex than software RAID cards. Hardware RAID cards must include memory for cache as well as extra circuitry to power the i960 brain.

It is typically not the strain taken off the CPU during striping calculations that creates the need for a hardware RAID solution, but rather the fact that hardware RAID cards can provide RAID 5 support. Since these cards have a dedicated processor on them, they can handle the XOR operations without influencing CPU speed at all. It is for this reason that hardware RAID cards have seen large success: they are really the only viable solution for RAID 5 support, which promises to combine the speed of striping with the reliability of mirroring.

Due to their complexity, hardware RAID cards are much more expensive than software RAID ones. With the cost also comes extra features, as we found when we got a chance to play around with our hardware RAID cards.

For this series of RAID tests, we were able to get our hands on the three most popular software RAID solutions and two of the large players in the hardware RAID market. Now it is time to take a look at the cards.

AMI MegaRAID 100

The American Megatrends MegaRAID 100 is a software IDE RAID controller that makes use of the American Megatrends MG80649 controller. This is the same IDE RAID controller that is found on a few motherboards out there, including ones from Iwill. There is actually no difference between the PCI card version of the MG80649 and the integrated version: both work over the PCI bus to provide identical IDE RAID capabilities and performance.

The MegaRAID 100 is an ATA100 RAID controller, meaning that it supports speeds up to the current ATA100 spec. The chip on the card is capable of supporting up to a total of 4 drives on two IDE channels, just like every other IDE controller. The card features pin-outs for hard drive activity LEDs on both the primary and secondary channel as well as a jumper for enabling or disabling the onboard BIOS.

Iwill SIDE RAID100

The second software RAID card we got the opportunity to test was the Iwill SIDE RAID100. This card is powered by the HighPoint HPT370A chip that is found on many motherboards including those by ABIT and EPoX.

Like the American Megatrends offering, the HighPoint RAID controller features two IDE channels (primary and secondary) that support drive speeds up to ATA100. Our card had a single connection for a hard drive activity LED over both channels of the card.

Promise FastTrak100

The final software RAID card used in the tests was the Promise FastTrak100 IDE RAID controller. This card makes use of the Promise Technology PDC20267 ATA100 software RAID controller, the same one found on some motherboards manufactured by MSI as well as others.

Just like every other software RAID card we have seen, the Promise FastTrak100 and its PDC20267 chip provide two IDE channels, both of which support ATA100 speeds. The reason that all software RAID cards are the same when it comes to their number of channels is that they really are nothing more than an IDE controller, just like the one found integrated in every southbridge. The card also featured a 4 pin-out LED connection to power up to two hard drive activity lights.


As we move to the hardware RAID solutions, things get infinitely more interesting. No longer does the card consist of just an IDE controller and a BIOS chip; now the cards include processing units and memory, among other items.

The AAA-UDMA is one of the few Adaptec hardware-based IDE RAID solutions. As the picture of the card clearly shows, there is a lot going on. It took quite a bit of digging around to find out what each chip on the AAA-UDMA does.

Let's start off with a component that we know every IDE hardware RAID card needs: a coprocessor. For the hardware portion of the card, the AAA-UDMA makes use of Adaptec's own RAID coprocessor, the AIC-7915G. Since this chip is made by Adaptec and is proprietary to their hardware RAID solutions, not much information on the AIC-7915G is available. All we really know about this chip is that it performs at least the XOR calculations needed for a RAID 5 configuration and perhaps more (such as determining where to split and send data).

From here on out, we can find a bit of information regarding the chips on the AAA-UDMA, but at first we were puzzled by what function they serve. The reason is that two of the four remaining large chips are noted as SCSI chips on their vendors' websites. The first of the two, the qLogic FAS466, is actually a SCSI controller according to qLogic's website. The second is the Adaptec AIC-7890, a chip which Adaptec's site labels a single-chip SCSI host adapter with Ultra2 SCSI support. What are these SCSI chips doing on the card?

Well, the AAA-UDMA uses the third large chip to translate the SCSI information sent by its components into information that an IDE device can use. The chip that serves this function is the Altera Flex EPF6024, which acts as a translator of sorts as well as the UDMA controller. With Adaptec's previous success in the SCSI RAID market, it was easier for them to design an IDE RAID card off their existing SCSI RAID technology.

The fourth large chip on the card is the Intel 21152 transparent PCI-to-PCI bridge. This bridge, which is integrated in the popular i960 coprocessor, allows for the use of the two extra IDE controllers present on the AAA-UDMA.

The AAA-UDMA does offer four IDE ports, but each port is only capable of accepting one drive. This is done to prevent the drives from having to wait when writing information. Since a two-drive channel can only read or write to one drive on the channel at a time, having support for both a master and a slave would decrease the performance of the array. The four ports are provided via two IDE controllers integrated on the Altera Flex EPF6024 chip. One downside to this controller is the fact that it is only an ATA66 controller, meaning that it cannot provide ATA100 drives with the maximum amount of bandwidth they can handle.

The AAA-UDMA uses a standard 168-pin EDO DIMM for cache memory. Our card came packaged with a 2MB DIMM, but the card can accept up to 64MB of cache memory.

The card also features a set of LEDs located at the front of the board that are used for diagnostics. Also on the card are pin headers for hard drive activity LEDs.

One big complaint we had with the AAA-UDMA was that the card's BIOS did not contain any utilities to configure or create an array. Instead the card uses a bootable floppy disk to provide these functions. This turned out to be a big pain, especially when we misplaced the disk and had to download and create a new one.

Promise SuperTRAK100

The second IDE hardware RAID controller that we took a look at was the Promise SuperTRAK100. A full length PCI card, the SuperTRAK100 is the same size as the AAA-UDMA but has a few more features.

First off, Promise is a company that has been in the IDE business for some time. As a result, the card is a true IDE hardware RAID controller, not a SCSI card modified to read and write to IDE devices. The SuperTRAK100 is also much simpler in design.

The coprocessor of the SuperTRAK100 is an Intel i960RD, which is actually an I/O processor. The chip operates at 66MHz and has a built-in PCI-to-PCI bridge, a memory controller, and support for the I2O specification. I2O is an architecture for intelligent I/O that is supported in almost every operating system today; it allows RAID functions to be offloaded from the CPU onto the chip to the greatest extent possible. It is the i960RD that performs the XOR calculations necessary for parity bits as well as other RAID-specific functions.

The i960RD chip sends its information to up to six hard drives via three Promise PDC20265 chips located on the board. Like the Adaptec solution, each IDE port on the SuperTRAK100 only accepts one master drive to prevent any unnecessary slowdown. The PDC20265 also provides support for the ATA100 specification.

For cache memory, the SuperTRAK100 uses a single 72-pin EDO SIMM. Our card came installed with a 16MB stick, but amounts up to 128MB are supported.

The card features a set of 4 LEDs located on the back panel. These lights allow for easy troubleshooting of the RAID array, as the system does not have to be dismantled to find out what is going wrong. During normal operation, the LEDs move in a back-and-forth fashion to indicate that all is well. Also included for diagnostic purposes is a speaker, as well as a connection for an audio output other than the built-in speaker.

Other than that, there is not much else to the SuperTRAK100 hardware wise. There is the standard pin-header for hard drive LED activity lights, a battery to power the BIOS, and a Lattice BIOS chip.

About the Test

Our RAID testbed consisted of a Gigabyte GA-7DX motherboard outfitted with 128MB of PC2100 memory and a Duron 850MHz processor. All tests were conducted under Windows 2000 with the operating system running off of an old Western Digital 20GB 205AA drive connected to the motherboard's IDE controller.

All RAID tests were performed using two (in the case of RAID 0 and RAID 1) or three (in the case of RAID 5) 30GB IBM 75GXP hard drives connected directly to the RAID controller being tested. In all cases the drives were on different channels and always set to master.

Intel's Iometer benchmark was used in the tests and this required that the drives in the RAID array have no partition information on them and thus were strictly physical drives. For testing purposes we used three different Iometer access patterns under three different loads. The three access patterns used were a file server access pattern, a database access pattern, and a workstation access pattern. Both the file server pattern as well as the database pattern come predefined by Intel. These patterns consist of the following characteristics.

File Server Access Pattern: defined by % Access, % Read, and % Random for each request size (Intel's predefined values).

Database Access Pattern: defined by % Access, % Read, and % Random for each request size (Intel's predefined values).

The third access pattern used was made to simulate disk usage in a workstation. To make an appropriate access pattern for this type of situation, we turned to the knowledgeable folks over at StorageReview.com who developed and use the following pattern to simulate workstation usage.

Workstation Access Pattern: defined by % Access, % Read, and % Random (StorageReview.com's values).

More information regarding this access pattern and how it was developed can be found at StorageReview's website.

We tested each of the above access patterns at a set of three I/O loads described as light, medium, and high:

Light: 16 Outstanding I/Os
Medium: 64 Outstanding I/Os
High: 128 Outstanding I/Os

Iometer build 1999.10.20 was used to conduct the tests. A ramp-up time of 30 seconds was set to eliminate any variability that would occur during the start of a test. Each of the 9 tests (three loads for each access pattern) was set to run for 10 minutes. Using the command line implementation of Iometer, we were able to construct a program that ran each of the tests one after another, meaning that the automated test ran for a total of 1 hour, 34 minutes, and 30 seconds. As the test results will show, these tests were run on each card at each available stripe size in RAID 0, on RAID 1 (no stripe sizes), and on RAID 5 (at all available stripe sizes), making this review a truly monstrous undertaking considering we ran other tests as well. To put it in perspective, the Iometer tests alone took a total benchmark time of 78 hours and 45 minutes.
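The timing arithmetic above checks out; here is a quick sketch (the resulting count of 50 card/stripe configurations is inferred from the quoted totals, not stated in the text):

```python
# Each Iometer run: 30 s ramp-up + 10 min of measurement.
ramp_s = 30
run_s = 10 * 60

# Three access patterns x three loads = 9 runs per array configuration.
runs_per_config = 3 * 3
per_config_s = runs_per_config * (ramp_s + run_s)
print(per_config_s)  # 5670 s, i.e. 1 h 34 min 30 s

# The quoted total of 78 h 45 min implies the number of configurations benchmarked:
total_s = (78 * 60 + 45) * 60
print(total_s // per_config_s)  # 50
```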

The other benchmarks used required an active RAID array with a valid partition table and format. For these tests we created an NTFS partition on the array using the default cluster size (which happens to be 4KB). Once the array was partitioned and formatted, the only information written to or read from the drive was information needed for testing. For example, on the Content Creation Winstone 2001 tests, the benchmark was installed to the RAID array and then told to use the array for the test. This ensures that the RAID array is what is being benchmarked; the IDE drive on the motherboard simply served as a boot drive and performed basic operating system functions.

We also felt it necessary to compare the speed of the various RAID arrays with the speed of the standard motherboard IDE controller. To perform this test, we hooked one of the IBM 75GXP hard drives up to the secondary IDE channel of the motherboard. Since the Gigabyte GA-7DX motherboard makes use of the VIA 686B south bridge, the motherboard supports the ATA100 specification, meaning that we could compare apples to apples since most cards in this review are ATA100 cards. The same procedure as above was used when testing the performance of the 686B in the various benchmarks.

Iometer: Understanding the Results

We are all familiar with the typical "benchmark" score produced by many popular performance benchmarks on the market today. Typically, these benchmarks put hardware through a series of tests, count the time needed to complete the tests, compare this time to a baseline, and output one pretty number said to represent the performance of the hardware. Iometer is a synthetic benchmark, and the way it reports performance is a bit different.

Iometer is capable of describing many aspects of not only disk performance but also network performance. For our purposes, only 4 numbers that Iometer reports at the end of a run are necessary to describe the performance of a RAID array. These numbers are: total I/O per second, total MBs per second, average I/O response time, and CPU utilization.

The total I/O per second label describes exactly what this number measures: the number of I/O requests completed per second. Since an I/O request consists of moving the hard drive head to the proper location and then reading or writing data, this number provides an excellent measure of how a drive or an array is performing.

The next number, total MBs per second, is directly related to the total I/O per second figure, since it is produced by taking the number of I/O operations per second and multiplying it by the average size of each transfer. Once again, this number is very useful when evaluating performance.
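That relationship is easy to sanity-check against the results tables later in this article: dividing MB/s by I/Os per second backs out the average transfer size of the access pattern. A sketch, using the AMI card's file server scores at the medium load and 512KB stripe:

```python
def avg_transfer_kb(iops, mb_per_s):
    """Back out the average request size implied by Iometer's two throughput figures."""
    return mb_per_s * 1024 / iops

# AMI MegaRAID 100, file server pattern, medium load, 512KB stripe:
print(round(avg_transfer_kb(118.55, 1.28), 1))  # 11.1 -> roughly an 11KB average request
```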

The average I/O response time increases as the number of outstanding I/Os in the test increases. Because the two do not increase in a linear fashion, measuring the average I/O response time is a good indication of how various RAID cards handle load.
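These figures are tied together by Little's law: the average number of I/Os in flight equals the I/O rate multiplied by the average response time. A sketch using the AMI card's file server scores under the high load:

```python
def outstanding_ios(iops, avg_response_ms):
    """Little's law: mean requests in flight = completion rate x mean latency."""
    return iops * (avg_response_ms / 1000.0)

# AMI MegaRAID 100, file server pattern, high load:
print(round(outstanding_ios(136.8, 935.273)))  # 128 -- the high-load queue depth
```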

As for the last number, CPU utilization, the meaning should be fairly clear: Iometer measures it as a percentage, and typically the lower the CPU utilization, the better.

The Test

Windows 2000 Test System

CPU(s): AMD Duron 850MHz
Motherboard(s): Gigabyte GA-7DX
Memory: 128MB PC2100 Corsair DDR SDRAM
Hard Drive: IBM Deskstar 30GB 75GXP 7200RPM Ultra ATA/100
CD-ROM: Phillips 48X
Video Card(s):
RAID Card(s): Adaptec AAA-UDMA, AMI MegaRAID 100, Iwill SIDE RAID100, Promise FastTrak100, Promise SuperTrak100
Ethernet: Linksys LNE100TX 100Mbit PCI Ethernet Adapter
Operating System: Windows 2000 SP1
Video Drivers: NVIDIA Detonator3 6.50
Benchmarking Applications: Intel Iometer 1999.10.20 beta, Ziff Davis Content Creation Winstone 2001

Software RAID vs. Hardware RAID: Behind the Scenes

One of the first questions we felt needed to be answered was what exactly hardware RAID cards do.

Although we knew that hardware RAID cards perform the parity bit calculations on the card itself, some debate surrounded where the other RAID calculations occur. Some thought that only the XOR calculations occur on the coprocessor and that other calculations are passed to the CPU just as in software RAID. Others thought the opposite: that the hardware RAID card's coprocessor performs all RAID functions, including the division of files into stripes and deciding what information to send to which drive.
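For reference, the XOR calculation in question is simple: the parity stripe is the bytewise XOR of the data stripes, and XORing the survivors back together regenerates a lost stripe. A minimal sketch (the stripe contents are made up for illustration):

```python
def xor_parity(stripes):
    """XOR corresponding bytes of each stripe together to form the parity stripe."""
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            parity[i] ^= b
    return bytes(parity)

data = [b"ABCD", b"1234", b"wxyz"]  # three data stripes, as on a three-drive RAID 5 set
parity = xor_parity(data)

# If the drive holding data[1] dies, XOR of the survivors plus parity rebuilds it:
rebuilt = xor_parity([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```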

Since we knew that CPU utilization would be higher if RAID functions (other than XOR calculations) were performed on the CPU than if the CPU did not have to do these tasks, we thought that measuring CPU utilization would give us a good indication of whether or not hardware RAID cards require the CPU at all. The problem was that during our testing, the amount of CPU power required to perform the RAID tasks on even the software RAID cards was minimal, as we will soon see.

Therefore, we decided it would be best to stress the CPU in an effort to make the task of performing RAID calculations more difficult. To do this, we installed FlasK MPEG and set it to encode an MPEG-2 movie into an MPEG-4 (DivX) file. At the same time, we ran a single Iometer run in the workstation access pattern under the medium I/O load (to add even more stress to the environment). For our software RAID card we used the Promise FastTrak100 in a RAID 0 configuration, and for the hardware RAID card we used its brother, the Promise SuperTrak100, in the same configuration. The results allowed us to gain insight into what goes on behind the scenes in hardware RAID configurations.

The above graph clearly shows that the CPU has to do drastically less work when RAID functions are performed on a hardware RAID card. Since no XOR calculations occur in the above RAID 0 configuration, the work offloaded from the CPU onto the hardware RAID card's coprocessor must be the basic RAID functions, such as the striping of the data.
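Those basic RAID functions amount to little more than address arithmetic. A minimal sketch of how RAID 0 striping maps a logical byte offset onto a member drive (a round-robin layout is assumed):

```python
def raid0_map(logical_offset, stripe_size, n_drives):
    """Map a logical byte offset to (drive index, offset on that drive) for RAID 0."""
    stripe = logical_offset // stripe_size    # which stripe the byte falls in
    drive = stripe % n_drives                 # stripes rotate round-robin across drives
    offset = (stripe // n_drives) * stripe_size + (logical_offset % stripe_size)
    return drive, offset

# 64KB stripes on a two-drive array: byte 100,000 falls in stripe 1, i.e. on drive 1.
print(raid0_map(100_000, 64 * 1024, 2))  # (1, 34464)
```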

So, the hardware on RAID cards does more than just calculate parity bits. The coprocessor relieves the CPU of work that software RAID would otherwise leave to it, an advantage that shows most clearly when the CPU is under extreme stress. More on how well this works in a bit, but now it is time to get to the nitty gritty of the tests: individual card performance.

Performance: AMI MegaRAID 100

As we discussed before, the AMI MegaRAID is a software RAID solution that makes use of the American Megatrends MG80649 controller. To describe the performance of the RAID cards at different stripe sizes, we decided to make line graphs out of the Iometer results at a medium I/O load (64 outstanding I/Os). We found that this I/O load provided a good representation of what occurs at different stripe sizes on each RAID card. The complete set of data, including the scores at both the light and high I/O loads, are found at the end of the section for those wanting the full story. We will begin with the MegaRAID 100's I/O per second performance and continue with the other scores.

As the line graphs above clearly show, the optimal stripe size with regard to I/O per second performance is 512KB. This is represented on the graphs by the peak present at this stripe size, which forms the graph into a curve. The file server access pattern resulted in 118.55 I/Os per second, while the database access pattern resulted in 126.62 I/Os per second and the workstation pattern in 134.47 I/Os per second.

Once again we find that the performance of the AMI MegaRAID 100 peaks at around a stripe size of 512KB when evaluating total MBs per second. This is illustrated on the graphs by the peak of the curved lines occurring at this stripe size. The array was able to output 0.99 MB per second in the database access pattern, 1.05 MB per second in the workstation access pattern, and 1.28 MB per second in the file server access pattern.

The trend continues with I/O response time. Once again, the AMI MegaRAID 100 seems to prefer the 512KB stripe setting. This allows the card to post a 237.934 ms average I/O response time in the workstation access pattern, 252.682 ms in the database access pattern, and 269.887 ms in the file server pattern.

CPU utilization does jump around quite a bit at various stripe settings, as the above graph shows. The important item to note is the overall decreasing trend in % CPU utilization as the stripe size increases. This is likely due to the fact that at larger stripe sizes, fewer files need to be divided into stripes, meaning that the CPU has fewer calculations to do. We do note that CPU utilization at the 512KB stripe size that gave us good results was higher than at some of the surrounding stripe sizes, showing that the CPU is actually doing quite a bit of work at this setting. CPU utilization ranged from 2.17% to 2.39% at this stripe size.

Now it is time to move to the real world benchmarks. Let's see if what Iometer told us has any bearing on application performance.

Just as we suspected. Although the performance differences are rather small, the 512KB stripe size setting provides the fastest Winstone score. It seems that the performance advantages the 512KB stripe setting showed in Iometer translate into a very slight real world performance advantage.

The following table includes the results of all tests we performed on the AMI MegaRAID 100.

Complete Performance: AMI MegaRAID 100

AMI MegaRAID 100
  32KB 64KB 128KB 256KB 512KB 1024KB 2048KB 4096KB Mirror 
CCW01 42.9 42.6 42.9 42.8 43 42.1 42.1 42.6 39.5
File server Low                  
Total I/O/sec 79 80.79 82.12 83.04 83.36 83.57 82.52 83.67 74.52
Total MB/sec 0.85 0.88 0.89 0.9 0.89 0.91 0.89 0.9 0.81
Avg I/O response 12.6554 12.3751 12.1742 12.0399 11.9941 11.9642 12.116 11.9484 13.4167
CPU Util 1.71 1.69 1.68 1.66 1.74 1.58 1.58 1.66 1.7
File server Med                  
Total I/O/sec 115.64 117.65 118.16 118.6 118.55 118.88 118.31 118.87 114.16
Total MB/sec 1.25 1.27 1.28 1.28 1.28 1.28 1.29 1.29 1.22
Avg I/O response 276.673 271.949 270.794 269.763 269.887 269.13 270.431 269.147 280.284
CPU Util 2.29 2.3 2.27 2.17 2.17 2.12 2.07 2.18 2.2
File server High                  
Total I/O/sec 132.72 134.06 135.58 136.3 136.8 137.07 136.1 137.21 130.21
Total MB/sec 1.44 1.45 1.47 1.48 1.49 1.47 1.47 1.49 1.41
Avg I/O response 964.082 954.251 943.71 938.673 935.273 933.382 969.3654 937.476 982.49
CPU Util 2.7 2.45 2.5 2.41 2.47 2.29 2.26 2.41 2.52
Database Low                  
Total I/O/sec 82.07 86.81 86.81 87.93 88.4 88.72 87.7 89.16 72.71
Total MB/sec 0.64 0.68 0.68 0.69 0.69 0.69 0.69 0.7 0.57
Avg I/O response 12.1824 11.5167 11.5167 11.3704 11.3094 11.2685 11.4005 11.2128 13.7492
CPU Util 1.85 1.7 1.7 1.71 1.77 1.75 1.62 1.7 1.73
Database Med                  
Total I/O/sec 120.53 123.93 125.39 126.6 126.62 126.74 124.77 127.44 112.74
Total MB/sec 0.94 0.97 0.98 0.99 0.99 0.99 0.97 1 0.88
Avg I/O response 265.445 258.154 255.179 252.734 252.682 252.466 256.422 251.051 283.787
CPU Util 2.32 2.28 2.36 2.25 2.28 2.13 2.24 2.15 2.33
Database High                  
Total I/O/sec 137.44 141.69 144.46 145.37 146.62 146.15 144.98 146.61 130.25
Total MB/sec 1.07 1.11 1.13 1.14 1.15 1.14 1.13 1.14 1.02
Avg I/O response 930.974 902.997 885.631 880.045 872.655 875.44 882.45 872.468 982.319
CPU Util 2.72 2.51 2.76 2.61 2.61 2.4 2.45 2.67 2.67
Workstation Low                  
Total I/O/sec 90.16 94.44 96.19 97.87 98.37 98.48 98.28 99.02 87.73
Total MB/sec 0.7 0.74 0.75 0.76 0.77 0.77 0.77 0.77 0.69
Avg I/O response 11.0892 10.5866 10.3932 10.2154 10.1627 10.152 10.1716 10.0965 11.3956
CPU Util 1.85 1.95 1.84 1.99 1.84 1.81 1.83 1.88 1.94
Workstation Med                  
Total I/O/sec 127.33 131.08 132.68 133.83 134.47 134.12 133.33 133.15 129.63
Total MB/sec 0.99 1.02 1.04 1.05 1.05 1.05 1.04 1.04 1.01
Avg I/O response 251.278 244.072 241.147 239.08 237.934 238.568 231.962 230.275 246.806
CPU Util 2.47 2.53 2.46 2.25 2.39 2.27 2.31 2.34 2.56
Workstation High                  
Total I/O/sec 145.26 149.91 152.42 151.97 154.52 154.79 153.76 155.56 147.59
Total MB/sec 1.13 1.17 1.19 1.19 1.21 1.21 1.2 1.22 1.15
Avg I/O response 880.777 843.472 839.412 841.964 827.948 826.512 832.174 848.177 866.961
CPU Util 2.75 2.8 2.71 2.69 2.62 2.66 2.55 2.89 2.97

Performance: Iwill SIDE RAID100

The Iwill SIDE RAID 100 makes use of the Highpoint HPT370A software RAID controller.

It seems that the Iwill SIDE RAID100 is held back by its lack of available stripe sizes. In our Iometer I/Os per second benchmark, the Iwill SIDE RAID100 keeps improving upon its performance until the stripe sizes run out at 64KB.

Once again, the MBs per second performance of the card keeps increasing until we run out of stripe sizes at 64KB. This is similar to the results we saw with the AMI card, where performance did not peak until a stripe size of 512KB was reached.

As the other graphs showed, the average I/O response time once again keeps decreasing as the stripe size gets larger. We never reach a point where performance starts to fall off again because the available stripe sizes are too limited; 64KB seems to be the best the Iwill SIDE RAID100 can do.

We can clearly see here that CPU utilization reaches a minimum at the 16KB stripe size on the SIDE RAID 100. The utilization does not go up too much more when increasing the stripe size up to the 64KB mark that we found to be most effective in the previous tests.

Like we saw in Iometer, Content Creation 2001 performs best when the stripe size of the SIDE RAID100 is the largest possible: 64KB. Here it performs about 2% faster than the slowest stripe size in the Content Creation benchmark, the 8KB size.

Complete Performance: Iwill SIDE RAID100

Iwill SIDE RAID100
  4KB 8KB 16KB 32KB 64KB Mirror
CCW01 43 42.6 42.9 43 43.4 39.1
File server Low            
Total I/O/sec 71.96588 76.16413 77.99069 79.78272 81.23027 75.40166
Total MB/sec 0.774649 0.813444 0.848937 0.853567 0.872817 0.831817
Avg I/O response 13.89316 13.12671 12.81875 12.53129 12.30844 13.26002
CPU Util 1.053881 1.202333 1.23708 1.131823 1.078711 1.004935
File server Med            
Total I/O/sec 108.8442 113.3916 115.6452 117.2821 118.5643 117.1143
Total MB/sec 1.188819 1.229986 1.253872 1.262554 1.285013 1.269665
Avg I/O response 293.9401 282.1344 276.6704 272.7844 269.8303 273.1716
CPU Util 1.496434 1.402906 1.29099 1.414638 1.379497 1.427925
File server High            
Total I/O/sec 122.3923 128.8452 132.0535 134.3386 135.3279 132.8397
Total MB/sec 1.329463 1.384786 1.433583 1.486907 1.472532 1.439469
Avg I/O response 1045.091 992.687 968.3978 952.1668 945.1867 962.9287
CPU Util 1.718562 1.588313 1.581671 1.569222 1.583297 1.57466
Database Low            
Total I/O/sec 69.50876 70.7161 77.05485 82.2547 86.47777 75.48504
Total MB/sec 0.543037 0.552469 0.601991 0.642615 0.675608 0.589727
Avg I/O response 14.38432 14.13795 12.97491 12.15531 11.56147 13.24494
CPU Util 1.248648 1.293701 1.260289 1.322049 1.145151 1.095085
Database Med            
Total I/O/sec 106.2887 107.514 116.3793 121.4839 125.5406 116.6244
Total MB/sec 0.830381 0.839953 0.909214 0.949093 0.980786 0.911128
Avg I/O response 300.9745 297.5667 274.9305 263.3495 254.8234 274.3215
CPU Util 1.587422 1.647527 1.268784 1.524031 1.53401 1.487205
Database High            
Total I/O/sec 119.568 120.4754 131.9374 139.4826 143.0439 133.2462
Total MB/sec 0.934125 0.941214 1.030761 1.089708 1.117531 1.040986
Avg I/O response 1069.508 1061.567 969.5489 917.0306 894.1836 959.9097
CPU Util 1.755861 1.76935 1.655792 1.776024 1.609178 1.684221
Workstation Low            
Total I/O/sec 83.06857 83.95215 85.03873 90.06615 95.4477 90.1141
Total MB/sec 0.648973 0.655876 0.664365 0.703642 0.745685 0.704016
Avg I/O response 12.03559 11.90887 11.75691 11.10092 10.47502 11.09437
CPU Util 1.278695 1.340405 1.250328 1.345463 1.140158 1.275376
Workstation Med            
Total I/O/sec 121.3267 121.6994 122.6075 128.3637 132.4976 133.6806
Total MB/sec 0.947865 0.950776 0.957871 1.002842 1.035137 1.04438
Avg I/O response 263.65 262.8905 260.9327 249.2134 241.4541 239.3457
CPU Util 1.774374 1.794442 1.463957 1.614135 1.450558 1.480591
Workstation High            
Total I/O/sec 134.8674 135.372 138.2017 146.0536 150.7452 150.5597
Total MB/sec 1.053652 1.057594 1.079701 1.141044 1.177697 1.176348
Avg I/O response 948.6608 944.9069 925.5926 875.7668 848.5587 849.669
CPU Util 1.704198 1.832815 1.786065 1.684263 1.812719 1.675899

Performance: Promise FastTrak100

As the card section covers, this Promise FastTrak100 card is based on the Promise PDC20267 software RAID controller. Let's see how the card did.

As we saw before, the Promise FastTrak100 offered a variety of stripe sizes. The above graph shows that the performance of the card seems to peak around the 64KB to 128KB stripe range when it comes to how many I/Os per second the card can perform. As the stripe sizes get larger than these, the performance of the card seems to level off.

We see the same trend when looking at the MBs per second speed of the Promise FastTrak100. Here the maximum speeds reached occur around the 64KB to 128KB stripe range, after which speed levels off. For the raw numbers involved, please see the end of this section.

This time we see a mirror image of the previous results, since lower is better when measuring I/O response time. The minimum response times in all access patterns lay in the 64KB to 128KB stripe size range, with performance leveling off after this.

As we saw in the AMI software RAID card testing, CPU usage varies greatly so we are really only able to report on trends we see in the graphs. Above we can clearly see that the smaller stripe sizes use much more CPU time than the larger ones. This is due to all the division work the CPU has to do when the stripe size is small. The trend is for the CPU utilization to decrease until the 128KB stripe size and then slightly increase again.
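The division work scales inversely with stripe size: the smaller the stripe, the more segments a given request must be broken into, and each segment is separate work for the driver. A sketch counting segments for a hypothetical 64KB request at the stripe sizes the FastTrak100 offers:

```python
def segments(request_bytes, start_offset, stripe_size):
    """Count how many stripe segments a request spans on the array."""
    first = start_offset // stripe_size
    last = (start_offset + request_bytes - 1) // stripe_size
    return last - first + 1

# A 64KB request starting at offset 0, at each of the FastTrak100's stripe sizes:
for kb in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"{kb}KB stripe -> {segments(64 * 1024, 0, kb * 1024)} segment(s)")
```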

Once again it seems that the synthetic numbers outputted by Iometer equate to real world performance. In this case, the optimal stripe size appears to be 64KB, as this is where Content Creation Winstone 2001 was able to perform the best.

Complete Performance: Promise FastTrak100

Promise FastTrak100
  1KB 2KB 4KB 8KB 16KB 32KB 64KB 128KB 256KB 512KB 1024KB
CCW01 42.6 41.8 41.2 42.3 42.3 42.9 43 42.8 42.8 42.1 41.4
File server Low                      
Total I/O/sec 69.36 70.51 75.04 77.18 78.07 79.76 81.37 82.3 82.21 81.82
Total MB/sec 0.75 0.77 0.76 0.81 0.83 0.84 0.87 0.88 0.89 0.88 0.88
Avg I/O response 14.4147 14.1793 14.2613 13.3232 12.9548 12.8063 12.5356 12.2865 12.148 12.162 12.2193
CPU Util 1.55 1.58 1.03 0.97 1 1.04 1.04 1.03 1.09 1 0.98
File server Med                      
Total I/O/sec 105.45 105.92 107 111.35 113.82 114.7 116.99 117.59 117.75 118.47 118.64
Total MB/sec 1.14 1.16 1.15 1.21 1.22 1.25 1.25 1.26 1.28 1.28 1.28
Avg I/O response 303.397 302.028 299.01 287.324 281.078 278.929 273.463 272.088 271.769 270.039 269.656
CPU Util 1.93 1.32 1.35 1.34 1.34 1.42 1.43 1.37 1.39 1.35 1.31
File server High                      
Total I/O/sec 119.07 119.16 121.11 126.79 130.2 131.36 134.04 135.27 135.73 136.63 137.01
Total MB/sec 1.28 1.29 1.31 1.38 1.42 1.43 1.46 1.46 1.46 1.47 1.48
Avg I/O response 1074.41 1037.36 1056.17 1008.98 982.396 937.588 954.199 945.592 942.232 936.274 933.584
CPU Util 2.07 1.61 1.6 1.61 1.55 1.6 1.64 1.5 1.57 1.54 1.54
Database Low                      
Total I/O/sec 68.71 66.99 66.46 69.12 75.91 80.32 83.76 85.63 86.2 86.52 86.21
Total MB/sec 0.54 0.52 0.52 0.54 0.59 0.63 0.65 0.67 0.67 0.68 0.67
Avg I/O response 14.5512 14.9236 15.0429 14.4647 13.1701 12.4475 11.9361 11.6751 11.5975 11.555 11.5973
CPU Util 1.67 1.15 1.14 1.18 1.12 1.17 1.22 1.19 1.11 1.04 1.14
Database Med                      
Total I/O/sec 104.36 103.97 103.78 105.37 113.76 119.87 122.61 124.13 124.97 126.07 126.28
Total MB/sec 0.82 0.81 0.81 0.82 0.89 0.94 0.96 0.97 0.98 0.98 0.99
Avg I/O response 306.568 307.732 308.256 303.662 281.234 266.914 260.931 257.754 256.022 253.777 253.339
CPU Util 2.01 1.57 1.45 1.48 1.51 1.34 1.35 1.32 1.47 1.37 1.43
Database High                      
Total I/O/sec 118.06 118.05 118.06 119.22 130.64 137.64 142.2 144.01 145.14 146.07 146.29
Total MB/sec 0.92 0.92 0.92 0.93 1.02 1.08 1.11 1.13 1.13 1.14 1.14
Avg I/O response 1083.64 1083.44 1003.33 1073.09 979.116 929.297 899.442 888.325 881.394 875.739 874.187
CPU Util 2.13 1.69 1.68 1.68 1.68 1.74 1.66 1.62 1.62 1.58 1.64
Workstation Low                      
Total I/O/sec 82.37 81.63 80.55 82.11 83.55 89.4 93.42 95.3 97.12 96.82 97.26
Total MB/sec 0.64 0.64 0.63 0.64 0.65 0.7 0.73 0.74 0.76 0.76 0.76
Avg I/O response 12.138 12.2486 12.4129 12.1761 11.9654 11.1825 10.7014 10.4909 10.2941 10.3264 10.2793
CPU Util 1.7 1.25 1.24 1.3 1.2 1.22 1.27 1.29 1.23 1.26 1.26
Workstation High                      
Total I/O/sec 133.88 133.76 133.83 134.44 136.78 144.63 150 151.88 153.33 154.56 154.8
Total MB/sec 1.05 1.05 1.05 1.05 1.07 1.13 1.17 1.19 1.2 1.21 1.21

Performance: Adaptec AAA-UDMA

The I/Os per second performance of our first hardware RAID card, the Adaptec AAA-UDMA, keeps improving as stripe size is increased. Much like we saw with the limited stripe sizes on the Iwill SIDE RAID100, the Adaptec AAA-UDMA hits a stripe limit before performance begins to decrease again.

Once again we see a positive linear trend between the number of MBs per second the array is able to process and the stripe size of the array. If more stripe sizes were available, we would almost certainly see performance peak as it did with the other cards. For the AAA-UDMA, the 128KB setting seems to be the best.

As one could predict by examining the other AAA-UDMA graphs, the I/O response time of the card keeps decreasing as stripe size is increased. Once again, performance keeps increasing throughout the stripe range because of the limited size that the stripes can be.

The CPU utilization of the Adaptec AAA-UDMA is rather erratic, with hardly any trend in the data. This can be attributed to the fact that the CPU is not actually performing the RAID calculations. It is for this reason that the CPU utilization numbers on this card are lower than those of the software RAID cards. We will discuss this more in a bit.

The performance gain that the AAA-UDMA is able to realize when increasing stripe size was quite impressive. From the slowest stripe size of 8KB to the fastest of 128KB, the performance of the card increased a full 29%.

Complete Performance: Adaptec AAA-UDMA

Adaptec AAA-UDMA
  8KB 16KB 32KB 64KB 128KB Mirror RAID5 128KB
CCW01 31.2 34.8 37.9 39.4 40.4 28.5 35.4
File server Low              
Total I/O/sec 71.25655 74.1532 76.40516 77.43056 78.93444 64.25565 71.84088
Total MB/sec 0.768494 0.810977 0.823789 0.844516 0.854708 0.703336 0.765489
Avg I/O response 14.03083 13.48302 13.08511 12.91184 12.66574 15.56021 13.917
CPU Util 0.711054 0.719604 0.658672 0.673348 0.61168 0.591968 0.596753
File server Med              
Total I/O/sec 104.0569 108.1677 110.3821 111.6732 112.5718 88.31788 110.9632
Total MB/sec 1.114103 1.17716 1.19118 1.209714 1.225276 0.956352 1.198263
Avg I/O response 307.4287 295.757 289.8369 286.4917 284.1708 362.2482 288.3103
CPU Util 0.906624 0.960045 0.939914 0.812937 0.846381 0.719377 0.90666
File server High              
Total I/O/sec 116.3812 122.2189 125.1642 126.8328 128.7369 100.7005 124.7053
Total MB/sec 1.258856 1.315137 1.353862 1.380514 1.406467 1.075972 1.339324
Avg I/O response 1098.934 1046.521 1021.732 1008.487 933.4825 1269.965 1025.75
CPU Util 0.974041 1.035432 1.063364 0.859536 0.931563 0.898099 1.08758
Database Low              
Total I/O/sec 67.23656 73.90781 79.04055 81.97249 83.64126 59.4603 71.66333
Total MB/sec 0.525286 0.577405 0.617504 0.64041 0.653447 0.464534 0.55987
Avg I/O response 14.86962 13.52734 12.64924 12.1964 11.95336 16.81475 13.95172
CPU Util 0.819647 0.791279 0.782936 0.762894 0.737914 0.714503 0.774591
Database Med
Total I/O/sec 101.1893 109.9112 115.3153 118.1498 119.458 83.16301 112.1667
Total MB/sec 0.790541 0.858682 0.900901 0.923046 0.933327 0.649711 0.876303
Avg I/O response 316.146 291.0723 277.4322 270.7919 267.8315 384.6634 285.1995
CPU Util 0.94814 1.036596 0.991546 0.964818 0.924751 0.903044 0.95822
Database High              
Total I/O/sec 113.2573 123.9873 131.2281 135.2258 137.639 94.37823 126.0247
Total MB/sec 0.884823 0.968651 1.025219 1.056451 1.075305 0.73733 0.984568
Avg I/O response 1129.301 1031.513 974.6317 945.945 929.4414 1355.043 1014.974
CPU Util 1.061546 1.04995 1.068269 1.05661 0.984666 0.944821 1.111758
Workstation Low              
Total I/O/sec 78.95693 80.65549 86.43924 89.9544 92.19003 74.34632 85.6204
Total MB/sec 0.616851 0.630121 0.675307 0.702769 0.720235 0.580831 0.668909
Avg I/O response 12.66264 12.39597 11.56635 11.11425 10.84406 13.44816 11.67704
CPU Util 0.821255 0.81632 0.841381 0.807946 0.776303 0.682773 0.829715
Workstation Med              
Total I/O/sec 112.9377 114.9921 120.6931 124.1917 125.9231 99.23827 127.1239
Total MB/sec 0.882326 0.898376 0.942915 0.970248 0.983774 0.775299 0.993156
Avg I/O response 283.2855 278.2004 265.1005 257.6229 254.0763 322.4086 251.6587
CPU Util 1.008242 0.978171 0.891352 0.948209 0.971432 0.90473 1.018305
Workstation High              
Total I/O/sec 124.9915 129.3204 136.7711 141.2342 143.8035 112.3077 141.8096
Total MB/sec 0.976496 1.010316 1.068524 1.103392 1.123465 0.877404 1.107887
Avg I/O response 1023.267 989.1566 935.2815 905.7801 889.3829 1138.763 902.0593
CPU Util 1.170038 1.083329 1.10174 1.018261 1.091622 1.001548 1.228573

Performance: Promise SuperTrak100

Once again we are not limited by stripe size options, and we see performance hit a peak at a 256KB stripe size. After this point performance levels off, and the card performs similarly as stripe size increases.

The MBs per second performance of the Promise SuperTrak100 once again levels off at around the 128 to 256KB stripe size. After this point increasing the stripe size does not increase performance.

Just like all the other Iometer results on the Promise SuperTrak100, the I/O response time is best at around a 256KB stripe size. At the 128KB level, performance begins to level out but is still able to drop when moving to the 256KB stripe size.

Like we saw with our other hardware RAID card, the Promise SuperTrak100's CPU utilization does not seem to be related to the card's stripe size. The only interesting part of the graph is the fact that CPU utilization decreases at the 16KB stripe size. This is likely due to normal variation in CPU utilization and not a function of the stripe size chosen by the Promise card.

We already gathered from the Iometer scores that the Promise SuperTrak100 favored the 256KB stripe size. Our Content Creation scores verify this, as the 256KB stripe size allowed for the greatest performance in this benchmark. This stripe size performed 6% faster than the slowest stripe size of 1KB.

Complete Performance: Promise SuperTrak100

Promise SuperTrak100
  1KB 2KB 4KB 8KB 16KB 32KB 64KB 128KB 256KB 512KB 1024KB
CCW01 37.4 38.2 38.7 37.7 38.4 38.5 39 39 39.5 38.4 38.4
File server Low                      
Total I/O/sec 67.12005 66.7399 68.0414 71.38657 73.7661 75.06172 74.55458 75.29941 77.98196 78.2779 78.10587
Total MB/sec 0.710424 0.73174 0.73448 0.770902 0.800466 0.808513 0.799331 0.813581 0.839697 0.852504 0.837103
Avg I/O response 14.45323 14.9808 14.6948 14.00631 13.5539 13.32025 13.41028 13.27661 12.82096 12.77201 12.80111
CPU Util 1.482699 1.3254 1.39504 1.343758 0.942509 1.281537 1.353274 1.45521 1.353354 1.499368 1.225796
File server Med                      
Total I/O/sec 87.46062 87.2386 88.9403 92.80042 94.86779 96.55235 95.74918 96.3978 99.23468 99.40937 99.64426
Total MB/sec 0.944761 0.95725 0.95413 1.000568 1.030867 1.049752 1.022847 1.044978 1.078611 1.080527 1.07305
Avg I/O response 365.763475 368.589 359.561 344.61996 337.2032 331.3549 334.1284 331.8347 322.3957 321.787 321.0314
CPU Util 1.573113 1.66668 1.65819 1.776742 1.240699 1.611452 1.700047 1.721831 1.58964 1.676577 1.421107
File server High                      
Total I/O/sec 95.39213 95.2869 96.8453 101.6889 105.2693 106.784 106.1719 107.5983 110.2461 110.7789 110.9654
Total MB/sec 1.027482 1.02302 1.03752 1.120983 1.148083 1.171375 1.153509 1.171477 1.175404 1.203587 1.212442
Avg I/O response 1339.67755 1341.26 1319.91 1257.581 1214.627 1197.363 1204.277 1188.289 1160.027 1154.106 1152.861
CPU Util 1.711693 1.76509 1.85955 1.836471 1.360957 1.901578 1.881995 1.830311 1.99722 2.037145 1.801851
Database Low                      
Total I/O/sec 69.09686 67.3382 68.9268 69.9527 77.09841 81.45626 82.07396 82.77721 86.57783 86.4156 86.46775
Total MB/sec 0.539819 0.53698 0.53849 0.546505 0.602331 0.636377 0.641203 0.646697 0.676389 0.675122 0.675529
Avg I/O response 14.470205 14.5463 14.5659 14.29212 12.96805 12.27385 12.18229 12.0785 11.54825 11.5699 11.56303
CPU Util 1.363821 1.40719 1.54405 1.291963 1.098411 1.39884 1.42053 1.677575 1.522373 1.707581 1.313737
Database Med                      
Total I/O/sec 86.41607 85.8631 86.0781 87.49403 96.540041 101.4864 101.8905 103.4112 103.3124 106.6987 106.6974
Total MB/sec 0.675126 0.67081 0.67249 0.683547 0.753909 0.792863 0.796019 0.8079 0.830566 0.833583 0.833573
Avg I/O response 370.229549 378.125 371.626 365.61587 331.5631 315.1881 313.9804 309.3473 300.8615 299.7763 299.8674
CPU Util 1.539031 1.514 1.78433 1.856098 1.397179 1.844496 1.809413 1.816077 1.720956 1.765902 1.587451
Database High                      
Total I/O/sec 95.10542 94.8278 94.8804 96.28764 107.0218 112.8167 114.3271 115.6289 118.6619 119.4283 119.713
Total MB/sec 0.743011 0.74084 0.74125 0.752247 0.836108 0.88138 0.89318 0.903351 0.927046 0.933033 0.935258
Avg I/O response 1344.9083 1347.44 1348.37 1327.356 1195.257 1133.232 1119.056 1106.587 1077.37 1070.372 1067.916
CPU Util 1.77274 1.65756 1.75597 1.988084 1.465632 1.946343 1.839303 2.146408 1.771015 1.919541 1.842781
Workstation Low                      
Total I/O/sec 75.89715 75.8349 76.0327 76.49338 78.63168 84.16026 85.33085 87.08861 91.79598 91.85746 91.6449
Total MB/sec 0.592947 0.59246 0.59401 0.597605 0.61431 0.657502 0.666647 0.68038 0.717156 0.717636 0.715976
Avg I/O response 13.173596 13.1844 13.1496 13.07045 12.71545 11.87967 11.71728 11.48065 10.89046 10.88439 10.90994
CPU Util 1.565722 1.54739 1.54399 1.398773 1.125112 1.510667 1.532374 1.707658 1.510625 1.685889 1.417205
Workstation Med                      
Total I/O/sec 96.8112 96.7732 97.3077 97.97584 101.2399 106.4841 108.6437 110.0267 113.8189 114.1943 114.3564
Total MB/sec 0.756337 0.75604 0.76022 0.765436 0.790937 0.831907 0.848779 0.859583 0.889211 0.892143 0.893409
Avg I/O response 330.370583 333.358 328.705 326.45661 316.0336 300.3906 294.4176 290.7395 281.0522 280.1705 279.7864
CPU Util 1.710912 1.71926 1.8327 1.762635 1.43891 1.994656 1.807781 1.832817 1.936305 1.75927 1.729295
Workstation High                      
Total I/O/sec 104.9817 105.124 105.385 105.9808 110.788 117.577 120.8391 122.8812 126.1852 127.1693 126.7841
Total MB/sec 0.82017 0.82128 0.82332 0.827975 0.865531 0.91857 0.944055 0.960009 0.985822 0.99351 0.990501
Avg I/O response 1218.14682 1216.74 1213.45 1206.976 1154.397 1087.875 1058.749 1041.081 1015.058 1005.736 1008.638
CPU Util 1.739309 2.10648 1.93782 1.907984 1.47564 1.929605 2.134907 2.224847 1.957912 2.004625 1.864465

Performance: RAID 0

Now that we have exhaustively gathered and analyzed the performance of RAID 0 with each card tested, it is time to put the cards together and see which one comes out on top to become the RAID 0 performance champion.

To do this, we are going to use the ideal situation for each card by comparing the performance of the RAID cards at their optimal stripe sizes. If you recall from the above tests, we arrived at the following optimal stripe sizes for each card:

Ideal Stripe Size

AMI MegaRAID 100 (AMI MG80649)
Iwill SIDE RAID100 (HighPoint HPT370A)
Promise FastTrak100 (Promise PDC20267)
Adaptec AAA-UDMA
Promise SuperTrak100

We will now compare the performance of the cards at the above stripe sizes to one another in order to determine which one is "the best" at RAID 0. This time we will also throw in the regular, single IDE drive that we tested by plugging it directly into the motherboard. This should give us a good idea of the speed advantages of RAID 0. It did take us a long time to get to this point, but the above information is crucial for determining what stripe size to use with your IDE RAID controller.
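Before diving into the numbers, it may help to see what a RAID 0 controller actually does with a stripe size. The sketch below shows the striping arithmetic for a two-drive array; the 64KB stripe size is purely illustrative and is not one of the per-card optimums measured above.

```python
# Sketch: how a RAID 0 array maps a logical byte offset onto its member
# drives. STRIPE_SIZE is a hypothetical value for illustration only.

STRIPE_SIZE = 64 * 1024   # bytes per stripe unit (illustrative)
NUM_DRIVES = 2            # two-drive array, as tested in this review

def locate(logical_offset):
    """Return (drive index, offset on that drive) for a logical byte offset."""
    stripe_unit = logical_offset // STRIPE_SIZE    # which stripe unit overall
    drive = stripe_unit % NUM_DRIVES               # round-robin across drives
    # Offset on the drive: full stripe rows below us, plus position within the unit
    drive_offset = (stripe_unit // NUM_DRIVES) * STRIPE_SIZE \
        + logical_offset % STRIPE_SIZE
    return drive, drive_offset

# The first 64KB lands on drive 0, the next 64KB on drive 1, and so on,
# which is why large sequential transfers can keep both drives busy at once.
for off in (0, 64 * 1024, 128 * 1024):
    print(off, locate(off))
```

The stripe size controls the trade-off the tests above measure: smaller stripes spread a single request across more drives, while larger stripes let independent requests go to different drives.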

To begin, we will examine the Iometer scores in the same manner we did with the above cards. Our first numbers are the I/Os per second count. This time the scores are represented in a bar graph. In order to simplify our results, the bar graph will only contain data from the workstation access pattern. Since all the access patterns followed the same trend (as we saw above), this should not be a problem. Like before, if you are interested in additional scores, they are posted on the next page.

The results are a bit different than we would have thought. In this synthetic benchmark, the IDE drive was beaten by only one RAID array: the one powered by the AMI MG80649 controller. The MegaRAID 100 outperformed the non-RAID drive by about 1%. The other RAID arrays trailed the single drive by as little as 0.2% (the SIDE RAID100) and by as much as 17% (the SuperTrak100). Let's see how the cards do in the MBs per second test.

Remember how at the beginning of the test section we discussed how I/Os per second and MBs per second were related? Well, this graph proves our point. When switching from I/Os per second to MBs per second, the standings of the cards remain the same. The performance advantage of the MegaRAID 100 here is again about 1% over the IDE drive, and the SuperTrak100 maintains its 17% performance lag.
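The reason the two graphs track each other is simple arithmetic: MBs per second is just I/Os per second multiplied by the average request size. The snippet below illustrates this with the low-load database figures from the tables above; the 8KB request size is our assumption about the access pattern, inferred from the ratio of the two columns.

```python
# The two Iometer metrics are tied together by the average request size:
# MB/sec = I/Os per second * bytes per I/O.

def mb_per_sec(ios_per_sec, io_size_bytes):
    # Treating 1 MB as 1024 * 1024 bytes, as the Iometer figures imply
    return ios_per_sec * io_size_bytes / (1024 * 1024)

io_size = 8 * 1024  # assumed 8KB requests in the database pattern
print(round(mb_per_sec(69.09686, io_size), 5))  # ~0.53982 MB/sec, matching the table
```

Because the request size is fixed within an access pattern, ranking the cards by either metric necessarily gives the same order.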

Once again, with the I/O response benchmark, the standings of the cards do not change. In fact, the MegaRAID 100 remains the only card to beat out the standard IDE drive, again by about 1%.

When moving to CPU utilization measures, things change a bit. The MegaRAID 100, which previously beat the other RAID cards and the IDE drive in the other synthetic tests, is now at the bottom of the chart with a massive 2.39% CPU utilization. The hardware based AAA-UDMA does the best in this test with the lowest CPU utilization of all.

As you can see, our RAID performance in the synthetic Iometer benchmark leaves quite a bit to be desired. The highest performance increase we saw over standard IDE was a 1% increase, and that was only in one of the cards. The rest fell below the IDE drive in the graphs, leaving us to wonder what advantage RAID 0 offers.

To put this question to the test, we next ran our Content Creation Winstone 2001 benchmark on the arrays tested above. We hoped that the real-world advantage of RAID, where the CPU is taxed and the disk is being written to, would be greater than the synthetic advantage we just observed.

We must admit, it was good to see the above scores when we graphed them for the first time. We were more than a bit worried that even the "fast" RAID 0 did nothing to performance but hinder it. As our real-world Content Creation Winstone 2001 scores show, RAID 0 does indeed increase performance, and by a noticeable amount.

We found that the fastest card was the Iwill SIDE RAID100, powered by the Highpoint HPT370A. Every RAID 0 array tested was able to outperform the single IDE drive, and in the case of the SIDE RAID100 it was able to do so by a good 13%. The next fastest card, the FastTrak100 and its Promise PDC20267, outperformed the IDE drive by 12%, as did the AMI MG80649 powered MegaRAID 100. The Adaptec hardware RAID card, the AAA-UDMA, performed 5% faster than the pure IDE drive. Finally, the Promise SuperTrak100 hardware RAID card outperformed the IDE drive by 3%.

Obviously, the 12% to 13% performance increase that the software RAID 0 arrays experienced is noticeable. A bit perplexing was the 3% to 5% performance gain that the hardware RAID 0 arrays experienced. To understand why this is, we must refer to the Hardware RAID vs. Software RAID comparison made at the beginning of this article.

If you recall, we showed that all RAID functions, not just the RAID 5 functions, were calculated on the hardware RAID cards. What does this mean in terms of actual performance? Well, it does mean that the system CPU does not have to do as much work. At the same time, it also means that the hardware RAID solutions will perform slower.

The reason for this is that with the high speed of the CPUs available today, like the Duron 850MHz processor used in this review, the CPU is able to perform the RAID calculations faster than the hardware RAID coprocessors can. It is true that when moving to RAID 5 the CPU is overburdened with parity bit calculations, but in RAID 0 arrays the CPU has plenty of power to devote to a few simple calculations per clock.

What does this mean to you? Well, if you are a home computer user looking for only a RAID 0 or RAID 1 configuration, there is absolutely no reason to spend the extra money on a hardware RAID card. In fact, you are actually better off performance-wise going with a cheaper software RAID solution.

So, we now know that hardware RAID cards are really only good for one thing: RAID 5 configurations. Let's see how our two hardware RAID cards performed in a RAID 5 configuration.

Complete Performance: RAID 0

RAID 0 Performance
IDE Only SuperTrak100 AAA-UDMA MegaRAID 100 FastTrak100 SIDE RAID100
CCW01 38.5 39.5 40.4 43 43 43.4
File server Low
Total I/O/sec 72.31409 77.98196 78.93444 83.36 79.76 81.23027
Total MB/sec 0.800398 0.839697 0.854708 0.89 0.87 0.872817
Avg I/O response 13.82637 12.82096 12.66574 11.9941 12.5356 12.30844
CPU Util 0.894611 1.353354 0.61168 1.74 1.04 1.078711
File server Med
Total I/O/sec 115.9632 99.23468 112.5718 118.55 116.99 118.5643
Total MB/sec 1.254932 1.078611 1.225276 1.28 1.25 1.285013
Avg I/O response 275.8972 322.3957 284.1708 269.887 273.463 269.8303
CPU Util 1.25258 1.58964 0.846381 2.17 1.43 1.379497
File server High
Total I/O/sec 132.4862 110.2461 128.7369 136.8 134.04 135.3279
Total MB/sec 1.433341 1.175404 1.406467 1.49 1.46 1.472532
Avg I/O response 965.457 1160.027 933.4825 935.273 954.199 945.1867
CPU Util 1.694932 1.99722 0.931563 2.47 1.64 1.583297
Database Low
Total I/O/sec 71.2335 86.57783 83.64126 88.4 83.76 86.47777
Total MB/sec 0.556512 0.676389 0.653447 0.69 0.65 0.675608
Avg I/O response 14.0341 11.54825 11.95336 11.3094 11.9361 11.56147
CPU Util 0.979878 1.522373 0.737914 1.77 1.22 1.145151
Database Med
Total I/O/sec 115.8976 103.3124 119.458 126.62 122.61 125.5406
Total MB/sec 0.90545 0.830566 0.933327 0.99 0.96 0.980786
Avg I/O response 276.0633 300.8615 267.8315 252.682 260.931 254.8234
CPU Util 1.434361 1.720956 0.924751 2.28 1.35 1.53401
Database High
Total I/O/sec 132.4817 118.6619 137.639 146.62 142.2 143.0439
Total MB/sec 1.035013 0.927046 1.075305 1.15 1.11 1.117531
Avg I/O response 965.5515 1077.37 929.4414 872.655 899.442 894.1836
CPU Util 1.549068 1.771015 0.984666 2.61 1.66 1.609178
Workstation Low
Total I/O/sec 85.83641 91.79598 92.19003 98.37 93.42 95.4477
Total MB/sec 0.670597 0.717156 0.720235 0.77 0.73 0.745685
Avg I/O response 11.64828 10.89046 10.84406 10.1627 10.7014 10.47502
CPU Util 0.941466 1.510625 0.776303 1.84 1.27 1.140158
Workstation Med
Total I/O/sec 132.8 113.8 125.9 134.5 130.3 132.5
Total MB/sec 1.04 0.889 0.983 1.05 1.02 1.04
Avg I/O response 240.94 281.05 254.08 237.93 245.48 241.45
CPU Util 1.45 1.93 0.97 2.39 1.46 1.45
Workstation High
Total I/O/sec 150.7187 126.1852 143.8035 154.52 150 150.7452
Total MB/sec 1.17749 0.985822 1.123465 1.21 1.17 1.177697
Avg I/O response 848.6778 1015.058 889.3829 827.948 852.915 848.5587
CPU Util 1.61234 1.957912 1.091622 2.62 1.7 1.812719

Performance: RAID 5

To build our RAID 5 array we decided upon a stripe size of 128KB, as this stripe size allowed for good performance in both the Adaptec and Promise hardware RAID cards tested. Like we did in the above performance analysis, we will again use bar graphs of the workstation access pattern at a medium I/O load to compare the performance of the cards. Also thrown into the set is our standard IDE drive.

At the beginning, while discussing RAID 5, we mentioned some performance issues that may come from using RAID 5: mainly that every write requires at least 2 reads and 2 writes. Well, as you can see, this plays a role in the performance of RAID 5, at least in our I/Os per second benchmark. Both cards performed significantly worse than the single IDE drive, with the Adaptec card performing 4% faster than the Promise card.
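The "2 reads and 2 writes" cost comes from the parity update itself. Parity is the XOR of the data blocks in a stripe, so changing one block means reading the old data and old parity, recomputing, and writing both back. A minimal sketch of that read-modify-write step, using a made-up three-drive stripe:

```python
# Sketch of why a small RAID 5 write costs two reads and two writes.
# Parity is the XOR of the data blocks in a stripe; updating one block
# requires the old data and old parity to recompute the new parity.

def rmw_small_write(old_data, old_parity, new_data):
    """Read-modify-write: compute the new parity block for a one-block update."""
    # new_parity = old_parity XOR old_data XOR new_data
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# Hypothetical three-drive stripe: two data blocks plus their parity.
d0 = bytes([0b1100] * 4)
d1 = bytes([0b1010] * 4)
parity = bytes(a ^ b for a, b in zip(d0, d1))

new_d0 = bytes([0b0110] * 4)
new_parity = rmw_small_write(d0, parity, new_d0)

# The updated parity still equals the XOR of the current data blocks,
# so the stripe remains reconstructable if any single drive fails.
assert new_parity == bytes(a ^ b for a, b in zip(new_d0, d1))
```

So every small write turns into two reads (old data, old parity) plus two writes (new data, new parity), which is exactly the overhead the I/O numbers below reflect.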

The story remains the same with the MBs per second results. Once again, although both RAID 5 configurations perform worse than the single IDE drive, the Adaptec card holds a 4% performance advantage over the Promise offering.

The I/O response times show the same results, with the Adaptec card performing about 4% faster again.

The story changes a bit when we look at CPU utilization. This time both hardware RAID cards have the advantage over the IDE drive, since all data is passing through the card and not through the system. The Adaptec card was able to use much less CPU power than the Promise card, perhaps due to its use of a SCSI to IDE converter.

It is with the real-world Content Creation Winstone 2001 benchmark that we can really see the toll that hardware RAID 5 takes on the system. Both hardware RAID cards ended up performing the same in this benchmark, which left them a full 35% behind the single IDE drive.

As we can see, RAID 5 should be strictly reserved for cases where data integrity is of the utmost importance. It does offer advantages over a RAID 1 solution, as the array makes use of more hard drive space and has extra capabilities that RAID 1 cannot offer. But with the rather noticeable performance hit that RAID 5 incurs, this RAID type is best left for servers with critical data but not much need for speed.

The other RAID type that offers some form of security is RAID 1. We will take a look at this RAID type next.

Complete Performance: RAID 5

RAID 5 Performance
SuperTrak100 AAA-UDMA IDE Only
CCW01 28.5 28.5 38.5
File server Low
Total I/O/sec 70.44107 64.25565 72.31409
Total MB/sec 0.765217 0.703336 0.800398
Avg I/O response 14.19397 15.56021 13.82637
CPU Util 0.954765 0.591968 0.894611
File server Med
Total I/O/sec 85.14026 88.31788 115.9632
Total MB/sec 0.925371 0.956352 1.254932
Avg I/O response 375.7635 362.2482 275.8972
CPU Util 1.304 0.719377 1.25258
File server High
Total I/O/sec 97.5079 100.7005 132.4862
Total MB/sec 1.055822 1.075972 1.433341
Avg I/O response 1310.89 1269.965 965.457
CPU Util 1.469549 0.898099 1.694932
Database Low
Total I/O/sec 72.30849 59.4603 71.2335
Total MB/sec 0.56491 0.464534 0.556512
Avg I/O response 13.82739 16.81475 14.0341
CPU Util 1.131753 0.714503 0.979878
Database Med
Total I/O/sec 85.02478 83.16301 115.8976
Total MB/sec 0.664256 0.649711 0.90545
Avg I/O response 376.042 384.6634 276.0633
CPU Util 1.298672 0.903044 1.434361
Database High
Total I/O/sec 97.95414 94.37823 132.4817
Total MB/sec 0.765267 0.73733 1.035013
Avg I/O response 1306.185 1355.043 965.5515
CPU Util 1.458835 0.944821 1.549068
Workstation Low
Total I/O/sec 81.10146 74.34632 85.83641
Total MB/sec 0.633605 0.580831 0.670597
Avg I/O response 12.32774 13.44816 11.64828
CPU Util 1.123447 0.682773 0.941466
Workstation Med
Total I/O/sec 95.8 99.2 132.8
Total MB/sec 0.75 0.78 1.04
Avg I/O response 333.83 322.41 240.94
CPU Util 1.35 0.9 1.45
Workstation High
Total I/O/sec 110.2732 112.3077 150.7187
Total MB/sec 0.861509 0.877404 1.17749
Avg I/O response 1159.93 1138.763 848.6778
CPU Util 1.480718 1.001548 1.61234

Performance: RAID 1

We will investigate the performance of the various cards in a RAID 1 configuration in the same way that we investigated the performance of the cards in a RAID 0 array.

With the exception of the SuperTrak100, all the cards performed relatively similarly in the I/Os per second test. The SIDE RAID100 was the only card able to outperform the IDE drive, doing so by less than 1%.

The results are the same for the MBs per second rate. Most of the cards are able to run a RAID 1 array at speeds very similar to that of a single IDE drive.

The I/O response times are very similar throughout all the cards, yet the SuperTrak100 falls to the bottom of the chart again for some unknown reason.

The situation changes a bit when we investigate how much CPU the cards are using. Again we find that the MegaRAID and its AMI MG80649 uses the most CPU time by far. The card with the lowest tax on the CPU in RAID 1 is the Adaptec hardware RAID card.

Perhaps the best demonstration of which RAID cards perform best when mirroring is the Content Creation Winstone 2001 test. Here we can see that the MegaRAID 100 with its AMI MG80649 chip and the SIDE RAID100 with its Highpoint HPT370A chip provide the best RAID 1 speeds. The SIDE RAID100 performed 2% faster than a single IDE drive, and the MegaRAID 100 slightly better still.

The purpose of RAID 1 configurations should not be speed but rather data security. Since the drives are mirrored, data is easily recoverable if a drive fails. The 2% performance increase that we saw with some of the cards does not hurt the decision to go RAID 1 either.
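The mirroring behavior behind those numbers is easy to sketch: every write is duplicated to both drives, and any read can be served by either drive, which is why a failed drive costs no data. A minimal toy model:

```python
# Minimal sketch of RAID 1 mirroring: every write goes to both drives,
# so either drive alone can serve reads if the other fails.

class Mirror:
    def __init__(self):
        self.drives = [{}, {}]          # two drives, as block -> data maps

    def write(self, block, data):
        for drive in self.drives:       # the duplicated write is the RAID 1 cost
            drive[block] = data

    def read(self, block, failed=None):
        for i, drive in enumerate(self.drives):
            if i != failed:             # skip a failed drive, if any
                return drive[block]

array = Mirror()
array.write(7, b"payroll")
# The data survives the loss of either drive:
assert array.read(7, failed=0) == b"payroll"
assert array.read(7, failed=1) == b"payroll"
```

Since a controller only has to issue each write twice, with no parity math, the near-single-drive performance we measured above is what the arithmetic would lead you to expect.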

Complete Performance: RAID 1

RAID 1 Performance
AAA-UDMA SuperTrak100 FastTrak100 IDE Only SIDE RAID100 MegaRAID 100
CCW01 35.4 36 38.1 38.5 39.1 39.5
File server Low
Total I/O/sec 71.84088 80.17033 81.24 72.31409 75.40166 74.52
Total MB/sec 0.765489 0.878465 0.89 0.800398 0.831817 0.81
Avg I/O response 13.917 12.073732 12.3057 13.82637 13.26002 13.4167
CPU Util 0.596753 1.073732 1.11 0.894611 1.004935 1.7
File server Med
Total I/O/sec 110.9632 96.01303 113.77 115.9632 117.1143 114.16
Total MB/sec 1.198263 1.035771 1.23 1.254932 1.269665 1.22
Avg I/O response 288.3103 333.2256 281.212 275.8972 273.1716 280.284
CPU Util 0.90666 1.407705 1.33 1.25258 1.427925 2.2
File server High
Total I/O/sec 124.7053 105.4753 131.18 132.4862 132.8397 130.21
Total MB/sec 1.339324 1.125384 1.41 1.433341 1.439469 1.41
Avg I/O response 1025.75 1212.078 974.849 965.457 962.9287 982.49
CPU Util 1.08758 1.594545 1.48 1.694932 1.57466 2.52
Database Low
Total I/O/sec 71.66333 82.99024 77.4 71.2335 75.48504 72.71
Total MB/sec 0.55987 0.648361 0.6 0.556512 0.589727 0.57
Avg I/O response 13.95172 12.04376 12.9166 14.0341 13.24494 13.7492
CPU Util 0.774591 1.295239 1.09 0.979878 1.095085 1.73
Database Med
Total I/O/sec 112.1667 95.4798 113.46 115.8976 116.6244 112.74
Total MB/sec 0.876303 0.745936 0.89 0.90545 0.911128 0.88
Avg I/O response 285.1995 335.1254 281.93 276.0633 274.3215 283.787
CPU Util 0.95822 1.418879 1.38 1.434361 1.487205 2.33
Database High
Total I/O/sec 126.0247 106.4353 131.87 132.4817 133.2462 130.25
Total MB/sec 0.984568 0.831526 1.03 1.035013 1.040986 1.02
Avg I/O response 1014.974 1200.898 970.104 965.5515 959.9097 982.319
CPU Util 1.111758 1.59232 1.62 1.549068 1.684221 2.67
Workstation Low
Total I/O/sec 85.6204 93.84052 94.34 85.83641 90.1141 87.73
Total MB/sec 0.668909 0.733129 0.74 0.670597 0.704016 0.69
Avg I/O response 11.67704 10.65443 10.5971 11.64828 11.09437 11.3956
CPU Util 0.829715 1.19184 1.27 0.941466 1.275376 1.94
Workstation Med
Total I/O/sec 127.12 109.85 130.11 132.79 133.68 129.63
Total MB/sec 0.99 0.86 1.02 1.038 1.04 1.01
Avg I/O response 251.658 291.119 246.092 240.941 239.346 246.806
CPU Util 1.02 1.44 1.54 1.44 1.48 2.56
Workstation High
Total I/O/sec 141.8096 120.0808 149.7 150.7187 150.5597 147.59
Total MB/sec 1.107887 0.938131 1.17 1.17749 1.176348 1.15
Avg I/O response 902.0593 1065.201 854.563 848.6778 849.669 866.961
CPU Util 1.228573 1.637439 1.74 1.61234 1.675899 2.97


There is little question that RAID is making its way into the world of home computers. Once reserved only for servers and high end workstations, the increasing power of the CPU and the decreasing price of hard drives both made the proliferation of RAID possible. There is, however, quite a bit of misunderstanding regarding RAID and its features.

The pages above should have helped straighten out the technology and principles behind RAID and, at the same time, given the reader a good idea of what kind of performance to expect out of a RAID array. Software RAID chips like the ones tested here are found integrated on a variety of motherboards. In addition, PCI software RAID cards have been falling in price for some time now and can be purchased for a rather good price.

There is no question that a software RAID 0 IDE array will make your computer run faster, as the Content Creation Winstone 2001 scores clearly showed. Performance gains on the order of 13% are not negligible. Just be sure that your software RAID chip is set to use its optimal stripe size, which we found in previous sections. The difference between a good stripe size for your card and a bad one can mean the difference between being faster than a single IDE drive and being slower than it.

Don't let the speed increase that comes with a RAID 0 array fool you into thinking that you can use older drives and get the same performance. The fact of the matter is that unless the IDE drives in the array are fast, the array simply won't perform well. The same can be said of mixing fast drives and slow drives: the performance of the array will go down.

For those not interested in speed, there are two solutions for you: RAID 1 and RAID 5. We found that mirroring, or RAID 1, provides a good way to maintain data integrity without a major loss of speed. Performance of our RAID 1 arrays was nearly identical to that of the single IDE drive we tested.

RAID 5 solutions should be reserved only for those who need it. We saw that a RAID 5 array can really decrease the performance of a set of drives, enough so that no home user will likely opt for this solution. RAID 5 is good in situations where data integrity is extremely important as well as in situations where the CPU is constantly under extreme load and disk access occurs on a regular basis. In these situations, the cons of RAID 5 are offset by the advantages of a very reliable data structure as well as very low CPU usage.

Of the three software RAID cards tested, the Iwill SIDE RAID100 and its Highpoint HPT370A chip provided the best RAID 0 performance, offering a bit more speed than the competing chips. On the RAID 1 side, it was the MegaRAID 100 and the AMI MG80649 chip that resulted in the best array performance. Finally, when it came to hardware RAID 5, both cards performed similarly in real-world tests. We would recommend the Promise SuperTrak100 over the Adaptec AAA-UDMA for a few reasons, including the larger standard cache size and extra drive capabilities.
