The Test

Our hard drive test bed is designed to shift the bottlenecks, as much as possible, onto the hard drive, while still remaining within reason. To accomplish that, our test bed is configured as follows:

Intel Pentium 4 Extreme Edition 3.4GHz
Intel D875PBZ Motherboard
1GB DDR400 SDRAM
ATI Radeon 9800 Pro (128MB)
Creative Labs Audigy
Ultra ATA/100 or Serial ATA 150 cables were used where appropriate

The important drivers used are as follows:

Intel Chipset INF 5.1.1002
ATI Catalyst 4.5
Windows XP Service Pack 1 (no further updates were installed)

What's important to point out is that although we could have outfitted our test bed with only 256MB of memory, we wanted to avoid exaggerating the performance impact of the hard drive. After all, if your system is swapping to disk a lot, you should be considering a memory upgrade before, or in tandem with, a hard drive upgrade.

The tests that we run are as follows:

Business Winstone IPEAK - a playback test of all of the IO operations that occur within Business Winstone 2004.

Content Creation IPEAK - a playback test of all of the IO operations that occur within Multimedia Content Creation Winstone 2004.

Business Winstone 2004 - the official Business Winstone 2004 test suite.

Multimedia Content Creation Winstone 2004 - the official Multimedia Content Creation Winstone 2004 test suite.

SYSMark 2004 - the official SYSMark 2004 test suite.

Far Cry Level Load Test - a timed test of loading a level in Far Cry.

Unreal Tournament 2004 Level Load Test - a timed test of loading a level in Unreal Tournament 2004.

More details about each individual test will appear in the section of the review dedicated to that particular test.

Comments

  • WaltC - Sunday, July 4, 2004 - link

    There are so many basic errors in this article that it's difficult to know just where to start, but I'll wing it...;)

    From the article:

    "The overall SYSMark performance graph pretty much says it all - a slight, but completely unnoticeable, performance increase, thanks to RAID-0, is what buying a second drive will get you."

    Heh...;) Next time you review a 3d card you could use all of the "real world" benchmarks you selected for this article and conclude that there's "no difference in performance" between a GF4 and a 6800U, or an R8500 and an x800PE, too...;) That would be, of course, because none of these "real world" benchmarks you selected (SYSMark, Winstone, etc.) was created for the specific purpose of measuring 3d gpu performance. Rather, they measure things other than 3d-card performance, and so the kind of 3d card you install would have minimal to no impact on these benchmark scores. Likewise, in this case, it's the same with hard drive performance relative to the functions measured by the "real world" benchmarks you used.

    Basically, overall SYSMark scores, for instance, may give possibly 10% (or less) of their weight to the performance of the hard drive arrangement in the system tested. So, even if the MB/s read from hard disk for RAID 0 is *double* that of normal single-drive IDE in the tested system, because these benchmarks spend 90% or more of their time in the cpu and system ram doing things other than testing HD performance, they may reflect only a tiny, near-insignificant increase in overall performance between RAID 0 and single-drive IDE systems--which is exactly what you report.
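
    That weighting argument is just Amdahl's law. A back-of-the-envelope sketch (Python; the 10% disk-bound share and the 2x RAID 0 gain are illustrative assumptions, not measured values):

    ```python
    # Amdahl's-law-style estimate: overall speedup when only a fraction
    # of a benchmark's runtime actually scales with disk throughput.
    # The 10% disk-bound share and 2x RAID 0 gain are assumptions.

    def overall_speedup(disk_fraction, disk_speedup):
        """Whole-benchmark speedup when only `disk_fraction` of the
        runtime is spent waiting on the disk."""
        return 1.0 / ((1.0 - disk_fraction) + disk_fraction / disk_speedup)

    # Even if RAID 0 literally doubled disk throughput...
    print(overall_speedup(0.10, 2.0))  # ~1.053 -> about a 5% overall gain
    print(overall_speedup(0.05, 2.0))  # ~1.026 -> under a 3% overall gain
    ```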

    But that's because all of the "real world" benchmarks you used here are designed to tell you little to nothing specifically about hard-drive performance, just as they are not suitable for use in evaluating performance differences between 3d gpus, either. Your conclusions as I quoted them above, to the effect that these "real world" benchmark results prove that RAID 0 has no impact on "real world" performance, are therefore invalid. The problem is that the software you used doesn't specifically attempt to measure the real-world read & write performance of RAID 0, or even the performance of single-drive IDE for that matter, much less provide any basis from which to compare them and draw the conclusions you've reached.

    I'd recommend at this point that you return to your own article and carefully read the descriptions of the "real world" benchmarks you used, as quoted by you (verbatim in your article, direct from the purveyors of these benchmarks), and search for even one which declares: "The express purpose of this benchmark is to measure, in terms of MB/s, the real-world read and write performance of hard drives and their associated controllers." None of the "real-world" benchmarks you used makes such a declaration of purpose, do they?

    Next, although I consider this really a minor footnote in comparison to the basic flaw in your review method here and the inaccuracies resulting in the inappropriate conclusions you've reached, I have to second what others have said in response to your article: if your intent is actually to measure hard drive and controller read/write performance at some point, and to then draw conclusions and make general recommendations, be mindful that just as there are performance differences among hard drives made by competing companies, there are also differences between the hard drive controllers different companies make--and this certainly applies to standard single-drive IDE controllers as well as to RAID controllers. So I think you want to avoid drawing blanket conclusions based merely on even the appropriate testing of a single manufacturer's hard drive controller, regardless of whether it's a RAID controller or something else. One size surely doesn't fit all.

    As to your conclusions in this article, again, I'm also really surprised that you didn't logically consider their ramifications, apparently. I'm surprised it didn't occur to you that if it were true that RAID 0 had no impact on read/write drive performance, it would also have to be true that Intel, nVidia (and all the other core-logic chip and HD-controller manufacturers to which this applies), not to mention controller manufacturers like Promise, are just wasting their time and throwing good money after bad in their development and deployment of RAID 0 controllers.

    I think you'll have to agree that this is an illogical proposition, and that all of these manufacturers clearly believe their RAID 0 implementations have a definite performance value over standard single-drive IDE--else the only kind of RAID development we'd see is RAID mirroring for the purpose of concurrent backup.

    In reading some of the responses in this thread, it's obvious that a lot of your readership really doesn't understand the real purpose of RAID 0, and views it as a "marketing gimmick" of some ill-defined and vague nature that in reality does nothing and provides no performance advantages over standard IDE controller support. I think it's unfortunate that you haven't served them in providing them with worthwhile information in this regard, but instead are merely echoing many of the myths that persist as to RAID 0, myths based in ignorance as opposed to knowledge. My opinion as to the value of RAID 0 is as follows:

    For years, ever since the first hard drives emerged, the chief barrier and bottleneck to hard drive performance has always been found within hard drives themselves, in the mechanisms that have to do with how hard drives work--platters, heads, rotational rate, platter size and density, etc. The bottleneck to IDE hard drive performance, measured in MB/s read & write performance, has actually never been the host-bus interface for the drive, and even today the vintage ATA100 bus interface is, on average, more than twice as fast as the fastest mass-market IDE drives you can buy, which average 30-50MB/s in sustained reads from the platters.

    Drives can "burst" today right up to the ceiling of the host-bus interface they support, but these transfer speeds only pertain to data in the drive's cache transferring to the host bus, and do not apply to data which must be retrieved from the platters because it isn't in the cache--which is when we drop back to the maximums currently possible with platter technology--30-50MB/s depending on the drive.
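
    To put rough numbers on the burst-vs-sustained distinction (a sketch; the cache hit rates and throughput figures below are illustrative assumptions):

    ```python
    # Effective throughput when some requests are served from the drive's
    # onboard cache at interface ("burst") speed and the rest must come
    # off the platters. All figures are illustrative assumptions.

    def effective_mb_per_s(hit_rate, burst, sustained):
        """Time-weighted blend: each MB takes 1/burst seconds on a
        cache hit and 1/sustained seconds on a miss."""
        return 1.0 / (hit_rate / burst + (1.0 - hit_rate) / sustained)

    # ATA/100 interface (100 MB/s burst) vs ~40 MB/s sustained platter reads:
    print(effective_mb_per_s(0.10, 100.0, 40.0))  # ~42.6 MB/s
    print(effective_mb_per_s(0.50, 100.0, 40.0))  # ~57.1 MB/s
    # Even generous cache hit rates leave you far below the interface ceiling.
    ```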

    Increases in platter density and rotational speeds, and increases in the amount of onboard cache in hard drives, have been the way that hard drive performance has traditionally improved. At a certain point--say 7,200 rpm for platter rotation--an equilibrium of sorts is reached in terms of economies of scale in the manufacture of hard drives, and pushing the platter rotational speed beyond that point--to 10,000 rpm and up--results in marked diminishing returns in both price and performance, and the price of hard drives then begins to skyrocket in cost per megabyte (thermal issues and other things also escalate to further complicate matters.) So the bottom line for mass-market IDE drives in terms of ultimate maximum performance is drawn both by cost and by the current state-of-the-art ceilings in hard drive manufacturing.

    Enter RAID 0 as a relatively inexpensive, workable, and reliable solution to the performance--and capacity--bottlenecks imposed by single-drive manufacturing. With RAID 0, striped according to the average file size that best fits the individual user's environment, it's fairly common to see read speeds (and sometimes write speeds, too) in MB/s go to *double* what is possible with either single drive in the RAID 0 setup when you run it individually on a standard IDE controller, regardless of the host-bus interface.
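
    Mechanically, striping is nothing exotic--just a round-robin mapping of fixed-size chunks across the member drives, which is where the near-2x sequential figure comes from. A minimal sketch (the 64KB stripe and two-drive array are arbitrary illustrative choices):

    ```python
    # Minimal model of RAID 0 addressing: logical bytes are split into
    # fixed-size stripe chunks dealt round-robin across member drives.
    # The 64KB chunk size and 2 drives are arbitrary choices.

    STRIPE_SIZE = 64 * 1024  # bytes per chunk
    NUM_DRIVES = 2

    def locate(logical_offset):
        """Map a logical byte offset to (drive index, offset on that drive)."""
        chunk = logical_offset // STRIPE_SIZE
        drive = chunk % NUM_DRIVES
        drive_chunk = chunk // NUM_DRIVES
        return drive, drive_chunk * STRIPE_SIZE + logical_offset % STRIPE_SIZE

    # A 256KB sequential read touches both drives, two chunks each,
    # so both sets of heads stream data at once:
    for off in range(0, 256 * 1024, STRIPE_SIZE):
        print(locate(off))  # drive alternates 0, 1, 0, 1
    ```

    Note that a file smaller than the stripe size lands entirely on one drive, which is why matching the stripe size to your typical file sizes matters so much.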

    At home I've been running a total of 4 WD ATA100 100GB PATA drives for the last couple of years. Two of them--the older 2MB-cache versions--I run singly on IDE 0 as master/slave through the onboard IDE controller, and the other two are 8MB-cache WD ATA100 100GB drives running in RAID 0 from a PCI Promise TX2K RAID controller as a single 200GB drive, out of which I have created several partitions.

    From the standpoint of Windows, the two drives running through the Promise controller in RAID 0 are transparent, and indistinguishable in operation and management from a single 200GB physical hard drive. What I get from it is a 200GB drive with read/write performance up to double the speed possible with each single drive, a 200GB RAID 0 drive utilizing 16MB of combined onboard drive cache, and a 200GB hard drive which formats, partitions, and behaves just like an actual 200GB single drive but which costs significantly less (though not, to be fair, if I include the cost of the RAID controller--but I'm willing to pay it for performance ceilings just not possible with a current 200GB single IDE drive.)

    Here are some of the common myths about such a setup that I hear:

    (1) The RAID 0 performance benefit is a red herring because you don't always get double the performance of a single drive. It's silly to say that, imo, since single-drive performance isn't consistent either: the speed at which data can be read depends a lot on where it sits on the platters, just as it does in a RAID drive. What's important to RAID 0 performance, and is certainly no red herring, is that read/write performance is almost always *higher* than the same drive run in single-drive operation on IDE, and can reach double the speed at various times, especially if the user has selected the proper stripe size for his personal environment.

    (2) RAID 0 is unsafe for routine use because the drives aren't mirrored. The fact is that RAID 0 is every bit as safe and secure as normal single-drive IDE use, as those aren't mirrored, either (which you'd think ought to be common sense, right?)...;) As with single-drive use, the best way to protect your RAID 0 drive data is to *back it up* to reliable media on a regular basis.

    On a personal note, one of my older WDs at home died a couple of weeks ago of natural causes--WD's diagnostic software showed the drive unable to complete both SMART diagnostic checks, so I know the drive is completely gone. The failed drive was my IDE primary slave, not one of the RAID drives. Apart from what I had backed up, I lost all the data on it, of course. Proves conclusively that single-drive operation is no defense against data loss...;)

    OTOH, in two+ years of daily RAID 0 operation, I have yet to lose data in any fashion from it, and have never had to reformat a RAID 0 drive partition because of data loss, etc. It has consistently functioned as reliably as my single IDE drives, and indeed my IDE single-drive failure was the first such failure I've had in several years with a hard drive, regardless of controller.

    If people would think rationally about it they'd understand that the drives connected to the RAID controller are the same drives when connected individually to the standard IDE controller, and work in exactly the same way. The RAID difference is a property of the controller, not the drive, and since the drives are the same, the probability of failure is exactly the same for a physical drive connected to a RAID controller and the same drive connected to an IDE controller. There's just no difference.

    (3) Because RAID 0 employs two drives to form one combined drive, the probability of a RAID 0 drive failure is exactly twice as high as it is for a single drive. This is another of those myths that circulates through rumor because people simply don't stop to think it through. It is true that adding a second drive--whether as a slave on the primary IDE channel or as the second drive in a RAID 0 configuration--elevates the chance that "a drive" will fail slightly above the chance of failure presented by a single drive, since you now have two drives running instead of one. But does this mean you have increased the probability that a drive will fail by 100%? If you think about it, that makes no sense because...

    If I install a single drive which, just for the sake of example, is of sufficient quality that I can reasonably expect it to operate daily for three years, and then I add another drive of exactly the same quality, how can I rationally expect both drives to operate reliably for anything less than three years, since the reliability of either drive is not diminished in the least merely by the addition of another drive just like it? I mean, how does it follow that adding in a second drive just like the first suddenly means I can expect a drive failure in 18 months, instead of three years?...;) Adding a second drive does not diminish the quality of the first, since the second drive is exactly like the first and is of equal quality, and hence both drives should theoretically be equal in terms of longevity.

    But the rumor mongering about RAID 0 is that adding in a second drive somehow means that the theoretical operational reliability of *each* drive is magically reduced by 50%...;) That's nonsense of course, since component failure is entirely an individual affair, and is not affected at all by the number of such components in a system. The best way to project component reliability, then, is not by the number of like components in a system, but rather by the *quality* of each of those components when considered individually. Considering components in "pairs," or in "quads," etc., tells us nothing about the likelihood that "a component" among them will fail.

    Look at the converse as proof: If I have two drives connected to IDE 0 as m/s, and I expect each of those drives to last for three years, does it follow logically that if I remove the slave drive that I increase the projected longevity of the master drive to six years?...;) Of course not--the projected longevity is the same, whether it's the master drive alone, or master and slave combined, because projected component longevity is calculated completely on an individual basis, and is unaffected entirely by the number of such components in a system. The fact is that I could remove the slave drive and the next day the master could fail...;) But that failure would have had nothing whatever to do with the presence or absence of the second drive.
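
    A quick simulation separates the two quantities that get conflated here (the 5% annual failure probability is an arbitrary assumption): each drive's individual chance of failing is identical whether it runs alone or beside a second drive, even though the chance that at least one drive in the box fails does rise with the drive count.

    ```python
    import random

    # Monte Carlo check: adding a second identical drive does not change
    # either drive's individual failure probability. The 5% annual
    # failure probability is an arbitrary assumption.

    P_FAIL = 0.05
    TRIALS = 1_000_000
    random.seed(42)

    solo = 0               # failures in a single-drive system
    drive_a = drive_b = 0  # per-drive failures in a two-drive system
    either = 0             # at least one of the pair fails

    for _ in range(TRIALS):
        solo += random.random() < P_FAIL
        a = random.random() < P_FAIL
        b = random.random() < P_FAIL
        drive_a += a
        drive_b += b
        either += a or b

    print(solo / TRIALS)     # ~0.05
    print(drive_a / TRIALS)  # ~0.05 -- unchanged by the second drive
    print(drive_b / TRIALS)  # ~0.05
    print(either / TRIALS)   # ~0.0975 = 1 - (1 - 0.05)**2
    ```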

    Putting it another way, does it follow that one 512mb DIMM in a system will last twice as long as two 512mb DIMMs in that system? If I have one floppy drive is it reasonable to expect that adding another just like it will cut the projected longevity of each floppy in half? If I have a motherboard with four USB ports, does it follow that by disabling three of them the theoretical longevity of the remaining USB port will be quadrupled? No? Well, neither does it follow that enabling all four ports will quarter the projected longevity of any one of them, either.

    Consider as well the plight of the hard drive makers if the numerical theory of failure likelihood had legs: if it were true that as the number of like components increases, the odds of failure for each of them increase by 100%, irrespective of individual component quality, then assembly-line manufacturing of the type our civilization depends on would have been impossible, since after manufacturing x-number of widgets they would all begin to fail...;)

    One last example: my wife and I each bought new cars in '98. Both cars included four factory-installed tires meeting the road. Flash forward four years--and I had replaced my wife's entire set of tires with an entirely different make of tire, because with her factory tires she suffered two tread separations while driving--no accidents though as she was very fortunate, and the other two constantly lost air inexplicably. All the difference with the new set. As for my factory tires, however, I'm still driving on them today, with tread to spare, and never a blow-out or leak since '98. The cars weigh nearly the same (mine is actually about 500lbs heavier), the cars are within 5,000 miles of each other in total mileage, and neither of us is lead-footed. Additionally, I serviced both cars every 3,000 miles with an oil change and tire rotation, balancing, inflation, etc.

    The stark variable between us, as it turned out, was that my factory-installed tires were of a much higher quality than her factory-installed tires, as I discovered when replacing hers. It's yet another example in reality of how the number of like components in a system is far less important than the quality of those components individually, when making projections as to when any single component among them might fail.

    Anyway, I think it would be nice if we could move into the 21st century when talking about RAID 0, and realize that crossing ourselves, throwing salt over a shoulder, or avoiding walking under ladders won't add anything in the way of longevity to our individual components, nor will this behavior in any way reduce that longevity, which is intrinsic to the quality of the component, regardless of number. Given time, all components will fail, but when they fail, they always fail individually; being one of many has nothing to do with it, but being crappy has everything to do with it, which is the point to remember...;)
  • PrinceGaz - Saturday, July 3, 2004 - link

    The article pretty much confirmed my feeling that for general day-to-day usage, RAID 0 is more trouble than it's worth.

    There are times when RAID 0 could theoretically help: extracting large (CD-image-sized) archives, or copying (not moving) a large file to another folder on the same drive. Even though I almost exclusively use CD images and Daemon Tools these days, the time spent extracting or copying them is negligible, and certainly not worth the considerably longer amount of time I'd need to spend when either drive in a RAID 0 array fails.

    It's true that Windows and applications will load faster from a RAID 0 array, but again we're just talking a second or two for even the largest applications. As for Windows starting up, I personally never turn my main box off except when doing a hardware change, so that's not an issue for me; but for those who do, it's unlikely to be more than five or six seconds' difference, so it's hardly the end of the world. It would take an awful lot longer to reinstall Windows XP when one of the drives in the array fails than the few seconds saved each morning.

    I also happen to do video capture and processing which involves files upwards of ten gigs in size and feel RAID 0 is worthless here too, provided the single drive you capture to can keep up with the video bitrate (my Maxtor DiamondMax Plus9 7200rpm drive has no trouble at all with uncompressed lossless Huffyuv encoded 768x576 @ 25fps).
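
    For what it's worth, the raw rate of that capture format is easy to estimate (a back-of-the-envelope sketch; the 4:2:2 sampling and the ~2:1 Huffyuv ratio are assumptions):

    ```python
    # Back-of-envelope bitrate for a 768x576 @ 25fps capture. YUV 4:2:2
    # sampling (2 bytes/pixel) and a ~2:1 lossless Huffyuv ratio are
    # assumptions for illustration.

    width, height, fps = 768, 576, 25
    bytes_per_pixel = 2  # YUV 4:2:2

    raw_mb_per_s = width * height * bytes_per_pixel * fps / 1_000_000
    print(raw_mb_per_s)      # ~22.1 MB/s uncompressed
    print(raw_mb_per_s / 2)  # ~11.1 MB/s after ~2:1 Huffyuv compression
    # Either figure is comfortably within a single 7200rpm drive's
    # sustained write rate, supporting the point above.
    ```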

    When it comes to processing the video, I read the source from one drive and write the output to a different physical hard drive, which works faster than any RAID 0 array ever could--one drive is doing nothing but reading the source file while the other only needs to write the result. With a RAID 0 array, both drives would be constantly switching between reading and writing two separate files, which would result in constant seek-time overheads, even assuming the two-drive array were twice as fast as one drive (which they never are).
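
    Considering only the I/O side, a rough model shows why the dedicated read-drive/write-drive split wins (the throughput and seek-penalty figures are pure assumptions for illustration):

    ```python
    # Rough I/O-time model for processing a 10GB video file:
    # dedicated read drive + dedicated write drive vs. a two-drive
    # RAID 0 array juggling both streams. All figures are assumptions.

    SIZE_MB = 10 * 1024     # 10GB source file
    DRIVE_MB_S = 40.0       # sustained single-drive throughput
    SEEK_PENALTY = 0.5      # assume interleaving read/write streams
                            # costs the array half its throughput

    # Split: the read and the write proceed fully in parallel.
    split_s = SIZE_MB / DRIVE_MB_S

    # RAID 0: read + write = 2x the data through one logical volume
    # at an ideal 2x rate, degraded by the stream-switching penalty.
    raid_s = 2 * SIZE_MB / (2 * DRIVE_MB_S * SEEK_PENALTY)

    print(split_s / 60)  # ~4.3 minutes of I/O for the split drives
    print(raid_s / 60)   # ~8.5 minutes of I/O for the RAID 0 array
    ```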

    So IMO, although the article could have included a few more details about the exact setup, it was overall spot on in suggesting you don't use onboard RAID 0 for desktop and home machines. And I'd add that you're better off *without* RAID 0, keeping the two drives as separate partitions, if you're into video editing.
  • Nighteye2 - Saturday, July 3, 2004 - link

    Adding to all the comments already given, the Intel RAID is not very good as far as integrated RAID goes:

    http://www.tbreak.com/reviews/article.php?cat=stor...

    Especially for business benchmarks:

    http://www.tbreak.com/reviews/article.php?cat=stor...

    Also, notice the increase in performance between single and RAID in the first link.

    If you're HD-limited, RAID 0 helps a lot--which is why using Raptors skews the results of the tests Anand has done for this article.
  • Pumpkinierre - Friday, July 2, 2004 - link

    Apparently anything CPU-limited won't be better with RAID 0:

    http://faq.storagereview.com/SingleDriveVsRaid0

    This includes encoding (I don't know about rendering, but that can be CPU-intensive as well as GPU-intensive). Large sequential reads with minimal CPU requirements will benefit from RAID, e.g. simple file merging. You are better off splitting the RAID up for encoding etc. and using one disk as the read and the other as the write, on different controllers.

    Games only benefit in the loading stage, and only if large files are required, e.g. bitmaps in Baldur's Gate.

    RAID 1 has the advantage of backup recovery as well as improved read speeds, which is more beneficial to desktop use than improved writes. RAID 0 has the capacity advantage. So if size is not the problem (and it never is!), RAID 1 is better for the desktop than RAID 0. I'm sure if they varied the stripe size then game loading times would be improved. Even AT had one game load substantially faster (equivalent to the double-platter 74GB big-brother Raptor). Perhaps an analysis of game file structure and loading by AT would be more beneficial to readers.

  • KF - Friday, July 2, 2004 - link

    > It's simple, really. Locating data on one disk is faster than
    > locating it on two disks simultaneously. That is no matter
    > which controller you use.

    Sending two seek commands versus one should add negligible time. The actual seeks would be done concurrently. The rotational latencies of the two drives are independent. Therefore the time to locate the data should be very close to the same as for a single drive.

    However, if the time to locate the data swamps the data transfer time--say it takes twenty times as long--then yes, doubling the data transfer rate is not going to show much. So according to this idea, almost all file transfers take place in approximately the seek + rotational latency time, and the remainder of the transfer is negligible. The problem is that the data transfer would be even more negligible with more drives. Let's say the actual data transfer accounts for 5% with one drive. Then it would be 2-3% for 2 drives, and 1% for 4 drives. OTOH, people are claiming that with higher RAID, you do get dramatic differences, not negligible differences.
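
    Putting illustrative numbers on that last point (the 14ms positioning time and 5% transfer share are assumptions, not measurements):

    ```python
    # If a request is fixed positioning time (seek + rotational latency)
    # plus transfer time that scales with the number of striped drives,
    # faster transfer only shrinks the small part. Figures are assumptions.

    POSITION_MS = 14.0                       # fixed per-request overhead
    TRANSFER_MS = POSITION_MS * 0.05 / 0.95  # transfer = 5% of the total

    for drives in (1, 2, 4):
        total = POSITION_MS + TRANSFER_MS / drives
        print(drives, round(total, 2), "ms")
    # 1 -> 14.74ms, 2 -> 14.37ms, 4 -> 14.18ms: a ~4% best case, which is
    # why seek-bound access patterns show so little gain from striping.
    ```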
  • KF - Friday, July 2, 2004 - link

    >Let me get this straight, you think apps today (I assume you mean
    >desktop/office apps) aren't dependent enough on disk I/O, and should start
    >to be written so they are more I/O bound?

    >I hope you don't mind, but I'm going to put this in the old sig
    >library for use someday. :)

    No you didn't get it straight. Don't worry, Denial, you will understand what it means when they start doing it in the next few years.

    But if you need something for your sig, try this:
    "People have been saying John Kerry eats excrement sandwiches for lunch at the French embassy. No way. Excrement doesn't go with quiche, croissants and chardonnay. Maybe for breakfast."
  • Pollock - Friday, July 2, 2004 - link

    Err, meant #71.
  • qquizz - Friday, July 2, 2004 - link

    For those asking about the Intel Application Accelerator: the 875 chipset doesn't need/support it:
    http://www.intel.com/support/chipsets/iaa/sb/CS-00...
  • MiLLeRBoY - Friday, July 2, 2004 - link

    I have a RAID 0 array and I definitely notice a dramatic improvement in copying files, file compression, and loading times.

    It is definitely worth it.
  • Pollock - Friday, July 2, 2004 - link

    Actually #72, Anand tested level loading in Far Cry and Unreal Tournament 2004, which to my knowledge fit the bill for the games you suggested. The result: RAID 0 was equal or actually a little worse. I guess latencies are still more important than bandwidth here...?
