Parallelism has been a topic of interest within the PC technology industry ever since its inception. The basic principle of computing is to accomplish incredibly large and complicated tasks through the completion of smaller individual tasks, which, in some cases, can be executed concurrently to maximize performance. We've seen examples of exploiting parallelism in computing with technologies such as multiprocessor systems, Hyper-Threading and, of course, the long-missed Voodoo2 SLI.

The benefits of parallelism vary depending on the application. For example, the impact of dual processors or a Hyper-Threading-enabled CPU can be as little as 5% for a normal desktop user, but as much as 50% for a server system. Graphics rendering is virtually infinitely parallelizable, with a doubling in raw GPU power resulting in close to a doubling of performance. But what about hard drive performance? Are two drives better than one?

Of course, the technology that we are talking about is RAID, standing for Redundant Array of Independent (or Inexpensive) Disks. As the name implies, the technology was introduced for redundancy, but has morphed into a cheap way to add performance to your system. With the introduction of its 875P/865 chipsets, Intel brought the two simplest forms of RAID to desktop users for free: RAID 0 and RAID 1. With the majority of Intel's chipset shipments featuring RAID support, desktop users are beginning to experiment, now more than ever, with RAID as a method of increasing performance.
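The difference between the two modes comes down to how a logical block number is mapped onto physical disks. The sketch below illustrates that mapping in simplified form (it is not any particular controller's implementation): RAID 0 stripes blocks round-robin across the drives, while RAID 1 writes every block to all drives.

```python
# Illustrative sketch of RAID 0 striping vs. RAID 1 mirroring -- a
# simplified model, not how any real controller is implemented.

def raid0_map(logical_block: int, num_disks: int) -> tuple:
    """RAID 0: blocks are striped round-robin across the disks."""
    disk = logical_block % num_disks      # which disk holds this block
    offset = logical_block // num_disks   # block offset on that disk
    return (disk, offset)

def raid1_map(logical_block: int, num_disks: int) -> list:
    """RAID 1: every block is mirrored to every disk at the same offset."""
    return [(disk, logical_block) for disk in range(num_disks)]

# Eight logical blocks on a two-drive stripe: even blocks land on disk 0,
# odd blocks on disk 1, so a long sequential read keeps both spindles busy.
for lb in range(8):
    print(lb, "->", raid0_map(lb, 2))
```

Note the trade-off visible even in this toy model: RAID 0 stores each block exactly once (capacity of both drives, no redundancy), while RAID 1 stores every block twice (half the capacity, but either drive can fail).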

On paper, RAID can provide dramatic increases in performance. But as we've shown in our other hard drive reviews, the real world often differs greatly from the realm of synthetic disk benchmarks. So, what happens when you measure the real-world impact of RAID on today's fastest, most disk-limited systems? Should we all start buying two hard drives instead of one? Or should RAID still be used for redundancy and not for performance when it comes to the average desktop user?

Let's find out...

Doubling Theoretical Performance: RAID-0
126 Comments

  • Jalf - Friday, July 02, 2004 - link

    Funny how quick people are to dismiss an article the moment it doesn't confirm what they already believed...

    I might be the only one here, but I'm not really surprised by this article in general.
    RAID has its place, yes, but not as a desktop system.

    Yes, bandwidth goes way up, but so does latency. Instead of locating a file on one drive, you have to locate it on two drives, before you can even start the transfer. With sequential transfers, RAID is obviously faster, but with multiple smaller accesses, it will be slower. There's no magic in it, no faked results, and no incompetent and biased authors of that article.

    It's simple, really. Locating data on one disk is faster than locating it on two disks simultaneously.
    That is no matter which controller you use. Yes, a faster controller might mean a smaller performance penalty, but doesn't change the fact.

    The most expensive part of I/O is the seek time. The actual transfer is fast by comparison.

    The problem is that RAID aids the already acceptable transfer speed, but slows down seek time, which was already a bottleneck.

    So yes, it can improve performance, but only if you have large sequential reads/writes, where you don't need to waste time seeking, and where the faster transfer really becomes useful.

    In other words, *not* on normal desktop systems, and not on normal gaming systems.
  • masher - Friday, July 02, 2004 - link

    > "I'd like to get to the truth about RAID0 for
    > desktop users like myself."

    RAID 0 really isn't significantly faster for most users and apps. It's not due to the reason KF states, though -- HD performance is still very important to most apps.

    But a RAID array doesn't increase performance across the board. Bandwidth goes up sharply... but latency rises as well. The only apps where you'll see large gains are ones that favor bandwidth much more than latency -- such as streaming huge files in a disk-bound mode.

    The Intel onboard RAID controller isn't the best one out there. You can buy a dedicated card and scrape another couple of percentage points out. A small gain for the dollars invested.


  • TheCimmerian - Friday, July 02, 2004 - link

    This is my first post in this forum.
    Let me start by saying that anandtech.com appears to be a great place to get news, and I've enjoyed the articles so far.

    While I agree that the "Raptor RAID0" article has some issues, I fail to see how so many of you can dismiss the results, and even the conclusions.

    Anand has presented a real-world test of a commonly used RAID0 setup against commonly accepted benchmarks.
    Frankly, I'm astounded by the number of "I don't care what his results show, my RAID0 setup is faster" comments. If your array IS faster, please post some evidence! There is way too much anecdotal assertion on this thread for my taste.

    Honestly, I'm poised to purchase a couple of Raptors for a desktop RAID0 setup--based on the general yahoo about the performance benefits of RAID0. I was surprised and concerned to read this article, and the similar articles linked to in this thread. As someone on the verge of dropping several hundred dollars for the supposed increased performance, I'd like to get to the truth about RAID0 for desktop users like myself.

    I appreciate KF's thoughts on "why" RAID0 doesn't make a difference, and I'd like to hear more opinions and thoughts--especially opinions backed up by some kind of evidence!

    Anand pretty much (except for the game tests) confined his test to synthetic benchmarks. Anyone have any results with actual applications and/or files?

    Specifically, I plan(ned?) on using a dual 74GB Raptor RAID0 array as a scratch/capture disk for DV work. DV files are huge (multiple GB), and disk speed is important for smooth and error-free capture from a DV camera. Any thoughts?

    Thanks for the dialog.
  • masher - Friday, July 02, 2004 - link

    > "I can tell you for a fact that my 8 disk RAID
    > 10 array, with 15k 73GB Cheetahs, running on a
    > LSI 320-2, installed in a 133MHZ PCI-X slot..."

    Is it just me, or does Denial sound like he's trying to score chicks by bragging about the size of his array?

    Oh, and BTW Denial... the servers your employer uses don't count. You're either a liar for claiming you run this setup in your personal desktop... or an idiot if you're telling the truth.
  • Zar0n - Friday, July 02, 2004 - link

    Nice article, but very incomplete.
    Next time please include chipsets from VIA & NVIDIA.
    And more modern drives are available, like the Seagate 200GB.
    Also include tests with RAID 1.

    No SCSI drives, keep it real, most ppl have SATA or ATA drives.
  • pookie69 - Friday, July 02, 2004 - link

    There have been A LOT of issues/concerns raised by various ppl here regarding things like benches and configuration setups etc that were left out in the article. I think it would be great if there was a follow-up article to this one in which these issues were addressed and previous things further explained.

    >>> indeed, if ALL :) the issues were addressed in the said follow-up article, it may end-up being the most comprehensive RAID report/review ever!

    Anyways, something for the guys at AnandTech to think about - I think it's hard to overlook the fact that a lot of ppl are feeling quite a bit of discontent at the way this article hit upon its (pre-concluded :) ) conclusion.
  • Denial - Friday, July 02, 2004 - link

    "Then programmers (in some cases) will write their programs differently and the extra speed of RAID 0 will show more in real-life benchmarks."

    Let me get this straight, you think apps today (I assume you mean desktop/office apps) aren't dependent enough on disk I/O, and should start to be written so they are more I/O bound?

    I hope you don't mind, but I'm going to put this in the old sig library for use someday. :)
  • KF - Friday, July 02, 2004 - link

    Denial: You are in denial. The results of Anand's simple-to-understand test are the same as the results that have been reported in overwhelmingly mind-numbingly-detailed reviews at specialized storage sites. This just happens to be about the IMPORTANT latest incarnation, which will no doubt put RAID capability on 90% of new computers, once the Intel production machine is rolling. Until Pariah opined, I wondered if I was the only one that understood those reviews, the way people seem to tout RAID 0 so relentlessly.

    Maybe this will be simple to understand: The authors of programs know what a slug their programs would be if they wrote them in such a way as to depend on the slowest link in the chain; namely the HD. Therefore HD accesses are avoided at all costs, and everything accessed is cached (in memory). The OS (Windows) caches everything out-the-whazoo as well. In other words: all algorithms are selected to preserve locality. Therefore HD speed only shows up during initialization and where there is no way to arrange locality. Therefore real-life benchmarks have a small dependence on HD speed.

    Since HD I/O is interrupt driven, and transfers are DMA, a program does not have to just sit and wait until the I/O is performed. It can do useful work concurrently provided the I/O algorithms look-ahead. Then the data will be there (most of the time) before it is needed.
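    The look-ahead pattern KF describes can be sketched as a simple read-ahead loop: while the program processes chunk N, a background thread is already fetching chunk N+1, so the disk wait overlaps with useful work. This is an illustrative sketch only; the chunk size is an arbitrary assumption.

```python
# Minimal read-ahead sketch: overlap disk I/O with processing by keeping
# one read in flight at all times. Illustrative only -- chunk size and
# the single-worker pool are arbitrary choices for the example.
import concurrent.futures

CHUNK = 64 * 1024  # assumed read-ahead unit (64 KB)

def read_chunks(path):
    """Yield the file's contents chunk by chunk, prefetching the next
    chunk in the background while the caller works on the current one."""
    with open(path, "rb") as f, \
         concurrent.futures.ThreadPoolExecutor(1) as pool:
        pending = pool.submit(f.read, CHUNK)      # prefetch first chunk
        while True:
            chunk = pending.result()              # waits only if read-ahead lagged
            if not chunk:
                break                             # end of file
            pending = pool.submit(f.read, CHUNK)  # start the next read now...
            yield chunk                           # ...while the caller processes this one
```

    If the per-chunk processing takes at least as long as the read, the disk wait disappears from the caller's point of view -- the "data will be there before it is needed" effect described above.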

    As for why the loading of games does not show a RAID 0 boost, I can only guess that they are doing a lot more than just loading HD data into memory. Possibly most of the HD I/O was done before the point that timing was done, and the slowness at that point is due to other operations. Pre-calculating known physics? Buffering major scenery changes?

    I still think people could get a feeling of extra speed during times when the HD IS loading. It may only be a tiny part of the whole time a program is run, but you could notice it during that time.

    Furthermore, if the past is a guide, every new capability that becomes commonplace gradually is made more and more use of, especially where Intel is concerned. (AGP, 2xAGP, USB, DMA66, SSE.) So Intel putting RAID 0 in its chipset means RAID 0 will be used more and more. Then programmers (in some cases) will write their programs differently and the extra speed of RAID 0 will show more in real-life benchmarks. Before that comes about, people will correctly warn that the extra money buys you very little. Fortunately for the rest of us, there are a few people willing to pay for that extra bit, which gets the ball rolling.
  • Pumpkinierre - Friday, July 02, 2004 - link

    #45, #46 Generally the reviews I've seen on RAID 1 have the read rates equal or a little bit more than a single drive, while RAID 0 shows 30%+ improvement. Why? I don't know. To me, in the read part of the deal, it should be the same in RAID 0 or 1.
    With my suggestion of virtual striping, I also suggested variable stripe size in a previous post (not possible in RAID 0 but possible in RAID 1, because the stripes are virtual). Here a smaller stripe size could be used for smaller data size requests and a larger stripe for bigger files or sequential data requests. This would speed up reads significantly and give a net advantage over RAID 0, which is limited to one stripe size at inception. The controller, on request for a particular data file, would optimise the size of the stripe based on the request. For desktops, where data throughput can range from a few KB to gigabytes, it would be perfect. This seems possible to me but I haven't heard of anyone implementing it.
  • Pariah - Friday, July 02, 2004 - link

    If you look way back at comment #37 you will see my last paragraph is basically exactly what you said in your last paragraph. I agree completely that the article stunk, and that basically all the storage related articles on this site throughout its history have stunk. I just think that your nitpicking of his usage of the word RAID in the conclusion was one of the least important problems in the article, as anyone with half a brain knew what he was talking about when he said that, regardless of whether it was a valid point or not.
