The Test

Our hard drive test bed is designed to shift the bottleneck onto the hard drive as much as possible, while still remaining within reason. To accomplish that, our test bed is configured as follows:

Intel Pentium 4 Extreme Edition 3.4GHz
Intel D875PBZ Motherboard
1GB DDR400 SDRAM
ATI Radeon 9800 Pro (128MB)
Creative Labs Audigy
Ultra ATA/100 or Serial ATA 150 cables were used where appropriate

The important drivers used are as follows:

Intel Chipset INF 5.1.1002
ATI Catalyst 4.5
Windows XP Service Pack 1 (no further updates were installed)

What's important to point out is that although we could have outfitted our test bed with only 256MB of memory, we wanted to avoid exaggerating the performance impact of the hard drive. After all, if your system is swapping to disk frequently, you should be considering a memory upgrade before, or in tandem with, a hard drive upgrade.

The tests that we run are as follows:

Business Winstone IPEAK - a playback test of all of the I/O operations that occur within Business Winstone 2004.

Content Creation IPEAK - a playback test of all of the I/O operations that occur within Multimedia Content Creation Winstone 2004.

Business Winstone 2004 - the official Business Winstone 2004 test suite.

Multimedia Content Creation Winstone 2004 - the official Multimedia Content Creation Winstone 2004 test suite.

SYSmark 2004 - the official SYSmark 2004 test suite.

Far Cry Level Load Test - a timed test of loading a level in Far Cry.

Unreal Tournament 2004 Level Load Test - a timed test of loading a level in Unreal Tournament 2004.

More details about each individual test will appear in the section of the review dedicated to that particular test.

Comments

  • Arth1 - Thursday, July 1, 2004 - link

    The article contains several factual errors.
    RAID 1, for example, does have *read* speed benefits over a single drive, as you can read one block from one drive and the next block from the other drive at the same time (a sketch of this alternating-read scheme appears after the thread).
    Also, what was the block size used, and what was the stripe size?
    Was the block size doubled when striping (as is normally recommended to keep the read size identical)?
    Since non-Serial ATA drives were part of the test, why weren't THEY tried in a RAID as well? That way we could have seen how much of the effect came from striping and how much from using two Serial ATA ports.
    All in all, a very useless article, I'm afraid.
  • qquizz - Thursday, July 1, 2004 - link

    Hear, hear. What about more ordinary drives?
  • Kishkumen - Thursday, July 1, 2004 - link

    Regarding the Intel Application Accelerator, I would like to know whether that was installed as well. It seems to me that it could potentially affect performance quite a bit. But perhaps it doesn't make a difference? Either way, I would like to know.
  • pieta - Thursday, July 1, 2004 - link

    It's funny to see mention of ATA and performance. If you really want disk performance, get some real SCSI drives. Without tagged command queuing, RAID configurations aren't able to reach their full potential.

    It would be interesting to see hardware sites measure SCSI performance. Sure, ATA has the price point, but with 15K SCSI spinners so cheap these days, the major cost is the investment in the HBA. With people dropping 500 bucks on a video card, why is it so inconceivable that power users would want to run with the best I/O available?

    I was surprised not to see any Iometer benchmarks. IOPS and response times are king in determining disk performance. Iometer is still the best tool, as you can configure workers to match typical workloads (a bare-bones sketch of that style of test appears after this thread).

    Show me a review of the latest dual-ported Ultra320 hardware RAID HBA striped across four 15K spinners. Compare that with a two-drive configuration and the SATA stuff. Show me IOPS, response times, and CPU utilization. That would be meaningful, as people could better justify the extra $200-300 cost of going with a real I/O performer.
  • Nighteye2 - Thursday, July 1, 2004 - link

    Of course, RAID 0 makes little sense for Raptors, which are already so fast that they hardly form a bottleneck.

    RAID 0 makes more sense for slower, cheaper HDs. Try two WD 80GB 8MB-cache hard disks, for example: together they are cheaper than a single Raptor, but I expect performance will be very similar, if not faster.
  • Taracta - Thursday, July 1, 2004 - link

    I am tired of seeing these RAID 0 articles that just throw two disks together, get results contrary to what is expected, and never dig deeper into the problem. I am only posting my comment here because of my respect for this site. Drive technology and methodology have to play a part in any discussion of RAID technology. The principle behind RAID 0 is sound: throughput should be a multiple of the number of drives in the array (you will not get 100%, but close to it). When you don't get this, it should be examined as to WHY.

    One of my suspicions is that incorrect setup of the array is the primary culprit. How is information written to and from the array, and to the individual drives within it? What are the cluster and sector sizes? How does the controller break up the information to be written across the array? Take, for example, an array in which each drive has a minimum data size of 64 bits, giving array sizes of 128 bits for 2 drives, 192 bits for 3 drives, and 256 bits for 4 drives. In initializing your array, do you initialize for 64, 128, 192, or 256 bits? Does it matter? Say you initialize for 64 bits: does the controller write 64 bits to each drive? Does it write 64 bits to the first drive and nothing (null spaces, wasting the extra drives and defeating their purpose) to the others, because it is expecting the full array size (e.g. 128 bits for 2 drives)? Or does it split the 64 bits between the drives, wasting space and killing performance because each drive allocates a minimum of 64 bits? (The striping sketch after this thread shows the kind of mapping I mean.) I was waiting for someone to examine in detail what's happening. Judging from their charts, Xbitlabs came so close that they could almost taste it, but they still jumped to incorrect reasoning.

    I know I am rambling, but in short: the premise of RAID arrays is sound, so why is it not showing up in the test results?
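
The alternating-read behavior Arth1 describes is easy to sketch. Below is a minimal Python simulation, assuming two identical mirror images and a simple round-robin policy; the class name, block size, and file names are illustrative, not anything a real RAID 1 controller exposes:

    # Round-robin reads over a RAID 1 mirror: both drives hold identical
    # data, so consecutive block requests can alternate between them,
    # which is where RAID 1's read-speed benefit comes from.
    BLOCK_SIZE = 64 * 1024  # hypothetical 64KB block

    class Raid1Reader:
        def __init__(self, mirrors):
            self.mirrors = mirrors  # e.g. two open file objects
            self.next_drive = 0     # round-robin cursor

        def read_block(self, block_index):
            drive = self.mirrors[self.next_drive]
            self.next_drive = (self.next_drive + 1) % len(self.mirrors)
            drive.seek(block_index * BLOCK_SIZE)
            return drive.read(BLOCK_SIZE)

    # Usage: both files must be identical images of the same volume.
    # with open("disk0.img", "rb") as d0, open("disk1.img", "rb") as d1:
    #     reader = Raid1Reader([d0, d1])
    #     data = reader.read_block(0) + reader.read_block(1)

In the ideal case the two drives seek in parallel and sequential read throughput approaches twice that of a single drive; writes, of course, still have to go to both mirrors.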
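
Pieta's preferred metrics can be approximated even without Iometer. This is a bare-bones sketch of an Iometer-style random-read worker in Python; the file path, block size, and request count are placeholders, it is POSIX-only (os.pread), and because it does not bypass the OS cache the way Iometer can with raw targets, the absolute numbers will be optimistic:

    import os, random, time

    PATH = "testfile.bin"  # hypothetical pre-created file, much larger than BLOCK
    BLOCK = 4096           # 4KB requests, a typical transactional pattern
    N_REQUESTS = 1000

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    latencies = []
    for _ in range(N_REQUESTS):
        # pick a random block-aligned offset and time a single read
        offset = random.randrange(0, size - BLOCK) // BLOCK * BLOCK
        t0 = time.perf_counter()
        os.pread(fd, BLOCK, offset)
        latencies.append(time.perf_counter() - t0)
    os.close(fd)

    total = sum(latencies)
    print(f"IOPS: {N_REQUESTS / total:.0f}")
    print(f"mean response time: {total / N_REQUESTS * 1000:.2f} ms")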
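
As for Taracta's question about how a controller splits data across members, the heart of RAID 0 is just an address mapping. This sketch shows the usual scheme, assuming a byte-addressed array, a 64KB stripe unit, and two drives (all illustrative values):

    STRIPE_SIZE = 64 * 1024  # bytes per stripe unit
    N_DRIVES = 2

    def map_offset(logical_offset):
        """Map an array byte offset to (drive index, offset on that drive)."""
        stripe_unit = logical_offset // STRIPE_SIZE  # which stripe unit overall
        within = logical_offset % STRIPE_SIZE        # position inside the unit
        drive = stripe_unit % N_DRIVES               # round-robin across members
        row = stripe_unit // N_DRIVES                # stripe row on each drive
        return drive, row * STRIPE_SIZE + within

    # A 128KB transfer starting at offset 0 splits evenly across both drives:
    for off in (0, 64 * 1024):
        print(off, "->", map_offset(off))

Note that any request smaller than the stripe unit lands on a single drive, which is one plausible reason the small, dependent I/Os of typical desktop workloads show so little gain from striping.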
