Test configuration

Testing of the VTrak was performed using build 2004.07.30 of the open-source benchmark IOMeter, chosen for the flexibility it offers when testing storage subsystems. The operating system used was Windows Server 2003 R2, since it supports GPT disks, which are necessary when you want to use more than 2TB per physical volume.

We decided to run a variety of tests on the drives that allowed us to scale the mix of random versus sequential accesses, as well as the disk queue length (DQL) from low to high. The purpose of this is to see how well the interfaces perform from high throughput to high latency, as well as from light to heavy disk loads.

For those of you unfamiliar with Disk Queue Length, it is a performance counter that indicates the number of outstanding disk requests, plus requests currently being serviced, for a particular disk. Microsoft recommends a value of no more than 2 per physical drive, though some seem to think this could be as high as 3 per drive.

The Disk Queue Length counter is commonly used to determine how busy a particular drive is, such as the one hosting a database. If the DQL for a database array averages 2 per drive or more, it is a good indication that disk I/O is becoming a bottleneck and that upgrading your disk subsystem could result in increased performance. Conversely, if it averages less than 2 per drive, upgrading the CPU or memory may be more beneficial, since the disk subsystem is able to keep up with the current workload.
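The rule of thumb above can be sketched as a simple check. This is a hypothetical helper for illustration, not something from the article; the threshold default follows Microsoft's guideline of 2 per physical drive:

```python
def disk_bottleneck(avg_queue_length, drive_count, threshold_per_drive=2.0):
    """Apply the rule of thumb: an average Disk Queue Length above
    ~2 outstanding requests per physical drive suggests the disk
    subsystem is the bottleneck. Returns (per-drive DQL, is_bottleneck)."""
    per_drive = avg_queue_length / drive_count
    return per_drive, per_drive >= threshold_per_drive

# A 12-drive array averaging a total DQL of 30 is over the threshold:
per_drive, busy = disk_bottleneck(30, 12)
print(f"{per_drive:.2f} per drive, disk bottleneck: {busy}")  # 2.50 per drive, disk bottleneck: True
```

The same array averaging a total DQL of 12 (1.0 per drive) would be under the threshold, pointing at CPU or memory as the likelier upgrade.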

Using the LSI 8480E / ServeRAID 6M controller, we created a large RAID 10 array using all 12 disks. The operating system was hosted by a separate controller and pair of disks so that it would not impact the results. The array was then formatted as a single NTFS volume spanning all of the available space, with a 64K allocation unit size.

For testing purposes, we started with a DQL of 2 (which works out to 0.167 DQL per drive) and then incremented the number by two until we reached 36 (3 DQL per drive). We wanted to see how each interface would scale from light to heavy workloads. We did not test above a DQL of 3 per drive since most experts advise against running a storage system at this level for an extended period of time.
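The DQL sweep described above can be generated in a few lines. This is a sketch of the test points, not the actual test harness:

```python
# The DQL sweep from the article: total outstanding I/Os from 2 to 36
# in steps of two, across the 12-drive RAID 10 array.
DRIVES = 12
queue_depths = list(range(2, 37, 2))          # 2, 4, ..., 36 (18 test points)
per_drive = [q / DRIVES for q in queue_depths]

print(queue_depths[0], round(per_drive[0], 3))   # 2 0.167
print(queue_depths[-1], per_drive[-1])           # 36 3.0
```

This confirms the endpoints quoted in the text: 0.167 DQL per drive at the light end and 3.0 per drive at the heavy end.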

Since storage can be accessed in a huge number of ways, we decided to run tests that would give a good indication of performance for almost any scenario. For example, we ran tests at 100% sequential in the event you need to stream large amounts of sequential data off the drives. On the other hand, you may have an application that is extremely random and want to know how well it performs under that type of load. We also measured a mix of random/sequential accesses at key points to better understand how much random access impacts a sequential stream.

Lastly, we used a 64K access size in IOMeter, a 64K NTFS allocation unit, and a 64K RAID stripe size. We did this to obtain the best performance possible for all drives/interfaces, and it is also representative since many databases use 64K access sizes when reading/writing data.
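The benefit of matching the access size, allocation unit, and stripe size can be illustrated with a small alignment check (a sketch, not from the article): a 64K request at a 64K-aligned offset always falls inside a single stripe unit, so no I/O is split across two drives.

```python
STRIPE = 64 * 1024  # 64K stripe unit, matching the article's configuration

def stripes_touched(offset, size, stripe=STRIPE):
    """Number of stripe units a request of `size` bytes at `offset` spans."""
    first = offset // stripe
    last = (offset + size - 1) // stripe
    return last - first + 1

print(stripes_touched(0, 64 * 1024))          # 1: aligned, one drive services it
print(stripes_touched(32 * 1024, 64 * 1024))  # 2: misaligned, split across two drives
```

With a 64K allocation unit, NTFS places file data on 64K boundaries, keeping every 64K request in the aligned, single-stripe case.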

Test hardware

1 x Promise VTrak J300s with single I/O module
1 x LSI Logic 8480E PCI-E SAS HBA
12 x Seagate NL35.1 250GB SATA I Drives
12 x Western Digital 500GB WD5000YS SATA II Drives
12 x Fujitsu MAX3147RC 146GB 15K SAS Drives
1 x IBM EXP400 Chassis
1 x IBM ServeRAID 6M PCI-X SCSI HBA
12 x IBM 146GB 10K SCSI Drives

We'd like to thank Jennifer Juwono and Billy Harrison from Promise, David Nguyen from Western Digital, along with Seagate, Fujitsu and LSI Logic for providing the hardware for this article.

The Promise VTrak J300s Test results – Read Performance

  • JarredWalton - Monday, February 5, 2007 - link

    "or seek times" = "where seek times"
  • yyrkoon - Saturday, February 3, 2007 - link

    Many 'old timers' are going to claim SCSI is better than anything, because it's been around a long time and has a proven track record. What these 'old timers' don't realize is that SAS and SCSI drives share the same ancestry, except that SAS also shares a history with SATA. *shrug*
  • mino - Sunday, February 4, 2007 - link

    Yes, they are.
    However, most posters here do NOT dispute the benefits and superiority of SAS over U320.
    The point is that according to the numbers published, even 7.2K SATA is on par with 10K SCSI320!
    Those numbers simply couldn't be much further from reality than they are.

    Artificially more than halving the performance of a tested platform simply is not acceptable.

    Also, framing the tables as SATA I vs. SATA II vs. SCSI vs. SAS is in itself seriously misleading, to the extent that the best thing for AT to do (provided they were not paid for it!) would be to recall the article, either indefinitely or for a rewrite.

    Actually the reality is:
    SATA I or SATA II driver do not exist; there are only SATA drives in existence as of now.
    Performance-wise on single- to 6-drive systems:
    SATA(7.2k) < SATA(10k) < SCSI(10k) = SAS(10k) < SCSI(15k) = SAS(15k)
    On 8+ drive systems:
    SATA(7.2k) < SATA(10k) < SCSI(10k) < SAS(10k) < SCSI(15k) < SAS(15k)

    For a 12-drive test the results should be:
    SATA(7k) << SCSI(10k) << SAS(15k), which is obvious even before any testing.

    However, a much more beneficial test would be:
    SATA(10k Raptor) vs. SCSI(10k) vs. SAS(10k), with the SCSI and SAS drives ideally from the same line.
  • mino - Sunday, February 4, 2007 - link

    "SATA I or SATA II driver" --> "SATA I or SATA II drives"
  • mino - Saturday, February 3, 2007 - link

    Yes, one sometimes has to make compromises.
    But benchmarking a SCSI setup with 12 drives on a SINGLE cable is a plain stupid, tabloid-like approach.
    This configuration seriously cripples performance and is NEVER used unless there is some very serious reason for it.
  • mino - Saturday, February 3, 2007 - link

    If you had no SCSI option other than the one you used, you should not have published those "SCSI" numbers at all. Those numbers as they are have nothing to do with SCSI being poor; they simply showcase that 3-year-old 10K drives are slower than new 15K drives. Nothing new here.
  • Googer - Friday, February 2, 2007 - link

    That chart is missing the old 5.25 inch drives. The most famous of those was probably the Quantum Bigfoot. Quantum was bought out by Maxtor.

    http://images.anandtech.com/reviews/it/2007/promis...

    http://www.pcguide.com/ref/hdd/op/formIn525-c.html
  • Justin Case - Friday, February 2, 2007 - link

    Maybe the article author should read this...

    http://www.sata-io.org/namingguidelines.asp
  • monsoon - Friday, February 2, 2007 - link

    Hello,

    I change computers frequently, and I have lots of data to store.
    Currently I've got 4 external 300GB drives and 4 external 400GB drives, all of them connected through FireWire.

    I've been looking eagerly for solutions similar to the NORCO DS-1220, but I need to connect the storage unit to laptops as well, so it has to handle RAID 5 all by itself.

    I can't find alternatives on the market, and while the UNRAID solution looks interesting, it's neither safe nor easy to implement.

    Looking forward to external storage devices reviews for home users with big archives.
    Units need to stand the test of time and be there while PCs come and go.
    Ideally, I must be able to replace drives with higher capacity when they get cheaper, without having to replace all of them at the same time.

    It better be silent; well, at least not loud...

    Any idea ?

    Thanks
  • mino - Saturday, February 3, 2007 - link

    Look for some reliable NAS solution (+ a Gbit switch - now dirt cheap).
