Test configuration

Testing of the VTrak was performed using build 2004.07.30 of the open-source benchmark IOMeter, since it offers a great deal of flexibility when testing storage subsystems. The operating system used was Windows Server 2003 R2, since it supports GPT disks, which are required to create volumes larger than 2TB.
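
For readers setting up a similar volume, the GPT conversion itself can be scripted. The snippet below is only a sketch and not part of our test procedure: it drives diskpart from Python, the disk number is an assumption, and the "clean" step wipes whatever is already on that disk, so treat it with care.

```python
# Hypothetical sketch: convert a blank data disk to GPT with diskpart so a
# single volume larger than 2TB can be created. Run from an elevated prompt.
import subprocess
import tempfile

DISK_NUMBER = 1  # assumed: the array's logical disk as shown in Disk Management

# diskpart commands; "clean" destroys any existing partition table on the disk
script = "\n".join([
    f"select disk {DISK_NUMBER}",
    "clean",
    "convert gpt",
])

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

# /s runs the script file non-interactively
subprocess.run(["diskpart", "/s", script_path], check=True)
```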

We ran a variety of tests on the drives that let us scale the ratio of random to sequential accesses, as well as the disk queue length (DQL), from low to high. The purpose of this is to see how well the interfaces perform from high-throughput to high-latency workloads, and from light to heavy disk loads.

For those of you unfamiliar with Disk Queue Length, it is a performance counter that indicates the number of outstanding disk requests, plus the requests currently being serviced, for a particular disk. Microsoft recommends a value of no more than 2 per physical drive, though some administrators consider up to 3 per drive acceptable.

The Disk Queue Length counter is commonly used to determine how busy a particular drive is, such as the one hosting a database. If the DQL for a database array averages 2 or more per drive, it is a good indication that disk I/O is becoming a bottleneck and that upgrading your disk subsystem could improve performance. Alternatively, if it averages less than 2 per drive, upgrading CPU or memory may be more beneficial, since the disk subsystem is able to keep up with the current workload.
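
If you want to watch this counter on your own hardware, the sketch below (not part of our test harness) samples Windows' Avg. Disk Queue Length counter with the typeperf tool and applies the per-drive rule of thumb; the 12-drive count and the _Total counter instance are assumptions you would adjust for your own array.

```python
# Sketch: sample "Avg. Disk Queue Length" via typeperf and report it per drive.
import subprocess

DRIVES_IN_ARRAY = 12  # assumed: number of spindles behind the logical disk
COUNTER = r"\PhysicalDisk(_Total)\Avg. Disk Queue Length"

def sample_dql(samples: int = 30, interval: int = 1) -> float:
    """Average the counter over `samples` readings taken `interval` seconds apart."""
    out = subprocess.run(
        ["typeperf", COUNTER, "-sc", str(samples), "-si", str(interval)],
        capture_output=True, text=True, check=True,
    ).stdout
    values = []
    for line in out.splitlines():
        parts = line.split(",")          # typeperf emits CSV: "timestamp","value"
        if len(parts) != 2:
            continue
        try:
            values.append(float(parts[1].strip().strip('"')))
        except ValueError:
            continue                     # skip the header line and status messages
    return sum(values) / len(values) if values else 0.0

if __name__ == "__main__":
    dql = sample_dql()
    per_drive = dql / DRIVES_IN_ARRAY
    print(f"Avg. DQL = {dql:.2f} ({per_drive:.2f} per drive)")
    if per_drive >= 2:
        print("Disk I/O is likely the bottleneck; consider more or faster spindles.")
    else:
        print("The disks are keeping up; look at CPU or memory first.")
```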

Using the LSI 8480E / ServeRAID 6M controller, we created a large RAID 10 array across all 12 disks. The operating system was hosted by a separate controller and pair of disks so that it would not affect the results. The array was then formatted as a single NTFS volume spanning the entire drive, with a 64K allocation unit size.
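
The format step can likewise be scripted; the snippet below is illustrative only. The E: drive letter and volume label are placeholders, and it assumes a freshly created, unlabeled partition, with the format utility's /A:64K switch setting the allocation unit.

```python
# Sketch: format the array's volume as NTFS with a 64K allocation unit,
# matching the 64K RAID stripe size used for the tests.
import subprocess

VOLUME = "E:"  # assumed drive letter of the newly created partition

subprocess.run(
    ["format.com", VOLUME,
     "/FS:NTFS",   # NTFS file system
     "/A:64K",     # 64K allocation unit (cluster) size
     "/Q",         # quick format
     "/V:VTRAK",   # placeholder volume label
     "/Y"],        # do not prompt for confirmation
    check=True,
)
```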

For testing purposes, we started with a DQL of 2 (which works out to 0.167 per drive) and then incremented it by two until we reached 36 (3 per drive). We wanted to see how each interface would scale from light to heavy workloads. We did not test above a DQL of 3 per drive, since most experts advise against running a storage system at that level for an extended period of time.

Since storage can be accessed in an enormous number of ways, we ran tests that would give a good indication of performance in almost any scenario. For example, we ran tests at 100% sequential for cases where you need to stream large amounts of data off the drives, and at the other extreme for applications whose access patterns are almost entirely random. We also measured mixes of random and sequential accesses at key points in between, to better understand how much random access impacts a sequential stream.
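
To make the resulting test matrix concrete, the sketch below simply enumerates the combinations described above; the specific random/sequential mix points listed are illustrative rather than an exact record of the workloads we ran.

```python
# Sketch: enumerate the test matrix of queue depths and random/sequential mixes.
DRIVES = 12
QUEUE_DEPTHS = range(2, 37, 2)              # DQL of 2, 4, ... 36 outstanding I/Os
RANDOM_PERCENTAGES = [0, 25, 50, 75, 100]   # assumed "key points" for the mix

for depth in QUEUE_DEPTHS:
    for pct_random in RANDOM_PERCENTAGES:
        print(f"DQL={depth:2d} ({depth / DRIVES:.3f} per drive), "
              f"{pct_random:3d}% random / {100 - pct_random:3d}% sequential, "
              f"64K transfers")
```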

Lastly, we used a 64K size throughout: for the IOMeter access size, the NTFS allocation unit, and the RAID stripe size. We did this to obtain the best performance possible for all drives/interfaces, and it is also representative, since many databases read and write data in 64K chunks.
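
If you want to confirm the allocation unit on an existing volume, fsutil reports it as "Bytes Per Cluster"; the short sketch below (drive letter assumed) parses that value and checks it against 64K.

```python
# Sketch: verify a volume's NTFS allocation unit size via fsutil.
import subprocess

VOLUME = "E:"  # assumed drive letter of the test volume

out = subprocess.run(
    ["fsutil", "fsinfo", "ntfsinfo", VOLUME],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "Bytes Per Cluster" in line:
        cluster = int(line.split(":")[1].strip(), 0)  # handles decimal or hex output
        status = "OK" if cluster == 64 * 1024 else "not 64K!"
        print(f"{VOLUME} allocation unit = {cluster} bytes ({status})")
        break
```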

Test hardware

1 x Promise VTrak J300s with single I/O module
1 x LSI Logic 8480E PCI-E SAS HBA
12 x Seagate NL35.1 250GB SATA I Drives
12 x Western Digital 500GB WD5000YS SATA II Drives
12 x Fujitsu MAX3147RC 146GB 15K SAS Drives
1 x IBM EXP400 Chassis
1 x IBM ServeRAID 6M PCI-X SCSI HBA
12 x IBM 146GB 10K SCSI Drives

We'd like to thank Jennifer Juwono and Billy Harrison from Promise, David Nguyen from Western Digital, along with Seagate, Fujitsu, and LSI Logic, for providing the hardware used in this article.

Comments

  • yyrkoon - Friday, February 2, 2007 - link

    When are you guys going to do some reviews on consumer-grade equipment? Well, let me clarify: 'consumer grade' with on-card RAID processor(s). For instance, right now I'm in the market for an 8+ port RAID HBA, but I would like to know if buying a Highpoint 16-port SATA RAID HBA would really be any worse than getting an Areca 8-port HBA for ~$200 USD more. 3Ware, from what I understand, offers the best Linux/Unix support, or does it? If so, would it really make much of a difference in a SOHO application?

    I personally would like to see a comparison of the latest Promise, Highpoint, Areca, 3Ware, etc. controllers. In short, there is a lot out there for a potential buyer such as myself to get lost in, and basically I'm interested in reliability first, speed second (to a point).

    Anyhow, I just thought I'd point out that while you guys do cover a lot in this area, you seem to have a gap where I think it really matters most to your readers (home PC / enthusiast / SOHO crowd).
  • mino - Saturday, February 3, 2007 - link

    I would stay away from Highpoint.
    We have had several instances of a RAID HBA (a new one!) consistently going down AND screwing up the whole RAID5 under some workloads. For the money, one is better off with a QuadFX ASUS board than with Highpoint-like solutions.
    Areca is pretty much on a different level, of course...
  • yyrkoon - Sunday, February 4, 2007 - link

    Again, this only reinforces what I've said: we need a good article on which HBAs are good for reliability, etc.
  • mino - Sunday, February 4, 2007 - link

    Any 3Ware, Areca, LSI, Adaptec solution should be just fine.

    Most people do not actually need RAID5 for home usage, and it is usually cheaper to go _software_ RAID1 with every drive in the RAID attached to a different controller. In such a scenario, even the cheapest or onboard controller offers fault tolerance comparable to high-end RAID5 solutions.

    However, the simplest way to go is really two NAS RAID5 boxes mirroring each other.
  • dropadrop - Tuesday, February 6, 2007 - link

    quote:

    Any 3Ware, Areca, LSI, Adaptec solution should be just fine.


    I would rule out Adaptec and the older LSI chipsets still available (under several brands, Intel for example). We replaced a bunch of Intel 6- and 8-port controllers with top-of-the-line 8-port Adaptec SATA II controllers.

    The performance of the Intel controllers (with LSI chipsets) was terrible. We got about 8-13MB/s sequential writes with RAID 10 arrays, and tested using a lot of different drives. The Adaptec products are a lot better in regard to speed, but keep dropping drives. This seems to be a common problem, but they have no solution.

    I've previously used 3ware without any problems, and would gladly test Areca if they were available here.
  • yyrkoon - Sunday, February 4, 2007 - link

    Why would I want to spend $1300+ per 5-disk array (minus drives) when I could build my own system much cheaper and use the hardware/software I wanted? Just because I don't know which HBAs are more reliable than others (because I obviously can't afford to buy them all) doesn't mean I'm an idiot ;)
  • Bob Markinson - Friday, February 2, 2007 - link

    Interesting review!
    I would have liked to see a comparison with latest-generation 15K SCSI drives, rather than 10K SCSI drives, to see the true SAS interface performance advantage over SCSI. Furthermore, the ServeRAID 6M comes in two versions - one with 128 MB cache and the other with 256 MB cache. Also, there were performance issues with early 7.xx firmware/software revisions on the 6M at high I/O loads - hopefully you ran the tests with the most recent firmware. Write-back cache was enabled on the 6M, right?

  • Lifted - Tuesday, February 6, 2007 - link

    Based on the title of the article, Promise VTrak J300S, you are expecting too much. The "comparison" was more like an ad for the product. What is the point in comparing 10K U320 vs. 15K SAS? It's supposed to tell us what, exactly? You clearly need to look elsewhere for a SAS vs. U320 comparison if that's what you were expecting here. This was more for kicks I think, and perhaps to make the J300S look better than ____ ??? I don't get it, it's just a storage enclosure. The RAID adapters and drives are what determine performance, so why was this apples-to-oranges "performance" review thrown into an enclosure article?

    Odd, quite odd.
  • fjeske - Friday, February 2, 2007 - link

    Isn't it a bit unfair to use old IBM 10K SCSI drives in this comparison? None of the now-Hitachi drives show good performance on StorageReview.com. Compare to Seagate's Cheetah 15K.5 and I think you'll see a difference.

    Also, how was the SCSI setup done? Attaching 12 drives to one U320 bus will obviously saturate it. Servers usually pair them when connecting this many drives.
  • cgaspar - Friday, February 2, 2007 - link

    SAS and SCSI drives have disk write caches disabled by default, as the drives' caches are not battery-backed. IDE and SATA drives frequently have write caching enabled by default. This makes writes much faster, but if you lose power, those writes the drive claimed were committed will be lost, which can be a very bad thing for a database. I'd suggest disabling the write cache on the SATA drives and re-testing (if you still have the gear); I suspect the results will be illuminating.
