Test configuration

Testing of the VTrak was performed using build 2004.07.30 of the open-source benchmark IOMeter, which offers a great deal of flexibility when testing storage subsystems. The operating system used was Windows Server 2003 R2, since it supports GPT disks, which are necessary for volumes larger than 2TB.

We decided to run a variety of tests on the drives that allowed us to scale the ratio of random to sequential accesses, as well as the disk queue length (DQL) from low to high. The purpose is to see how well the interfaces perform across the range from high-throughput to high-latency workloads, and from light to heavy disk loads.

For those of you unfamiliar with Disk Queue Length, it is a performance counter that indicates the number of outstanding disk requests, plus the requests currently being serviced, for a particular disk. Microsoft recommends a value of no more than 2 per physical drive, though some administrators consider up to 3 per drive acceptable.

The Disk Queue Length counter is commonly used to determine how busy a particular drive is, such as one hosting a database. If the DQL for a database array averages 2 per drive or more, it is a good indication that disk I/O is becoming a bottleneck and that upgrading your disk subsystem could improve performance. Alternatively, if it averages less than 2 per drive, upgrading CPU or memory may be more beneficial, since the disk subsystem is able to keep up with the current workload.
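As a minimal illustration of this rule of thumb (not part of our actual test procedure), the following Python sketch converts a measured average DQL for an array into a per-drive figure and flags whether the disk subsystem looks like the bottleneck. The function names and the sample numbers are hypothetical; the threshold of 2 per drive follows Microsoft's guidance quoted above.

# Rough bottleneck check based on the "2 outstanding I/Os per drive" rule of thumb.
# Assumes you have already sampled the array's average Disk Queue Length
# (e.g. from PerfMon's "Avg. Disk Queue Length" counter).

def per_drive_dql(avg_dql: float, drive_count: int) -> float:
    """Spread the array-wide queue length across the member drives."""
    return avg_dql / drive_count

def disk_is_bottleneck(avg_dql: float, drive_count: int, threshold: float = 2.0) -> bool:
    """True when the per-drive queue length meets or exceeds the threshold."""
    return per_drive_dql(avg_dql, drive_count) >= threshold

if __name__ == "__main__":
    # Hypothetical sample: a 12-drive array averaging a total DQL of 30.
    measured_dql = 30.0
    drives = 12
    print(f"Per-drive DQL: {per_drive_dql(measured_dql, drives):.2f}")
    print("Disk subsystem is the likely bottleneck"
          if disk_is_bottleneck(measured_dql, drives)
          else "Disk subsystem is keeping up; look at CPU/memory instead")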

Using the LSI 8480E / ServeRAID 6M controller, we created a large RAID 10 array using all 12 disks. The operating system was hosted by a separate controller and pair of disks so that it would not impact the results. The array was then formatted as a single NTFS volume spanning its full capacity, with a 64K allocation unit size.

For testing purposes, we started with a DQL of 2 (which works out to 0.167 per drive) and then incremented it by two until we reached 36 (3 per drive). We wanted to see how each interface would scale from light to heavy workloads. We did not test above 3 per drive since most experts advise against running a storage system at that level for an extended period of time.

Since storage can be accessed in a nearly endless number of ways, we decided to run tests that would give a good indication of performance for almost any scenario. For example, we ran tests at 100% sequential access for cases where you need to stream large amounts of sequential data off the drives. At the other extreme, you may have an application that is almost entirely random and want to know how well it performs under that type of load. We also measured mixes of random and sequential accesses at key points in between to better understand how much random access impacts a sequential stream.
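To make the shape of this test matrix concrete, here is a short Python sketch that enumerates the combinations described above. The DQL sweep and drive count match the setup described earlier; the intermediate random/sequential mix points are illustrative assumptions, not the exact values used in our runs.

# Sketch of the test matrix: DQL swept from 2 to 36 in steps of 2 (0.167 to 3
# outstanding I/Os per drive on the 12-disk array), crossed with a set of
# random-vs-sequential mixes. The intermediate mix points below are assumed
# for illustration; only 100% sequential and 100% random are stated explicitly.

DRIVES = 12
QUEUE_DEPTHS = range(2, 37, 2)              # total DQL: 2, 4, ..., 36
RANDOM_PERCENTAGES = [0, 25, 50, 75, 100]   # 0 = fully sequential (mid-points assumed)

test_matrix = [
    {
        "dql_total": dql,
        "dql_per_drive": round(dql / DRIVES, 3),
        "percent_random": pct_random,
    }
    for dql in QUEUE_DEPTHS
    for pct_random in RANDOM_PERCENTAGES
]

print(f"{len(test_matrix)} test points")   # 18 queue depths x 5 access mixes = 90
print(test_matrix[0], test_matrix[-1], sep="\n")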

Lastly, we used 64K access sizes in IOMeter, a 64K NTFS allocation unit, and a 64K RAID stripe size. We did this to obtain the best performance possible for all drives and interfaces, and it is also representative of real workloads, since most databases read and write data in 64K chunks.
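Matching these three sizes means that, assuming the partition itself is stripe-aligned, each 64K request maps cleanly onto a single stripe unit rather than straddling two drives. The small Python sketch below illustrates that alignment check; the function and the example offsets are ours, not part of the test suite.

# Checks whether an I/O of a given size and offset stays inside one RAID stripe
# unit. With IOMeter issuing 64K requests on a volume whose NTFS cluster and
# stripe size are also 64K, every request starts on a stripe boundary and
# touches exactly one drive, which is the point of matching the three sizes.

STRIPE_SIZE = 64 * 1024   # bytes per stripe unit, as configured on the array

def crosses_stripe_boundary(offset: int, length: int, stripe: int = STRIPE_SIZE) -> bool:
    """True if the request spans more than one stripe unit."""
    first_unit = offset // stripe
    last_unit = (offset + length - 1) // stripe
    return first_unit != last_unit

# A 64K-aligned 64K request stays on one drive...
print(crosses_stripe_boundary(offset=10 * 64 * 1024, length=64 * 1024))   # False
# ...while a misaligned one would be split across two stripe units.
print(crosses_stripe_boundary(offset=32 * 1024, length=64 * 1024))        # True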

Test hardware

1 x Promise VTrak J300s with single I/O module
1 x LSI Logic 8480E PCI-E SAS HBA
12 x Seagate NL35.1 250GB SATA I Drives
12 x Western Digital 500GB WD5000YS SATA II Drives
12 x Fujitsu MAX3147RC 146GB 15K SAS Drives
1 x IBM EXP400 Chassis
1 x IBM ServeRAID 6M PCI-X SCSI HBA
12 x IBM 146GB 10K SCSI Drives

We'd like to thank Jennifer Juwono and Billy Harrison from Promise, David Nguyen from Western Digital, along with Seagate, Fujitsu and LSI Logic, for providing the hardware used in this article.
