Configuration and benchmarking setup

Thanks to Loes van Emden and Robin Willemse of Promise (Netherlands) for giving us the chance to test the E310f. Billy Harrison and Michael Joyce answered quite a few of the questions we had, so we definitely want to thank them as well. Our thanks also go out to Poole, Frank, and Sonny Banga (Intel US) for helping us to test the Intel SSR212MC2. Last but not least, a big thanks to Tijl Deneut, who spent countless hours in the labs while we tried to figure out the best way to test these storage servers.

SAN FC Storage Server: Promise VTRAK E310f, FC 4Gb/s
Controller: IOP341 1.2 GHz, 512MB Cache
Disks: Eight Fujitsu MAX3073RC 73GB 15k RPM

DAS Storage Server: Intel SSR212MC2
Controller: SRCSAS144E 128MB Cache
Disks: Eight Fujitsu MAX3073RC 73GB 15k RPM

SAN iSCSI Storage Server: Intel SSR212MC2, 1Gb/s
Server configuration: Xeon 5335 (quad-core 2 GHz), 2GB of DDR2-667, Intel S5000PSL motherboard
Controller: SRCSAS144E via 1Gb/s Intel NIC, Firmware 1.03.00-0211 (Ver. 2.11)
Disks: Eight Fujitsu MAX3073RC 73GB 15k RPM
iSCSI Target: StarWind 3.5 Build 2007080905, Microsoft iSCSI Target software (aka WinTarget), or the iSCSI target software included with Linux SLES 10 SP1

Client Configuration
Configuration: Intel Pentium D 3.2GHz (840 Extreme Edition), Intel Desktop Board D955XBK, 2GB of DDR2-533
NIC (iSCSI): Intel Pro/1000 PM (driver version: 9.6.31.0)
iSCSI Initiator: Microsoft iSCSI Initiator 2.05
FC HBA: Emulex LightPulse LPe1150-F4 (SCSIport Miniport Driver version: 5.5.31.0)

IOMeter/SQLIO Setup

Your file system, partitioning, controller configuration, and of course disk configuration all influence storage test performance. We chose to focus mostly on RAID 5, as it is probably the most popular RAID level. We selected a 64KB stripe size because we assumed a database application that has to perform both sequential and random reads and writes. As we test with SQLIO, Microsoft's I/O stress tool for MS SQL Server 2005, it is important to know that when the SQL Server database accesses the disks in random fashion, it does so in blocks of 8KB. Sequential accesses (read-ahead) can use I/O sizes from 16KB up to 1024KB, so we used a stripe size of 64KB as a decent compromise.
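To see why 64KB is a reasonable middle ground, consider how many stripe units a single I/O touches in the best case. The small sketch below is our own illustration (not part of the benchmark tools), assuming each I/O starts on a stripe boundary:

```python
# Our own illustration: how many 64KB stripe units a single I/O touches,
# assuming the I/O starts on a stripe boundary.
def stripes_touched(io_size_kb: int, stripe_kb: int = 64) -> int:
    # Round up: an I/O smaller than the stripe unit still occupies one unit.
    return -(-io_size_kb // stripe_kb)

for io in (8, 16, 64, 256, 1024):
    print(f"{io:>5}KB I/O -> {stripes_touched(io)} stripe unit(s)")
```

An 8KB random read stays within a single stripe unit (a single drive), while the largest 1024KB read-ahead request spreads over 16 stripe units and so engages the whole array.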

Next, we aligned our testing partition with a 64KB offset using the diskpart tool. For legacy reasons, Windows (XP, 2003, and older) puts the first sector of a partition on the 64th sector (it should be on the 65th or the 129th), which results in many unnecessary I/O operations and wasted cache slots on the cache controller. Windows Longhorn Server (and Vista) automatically uses 2048 sectors as the starting offset, so it will not have this problem. We then formatted the partition with NTFS and a cluster size of 64KB (the first sector of the partition is the 129th sector, i.e. sector 128 counting from zero). To get an idea of how much this type of tuning helps, take a look below. The non-tuned numbers use the "default Windows installation": 4KB clusters and non-aligned partitions (the partition starts at the 64th sector).
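The alignment arithmetic is easy to verify. The quick check below is our own sanity test, assuming 512-byte sectors and a 64KB stripe boundary:

```python
# Our own sanity check of partition alignment, assuming 512-byte sectors
# and a 64KB stripe size.
SECTOR_BYTES = 512
STRIPE_BYTES = 64 * 1024

def is_aligned(start_sector: int, boundary: int = STRIPE_BYTES) -> bool:
    # A partition is aligned if its byte offset is a multiple of the stripe.
    return (start_sector * SECTOR_BYTES) % boundary == 0

print(is_aligned(63))    # default XP/2003 start (the 64th sector): False
print(is_aligned(128))   # 64KB offset via diskpart (the 129th sector): True
print(is_aligned(2048))  # Longhorn/Vista default 1MB offset: True
```

With the default 63-sector start, every 64KB cluster straddles two stripe units, which is exactly what causes the extra I/O operations described above.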

All tests are done with a Disk Queue Length (DQL) of 2 per drive (thus 16 in total for our eight-drive arrays). DQL indicates the number of outstanding disk requests plus the requests currently being serviced for a particular disk. A DQL that averages 2 per drive or higher means that the disk system is the bottleneck.

IOMeter RAID 5 Random Read (64KB)

As you can see, tuning the file system and partition alignment pays off.

The number of storage testing scenarios is huge: you can alter
  • RAID level
  • Stripe size
  • Cache policy settings (write cache, read cache behavior)
  • File system and cluster size
  • Access patterns (different percentages of sequential and random access)
  • Reading or writing
  • Access block size (the amount of data that is requested by each access)
  • iSCSI target - the software that receives the requests from the initiators and processes them
So to keep the number of benchmarks reasonable, we use the following:
  • RAID 5 (most of the time, unless indicated otherwise)
  • Stripe size 64KB (always)
  • Cache policy: Adaptive Read Ahead and Write Back (always)
  • NTFS, 64KB cluster size
  • 100% random or 100% sequential
  • 100% read, 100% write and 67% read (33% write)
  • Access block size 8KB and 64KB
  • iSCSI SLES, StarWind (not all tests), and MS iSCSI Target software
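The access-pattern choices above already define a sizeable matrix. The snippet below is our own bookkeeping (not the actual benchmark harness): two access patterns, three read/write mixes, and two block sizes give 12 IOMeter scenarios per storage configuration, before multiplying by RAID levels and iSCSI targets.

```python
# Our own bookkeeping sketch of the test matrix implied by the list above:
# 2 access patterns x 3 read/write mixes x 2 block sizes = 12 scenarios
# per storage configuration.
from itertools import product

patterns = ("100% random", "100% sequential")
mixes = ("100% read", "100% write", "67% read / 33% write")
block_sizes = ("8KB", "64KB")

scenarios = list(product(patterns, mixes, block_sizes))
print(len(scenarios))  # 12
```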
This should give you a good idea of how we tested. We also include a DAS test, run directly on the Intel SSR212MC2, which measures the performance of its local disks. This gives an indication of how fast the disk system is without any FC or iSCSI protocol overhead.