Configuration and benchmarking setup

Thanks to Loes van Emden and Robin Willemse of Promise (Netherlands) for giving us the chance to test the E310f. Billy Harrison and Michael Joyce answered quite a few of our questions, so we definitely want to thank them as well. Our thanks also go out to Frank Poole and Sonny Banga (Intel US) for helping us test the Intel SSR212MC2. Last but not least, a big thanks to Tijl Deneut, who spent countless hours in the labs while we tried to figure out the best way to test these storage servers.

SAN FC Storage Server: Promise VTRAK E310f, FC 4Gb/s
Controller: IOP341 1.2 GHz, 512MB Cache
Disks: Eight Fujitsu MAX3073RC 73GB 15k RPM

DAS Storage Server: Intel SSR212MC2
Controller: SRCSAS144E 128MB Cache
Disks: Eight Fujitsu MAX3073RC 73GB 15k RPM

SAN iSCSI Storage Server: Intel SSR212MC2, 1Gb/s
Server configuration: Xeon 5335 (quad-core 2 GHz), 2GB of DDR2-667, Intel S5000PSL motherboard
Controller: SRCSAS144E via 1Gb/s Intel NIC, Firmware 1.03.00-0211 (Ver. 2.11)
Disks: Eight Fujitsu MAX3073RC 73GB 15k RPM
iSCSI Target: StarWind 3.5 Build 2007080905 or Microsoft iSCSI Target software (alias WinTarget) or the iSCSI software found on Linux SLES 10 SP1

Client Configuration
Configuration: Intel Pentium D 3.2 GHz (840 Extreme Edition), Intel Desktop Board D955XBK, 2GB of DDR2-533
NIC (iSCSI): Intel Pro/1000 PM (driver version: 9.6.31.0)
iSCSI Initiator: Microsoft iSCSI Initiator 2.05
FC HBA: Emulex LightPulse LPe1150-F4 (SCSIport Miniport Driver version: 5.5.31.0)

IOMeter/SQLIO Setup

Your file system, partitioning, controller configuration, and of course disk configuration all influence storage test performance. We chose to focus mostly on RAID 5, as it is probably the most popular RAID level. We selected a 64KB stripe size because we assumed a database application that has to perform both sequential and random reads and writes. As we test with SQLIO, Microsoft's I/O stress tool for MS SQL Server 2005, it is important to know that when the SQL Server database accesses the disks in random fashion, it does so in blocks of 8KB. Sequential accesses (read-ahead) can use I/O sizes from 16KB up to 1024KB, so a 64KB stripe size is a decent compromise: an 8KB random access touches only a single disk, while the larger sequential reads span several disks.

Next, we aligned our testing partition with a 64KB offset using the diskpart tool. For historical reasons, Windows (XP, 2003, and older) puts the first sector of a partition on the 64th sector (it should be on the 65th or the 129th), which results in many unnecessary I/O operations and wasted cache slots on the cache controller. Windows Longhorn Server (and Vista) automatically uses a starting offset of 2048 sectors, so it will not have this problem. We then formatted the partition with NTFS and a 64KB cluster size (the first sector of the partition is the 129th sector, i.e. the 128th block). To get an idea of how much this type of tuning helps, take a look below. The non-tuned numbers use the "default Windows installation": 4KB clusters and a non-aligned partition (partition starts at the 64th sector).
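
For readers who want to replicate the alignment, it only takes a few commands. This is a minimal sketch, assuming disk 1 and drive letter E: (both placeholders, not our exact lab setup):

    diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary align=64
    DISKPART> assign letter=E
    DISKPART> exit
    format E: /FS:NTFS /A:64K /Q

The align parameter is specified in KB, so align=64 places the start of the partition on the 64KB boundary (the 129th sector with 512-byte sectors), and the /A:64K switch gives NTFS the 64KB cluster size we used in the tuned runs.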

All tests are done with a Disk Queue Length (DQL) of 2 per drive (thus 16 in total). DQL indicates the number of outstanding disk requests, plus requests currently being serviced, for a particular disk. A DQL that averages 2 per drive or higher means that the disk system is the bottleneck.
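
To illustrate, our 8KB random read workload with 16 outstanding I/Os could be launched with SQLIO roughly as follows; the test file, drive letter, and duration are assumptions, not our exact test script:

    sqlio -kR -frandom -o16 -b8 -s120 -LS -BN E:\testfile.dat

Here -kR selects reads, -frandom selects random access, -o16 sets the 16 outstanding I/Os (our DQL of 2 per drive times eight drives), -b8 uses an 8KB block size, -s120 runs for 120 seconds, -LS gathers latency statistics, and -BN disables buffering so the requests actually hit the disk system. In IOMeter, the equivalent knob is the "# of Outstanding I/Os" setting of each disk worker.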

[Chart: IOMeter RAID 5 Random Read (64KB), tuned vs. non-tuned]

As you can see, tuning the file system and partition alignment pays off.

The number of storage testing scenarios is huge: you can alter
  • RAID level
  • Stripe size
  • Cache policy settings (write cache, read cache behavior)
  • File system and cluster size
  • Access patterns (different percentages of sequential and random access)
  • Reading or writing
  • Access block size (the amount of data that is requested by each access)
  • iSCSI target: the software that receives the requests of the initiators and processes them
So to keep the number of benchmarks reasonable, we use the following:
  • RAID 5 (most of the time, unless indicated otherwise)
  • Stripe size 64KB (always)
  • Cache policy: Adaptive Read Ahead and Write Back (always)
  • NTFS, 64KB cluster size
  • 100% random or 100% sequential
  • 100% read, 100% write and 67% read (33% write)
  • Access block size 8KB and 64KB
  • iSCSI SLES, StarWind (not all tests), and MS iSCSI Target software
This should give you a good idea of how we tested. We also include a DAS test, run directly on the Intel SSR212MC2, which measures the performance of its local disks. This gives an indication of how fast the disk system is without any FC or iSCSI protocol overhead.
Comments

  • Lifted - Wednesday, November 7, 2007 - link

    quote:

    We have been working with quite a few SMEs the past several years, and making storage more scalable is a bonus for those companies.


    I'm just wondering because this sentence was linked to an article about a Supermicro dual node server. So do you consider Supermicro an SME, or are you saying their servers are sold to SMEs? I just skimmed the Supermicro article, so perhaps you were working with an SME in testing it? I got the feeling from the sentence that you meant to link to an article where you had worked with SMEs in some respect.
  • JohanAnandtech - Wednesday, November 7, 2007 - link

    No, Supermicro is not an SME in our view :-). Sorry, I should have been clearer, but I was trying to keep the article from losing its focus.

    I am head of a server lab at the local university, and our goal is applied research in the fields of virtualization, HA, and server sizing. One of the things we do is develop software that helps SMEs (with some special niche application) size their servers. That is what the link points to: a short explanation of the stress-testing client APUS, which has been used to help quite a few SMEs. One of those SMEs is MCS, a software company that develops facility management software. Basically, the logs of their software were analyzed and converted by our stress-testing client into a benchmark. Sounds a lot easier than it is.

    Because these applications are used in the real world, and are not industry-standard benchmarks that the manufacturers can tune to the extreme, we feel that this kind of benchmarking is a welcome addition to the normal benchmarks.
  • hirschma - Wednesday, November 7, 2007 - link

    Is the Promise gear compatible with Cluster File Systems like Polyserve or GFS? Perhaps the author could get some commentary from Promise.
  • JohanAnandtech - Wednesday, November 7, 2007 - link

    We will. What kind of incompatibility do you expect? It seems to me that the file system is rather independent of the storage rack.
  • hirschma - Thursday, November 8, 2007 - link

    quote:

    We will. What kind of incompatibility do you expect? It seems to me that the file system is rather independent of the storage rack.


    I only ask because every cluster file system vendor suggests that not all SAN systems are capable of handling multiple requests to the same LUN simultaneously.

    I can't imagine that they couldn't, since I think that cluster file systems are the "killer app" of SANs in general.
  • FreshPrince - Wednesday, November 7, 2007 - link

    I think I would like to try the Intel solution and compare it to my CX3...
  • Gholam - Wednesday, November 7, 2007 - link

    Any chance of seeing benchmarks for LSI Engenio 1333/IBM DS3000/Dell MD3000 series?
  • JohanAnandtech - Wednesday, November 7, 2007 - link

    I am curious: why exactly?

    And yes, we'll do our best to get some of the typical storage devices in the labs. Any reason why you mention these ones in particular (besides being at the lower end of the SAN market)?
  • Gholam - Thursday, November 8, 2007 - link

    Both Dell and IBM are aggressively pushing these in the SMB sector around here (Israel). Their main competition is the NetApp FAS270 line, which is considerably more expensive.
  • ninjit - Wednesday, November 7, 2007 - link

    It's a good idea to define all your acronyms the first time you use them in an article.
    Sure, a quick Google search told me what an SME was, but it's helpful to the casual reader, who would otherwise be directed away from your page.

    What's funny is that you were particular about defining FC, SAN, and HA on the first page, just not the title term of your article.
