Test Blade Configuration

 

Our bladecenters are filled with high-performance blades that currently run our virtualized hosting environment. Since those blades are in production, we could not use them to test the performance of our ZFS system, so we had to build another blade. We wanted it to be similar in spec to the blades we were already running, while also taking advantage of some of the newer technology that has come out since many of them went into production. Our current environment is a mix of blades running dual Xeon 5420 processors with 32GB RAM and dual 250GB SATA hard drives, and systems running dual Xeon 5520 processors with 48GB RAM and dual 32GB SAS HDDs. The two drives in each blade are mirrored in RAID1 and used as boot volumes, while all of our content is stored on RAID10 SANs.

Following that tradition, we decided to use the SuperMicro SBI-7126T-S6 as our base blade. We populated it with dual Xeon 5620 processors (Intel Nehalem/Westmere-based 32nm quad core), 48GB of Registered ECC DDR3 memory, dual Intel X25-V SSDs (for boot, in a RAID1 mirror), and a SuperMicro AOC-IBH-XDD InfiniBand mezzanine card.



Front panel of the SBI-7126T-S6 Blade Module



Intel X25-V SSD boot drives installed




Dual Xeon 5620 processors, 48GB Registered ECC DDR3 memory, Infiniband DDR Mezzanine card installed

Our tests will be run using Windows Server 2008 R2 and Iometer. We will be testing iSCSI connections over gigabit Ethernet, as this is what most budget SAN builds are based around. Our blades also offer connectivity options in the form of 10Gb Ethernet and 20Gb InfiniBand, but those connections are outside the scope of this article.

 

Price of OpenSolaris box

The OpenSolaris box, as tested, was quite inexpensive for the amount of hardware in it. The overall cost of the OpenSolaris system was $6,765. The breakdown is below:

Part                     Number   Cost        Total
Chassis                       1   $1,199.00   $1,199.00
RAM                           2     $166.00     $332.00
Motherboard                   1     $379.00     $379.00
Processor                     1     $253.00     $253.00
HDD - SLC - Log               2     $378.00     $756.00
HDD - MLC - Cache             2     $414.00     $828.00
HDD - MLC - Boot 40GB         2     $109.00     $218.00
HDD - WD 1TB RE3             20     $140.00   $2,800.00
Total                                         $6,765.00
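
For anyone adapting this parts list to current pricing, a quick sanity check of the totals is straightforward. The sketch below (Python, with quantities and unit prices copied from the table above; the part names are shorthand labels rather than exact SKUs) simply recomputes the line totals and the grand total.

    # Recompute the OpenSolaris build cost from the table above.
    # Quantities and unit prices are taken from the article; the part
    # names are shorthand labels, not exact SKUs.
    parts = [
        ("Chassis",               1, 1199.00),
        ("RAM",                   2,  166.00),
        ("Motherboard",           1,  379.00),
        ("Processor",             1,  253.00),
        ("HDD - SLC - Log",       2,  378.00),
        ("HDD - MLC - Cache",     2,  414.00),
        ("HDD - MLC - Boot 40GB", 2,  109.00),
        ("HDD - WD 1TB RE3",     20,  140.00),
    ]

    grand_total = 0.0
    for name, qty, unit_cost in parts:
        line_total = qty * unit_cost
        grand_total += line_total
        print(f"{name:<24} {qty:>3} x ${unit_cost:>8,.2f} = ${line_total:>9,.2f}")

    print(f"Total: ${grand_total:,.2f}")  # $6,765.00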

Price of Nexenta

While OpenSolaris is completely free, Nexenta is a bit different, as there are software costs to consider when building a Nexenta system. There are three versions of Nexenta to choose from if you decide to use it instead of OpenSolaris. The first is Nexenta Core Platform, which allows unlimited storage but has no GUI. The second is Nexenta Community Edition, which supports up to 12TB of storage and a subset of the features. The third is their high-end solution, Nexenta Enterprise: a paid product with a broad feature set and commercial support, accompanied by a price tag.

The hardware costs for the Nexenta system are identical to those of the OpenSolaris system. We opted for the trial Enterprise license (unlimited storage, 45 days) for testing, as we have 18TB of billable storage. Nexenta charges based on the number of terabytes in your storage array. As configured, the Nexenta license for our system would cost $3,090, bringing the total cost of a Nexenta Enterprise licensed system to $9,855.
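
For a rough sense of what the licensing adds per terabyte, the figures above work out as follows. This is only back-of-the-envelope arithmetic based on the quoted numbers; Nexenta's actual pricing is tiered and may differ.

    # Effective Nexenta Enterprise licensing rate implied by the numbers above.
    # (Nexenta's real pricing is tiered; this is only the arithmetic from the
    # quoted figures.)
    hardware_cost = 6765.00   # same hardware as the OpenSolaris build
    license_cost = 3090.00    # quoted license for 18TB of billable storage
    billable_tb = 18

    print(f"License cost per TB: ${license_cost / billable_tb:,.2f}")    # ~$171.67
    print(f"Total system cost:   ${hardware_cost + license_cost:,.2f}")  # $9,855.00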

Price of Promise box

Costs for the Promise M610i are relatively simple to calculate: there is the cost of the chassis and the cost of the drives. The breakdown of those costs is below.

Part                     Number   Cost        Total
Promise M610i                 1   $4,170.00   $4,170.00
HDD - WD 1TB RE3             16     $140.00   $2,240.00
Total                                         $6,410.00

How we tested with Iometer

All of our tests are run from Iometer using a custom configuration; the .icf configuration file can be found here. We ran the following tests, starting at a queue depth of 9, ending at a queue depth of 33, and stepping by 3. This takes the tests from below a queue depth of 1 per drive up to a queue depth of around 2 per drive (depending on the storage system being tested).
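
To illustrate why that queue depth range was chosen, the sketch below steps through the same queue depths and shows the approximate outstanding I/Os per drive for the 20-drive ZFS build and the 16-drive Promise M610i. This is back-of-the-envelope arithmetic only; how outstanding I/Os are actually distributed depends on the storage system.

    # Queue depths used in the Iometer runs: 9 through 33, stepping by 3,
    # shown alongside the approximate outstanding I/Os per drive for the
    # 20-drive ZFS build and the 16-drive Promise M610i (rough arithmetic only).
    zfs_drives, promise_drives = 20, 16

    for qd in range(9, 34, 3):
        print(f"QD {qd:>2}: ~{qd / zfs_drives:.2f} per ZFS drive, "
              f"~{qd / promise_drives:.2f} per Promise drive")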

The tests were run in this order, and each test was run for 3 minutes at each queue depth.

4k Sequential Read

4k Random Write

4k Random 67% write 33% read

4k Random Read

8k Random Read

8k Sequential Read

8k Random Write

8k Random 67% Write 33% Read

16k Random 67% Write 33% Read

16k Random Write

16k Sequential Read

16k Random Read

32k Random 67% Write 33% Read

32k Random Read

32k Sequential Read

32k Random Write

The tests were not arranged in any particular order intended to bias the results; we created the profile once and then ran it against each system. Before testing, a 300GB iSCSI target was created on each system, formatted with NTFS defaults, and then Iometer was started. Iometer created a 25GB working set and then began running the tests.
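
As a point of reference, Iometer sizes its test file through its "Maximum Disk Size" setting, which is specified in sectors; a quick conversion for a 25GB working set (assuming 512-byte sectors and 25GB meaning 25 * 2^30 bytes) is shown below.

    # Iometer's "Maximum Disk Size" is specified in sectors; assuming
    # 512-byte sectors, a 25GB working set works out as follows.
    working_set_bytes = 25 * 1024**3   # 25GB
    sector_size = 512                  # bytes per sector (assumed)
    print(working_set_bytes // sector_size)   # 52,428,800 sectors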

When looking at these results, bear in mind that the longer the tests run, the better the OpenSolaris and Nexenta systems should perform. This is due to L2ARC caching: the L2ARC populates slowly (at approximately 7MB/sec) to reduce wear on the MLC SSDs, so tests run over a significant amount of time allow the cache to warm up and improve the number of IOPS that the OpenSolaris and Nexenta systems can achieve.
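
To put that fill rate in perspective, here is a rough estimate of how long the L2ARC would take just to absorb the 25GB working set at roughly 7MB/sec. Actual warm-up depends on the access pattern and what the ARC chooses to evict, so treat this as an order-of-magnitude figure.

    # Rough L2ARC warm-up estimate: time to absorb the 25GB Iometer working
    # set at the ~7MB/sec L2ARC fill rate mentioned above.
    working_set_mb = 25 * 1024   # 25GB working set, in MB
    fill_rate_mb_s = 7           # approximate L2ARC fill rate
    minutes = working_set_mb / fill_rate_mb_s / 60
    print(f"~{minutes:.0f} minutes")   # roughly an hour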

Comments

  • cdillon - Tuesday, October 5, 2010 - link

    I've been working on getting the additional parts necessary to build a similar system out of a slightly used HP DL380 G5 with a bunch of 15K SAS drives and an MSA20 shelf full of 750GB SATA drives. Here's what I'm going to be doing a little differently from what you've done:

    1) More CPU (already there, it has dual Xeon X5355 if I recall correctly)

    2) Two mirrored OCZ Vertex2 EX 50GB drives for the SLOG device (the ZIL write cache). Even though the Vertex2 claims a highly impressive 50,000 random-write IOPS, the ZIL is written sequentially, and the Vertex2 EX claims to sustain 250MB/sec writes, so it should make a very good SLOG device.

    3) Two OCZ Vertex2 100G (the cheaper MLC models) for L2ARC.

    4) The SSDs will be put on a separate SAS HBA card from the HDDs to prevent I/O starvation due to the HBA I/O queue filling up because of the relatively slow I/O service-times of the HDDs.

    5) Quad Gigabit Ethernet or 10G Ethernet link. The latter will require an upgrade to our datacenter switches, which is probably going to happen soon anyway.
  • mbreitba - Tuesday, October 5, 2010 - link

    I would love to see performance results for your setup. The Iometer ICF file linked in the article would let you run the exact same tests we ran, if you're interested in running them.
  • cdillon - Tuesday, October 5, 2010 - link

    I forgot to mention it might also be running FreeBSD (which I'm very familiar with) rather than Nexenta or OpenSolaris, but I'm just kind of playing it by ear. I may try all three. The goal is for it to eventually become a production storage server, but I'm going to do a bit of experimentation first. I still haven't gotten around to ordering the SSDs and the extra SAS HBAs, so it'll be a while before I have any benchmarks for you.
  • Maveric007 - Tuesday, October 5, 2010 - link

    You should throw Linux into the mix. You'll find your performance will increase over the other selections ;)
  • MGSsancho - Tuesday, October 5, 2010 - link

    ZFS on Linux is terrible, while ZFS on FreeBSD is decent, although recent ZFS features such as deduplication and iSCSI are not available on FreeBSD. Just grab a copy of the latest build of OpenSolaris (134), compile it from build 157, use Solaris 10 (you have to pay now), or use one of the mentioned Nexenta distros.

    From personal experience, use fast SSD drives. I made the mistake of using a pair of the Intel 40GB Value drives for a home box with 8 x 1.5TB drives, and performance was terrible. Yes, it is nice for latency, but I can't get more than 40MB/sec from it. I have tried using them just for the ZIL or just for the L2ARC and performance is abysmal. Get the fastest drives you can afford.

    Matt, have you tested with, for example, Realtek NICs (don't, they're a pain), Intel desktop NICs (stable), or the fancier server-grade NICs that advertise iSCSI offload? Also, have you tried using dedup/compression for increased performance/space savings? This uses up a lot of memory for the indexes, but if your CPUs and network are fast enough, less I/O hits the disks. I hear it has worked, assuming you have the memory, CPU, and network. One last bit: try the Sun 40Gb/s InfiniBand cards? I know they work with Solaris 10 and OpenSolaris, and thus I would assume Nexenta. You might want to check the hardware compatibility list for your IB card.

    Cheers
  • Mattbreitbach - Tuesday, October 5, 2010 - link

    We have not tested with any NICs other than the Intel gigabit NICs onboard the blade. We considered using an iSCSI offload NIC for the ZFS system, but given the cost of such cards we could not justify using them.

    As for deduplication - we recently tested deduplication on Nexenta and the results were abysmal. Most tests showed above 90% CPU utilization while delivering far lower IOPS. I believe that deduplication could help performance, but only if you have an insane amount of CPU available. With checksumming and deduplication running, our 5504 was simply not able to keep up. Increasing the core count, adding a second processor, and increasing the clock speed might let it keep up, but once you spend that much additional capital on CPUs and better motherboards, you could instead increase your spindle count, switch to SAS drives, or simply add another storage unit for marginally more money.
  • MGSsancho - Tuesday, October 5, 2010 - link

    From my personal experience, I could not agree more about deduplication. 33% on each core of my Phenom II for a home setup is insane. For some things like Exchange Server, it is best to let the application decide what should be cached, but deduplication really makes sense for tier-three storage, nightly backups, or maybe a small dev box. The drives themselves also matter; you want to use ones that are geared for RAID setups, as it allows the system to better communicate with them. I won't name a particular vendor, but the current 'green' 5400 RPM 2TB drives are terrible for ZFS: http://pastebin.com/aS9Zbfeg (not my setup). That is a nightly backup array used at a webhosting facility. Sure, they have great throughput, but all those errors show up after a few hours.
  • andersenep - Tuesday, October 5, 2010 - link

    I use WD Green drives in my home OpenSolaris NAS. I have 2 raidz vdevs of 4 drives each (initially I used mirrors, but wanted more space). I can serve 720p content to two laptops and my Xstreamer simultaneously without a hiccup... I guess it depends on your needs, but for a home media server I have absolutely no complaints with the 'green' drives. Weekly scrubs for over a year with no issues. I did have to replace a Scorpio in my mirrored rpool after 6 months. I am quite happy with my setup.
  • solori - Wednesday, October 20, 2010 - link

    As a Nexenta partner, we see these issues all the time. Deduplication is not an apples-apples feature. The system build-out and deduplication set (affecting DDT size) are both unique factors.

    With ZFS' deduplication, RAM/ARC and L2ARC become critical components for performance. Deduplication tables that spill to disk (will not fit into memory) will cause serious performance issues. Likewise, the deduplication hash function and verify options will impact performance.

    For each application, doing the math on spindle count (power, cost, space, etc.) versus effective deduplication is always best. Note that deduplication does not need to be enabled pool-wide, and that - like in compression where it is wasteful to compress pre-compressed data - data with low deduplication rates should not be allowed to dominate a deduplication-enabled pool/folder.

    Deduplication of 15K primary storage seems contradictory, but that type of storage has the highest $/TB factor and spindle count for any given capacity target. By allocating deduplication to target folders/zvols, performance and capacity can be optimized for most use cases. Obviously, data sets that are write-heavy and sensitive to storage latency are not good candidates for deduplication or inline compression.

    If you do the math, the cost of SSD augmentation of 7200 RPM SAS pools is very competitive against similar-capacity 15K pools. The benefit of SSD augmentation (i.e. L2ARC, and ZIL->SLOG where synchronous writes dominate performance profiles) is higher IOPS potential for random I/O workloads (where the 7200 RPM disks suffer most). In fact, contrasting 600GB 15K SAS to 2TB 7200 RPM SAS, you approach an economic point where 7200 RPM disks favor mirror groups over 15K raidz groups - again, given the same capacity goals.

    The real beauty of ZFS storage - whether it be OpenSolaris/Illumos or Nexenta/Stor/Core - is that mixing 15K and 7200 RPM pools within the same system is very easy and effective to do. With the proper SAS controllers and JBOD/RBOD combinations, you can limit 15K applications to a small working set and commit bulk resources to augmented 7200 RPM spindles in robust raidz2 groups (i.e. watch your MTTDL versus raidz).

    It is important to note that ZFS was not designed with the "home user" in mind. It can be very memory and CPU/thread hungry and easily outstrip a typical hobbyist's setup. A proper enterprise setup will include 2P quad core and RAM stores suited to the target workload. Since ZFS was designed for robust threading, the more "hardware" threads it has at its disposal, the more efficient it is. While snapshots are "free" in ZFS (i.e. the copy-on-write nature of ZFS means writes cost the same with or without snapshots), data integrity (checksums) and compression/deduplication are not.
  • Mattbreitbach - Wednesday, October 20, 2010 - link

    Excellent comments! Thank you for your input.

    As you noted, we found deduplication to be beyond the reach of our system. With proper tuning and component selection, I think it could be used very well (and I have talked to several people who have had very good experiences with it). For the average home user it's probably beyond the scope of what they would want to use for their storage.
