Test Blade Configuration

 

Our bladecenters are full of high-performance blades that we use to run our virtualized hosting environment. Since those blades are in production, we couldn't very well pull them to test the performance of our ZFS system, so we built another blade. We wanted it to be similar in spec to the blades we are already running, while also taking advantage of some of the technology released since many of those blades went into production. Our current environment is a mix of blades running dual Xeon 5420 processors with 32GB RAM and dual 250GB SATA hard drives, and blades running dual Xeon 5520 processors with 48GB RAM and dual 32GB SAS HDDs. We use the RAID1 volume in each blade as a boot volume; all of our content is stored on RAID10 SANs.

Following that tradition, we decided to use the SuperMicro SBI-7126T-S6 as our base blade. We populated it with dual Xeon 5620 processors (Intel's 32nm quad-core Nehalem/Westmere-based Xeon), 48GB of Registered ECC DDR3 memory, dual Intel X25-V SSDs (for boot, in a RAID1 mirror), and a SuperMicro AOC-IBH-XDD InfiniBand mezzanine card.



Front panel of the SBI-7126T-S6 Blade Module



Intel X25-V SSD boot drives installed




Dual Xeon 5620 processors, 48GB Registered ECC DDR3 memory, Infiniband DDR Mezzanine card installed

Our tests will be run using Windows Server 2008 R2 and Iometer. We will be testing iSCSI connections over gigabit Ethernet, as this is what most budget SAN builds are based on. Our blades also offer connectivity options in the form of 10Gb Ethernet and 20Gb InfiniBand, but those connections are outside the scope of this article.
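Since gigabit Ethernet is the transport for every test, it is worth keeping the wire-speed ceiling in mind when reading the results. The short Python sketch below is our own back-of-the-envelope illustration (not part of the test harness) of the theoretical IOPS limit at each block size we test; it ignores TCP and iSCSI protocol overhead, so real-world figures will land somewhat lower.

```python
# Theoretical gigabit Ethernet ceiling for each block size tested.
# Illustration only: protocol overhead is ignored.

GIGABIT_BYTES_PER_SEC = 1_000_000_000 / 8  # 1 Gbps expressed in bytes/sec

for block_kb in (4, 8, 16, 32):
    block_bytes = block_kb * 1024
    max_iops = GIGABIT_BYTES_PER_SEC / block_bytes
    print(f"{block_kb:>2}k blocks: ~{max_iops:,.0f} IOPS at line rate "
          f"(~{GIGABIT_BYTES_PER_SEC / 1_000_000:.0f} MB/s)")
```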

 

Price of the OpenSolaris box

The OpenSolaris box, as tested, was quite inexpensive for the amount of hardware in it. The overall cost of the OpenSolaris system was $6,765. The breakdown is below:

Part                     Number   Cost        Total
Chassis                  1        $1,199.00   $1,199.00
RAM                      2        $166.00     $332.00
Motherboard              1        $379.00     $379.00
Processor                1        $253.00     $253.00
HDD - SLC - Log          2        $378.00     $756.00
HDD - MLC - Cache        2        $414.00     $828.00
HDD - MLC - Boot 40GB    2        $109.00     $218.00
HDD - WD 1TB RE3         20       $140.00     $2,800.00
Total                                         $6,765.00

Price of Nexenta

While OpenSolaris is completely free, Nexenta is a bit different, as there are software costs to consider when building a Nexenta system. There are three versions of Nexenta to choose from. The first is Nexenta Core Platform, which allows unlimited storage but does not have the GUI interface. The second is Nexenta Community Edition, which supports up to 12TB of storage and a subset of the features. The third is their high-end solution, Nexenta Enterprise, a paid product with a broad feature set and support, accompanied by a price tag.

The hardware costs for the Nexenta system are identical to the OpenSolaris system. We opted for the trial Enterprise license for testing (unlimited storage, 45 days), as we have 18TB of billable storage. Nexenta charges based on the number of terabytes in your storage array. As configured, the Nexenta license for our system would cost $3,090, bringing the total cost of a Nexenta Enterprise licensed system to $9,855.

Price of the Promise box

The Promise M610i is relatively simple to calculate costs for: you have the cost of the chassis and the cost of the drives. The breakdown of those costs is below.

Part                 Number   Cost        Total
Promise M610i        1        $4,170.00   $4,170.00
HDD - WD 1TB RE3     16       $140.00     $2,240.00
Total                                     $6,410.00
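For a side-by-side view, the short Python sketch below (our own summary script, not part of the review's tooling) recomputes the totals from the part lists above, including the $3,090 Nexenta Enterprise license quoted for our 18TB of billable storage.

```python
# Recompute the price breakdowns above for all three systems.
# The Nexenta license figure is the $3,090 quoted for 18TB of billable storage.

opensolaris_parts = {
    # part: (quantity, unit cost in USD)
    "Chassis": (1, 1199.00),
    "RAM": (2, 166.00),
    "Motherboard": (1, 379.00),
    "Processor": (1, 253.00),
    "HDD - SLC - Log": (2, 378.00),
    "HDD - MLC - Cache": (2, 414.00),
    "HDD - MLC - Boot 40GB": (2, 109.00),
    "HDD - WD 1TB RE3": (20, 140.00),
}

promise_parts = {
    "Promise M610i": (1, 4170.00),
    "HDD - WD 1TB RE3": (16, 140.00),
}

def total(parts):
    return sum(qty * cost for qty, cost in parts.values())

opensolaris_total = total(opensolaris_parts)          # $6,765
nexenta_total = opensolaris_total + 3090.00           # hardware + license = $9,855
promise_total = total(promise_parts)                  # $6,410

print(f"OpenSolaris:        ${opensolaris_total:,.2f}")
print(f"Nexenta Enterprise: ${nexenta_total:,.2f}")
print(f"Promise M610i:      ${promise_total:,.2f}")
```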

How we tested with Iometer

Our tests were all run from Iometer using a custom configuration; the .icf configuration file can be found here. We ran the following tests, starting at a queue depth of 9 and ending at a queue depth of 33, stepping by 3. This lets us start below a queue depth of 1 per drive and finish at a queue depth of around 2 per drive (depending on the storage system being tested).
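To make the per-drive math concrete, here is a small Python sketch (our own illustration, assuming the queue depth is spread evenly across the 20 data drives in the ZFS boxes and the 16 drives in the Promise) showing how the queue depth sweep maps onto outstanding I/Os per drive.

```python
# Queue depth sweep used in the Iometer profile, and what it works out
# to per data drive for the arrays as tested.

queue_depths = range(9, 34, 3)   # 9, 12, ..., 33

for drives in (16, 20):
    per_drive = [qd / drives for qd in queue_depths]
    print(f"{drives} drives: outstanding I/Os per drive from "
          f"{per_drive[0]:.2f} to {per_drive[-1]:.2f}")
```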

The tests were run in this order, and each test was run for 3 minutes at each queue depth.

4k Sequential Read

4k Random Write

4k Random 67% write 33% read

4k Random Read

8k Random Read

8k Sequential Read

8k Random Write

8k Random 67% Write 33% Read

16k Random 67% Write 33% Read

16k Random Write

16k Sequential Read

16k Random Read

32k Random 67% Write 33% Read

32k Random Read

32k Sequential Read

32k Random Write

The tests were not arranged in any order intended to bias the results; we created the profile and then ran it against each system. Before testing, a 300GB iSCSI target was created on each system. Once the iSCSI target was created, it was formatted with NTFS defaults and Iometer was started. Iometer created a 25GB working set and then began running the tests.
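Taken together, the matrix above adds up to a substantial run per system. The short Python sketch below is our own tally of the schedule (an illustration, not the Iometer .icf itself): four block sizes, four access patterns, nine queue depths, and three minutes per data point.

```python
# Enumerate the test matrix described above and total up the runtime.

from itertools import product

block_sizes = ("4k", "8k", "16k", "32k")
patterns = ("Sequential Read", "Random Read",
            "Random Write", "Random 67% Write 33% Read")
queue_depths = list(range(9, 34, 3))   # 9 through 33, stepping by 3
minutes_per_point = 3

access_specs = [f"{size} {pattern}"
                for size, pattern in product(block_sizes, patterns)]
data_points = len(access_specs) * len(queue_depths)

print(f"{len(access_specs)} access specs x {len(queue_depths)} queue depths "
      f"= {data_points} data points per system")
print(f"~{data_points * minutes_per_point / 60:.1f} hours of runtime per system")
```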

While reviewing these results, bear in mind that the longer the tests run, the better the OpenSolaris and Nexenta systems should perform. This is due to L2ARC caching: the L2ARC populates slowly (at approximately 7MB/sec) to reduce wear on the MLC SSDs. When a test runs for a significant amount of time, this caching should improve the number of IOPS the OpenSolaris and Nexenta systems are able to achieve.
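As a rough illustration of what that throttled fill rate means in practice (assuming the ~7MB/sec figure above and the 25GB Iometer working set), fully warming the cache takes on the order of an hour:

```python
# How long the L2ARC needs to absorb the Iometer working set at its
# throttled fill rate of roughly 7MB/sec.

working_set_gb = 25
l2arc_fill_mb_per_sec = 7

seconds = (working_set_gb * 1024) / l2arc_fill_mb_per_sec
print(f"~{seconds / 60:.0f} minutes to fully warm the 25GB working set")
```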

Comments

  • Mattbreitbach - Tuesday, October 5, 2010 - link

    Indeed you can, which is one of the most exciting parts about using software based storage appliances. Nexenta really excels in this area, offering iSCSI, NFS, SMB, and WebDAV with simple mouse clicks.
  • MGSsancho - Tuesday, October 5, 2010 - link

    or a single command!
  • FransUrbo - Wednesday, January 11, 2012 - link

    Would be really nice to see how ZoL compares. It's in no way optimized yet (current work is on getting the core functionality stable - which IMHO it is), so it would have no chance against OpenSolaris or Nexenta, but hopefully it's comparable to the Promise rack.

    http://zfsonlinux.org/
  • gfg - Tuesday, October 5, 2010 - link

    NAS is extremely cost effective in a data center if a large majority of NFS/CIFS users are more interested in capacity than performance. NDMP can be very efficient for backups, and the snapshot/multi-protocol aspects of NAS systems are fairly easy to manage. Some of the larger vendor NAS systems can support 100+ TB per NAS fairly effectively.
  • bhigh - Wednesday, October 6, 2010 - link

    Actually, OpenSolaris and Nexenta can act as a SAN device using COMSTAR. You can attach to them with iSCSI, FC, Infiniband, etc. and use any zvols as raw scsi targets.
  • JGabriel - Wednesday, October 6, 2010 - link

    Also, "Testing and Benchmarking"?

    Doesn't that mean the same thing and isn't it redundant? See what I did there?

  • Fritzr - Thursday, October 7, 2010 - link

    This is similar to the NAS<>SAN argument. They are used in a similar manner, but have very different purposes.

    Testing. You are checking to see if the item performance meets your need & looking for bugs or other problems including documentation and support.

    Benchmarking. You are running a series of test sets to measure the performance. Bugs & poor documentation/support may abort some of the measuring tools, but that simply goes into the report of what the benchmarks measured.

    Or in short:
    Test==does it work?
    Benchmark==What does it score on standard performance measures?
  • lwatcdr - Friday, October 8, 2010 - link

    I am no networking expert so please bear with me.
    What are the benefits of a SAN over local drives and/or a NAS?
    I would expect a NAS to have better performance since it would send less data over the wire than a SAN if they both had the same physical connection.
    A local drive/array I would expect to be faster than a SAN since it will not need to go through a network.
    Does it all come down to management? I can see the benefit of having your servers boot over the network and having all your drives in one system. If you set up the servers to boot over the network it would be really easy to replace a server.
    Am I missing something or are the gains all a matter of management?
  • JohanAnandtech - Sunday, October 10, 2010 - link

    A NAS usually has worse performance than a similar SAN, since there is a file system layer on the storage side. A SAN only manages blocks, and thus has fewer layers and is more efficient.

    A local drive array is faster, but it is less scalable and, depending on the setup, it is harder to give it a large read/write cache: you are limited by the amount of RAM your cache controller supports. In a software SAN you can use block-based caches in the RAM of your storage server.

    Management advantages over local drives are huge: for example, you can plug in a small ESXi/Linux flash drive which only contains the hypervisor/OS, and then boot everything else from a SAN. That means chances are good that you never have to touch your server during its lifetime, and you handle all storage and VM needs centrally. Add to that high availability, flexibility to move VMs from one server to another, and so on.
  • lwatcdr - Monday, October 11, 2010 - link

    But that layer must be executed somewhere; I thought the decrease in data sent over the physical wire would make up for the extra software cost on the server side.
    Besides, you would still want a NAS even with a SAN for shared data. I am guessing that you could have a NAS serve data from the SAN if you needed shared directories.
    I also assume that since most SANs are on a separate storage network, the SAN is mainly used to provide storage to servers, and then the servers provide data to clients on the LAN.
    The rest of it seems very logical to me in a large setup. I am guessing that if you have a really high performance database server, you might use DAS instead of a SAN, or dedicate a SAN server just to the database server.
    Thanks, I am just trying to educate myself on SAN vs NAS vs DAS.
    Since I work at a small software development firm, our server setup is much simpler than the average data center, so I don't get to deal with this level of hardware often.
    However, I am thinking that maybe we should build a SAN and storage network just for our rack.
