Test Blade Configuration

 

Our bladecenters are full of high-performance blades that run our virtualized hosting environment. Since those blades are in production, we couldn't very well use them to test the performance of our ZFS system, so we built another blade. We wanted it to be similar in spec to the blades we already run, while taking advantage of technology released since many of them went into production. Our current environment is a mix of blades running dual Xeon 5420 processors with 32GB RAM and dual 250GB SATA hard drives, and blades running dual Xeon 5520 processors with 48GB RAM and dual 32GB SAS HDDs. The RAID1 volume in each blade serves only as a boot volume; all content is stored on RAID10 SANs.

Following that tradition we decided to use the SuperMicro SBI-7126T-S6 as our base blade. We populated it with dual Xeon 5620 processors (Intel's Nehalem/Westmere-based 32nm quad core), 48GB of Registered ECC DDR3 memory, dual Intel X25-V SSDs (boot drives in a RAID1 mirror), and a SuperMicro AOC-IBH-XDD InfiniBand mezzanine card.



Front panel of the SBI-7126T-S6 Blade Module



Intel X25-V SSD boot drives installed




Dual Xeon 5620 processors, 48GB Registered ECC DDR3 memory, and InfiniBand DDR mezzanine card installed

Our tests are run using Windows Server 2008 R2 and Iometer. We tested iSCSI connections over gigabit Ethernet, as this is what most budget SAN builds are based on. Our blades also offer connectivity options in the form of 10Gb Ethernet and 20Gb InfiniBand, but those connections are outside the scope of this article.
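For those following along at home, the blades use the iSCSI initiator built into Windows Server 2008 R2, driven from the command line. A quick sketch from an elevated command prompt (the portal address and IQN below are placeholders for illustration, not our actual configuration):

rem Point the initiator at the SAN's iSCSI portal (placeholder address)
iscsicli QAddTargetPortal 192.168.1.50
rem Discover what the portal advertises, then log in to the target
iscsicli ListTargets
iscsicli QLoginTarget iqn.2010-09.org.example:iometer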

 

Price of OpenSolaris box

The OpenSolaris box, as tested, was quite inexpensive for the amount of hardware in it. The overall cost of the OpenSolaris system was $6,765. The breakdown is below:

Part                     Number    Cost         Total
Chassis                  1         $1,199.00    $1,199.00
RAM                      2         $166.00      $332.00
Motherboard              1         $379.00      $379.00
Processor                1         $253.00      $253.00
HDD - SLC - Log          2         $378.00      $756.00
HDD - MLC - Cache        2         $414.00      $828.00
HDD - MLC - Boot 40GB    2         $109.00      $218.00
HDD - WD 1TB RE3         20        $140.00      $2,800.00
Total                                           $6,765.00

Price of Nexenta

While OpenSolaris is completely free, Nexenta is a bit different, as there are software costs to consider when building a Nexenta system. There are three versions of Nexenta to choose from. The first is Nexenta Core Platform, which allows unlimited storage but has no GUI. The second is Nexenta Community Edition, which supports up to 12TB of storage and a subset of the features. The third is their high-end solution, Nexenta Enterprise, a paid product with a broad feature set and support, accompanied by a price tag.

The hardware costs for the Nexenta system are identical to the OpenSolaris system. We opted for the trial Enterprise license for testing (unlimited storage, 45 days), as we have 18TB of billable storage. Nexenta charges based on the number of terabytes in your storage array. As configured, the Nexenta license for our system would cost $3,090 (which works out to roughly $172 per billable TB), bringing the total cost of a Nexenta Enterprise licensed system to $9,855.

Price of Promise box

Costs for the Promise M610i are relatively simple to calculate: there is the cost of the chassis and the cost of the drives. The breakdown of those costs is below.

Part                 Number    Cost         Total
Promise M610i        1         $4,170.00    $4,170.00
HDD - WD 1TB RE3     16        $140.00      $2,240.00
Total                                       $6,410.00

How we tested with Iometer

Our tests are all run from Iometer using a custom configuration; the .icf configuration file can be found here. We ran the following tests, starting at a queue depth of 9, ending at a queue depth of 33, and stepping by 3. This takes each system from below a queue depth of 1 per drive to around 2 per drive, depending on the number of drives in the storage system being tested; the short sketch below makes the per-drive numbers concrete.
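This one-liner is purely illustrative (it is not part of the test harness); it just prints the per-drive queue depth at each step for the 20-disk ZFS box and the 16-disk Promise M610i:

# Illustrative only: per-drive queue depth at each tested queue depth
seq 9 3 33 | awk '{ printf "QD %2d -> %.2f per drive (20 disks), %.2f per drive (16 disks)\n", $1, $1/20, $1/16 }'

At a queue depth of 9 that is roughly 0.45 outstanding I/Os per drive on the 20-disk system, and at 33 roughly 2.06 per drive on the 16-disk system.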

The tests were run in this order, and each test was run for 3 minutes at each queue depth.

4k Sequential Read

4k Random Write

4k Random 67% Write 33% Read

4k Random Read

8k Random Read

8k Sequential Read

8k Random Write

8k Random 67% Write 33% Read

16k Random 67% Write 33% Read

16k Random Write

16k Sequential Read

16k Random Read

32k Random 67% Write 33% Read

32k Random Read

32k Sequential Read

32k Random Write

The tests were deliberately not arranged in an order that would bias the results. We created the profile once and then ran it against each system. Before testing, a 300GB iSCSI target was created on each system, formatted with NTFS defaults, and then Iometer was started. Iometer created a 25GB working set and then began running the tests. A sketch of the target setup on the ZFS systems is shown below.
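For reference, this is roughly what creating such a target looks like on the OpenSolaris/Nexenta side using ZFS and COMSTAR. A minimal sketch, assuming the COMSTAR packages are installed; the pool and volume names are our illustration, not the exact commands we ran:

# Create a 300GB zvol to back the iSCSI target ('tank' is an assumed pool name)
zfs create -V 300g tank/iometer
# Register the zvol with COMSTAR as a SCSI logical unit
sbdadm create-lu /dev/zvol/rdsk/tank/iometer
# Expose the LU to all initiators (no host/target group restrictions)
stmfadm add-view <GUID reported by sbdadm>
# Enable the iSCSI target service and create a target
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target

The Windows initiator then logs in to this target, and the new disk is brought online and formatted with NTFS defaults.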

While reviewing these results, bear in mind that the longer the tests run, the better the OpenSolaris and Nexenta systems should perform, due to L2ARC caching. The L2ARC populates slowly (at approximately 7MB/sec) to reduce wear on the MLC SSDs; at that rate, warming the entire 25GB working set would take roughly an hour (25,600MB / 7MB/s ≈ 3,660 seconds). Run a test for a significant amount of time and the caching should improve the number of IOPS the OpenSolaris and Nexenta systems are able to achieve.
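If you want to watch the L2ARC warm up during a run, ZFS will show the cache devices filling. A minimal example, again assuming a pool named tank:

# Print per-device statistics every 10 seconds; the 'cache' section
# shows the L2ARC SSDs' allocated space growing as reads are cached
zpool iostat -v tank 10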

Comments

  • Penti - Wednesday, October 6, 2010 - link

    And a viable alternative still isn't available. How are Nexenta and the community supposed to get driver support and support for new hardware when Oracle has closed the development kernel (SXDE is closed source)? Maybe, just maybe, they can use the retail Solaris 11 kernel, if it's released in a functioning form that can be piped in with the existing software and distro. They aren't going to develop it themselves, and the vendors have no reason to give the code/drivers to anybody but Oracle. Continuing the OpenSolaris kernel means creating a new operating system. It means you won't get the latest ZFS updates and tools any more, at least not until they are in the normal S11 release. It means you can't expect the latest driver updates and so on either. You can continue to use it on today's hardware, but tomorrow it might be useless; you might not find working configurations.

    It's not clear that Nexenta can actually develop their own operating system rather than just a distro; it means they would eventually have to create their own OS with their own kernel, with their own drivers and so on. And it's not clear how much code Oracle will let slip out; it's just clear that they will keep it under wraps until official releases. It is clear, however, that there won't be any distro for them to base it on, and any and all forks would be totally dependent on what Nexenta (Illumos) manages to do. It will quickly get outdated without updates flowing in all the time, and those came from Sun.
  • andersenep - Wednesday, October 6, 2010 - link

    OpenIndiana/Illumos runs the same latest-and-greatest pool/ZFS versions as the most recent Solaris 10 update.

    Work continues on porting newer pool/ZFS versions to FreeBSD which has plenty of driver support (better than OpenSolaris ever did).

    A stated goal of the Illumos project is to maintain 100% binary compatibility with Solaris. If Oracle decides to break that compatibility, intentionally or not, it will truly become a fork. Development will still continue.

    Even if no further development is made on ZFS, it's still an absolutely phenomenal filesystem. How many years now has Apple been using HFS+? FAT is still around in everything. If all development on ZFS stopped today, it would still remain an absolutely viable filesystem for many years to come. There is nothing else currently out there that even comes close to its feature set.

    I don't see how ZFS being under Oracle's control makes it any worse than any other open source filesystem. The source is still out there, and people are free to do what they want with it within the CDDL terms.

    This idea that just because the OpenSolaris DISTRO has been discontinued, that everything that went into it is no longer viable is silly. It is like calling Linux dead because Mandriva is dead.
  • Guspaz - Wednesday, October 6, 2010 - link

    Thanks for mentioning OpenIndiana. I've been eagerly awaiting IllumOS to be built into an actual distribution to give me an upgrade path for my home OpenSolaris file server, and I look forward to upgrading to the first stable build of OpenIndiana.

    I'm currently running a dev build of OpenSolaris since the realtek network driver was broken in the latest stable build of OpenSolaris (for my chipset, at least).
  • Mattbreitbach - Wednesday, October 6, 2010 - link

    I believe all of the current Hypervisors support this. Hyper-V does, as does XenServer. I have not done extensive testing with ESXi, but I would imagine that it supports it also.
  • joeribl - Wednesday, October 6, 2010 - link

    "Nexenta is to OpenSolaris what OpenFiler or FreeNAS is to Linux."

    FreeNAS has always been FreeBSD based, not Linux. It does however provide ZFS support.
  • Mattbreitbach - Wednesday, October 6, 2010 - link

    I should have caught that - thanks for the info. I've edited the article to reflect as such.
  • vermaden - Wednesday, October 6, 2010 - link

    ... with deduplication and other features. You can grab an ISO build or a VirtualBox appliance here: http://blog.vx.sk/archives/9-Pomozte-testovat-ZFS-...

    It would be great to see how FreeBSD performs (8.1 and 9-CURRENT) on that hardware. I can help you configure FreeBSD for these tests if you would like; for example, by default FreeBSD does not enable AHCI mode for SATA drives, which increases random performance a lot.

    Anyway, great article about ZFS performance on nice piece of hardware.
  • Mattbreitbach - Wednesday, October 6, 2010 - link

    In Hyper-V it is called a Differencing disk: you have a parent disk that you build and do not modify. You then create a "differencing disk" that uses the parent disk as its source and writes any changes out to the differencing disk. This way you can maintain all core OS files in one image and write any changes out to child disks, which allows the storage system to cache the core OS components once, with any access to those components coming directly from the cache.

    I believe that Xen calls it a differencing disk also, but I do not currently have a Xen Hypervisor running anywhere that I can check quickly.
  • gea - Wednesday, October 6, 2010 - link

    new: Version 0.323
    napp-it ZFS appliance with Web-UI and online-installer for NexentaCore and Openindiana

    Napp-it, a project to build a free "ready to run" ZFS web and NAS appliance with a Web-UI and online installer, supports NexentaCore and OpenIndiana (the free successor of OpenSolaris) as of version 0.323. With its online installer, you will have your ZFS server running with all services and tools within minutes.

    Features
    NAS fileserver with AFP (incl. Time Machine and Zero Config), SMB with ACLs, AD support and users/groups
    SAN server with iSCSI (COMSTAR) and NFS for Xen or VMware ESXi
    Web-Server, FTP
    Database-Server
    Backup-Server
    newest ZFS features (highest security with parity and copy-on-write, deduplication, RAID-Z3, unlimited snapshots via Windows Previous Versions, working ACLs, online pool test with data refresh, hybrid pools, expandable data pools: simply add controllers or disks, ...)

    included Tools:
    bonnie Pool-Performancetest
    iperf Net-Performancetest
    midnight commander
    ndmpcopy Backup
    rsync
    smartmontools
    socat
    unzip

    Management:
    remote via Web-UI and Browser

    Howto with NexentaCore:
    1. insert NexentaCore CD and install
    2. login as root and enter:

    wget -O - www.napp-it.org/nappit | perl

    During first installation you have to enter a MySQL password and select Apache with the space key.

    Howto with OpenIndiana (free successor of OpenSolaris):
    1. Insert OpenIndiana CD and install
    2. login as admin, open a terminal and enter su to get root permissions and enter:

    wget -O - www.napp-it.org/nappit | perl

    The AFP server is currently installed only on Nexenta.

    that's all, no step 3!
    You can now remotely manage this Mac/PC NAS appliance via browser.

    Details
    www.napp-it.org

    running Installation
    www.napp-it.org/pop_en.html
  • Mattbreitbach - Wednesday, October 6, 2010 - link

    Very neat - I am installing OpenIndiana on our hardware right now and will test out the Napp-it application.
