After this exhaustive build and testing process, we have identified several areas where we could have improved the original build.

Improved CPU

When we initially chose the hardware components, we thought we would not need very much CPU.  Although we are not doing any type of parity with our storage, we neglected to account for the checksumming that ZFS does to maintain data integrity.  This checksumming consumes significantly more processor time than we had originally anticipated: many tests pushed CPU utilization to 70% or higher, and we believe that at that level of utilization there is significant I/O contention.  Our next ZFS-based storage system will probably be built on a dual-socket platform with higher-clocked (and possibly higher core count) CPUs, giving additional headroom for checksumming and allowing us to enable more CPU-intensive features such as deduplication and compression.  The CPU is not a noticeable bottleneck when testing at gigabit Ethernet speeds, but in additional benchmarking over 20Gbps InfiniBand we were able to max out the CPU in the ZFS server well before approaching the limits of the network.
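
Checksumming is always on in ZFS, while deduplication and compression are per-dataset properties that add further CPU load when enabled. As a minimal sketch of how those properties are toggled, wrapped in Python for illustration (the pool/dataset name tank/vmstore is hypothetical, and the exact property values available depend on the ZFS build):

    import subprocess

    def zfs_set(dataset, prop, value):
        """Set a ZFS dataset property via the standard zfs(1M) CLI."""
        subprocess.run(["zfs", "set", f"{prop}={value}", dataset], check=True)

    # Hypothetical dataset name; each property trades CPU time for capacity,
    # on top of the always-on block checksumming.
    zfs_set("tank/vmstore", "compression", "on")  # compress blocks on write
    zfs_set("tank/vmstore", "dedup", "on")        # hash every block; heaviest CPU cost

    # Inspect the resulting settings and the achieved compression ratio.
    out = subprocess.check_output(
        ["zfs", "get", "checksum,compression,compressratio", "tank/vmstore"])
    print(out.decode())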

More Memory

Going into this project, we did not really know how much main memory we would need in the ZFS SAN, or how much better the system would perform with more of it.  After running tests on smaller datasets that fit entirely into main memory, we decided that our next build would have 48GB of RAM or more.  As a general rule, ZFS will benefit from as much RAM as you can afford to give it.  The ARC (main memory cache) on both Nexenta and OpenSolaris performs extremely well when the dataset fits entirely in RAM, and the performance benefits of a large main memory are huge, although at some point you will run into diminishing returns.  If you are working with a dataset that fits into main memory and is mostly reads, having more memory for the ARC will significantly improve performance; we saw numbers in the hundreds of thousands of IOPS for random reads served entirely from main memory.  On the flip side of the coin, if your workload is mainly writes, then adding 48GB of RAM or more may not give you any noticeable performance advantage.
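
One quick way to tell whether a working set is actually being served from RAM is to watch the ARC statistics that Solaris exposes through kstat. A minimal sketch, assuming the standard zfs:0:arcstats kstat module that OpenSolaris-derived systems provide:

    import subprocess

    def arcstat(field):
        """Read one ZFS ARC statistic from the Solaris kstat framework."""
        out = subprocess.check_output(["kstat", "-p", f"zfs:0:arcstats:{field}"])
        return int(out.decode().split()[-1])  # "-p" output is "name<TAB>value"

    hits, misses = arcstat("hits"), arcstat("misses")
    size_gib = arcstat("size") / 2**30   # current ARC size
    max_gib = arcstat("c_max") / 2**30   # ARC ceiling, roughly RAM minus reserves

    print(f"ARC: {size_gib:.1f} GiB used of {max_gib:.1f} GiB max, "
          f"hit rate {100.0 * hits / (hits + misses):.1f}%")

A hit rate that stays in the high 90s while the ARC size sits comfortably below its ceiling is a good sign the dataset fits in main memory.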

SAS Drives

We thought ZFS's advanced software could overcome some of the inherent problems with slow spindle speeds, and it did up to a point.  ZFS on OpenSolaris was able to outperform the Promise M610i at essentially the same price point.  However, we feel we left a lot of performance on the table.  The next time we deploy a ZFS server, we plan to use 15k RPM SAS drives instead of 7200 RPM SATA drives for the primary storage.  We suspect we could have easily doubled the performance of our ZFS box in certain tests by using 15k RPM SAS drives.  The downsides of SAS drives are increased cost and decreased capacity, but those tradeoffs will be worthwhile for us if we can double the IOPS, especially on write operations, where all transactions have to be committed to disk as quickly as possible.  Reads may not be affected as much, since many of the reads are already being served from SSD storage, and having SAS drives feed the SSDs would probably not increase overall performance unless your working set is large enough to exceed the total capacity of the SSDs.
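
The rough doubling we expect falls straight out of the mechanics of a spinning drive: random IOPS is limited by average seek time plus half a rotation. A back-of-the-envelope sketch using typical catalog figures (assumed values, not measurements of our specific drives):

    def spindle_iops(rpm, avg_seek_ms):
        """Rough random IOPS for one spindle: average seek plus half a rotation."""
        rotational_latency_ms = (60_000 / rpm) / 2
        return 1000 / (avg_seek_ms + rotational_latency_ms)

    # Assumed typical figures for drives of these classes, not measured numbers.
    sata_7200 = spindle_iops(7200, avg_seek_ms=8.5)   # roughly 80 IOPS
    sas_15k = spindle_iops(15000, avg_seek_ms=3.5)    # roughly 180 IOPS

    print(f"7200 RPM SATA ~{sata_7200:.0f} IOPS, 15k RPM SAS ~{sas_15k:.0f} IOPS, "
          f"about {sas_15k / sata_7200:.1f}x per spindle")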

SSD Drives

In the ZFS project, we used SLC SSDs for the ZIL and MLC SSDs for the L2ARC.  If the price of MLC SSDs continues to fall, we will eventually omit the L2ARC and simply use MLC SSDs for all of the primary storage.  When that day comes, we will also need multiple SAS controllers and a much faster CPU in each ZFS box to keep up with all of the IO they will be able to deliver.  Our only concern is wear leveling on the MLC drives and their ability to sustain writes over an extended period of time; only time will tell whether the drives can handle sustained writes in an L2ARC role or as primary storage.
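
For reference, both device types are attached as dedicated vdevs on an existing pool. A minimal sketch with hypothetical pool and device names (the real c#t#d# identifiers come from the format output on the target system):

    import subprocess

    def zpool_add(pool, vdev_type, device):
        """Attach a dedicated log (ZIL) or cache (L2ARC) device to an existing pool."""
        subprocess.run(["zpool", "add", pool, vdev_type, device], check=True)

    # Hypothetical names mirroring our layout: an SLC SSD backs the intent log,
    # an MLC SSD backs the L2ARC read cache.
    zpool_add("tank", "log", "c2t0d0")
    zpool_add("tank", "cache", "c2t1d0")

    # The new vdevs should appear under "logs" and "cache" in the pool layout.
    subprocess.run(["zpool", "status", "tank"], check=True)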

If you decide to use MLC SSDs for the actual storage instead of SATA or SAS hard drives, then you don't need to use cache drives: since all of the storage drives would already be ultra-fast SSDs, there would be no performance gain from also running cache drives. You would still want SLC SSDs for the ZIL, though, as that would reduce wear on the MLC SSDs being used for data storage.

If you plan to attach a lot of SSDs, remember to use multiple SAS controllers. The SAS controller on the motherboard for our ZFS Build project is based on the LSI 1068e chipset.  We could not find specific numbers for our integrated SAS controller, but another LSI 1068-based standalone card, the LSI SAS3080X-R, is able to sustain 140,000 IOPS. With enough SSDs, you could actually saturate the SAS controller. As a general rule of thumb, you may want one additional SAS controller for every 24 MLC SSDs.  Of course, we have not tested with 24 MLC SSDs, so that number could be higher or lower, but based on our initial performance numbers and the perceived performance of our SAS controller, we believe 24 is a good starting point.
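
The 24-drive rule of thumb is simple arithmetic against the controller's ceiling. A quick sketch, where the per-drive figure is an assumed ballpark for MLC SSDs of that generation rather than a measured value:

    # 140,000 IOPS is the LSI SAS3080X-R figure quoted above; the per-drive
    # number is an assumption, not a measurement.
    controller_iops = 140_000
    per_mlc_ssd_iops = 6_000

    drives_to_saturate = controller_iops // per_mlc_ssd_iops
    print(f"Roughly {drives_to_saturate} MLC SSDs saturate one controller")
    # -> about 23 drives, which is where the one-controller-per-24-SSDs
    #    starting point comes from.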

Comments

  • mbreitba - Tuesday, October 5, 2010 - link

    Thanks for the comment on the ZIL.

    As far as using the X25-E's as ZIL devices - when we built the box initially, the X25-E's were the best choice at the time. Future builds will probably include a capacitor-backed SSD.
  • James5mith - Tuesday, October 5, 2010 - link

    For what it's worth, we are currently using roughly 16 of the Supermicro 846-E1 chassis in our storage solutions.

    Drive numbering is from bottom to top, left to right. Don't know if this helps or not.

    5 11 17 23
    4 10 16 22
    3 9 15 21
    2 8 14 20
    1 7 13 19
    0 6 12 18
  • badhack - Tuesday, October 5, 2010 - link

    I would be curious to know how the performance compares to traditional fs caching on Linux w/ ext3 or ext4 with same amount of memory and a few SSD drives.
  • Maveric007 - Tuesday, October 5, 2010 - link

    There are a few options within Linux that would be pretty interesting to see. FS caching and the different schedulers that are available within Linux. Also I would throw out ext3 and replace that with ext4 and xfs. Redhat is now supporting xfs and there are just tons of tunables for xfs compared to the other file systems.
  • badnews - Tuesday, October 5, 2010 - link

    Thanks Matt, I've been following the build over at your blog and this is an excellent article to tie it all together. I hope you follow up with your "things we'd do differently" in future articles. I would also love to see some more benchmarking against more alternatives, e.g. Open-E, or even an off-the-shelf EqualLogic.

    Keep up the good work :)
  • Fallen Kell - Tuesday, October 5, 2010 - link

    Well, I know at least for Solaris 10.... I would suspect that OpenSolaris has it as well by now, since it has been out for at least 4 years that I know of...

    https://<host>:6789
  • mbreitba - Tuesday, October 5, 2010 - link

    You can install the ZFS Web GUI from the Solaris toolkit, but it isn't bundled into OpenSolaris. It is binary compatible, but it doesn't give any good options for iSCSI setup, as it only supported the old iSCSI target rather than the new COMSTAR target.
  • sfc - Tuesday, October 5, 2010 - link

    How can you spend a page talking about how you aren't really worried about the future of Opensolaris, and then have half a paragraph mentioning "oh, btw, it's cancelled"? The project is clearly dead. They stopped releasing source almost a month ago. Oracle has made absolutely no guarantees about when or how source would be released in the future. For all we know, they could release only portions of Solaris Express, and do it months to years after the binaries drop.

    http://opensolaris.org/jive/thread.jspa?messageID=...

    I love ZFS/Opensolaris, I use it at home, but Opensolaris is dead.
  • Mattbreitbach - Tuesday, October 5, 2010 - link

    OpenSolaris is indeed dead as far as development goes, but it's still viable if you want to use the last build released, which is what all of our performance figures are based on. I will be writing some companion articles to this one covering not only the death of OpenSolaris, but also its alternative, OpenIndiana, and the Promise M610i used as a comparison in this article.
  • andersenep - Tuesday, October 5, 2010 - link

    The OpenSolaris project may be dead but ZFS and all the CDDL licensed code is still out there. Illumos, OpenIndiana and a few other distros are still out there and available. Oracle has stated they will continue to release source code after Solaris releases and will also provide binary preview releases in the form of Solaris Express. To say Solaris and ZFS are dead is pretty premature.

    Whatever happens, the existing code is out there. To call it dead is a bit premature. Sure the project that had the name 'OpenSolaris' has been canceled, but everything that made it up (minus a small few closed bits that have already been replaced) lives on.
