Shortcomings of OpenSolaris

OpenSolaris, while a great platform for a storage system, lacks some features that we consider necessary for a dedicated storage array.  One thing that never worked quite right was the LEDs on the front of the chassis.  It was very difficult to know which drive was which once they were installed, because the drive names shown in the operating system do not correspond to the physical drives in any consistent way.  That would make troubleshooting a drive failure very difficult.  Ideally, a red LED should light up beside the failed drive so a technician can quickly swap the correct one.

Another shortcoming is the lack of a built-in web GUI.  The Promise system comes with a web interface to create, destroy, and manage logical volumes; OpenSolaris has no such interface, so everything is done from the command line.  Granted, once you've become familiar with the command-line tools, setting up and destroying volumes isn't terrible, but it would be nice to have a GUI that offered the same control while making the system easier for first-timers to manage.
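
For readers who have not used those tools, here is a minimal sketch of what the command-line workflow looks like; the pool name ("tank"), disk device names, and sizes are placeholders and will differ on your system.

    # Create a mirrored pool from two disks (device names are examples)
    zpool create tank mirror c0t0d0 c0t1d0

    # Create a 100GB block volume (zvol) to export over iSCSI
    zfs create -V 100G tank/vol01

    # Create a filesystem dataset with compression enabled
    zfs create -o compression=on tank/shares

    # Check pool health, then destroy a volume that is no longer needed
    zpool status tank
    zfs destroy tank/vol01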

The last, and possibly most important, shortcoming of OpenSolaris is the lack of an automatic notification system.  No email goes out to page a system administrator when a drive dies, so the system can lose a drive without anyone ever knowing.  That is a very real danger in a datacenter environment, because most of us simply expect to be notified when there is a problem.  The Promise solution handles this very well: all you have to do is enter an SMTP server address and an email address to send the notification messages to.

All of these issues can be solved with custom scripting within OpenSolaris.  An even easier solution is to simply use Nexenta, which already has the LEDs and notifications figured out; it is very simple to configure Nexenta to notify you of any failures.

Another solution is to buy third-party LED/FMA code.  We have tried the SANtools package, and it seems to work pretty well for enabling the LEDs, but there is still some work to be done before it is as easy as Nexenta.  Even with the SANtools code controlling the LEDs, you will still need to write some scripts that poll FMA, send notifications, and launch the SANtools script to light the correct LED.  You can find the SANtools software here:

While it is entirely possible to script all of this with FMA, I'm not interested in reinventing the wheel.  Until someone writes that code and contributes it to the OpenSolaris project, it is simply not practical for most people to use OpenSolaris directly.  OpenSolaris should have code built into the core for notifying the system administrator and for lighting the LED next to the correct drive.
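
For anyone who does want to roll their own, a minimal sketch of such a script might look like the following.  It assumes fmadm and a configured mailer (mailx) are available on the box; the recipient address and file paths are placeholders, and the SANtools LED call is left as a comment.

    #!/bin/sh
    # Poll FMA for faults and email the administrator if anything changed.
    ADMIN="admin@example.com"          # placeholder address
    STATE=/var/tmp/fma_last_report     # last report we already mailed

    # Capture the current list of faulted resources.
    fmadm faulty > /tmp/fma_current 2>&1

    # If the report differs from the last one we mailed, send a notification.
    if ! cmp -s /tmp/fma_current "$STATE" 2>/dev/null; then
        mailx -s "FMA fault report on `hostname`" "$ADMIN" < /tmp/fma_current
        # Optionally parse the faulted disk out of the report here and call
        # the SANtools utility to light the corresponding drive LED.
        cp /tmp/fma_current "$STATE"
    fi

Run from cron every few minutes, something like this approximates the notification behavior the Promise array provides out of the box.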

Comments

  • diamondsw2 - Tuesday, October 5, 2010 - link

    You're not doing your readers any favors by conflating the terms NAS and SAN. NAS devices (such as what you've described here) are Network Attached Storage, accessed over Ethernet, and usually via fileshares (NFS, CIFS, even AFP) with file-level access. SAN is Storage Area Network, nearly always implemented with Fibre Channel, and offers block-level access. About the only gray area is that iSCSI allows block-level access to a NAS, but that doesn't magically turn it into a SAN with a storage fabric.

    Honestly, given the problems I've seen with NAS devices and the burden a well-designed one will put on a switch backplane, I just don't see the point for anything outside the smallest installations where the storage is tied to a handful of servers. By the time you have a NAS set up *well* you're inevitably going to start taxing your switches, which leads to setting up dedicated storage switches, which means... you might as well have set up a real SAN with 8Gbps fibre channel and been done with it.

    NAS is great for home use - no special hardware and cabling, and options as cheap as you want to go - but it's a pretty poor way to handle centralized storage in the datacenter.
  • cdillon - Tuesday, October 5, 2010 - link

    The terms NAS and SAN have become rightfully mixed, because modern storage appliances can do the jobs of both. Add some FC HBAs to the above ZFS storage system and create some FC Targets using Comstar in OpenSolaris or Nexenta and guess what? You've got a "SAN" box. Nexenta can even do active/active failover and everything else that makes it worthy of being called a true "Enterprise SAN" solution.

    I like our FC SAN here, but holy cow is it expensive, and it's not getting any cheaper as time goes on. I foresee iSCSI via plain 10G Ethernet and also FCoE (which is 10G Ethernet + FC sharing the same physical HBA and data link) completely taking over the Fibre Channel market within the next decade, which will only serve to completely erase the line between "NAS" and "SAN".
  • mbreitba - Tuesday, October 5, 2010 - link

    The systems as configured in this article are block level storage devices accessed over a gigabit network using iSCSI. I would strongly consider that a SAN device over a NAS device. Also, the storage network is segregated onto a separate network already, isolated from the primary network.

    We also backed this device with 20Gbps InfiniBand, but had issues getting the IB network stable, so we did not include it in the article.
  • Maveric007 - Tuesday, October 5, 2010 - link

    I find iSCSI is closer to a NAS than a SAN, to be honest. The performance gap between iSCSI and a traditional SAN is much larger than the gap between iSCSI and a NAS.
  • Mattbreitbach - Tuesday, October 5, 2010 - link

    iSCSI is block-based storage; NAS is file-based. The transport used is irrelevant. We could use iSCSI over 10GbE, or over InfiniBand, which would increase the performance significantly and probably exceed what the most expensive 8Gb FC gear offers.
  • mino - Tuesday, October 5, 2010 - link

    You are confusing the NAS vs. SAN terminology with the interconnects terminology and vice versa.

    SAN, NAS, DAS ... are abstract methods describing how a data client accesses the stored data.
    --Network Attached Storage (NAS), per definition, is a file/entity-based data storage solution.
    - - - It is _usually_but_not_necessarily_ connected to a general-purpose data network
    --Storage Area Network(SAN), per definition, is a block-access-based data storage solution.
    - - - It is _usually_but_not_necessarily_THE_ dedicated data network.

    Ethernet, FC, InfiniBand, ... are physical data conduits; they are the ones that define which PERFORMANCE class a solution belongs in.

    iSCSI, SAS, FC, NFS, CIFS ... are logical conduits; they are the ones that define which FEATURE CLASS a solution belongs in.

    Today, most storage appliances allow for multiple ways to access the data, many of them simultaneously.

    Therefore, presently:

    Calling a storage appliance, of whatever type, a "SAN" is pure jargon.
    - It has nothing to do with the device "being" a SAN per se
    Calling an appliance, of whatever type, a "NAS" means it is/will be used in the NAS role.
    - It has nothing to do with the device "being" a NAS per se.
  • mkruer - Tuesday, October 5, 2010 - link

    I think there needs to be a new term called SANNAS or snaz short for snazzy.
  • mmrezaie - Wednesday, October 6, 2010 - link

    Thanks, I learned a lot.
  • signal-lost - Friday, October 8, 2010 - link

    Depends on the hardware sir.

    My iSCSI DataCore SAN pushes 20k IOPS for the same reason that their ZFS box does it (RAM caching).

    Fibre Channel SANs will always outperform iSCSI run over crappy switching.
    Currently Fibre Channel maxes out at 8Gbps in most arrays. Even with MPIO, you're better off with an iSCSI system and 10/40Gbps Ethernet if you do it right. It is much cheaper, and you don't have to learn an entirely new networking model (Fibre Channel or InfiniBand).
  • MGSsancho - Tuesday, October 5, 2010 - link

    While technically a SAN, you can easily make it a NAS with a simple "zfs set sharesmb=on", as I am sure you are aware.
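
As the last comment points out, the same pool can serve files in the NAS role with a single property change, while COMSTAR exposes block volumes in the SAN role.  A minimal sketch of both, assuming a pool named "tank", placeholder volume and share names, and that the SMB and COMSTAR/iSCSI target packages are already installed:

    # NAS role: share a dataset over SMB (requires the SMB server service)
    svcadm enable -r smb/server
    zfs create tank/shares
    zfs set sharesmb=on tank/shares

    # SAN role: expose a zvol as an iSCSI target via COMSTAR
    svcadm enable stmf
    zfs create -V 100G tank/vol01
    sbdadm create-lu /dev/zvol/rdsk/tank/vol01      # prints the LU's GUID
    stmfadm add-view <GUID-from-previous-command>
    svcadm enable -r svc:/network/iscsi/target:default
    itadm create-target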
