Shortcomings of OpenSolaris

OpenSolaris, while a great platform for a storage system, lacks some features that we consider necessary for a dedicated storage array.  One thing that never worked quite right was the LEDs on the front of the chassis.  It was very difficult to tell which drive was which after they were installed: the drive names shown in the operating system do not correspond to the physical drives in any consistent way.  This would make troubleshooting a drive failure very difficult, as you would have no reliable way to identify the failed drive.  Ideally, a red LED should light up beside the failed drive so that a technician can quickly swap the correct one.

Another shortcoming is the lack of a built-in web GUI.  The Promise system comes with a web interface to create, destroy, and manage logical volumes; OpenSolaris has no such interface, so everything is done from the command line.  Granted, once you've become familiar with those command-line tools, it's not terrible to create and destroy volumes, but it would be nice to have a GUI that offered the same control while making the system easier for first-timers to manage.

The last and possibly most important shortcoming of OpenSolaris is the lack of an automatic failure notification system.  No email goes out to page a system administrator when a drive dies, so you may never know that a failure has occurred.  This presents a very clear danger in a datacenter environment, where most of us simply expect to be notified when there is a problem.  The Promise solution handles this very well: all you have to do is enter an SMTP server address and an email address to receive the notification messages.

All of these shortcomings can be solved with custom scripting within OpenSolaris.  An even easier solution is simply to use Nexenta, which already has the LEDs and notifications figured out; it's very simple to configure Nexenta to notify you of any failures.

Another solution is to buy third-party LED/FMA code.  We have tried the SANtools package, and it seems to work well for enabling the LEDs, but there is still some work to be done before it is as easy as Nexenta.  If you use the SANtools code to control the LEDs, you will still need to write scripts that poll FMA, send notifications, and launch the SANtools script to control the LEDs.  You can find the SANtools software on the SANtools website.

While it is certainly possible to script all of this with FMA, I'm not interested in reinventing the wheel.  Until someone writes this code and contributes it to the OpenSolaris project, it is simply not practical for most people to use OpenSolaris directly.  OpenSolaris should have code built into the core for notifying the system administrator and for lighting the LED beside the correct drive.
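To make the scripting gap concrete, here is a minimal sketch of the kind of glue code we mean: a script that polls `fmadm faulty`, mails an alert if anything shows up, and could then hand off to a SANtools script to light the right LED.  This is a hypothetical illustration, not tested against a production FMA install; the SMTP host, recipient address, SANtools path, and the simplistic output parsing are all placeholder assumptions.

```python
#!/usr/bin/env python
# Hypothetical sketch: poll FMA for faults and e-mail an alert.
# The host names, addresses, and SANtools path below are placeholders.
import subprocess
import smtplib
from email.mime.text import MIMEText

SMTP_HOST = "smtp.example.com"   # placeholder SMTP relay
ALERT_TO = "admin@example.com"   # placeholder recipient

def parse_faulty(output):
    """Return a list of fault lines from `fmadm faulty` output.

    Treats any non-empty line that isn't a header or rule as a fault
    record; a real parser would match the actual fmadm output format.
    """
    faults = []
    for line in output.splitlines():
        line = line.strip()
        if line and not line.startswith(("---", "TIME", "EVENT-ID")):
            faults.append(line)
    return faults

def notify(faults):
    """Mail the list of fault lines to the administrator."""
    msg = MIMEText("\n".join(faults))
    msg["Subject"] = "FMA fault detected"
    msg["From"] = ALERT_TO
    msg["To"] = ALERT_TO
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    out = subprocess.run(["fmadm", "faulty"],
                         capture_output=True, text=True).stdout
    faults = parse_faulty(out)
    if faults:
        notify(faults)
        # Here one could also invoke the SANtools LED script, e.g.:
        # subprocess.run(["/opt/santools/led.sh", "on", disk_id])
```

Run from cron every few minutes, something like this would cover the notification gap; mapping a fault back to a physical slot for the LED is the harder part, which is exactly what Nexenta and SANtools already solve.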

Comments

  • Mattbreitbach - Tuesday, October 5, 2010 - link

    Indeed you can, which is one of the most exciting parts about using software based storage appliances. Nexenta really excels in this area, offering iSCSI, NFS, SMB, and WebDAV with simple mouse clicks.
  • MGSsancho - Tuesday, October 5, 2010 - link

    or a single command!
  • FransUrbo - Wednesday, January 11, 2012 - link

    Would be really nice to see how ZoL compares. It's in no way optimized yet (current work is on getting the core functionality stable, which IMHO it is), so it would have no chance against OpenSolaris or Nexenta, but hopefully it's comparable to the Promise rack.

    http://zfsonlinux.org/
  • gfg - Tuesday, October 5, 2010 - link

    NAS is extremely cost effective in a data center if a large majority of NFS/CIFS users are more interested in capacity than performance. NDMP can be very efficient for backups, and the snapshot/multi-protocol aspects of NAS systems are fairly easy to manage. Some of the larger vendor NAS systems can support 100+ TB per NAS fairly effectively.
  • bhigh - Wednesday, October 6, 2010 - link

    Actually, OpenSolaris and Nexenta can act as a SAN device using COMSTAR. You can attach to them with iSCSI, FC, InfiniBand, etc. and use any zvols as raw SCSI targets.
  • JGabriel - Wednesday, October 6, 2010 - link

    Also, "Testing and Benchmarking"?

    Doesn't that mean the same thing and isn't it redundant? See what I did there?

  • Fritzr - Thursday, October 7, 2010 - link

    This is similar to the NAS<>SAN argument. They are used in a similar manner, but have very different purposes.

    Testing. You are checking to see whether the item's performance meets your needs, and looking for bugs or other problems, including documentation and support.

    Benchmarking. You are running a series of test sets to measure the performance. Bugs & poor documentation/support may abort some of the measuring tools, but that simply goes into the report of what the benchmarks measured.

    Or in short:
    Test==does it work?
    Benchmark==What does it score on standard performance measures?
  • lwatcdr - Friday, October 8, 2010 - link

    I am no networking expert so please bear with me.
    What are the benefits of a SAN over local drives and/or a NAS?
    I would expect a NAS to have better performance since it would send less data over the wire than a SAN if they both had the same physical connection.
    A local drive/array I would expect to be faster than a SAN since it will not need to go through a network.
    Does it all come down to management? I can see the benefit of having your servers boot over the network and having all your drives in one system. If you set up the servers to boot over the network it would be really easy to replace a server.
    Am I missing something, or are the gains all a matter of management?
  • JohanAnandtech - Sunday, October 10, 2010 - link

    A NAS usually has worse performance than a similar SAN since there is a file system layer on the storage side. A SAN only manages blocks and thus has fewer layers, making it more efficient.

    A local drive array is faster, but it is less scalable and, depending on the setup, it is harder to give it a large read/write cache: you are limited by the amount of RAM your cache controller supports. In a software SAN you can use block-based caches in the RAM of your storage server.

    Management advantages over local drives are huge: for example, you can plug in a small ESXi/Linux flash drive which contains only the hypervisor/OS, and then boot everything else from the SAN. That means that chances are good you will never have to touch your server during its lifetime, and you can handle all storage and VM needs centrally. Add to that high availability and the flexibility to move VMs from one server to another, and so on.
  • lwatcdr - Monday, October 11, 2010 - link

    But that layer must be executed somewhere; I thought the decrease in data sent over the physical wire would make up for the extra software cost on the server side.
    Besides, you would still want a NAS even with a SAN for shared data. I am guessing that you could have a NAS serve data from the SAN if you needed shared directories.
    I also assume that since most SANs are on a separate storage network, the SAN is mainly used to provide storage to servers, and the servers then provide data to clients on the LAN.
    The rest of it seems very logical to me in a large setup. I am guessing that if you have a really high-performance database server, you might use a DAS instead of a SAN, or dedicate a SAN server just to the database server.
    Thanks, I am just trying to educate myself on SANs vs NAS vs DAS.
    Since I work at a small software development firm, our server setup is much simpler than the average data center's, so I don't get to deal with this level of hardware often.
    However, I am thinking that maybe we should build a SAN and storage network just for our rack.
