Promise M610i

The Promise M610i has been our go-to unit in our datacenter for the last several years. The M610i is a hardware-based iSCSI/SATA storage unit that lets you build your own SAN with whatever hard drives you want. This reduces vendor lock-in for hard drives and significantly reduces the cost of the storage system.

We've found them to be reliable, inexpensive, and good performers for the price point. Over the years we've populated Promise systems with everything from 250GB to 1TB SATA drives. Performance has remained relatively static, though, because the spindle count and the 7200RPM rotational speed of those spindles have not changed.

The Promise systems support RAID 0, 1, 5, 6, 10, and 1E (a form of RAID 10 that allows an odd number of drives). They are built around a hardware RAID controller and feature dual gigabit Ethernet ports that can be bonded together. Each unit also includes a web-based management interface, automatic notifications, and a host of LEDs that indicate power, activity, and failed drives.

For someone just starting out in the SAN world, the M610i is a very attractive option that requires little experience. The only drawbacks appear when you want to expand the units or improve caching. The Promise system allows a maximum of 2GB of RAM for caching, so if you want additional caching you'll have to shell out for a much more expensive unit. The Promise unit also does not allow additional JBOD enclosures to be attached, which limits you to a maximum of 16 spindles per system. We would have loved to continue using the M610i if we could have increased the spindle count.

Overall our experiences with the M610i units have been very good. We plan to do an in-depth review of one of our M610i units at a later date to give better insight into their management and feature set.

Comments

  • MGSsancho - Tuesday, October 5, 2010 - link

    I haven't tried this myself yet, but how about using 8KB blocks and jumbo frames on your network? Possibly lower throughput from the padding needed to fill a 9KB frame in exchange for lower latency? I have no idea, as this is just a theory. Folks in the #opensolaris IRC channel have always recommended 128K or 64K depending on the data.
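
    A minimal sketch of the kind of tuning described above, assuming a hypothetical pool named "tank", an iSCSI-backed zvol, and an e1000g0 interface on an OpenSolaris/Nexenta-style system; names and values are illustrative only:

        # A zvol's block size is fixed at creation time via volblocksize;
        # 8KB is the size suggested in the comment above.
        zfs create -V 100G -o volblocksize=8K tank/iscsivol

        # For filesystem datasets, recordsize can be changed at any time
        # (it only affects newly written files).
        zfs set recordsize=128K tank/data

        # Enable ~9000-byte jumbo frames on the storage-facing NIC.
        dladm set-linkprop -p mtu=9000 e1000g0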
  • solori - Wednesday, October 20, 2010 - link

    One easy way to check this would be to export the pool from OpenSolaris and directly import it to NexentaStor and re-test. I think you'll find that the differences - as your benchmarks describe - are more linked to write caching at the disk level than partition alignment.

    NexentaStor is focused on data integrity, and tunes for that very conservatively. Since SATA disks are used in your system, NexentaStor will typically disable the disk write cache (a write performance hit), while OpenSolaris may typically disable device cache flush operations (a write performance benefit). These two differences alone can account for the benchmark differences you're seeing.

    Also, some "workstation" tuning includes the disabling of ZIL (performance benefit). This is possible - but not recommended - in NexentaStor but has the side effect of risking application data integrity. Disabling the ZIL (in the absence of SLOG) will result in synchronous writes being committed only with transaction group commits - similar performance to having a very fast SLOG (lots of ARC space helpful too).
  • fmatthew5876 - Tuesday, October 5, 2010 - link

    I'd be very interested to see how FreeBSD ZFS benchmark results would compare to Nexenta and Open Solaris.
  • mbreitba - Tuesday, October 5, 2010 - link

    We have benchmarked FreeNAS's implementation of ZFS on the same hardware, and the performance was abysmal. We've considered looking into the latest releases of FreeBSD but have not completed any of that testing yet.
  • jms703 - Tuesday, October 5, 2010 - link

    Have you benchmarked FreeBSD 8.1? There were a huge number of performance fixes in 8.1.

    Also, when was this article written? OpenSolaris was killed by Oracle on August 13th, 2010.
  • mbreitba - Tuesday, October 5, 2010 - link

    There was a lot of work on this article just prior to the official announcement. The development of the Illumos foundation and the subsequent OpenIndiana has been so rapid that we wanted to get this article out the door before diving into OpenIndiana and any other OpenSolaris derivatives. We will probably add more content at a later date covering the demise of OpenSolaris and the open-source alternatives that have started popping up.
  • MGSsancho - Tuesday, October 5, 2010 - link

    Not to mention that projects like illumos are currently not recommended for production; at the moment they are only meant as a base for other distros (OpenIndiana). Then there is Solaris 11, due soon. I'll try out the Express version when it's released.
  • cdillon - Tuesday, October 5, 2010 - link

    FreeNAS 0.7.x is still using FreeBSD 7.x, and the ZFS code is a bit dated. FreeBSD 8.x has newer ZFS code (v15). Hopefully very soon FreeBSD 9.x will have the latest ZFS code (v24).
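
    For reference, a quick way to see which pool and filesystem versions a given system supports, and which version an existing pool is running (pool name is hypothetical):

        zpool upgrade -v        # pool versions supported by this system
        zpool get version tank  # version of an existing pool
        zfs upgrade -v          # supported ZFS filesystem versions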
  • piroroadkill - Tuesday, October 5, 2010 - link

    This is relevant to my interests, and I've been toying with the idea of setting up a ZFS based server for a while.

    It's nice to see the features it can use when you have the hardware for it.
  • cgaspar - Tuesday, October 5, 2010 - link

    You say that all writes go to a log in ZFS. That's just not true. Only synchronous writes below a certain size go into the log (either built into the pool, or a dedicated log device). All writes are held in memory in a transaction group, and that transaction group is written to the main pool at least every 10 seconds by default (in OpenSolaris - it used to be 30 seconds, and still is in Solaris 10 U9). That's tunable, and commits will happen more frequently if required, based on available ARC and data churn rate. Note that _all_ writes go into the transaction group - the log is only ever used if the box crashes after a synchronous write and before the txg commits.

    Now for the caution - you have chosen SSDs for your SLOG that don't have a backup power source for their on-board caches. If you suffer power loss, you may lose data. Several SLC SSDs have recently been released that have a supercapacitor or other power source sufficient to write cached data to flash on power loss, but the current Intel lineup doesn't have it. I believe the next generation of Intel SSDs will.
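
    The transaction group commit interval mentioned above is tunable on OpenSolaris-era systems; a minimal sketch, with an illustrative value rather than a recommendation:

        # /etc/system entry (takes effect after a reboot); value is in seconds.
        # set zfs:zfs_txg_timeout = 5

        # The same variable can be changed on a live kernel with mdb.
        echo "zfs_txg_timeout/W0t5" | mdb -kw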
