Promise M610i

The Promise M610i has been our go-to unit for the last several years in our datacenter.  The M610i is a hardware-based iSCSI/SATA storage unit that lets you build your own SAN with whatever hard drives you want.  This reduces vendor lock-in for hard drives and significantly reduces the cost of the storage system.

We've found them to be reliable and inexpensive, and they perform well for the price point.  Over the years we've populated Promise systems with everything from 250GB to 1TB SATA drives and everything in between.  Performance has remained relatively static, though, because the spindle count and the 7200RPM rotational speed of those spindles have not changed.

The Promise systems support RAID 0, 1, 5, 6, 10, and 1E (a form of RAID 10 that allows you to use an odd number of drives).  They are built around a hardware RAID controller and feature dual gigabit Ethernet ports that can be bonded together.  Each unit also includes a web-based management interface, automatic notifications, and a host of LEDs that indicate power, activity, and failed drives.
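As a rough illustration of how those RAID levels trade capacity for redundancy, the sketch below computes usable space for a fully populated 16-bay unit.  The 1TB drive size is hypothetical and the formulas are the generic ones for each level, not vendor-published figures.

    # Generic usable-capacity formulas for a fully populated 16-bay unit.
    # The 1TB drive size is hypothetical; real arrays lose a bit more to metadata.
    drives, size_tb = 16, 1.0

    usable_tb = {
        "RAID 0":  drives * size_tb,         # striping only, no redundancy
        "RAID 5":  (drives - 1) * size_tb,   # one drive's worth of parity
        "RAID 6":  (drives - 2) * size_tb,   # two drives' worth of parity
        "RAID 10": drives / 2 * size_tb,     # striped mirrors (even drive count)
        "RAID 1E": drives / 2 * size_tb,     # mirrored stripe; also works with an odd drive count
    }

    for level, tb in usable_tb.items():
        print(f"{level:>7}: {tb:.0f} TB usable")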

For someone who is just starting out in the SAN world, the M610i is a very attractive option that requires little prior experience.  The only drawbacks show up when you want to expand a unit or want better caching.  The Promise system allows a maximum of 2GB of RAM for caching, so if you want additional cache you'll have to shell out for a much more expensive unit.  The Promise unit also does not allow adding additional JBOD enclosures, which limits you to a maximum of 16 spindles per system.  We would have loved to keep using the M610i's if we could have increased the spindle count.

Overall our experiences with the M610i units have been very good.  We plan to do an in-depth review of one of our M610i units at a later date to give better insight into their management interface and feature set.

Comments

  • L. - Wednesday, March 16, 2011 - link

    Too bad you already have the 15k drives.

    2) I wanted to say this earlier, but I'm quite confident that SLC is NOT required for a SLOG device; with current wear leveling, unless you actually write more than <MLC disk capacity> / day, there is no way you'll ever need the SLC's extended durability (a rough version of this math appears at the end of this comment).

    3) Again, MLC SSD's, good stuff

    4) Yes again

    5) not too shabby

    6) Why use 15k or 7k2 rpm drives in the first place?

    All in all, a nice project; just too bad you have to start from used equipment.

    In my view, you can easily trash both your similar system and Anandtech's test system and simply go for what the future is going to be anyway:
    RAID-10 MLC drives, 48+ RAM, 4 CPUs (yes, those MLCs are going to perform so much faster that you will need this - quite a fair chance you'll need AMD for that, as 4-socket is their place) and, mainly - this is the hardest part - SATA 6Gb/s * many, with a controller that can actually handle the bandwidth.

    Overall you'd get a much simpler, faster, and cleaner solution (you might need to upgrade your networking, though, to match the rest).
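    A rough back-of-the-envelope version of the endurance arithmetic in point 2, using purely hypothetical figures (a 160GB MLC drive rated for roughly 3,000 program/erase cycles) that are assumptions rather than numbers from this discussion:

        # Hypothetical figures only; they illustrate the wear-leveling argument,
        # not any specific drive discussed in the article or comments.
        capacity_gb = 160          # assumed MLC SSD capacity
        pe_cycles = 3000           # assumed rated program/erase cycles for MLC NAND
        writes_per_day_gb = 160    # assumed SLOG write volume (one full drive per day)

        write_budget_gb = capacity_gb * pe_cycles
        lifetime_days = write_budget_gb / writes_per_day_gb
        print(f"~{lifetime_days / 365:.1f} years to reach the rated wear limit")  # ~8.2 years

    Even at a full drive write every day, the rated wear budget works out to several years; a heavier write rate or lower-rated NAND changes the math accordingly.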
  • L. - Wednesday, March 16, 2011 - link

    Of course, 6 months later... it's not the same equation ;) Sorry for the necro
  • B3an - Tuesday, October 5, 2010 - link

    I like seeing stuff like this on Anand. It's a shame it doesn't draw as much interest as even the poor Apple articles.
  • Tros - Tuesday, October 5, 2010 - link

    Actually, I was just hoping to see a ZFS vs HFS+ comparison for the higher-end Macs. But with the given players (Oracle, Apple), I don't know if the drivers will ever be officially released.
  • Taft12 - Wednesday, October 6, 2010 - link

    Doesn't it? This interests me greatly, and judging by the number of comments it is as popular as any article about the latest video or desktop CPU tech.
  • greenguy - Wednesday, October 6, 2010 - link

    I have to say, kudos to you Anand for featuring an article about ZFS! It is truly the killer app for filesystems right now, and nothing else is going to come close to it for quite some time. What use is performance if you can't automatically verify that your data (and the system files that tell your system how to manipulate that data) is what it was the last time you checked?

    You picked up on the benefits of the SSD (low latency) before anyone else, so it is no wonder you've figured out the benefits of ZFS earlier than most of your compatriots as well. Well done.
  • elopescardozo - Tuesday, October 5, 2010 - link

    Hi Matt,
    Thank you for the extensive report. There are a few unexpected results in your testing. I find the difference between Nexenta and OpenSolaris hard to understand, unless it is due to misalignment of the IO in the case of Nexenta.
    A zvol (the basis for an iSCSI volume) is created on top of the ZFS pool with a certain block size. I believe the default is 8kB. Next you initialize the volume and format it with NTFS. By default the NTFS structure starts at sector 63 (sixty-three, not a typo!), which means that every other 4kB cluster (the NTFS allocation size) falls across a zvol block boundary. That has a serious impact on performance. I saw a report of a 70% improvement after proper alignment.
    Is it possible that the OpenSolaris and Nexenta pools were different in this respect, either because of different zvol block sizes (e.g. 8kB for Nexenta, 128kB for OpenSolaris - larger blocks mean fewer "boundary cases") or differences in how the volumes were initialized and formatted?
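    A minimal sketch of that alignment arithmetic, assuming 512-byte sectors, 4kB NTFS clusters, and the 8kB default zvol block size mentioned above:

        # Show which NTFS clusters straddle a zvol block boundary for a given
        # partition start sector.  The sizes are the defaults discussed above.
        SECTOR = 512        # bytes per sector
        CLUSTER = 4096      # NTFS allocation unit
        ZVOL_BLOCK = 8192   # assumed zvol block size

        def straddling_clusters(start_sector, clusters=8):
            crossing = []
            for k in range(clusters):
                start = start_sector * SECTOR + k * CLUSTER
                end = start + CLUSTER - 1
                if start // ZVOL_BLOCK != end // ZVOL_BLOCK:
                    crossing.append(k)
            return crossing

        print(straddling_clusters(63))    # [0, 2, 4, 6] -> every other cluster crosses a boundary
        print(straddling_clusters(2048))  # []           -> a 1MB-aligned partition stays clean

    With the partition starting at sector 63, every other 4kB cluster spans two zvol blocks, so a single cluster write touches two blocks; starting the partition at 1MB (sector 2048) keeps every cluster inside one block.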
  • mbreitba - Tuesday, October 5, 2010 - link

    It's possible that sector alignment could be a problem, but I believe that in the build we tested, the default block size was set to 128kB, which was identical to OpenSolaris. If that has changed, then we should re-test with the newest build to see if that makes any difference.
  • cdillon - Tuesday, October 5, 2010 - link

    Windows Server 2008 aligns all newly created partitions at 1MB, so his NTFS block access should have been properly aligned by default.
  • Mattbreitbach - Tuesday, October 5, 2010 - link

    I was unaware that Windows 2008 correctly aligned NTFS partitions now. Thanks for the info!
