Promise M610i

The Promise M610i has been our go-to unit in our datacenter for the last several years. The M610i is a hardware-based iSCSI/SATA storage unit that lets you build your own SAN with whatever hard drives you want. This reduces vendor lock-in for hard drives and significantly reduces the cost of the storage system.

We've found them to be reliable and inexpensive, and they perform well for the price point. Over the years we've populated Promise systems with everything from 250GB to 1TB SATA drives. Performance has remained relatively static, though, since the spindle count and the 7200RPM rotational speed of those spindles stay the same regardless of drive capacity.

The Promise systems support RAID 0, 1, 5, 6, 10, and 1E (a form of RAID 10 that allows an odd number of drives). They are built around a hardware RAID controller and feature dual gigabit Ethernet ports that can be bonded together. Each unit also includes a web-based management interface, automatic notifications, and a host of LEDs that indicate power, activity, and failed drives.

For someone just starting out in the SAN world, the M610i is a very attractive option that requires little prior experience. The only drawbacks surface when you want to expand the units or want better caching. The Promise system allows a maximum of 2GB of RAM for caching, so if you want additional cache you'll have to shell out for a much more expensive unit. The Promise unit also does not allow adding JBOD enclosures, which limits you to a maximum of 16 spindles per system. We would have loved to continue using the M610i's if we could have increased the spindle count.

Overall, our experiences with the M610i units have been very good. We plan on doing an in-depth review of one of our M610i units at a later date to give better insight into the management and feature set of the units.

Comments

  • Mattbreitbach - Tuesday, October 5, 2010 - link

    Indeed you can, which is one of the most exciting parts about using software-based storage appliances. Nexenta really excels in this area, offering iSCSI, NFS, SMB, and WebDAV with simple mouse clicks.
  • MGSsancho - Tuesday, October 5, 2010 - link

    or a single command!
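
    For readers wondering what that single command looks like: on a ZFS-based appliance such as Nexenta, sharing a dataset is just a property change. A minimal sketch, assuming a Solaris-family ZFS system (the pool/dataset name "tank/projects" is made up for illustration):

        # share an existing ZFS dataset over NFS and SMB
        zfs set sharenfs=on tank/projects
        zfs set sharesmb=on tank/projects

        # confirm the share properties took effect
        zfs get sharenfs,sharesmb tank/projects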
  • FransUrbo - Wednesday, January 11, 2012 - link

    Would be really nice to see how ZoL compares. It's in no way optimized yet (current work is on getting the core functionality stable - which IMHO it is), so it would have no chance against OpenSolaris or Nexenta, but hopefully it's comparable to the Promise rack.

    http://zfsonlinux.org/
  • gfg - Tuesday, October 5, 2010 - link

    NAS is extremely cost effective in a data center if a large majority of NFS/CIFS users are more interested in capacity than performance. NDMP can be very efficient for backups, and the snapshots/multi-protocol aspects of NAS systems are fairly easy to manage. Some of the larger vendor NAS systems can support 100+ TB per NAS fairly effectively.
  • bhigh - Wednesday, October 6, 2010 - link

    Actually, OpenSolaris and Nexenta can act as a SAN device using COMSTAR. You can attach to them with iSCSI, FC, InfiniBand, etc. and use any zvols as raw SCSI targets.
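
    As a rough sketch of the COMSTAR workflow described above, exporting a zvol as an iSCSI LUN looks something like this (the pool/volume name is hypothetical, the GUID is a placeholder, and details vary by release):

        # create a 100GB zvol to use as backing store
        zfs create -V 100G tank/vm01

        # enable the STMF framework and the iSCSI target service
        svcadm enable stmf
        svcadm enable -r svc:/network/iscsi/target:default

        # register the zvol as a SCSI logical unit (prints a GUID)
        stmfadm create-lu /dev/zvol/rdsk/tank/vm01

        # expose the LU; with no host/target groups given, it is visible to all initiators
        stmfadm add-view <GUID-from-create-lu>

        # create an iSCSI target for initiators to log in to
        itadm create-target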
  • JGabriel - Wednesday, October 6, 2010 - link

    Also, "Testing and Benchmarking"?

    Doesn't that mean the same thing and isn't it redundant? See what I did there?

  • Fritzr - Thursday, October 7, 2010 - link

    This is similar to the NAS<>SAN argument. They are used in a similar manner, but have very different purposes.

    Testing: you are checking whether the item's performance meets your needs and looking for bugs or other problems, including in documentation and support.

    Benchmarking: you are running a series of test sets to measure performance. Bugs and poor documentation/support may abort some of the measuring tools, but that simply goes into the report of what the benchmarks measured.

    Or in short:
    Test==does it work?
    Benchmark==What does it score on standard performance measures?
  • lwatcdr - Friday, October 8, 2010 - link

    I am no networking expert so please bear with me.
    What are the benefits of a SAN over local drives and/or a NAS?
    I would expect a NAS to have better performance since it would send less data over the wire than a SAN if they both had the same physical connection.
    I would expect a local drive/array to be faster than a SAN since it does not need to go through a network.
    Does it all come down to management? I can see the benefit of having your servers boot over the network and having all your drives in one system. If you set up the servers to boot over the network it would be really easy to replace a server.
    Am I missing something or are the gains all a matter of management?
  • JohanAnandtech - Sunday, October 10, 2010 - link

    A NAS most of the time has worse performance than a similar SAN, since there is a file system layer on the storage side. A SAN only manages blocks, and thus has fewer layers and is more efficient.

    A local drive array is faster, but it is less scalable, and depending on the setup it is harder to give it a large read/write cache: you are limited by the amount of RAM your cache controller supports. In a software SAN you can use block-based caches in the RAM of your storage server (see the sketch after this comment).

    Management advantages over Local drives are huge: for example you can plug a small ESXi/Linux flash drive which only contains the hypervisor/OS, and then boot everything else from a SAN. That means that chances are good that you never have to touch your server during its lifetime and handle all storage and VM needs centrally. Add to that high availability, flexibility to move VMs from one server to another and so on.
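
    To make the block-cache point concrete: on a ZFS-based software SAN such as Nexenta, the ARC (adaptive replacement cache) grows to use most otherwise-idle RAM in the storage server, and it can be observed and capped. A minimal sketch, assuming a Solaris-family system (the 16GB cap is an example value):

        # show the current ARC size in bytes
        kstat -p zfs:0:arcstats:size

        # cap the ARC at 16GB (0x400000000 bytes) via /etc/system, then reboot
        echo "set zfs:zfs_arc_max=0x400000000" >> /etc/system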
  • lwatcdr - Monday, October 11, 2010 - link

    But that layer must be executed somewhere; I thought the decrease in data sent over the physical wire would make up for the extra software cost on the server side.
    Besides, you would still want a NAS even with a SAN for shared data. I am guessing that you could have a NAS serve data from the SAN if you needed shared directories.
    I also assume that since most SANs are on a separate storage network, the SAN is mainly used to provide storage to servers, and the servers then provide data to clients on the LAN.
    The rest of it seems very logical to me in a large setup. I am guessing that if you have a really high-performance database server, you might use DAS instead of a SAN, or dedicate a SAN server just to the database.
    Thanks, I am just trying to educate myself on SANs vs NAS vs DAS.
    Since I work at a small software development firm, our server setup is much simpler than the average data center's, so I don't get to deal with this level of hardware often.
    However, I am thinking that maybe we should build a SAN and storage network just for our rack.
