Fusion-IO: the Pioneer

Fusion-IO is telling everyone who wants to listen that it is much more than a vendor of extremely fast PCIe flash cards. Despite selling quite a few cards to storage giants like NetApp, Fusion-IO wants nothing less than to completely change and conquer the storage market.

Fusion-IO's first successful move was to sell extremely fast ioDrives to companies that live off scale-out applications, such as Facebook and Apple. These companies ditched their traditional SAN environments very quickly: replacing centralized shared storage with a model where hundreds of servers each have local PCIe flash storage gave them up to ten times more storage performance at a fraction of the cost of a high-end SAN system.

Ditching your centralized storage is not for everyone, of course: your application has to handle replication and thus be able to survive the loss of many server nodes. But as we all know, that is exactly what Google, Facebook, and other scale-out companies did: they built applications that replicate data between nodes so that nobody has to worry about a few failing nodes.

The ioDrive: up to 3TB and hundreds of thousands of IOPS

Although scale-out customers were extremely important to Fusion-IO, the company also went after the virtualization market, where centralized storage is king. ioTurbine is a hypervisor plug-in that enables server-side caching on a virtualized host equipped with a Fusion-IO flash card. The beauty is that ioTurbine does not disable the typical goodies that centralized storage offers in a virtualized environment, such as vMotion and High Availability. ioTurbine works with ESXi, Windows Server 2008/2012, and RHEL.

The Fusion-IO ION Data Accelerator is the next-generation SAN: PCIe flash cards inside any decent x86 server, such as the Supermicro 6037 or the HP DL380p. ION is typically used for high-end database clusters, and Fusion-IO promises that this shared storage can deliver no less than one million IOPS.

With the acquisition of NexGen Storage, Fusion-IO is also targeting the midrange market with a “flash pool” kind of product. The key difference is that NexGen Storage can use write-back caching, while most vendors do little or no writing to their flash tier. The Fusion-IO software is also able to provision a certain number of IOPS for each LUN.
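To make that last capability concrete, here is a minimal token-bucket sketch of what provisioning IOPS per LUN amounts to. This is only an illustration of the idea, not Fusion-IO's or NexGen's actual implementation; the LUN names and rates are made up.

```python
import time

class IopsProvisioner:
    """Minimal per-LUN token bucket: each LUN earns tokens at its
    provisioned IOPS rate, and an I/O is admitted only if a token is free."""

    def __init__(self):
        self.rate = {}    # LUN id -> provisioned IOPS
        self.tokens = {}  # LUN id -> available I/O tokens
        self.last = {}    # LUN id -> timestamp of the last refill

    def provision(self, lun, iops):
        self.rate[lun] = float(iops)
        self.tokens[lun] = float(iops)   # allow up to one second of burst
        self.last[lun] = time.monotonic()

    def admit(self, lun):
        now = time.monotonic()
        # Refill tokens earned since the last check, capped at one second's worth
        earned = (now - self.last[lun]) * self.rate[lun]
        self.tokens[lun] = min(self.rate[lun], self.tokens[lun] + earned)
        self.last[lun] = now
        if self.tokens[lun] >= 1.0:
            self.tokens[lun] -= 1.0
            return True    # dispatch the I/O now
        return False       # queue or delay the I/O

# Hypothetical LUNs and rates, purely for illustration
qos = IopsProvisioner()
qos.provision("db-lun", 50_000)
qos.provision("vdi-lun", 10_000)
print(qos.admit("db-lun"))   # True: tokens are available
```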

But more than anything else, the Fusion-IO products offer extreme speeds. Even a single-array NexGen N5 series system targeted at SMBs promises 100K-300K IOPS, more than any of the much more expensive midrange SANs can offer right now.

The fastest product, the 10TB ioDrive Octal, costs around $100K and delivers one million IOPS. Even if those numbers are inflated, it is roughly an order of magnitude faster and cheaper (per GB) than the NetApp “Flash Cache”.

Comments

  • WeaselITB - Tuesday, August 6, 2013

    Fascinating perspective piece. I look forward to the CloudFounders review -- that stuff seems pretty interesting.

    Thanks,
    -Weasel
  • shodanshok - Tuesday, August 6, 2013

    Very interesting article. It basically matches my personal opinion of the SAN market: it is an overpriced one, with much less performance per $$$ than DAS.

    Anyway, with the advent of thin pools / thin volumes in RHEL 6.4 and dm-cache in RHEL 7.0, a commodity, cheap Linux distribution (CentOS costs 0, by the way) basically matches the feature set exposed by most low/mid-end SANs. This means that a cheap server with 12-24 2.5'' bays can be converted to SAN-like duty, with very good results.

    From this point of view, the recent Intel S3500 / Crucial M500 disks are very interesting: the first provides enterprise-certified, high-performance, yet (relatively) low-cost storage, while the second, although not explicitly targeted at the enterprise market, is available at an outstanding capacity/cost ratio (the 1TB version is about 650 euros). Moreover, it also has a capacitor array to prevent data loss in the case of power failure.

    Bottom line: for high-performance, low-cost storage, use a Linux server with loads of SATA SSDs. The only drawback is that you _have_ to know the VGS/LVS CLI, because good GUIs tend to be commercial products and, in any case, for data recovery the CLI remains your best friend.

    A note on RAID levels: while most sysadmins continue to use RAID5/6, I think it is really wrong in most cases. The R/M/W penalty is simply too much on mechanical disks. I've done some tests here: http://www.ilsistemista.net/index.php/linux-a-unix...

    Maybe on SSDs the results are better for RAID5, but the low-performance degraded state (and the very slow/dangerous reconstruction process) remains. (Sketches of the thin-pool setup and of the write-penalty arithmetic follow this comment.)
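For readers who want to try the thin-pool route shodanshok describes, here is a minimal sketch of the LVM workflow, wrapped in Python only so it can run as a script. It assumes root privileges, the lvm2 tools, and two spare SSDs; the device names, volume group name, and sizes are placeholders, and the resulting volume still has to be exported (for example as an iSCSI target) to act as SAN-like storage.

```python
import subprocess

# Placeholder device names: two spare SATA SSDs dedicated to the pool
DEVICES = ["/dev/sdb", "/dev/sdc"]

def run(cmd):
    """Print and execute an LVM command (requires root and lvm2 installed)."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvcreate"] + DEVICES)
run(["vgcreate", "vg_ssd"] + DEVICES)
# Thin pool over ~90% of the volume group, leaving headroom for metadata/snapshots
run(["lvcreate", "--type", "thin-pool", "-l", "90%VG", "-n", "pool0", "vg_ssd"])
# A thin (virtually sized) 2TB volume carved out of the pool
run(["lvcreate", "--type", "thin", "-V", "2T", "--thinpool", "pool0", "-n", "lun0", "vg_ssd"])
# /dev/vg_ssd/lun0 can then be exported to other hosts, e.g. as an iSCSI target
```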
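And to put rough numbers on the read-modify-write penalty mentioned above, a back-of-the-envelope sketch; the drive count and per-disk IOPS figure are assumptions, not measurements from the linked tests.

```python
# Classic small-random-write I/O counts per host write:
#   RAID10 = 2 (two mirrored writes)
#   RAID5  = 4 (read old data, read old parity, write data, write parity)
#   RAID6  = 6 (a second parity block is read and written as well)
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def random_write_iops(disks, per_disk_iops, level):
    """Rough upper bound on sustained random-write IOPS for the array."""
    return disks * per_disk_iops / WRITE_PENALTY[level]

# Assumed figures: 12 drives at ~150 random IOPS each (a 10K RPM ballpark)
for level in ("raid10", "raid5", "raid6"):
    print(level, int(random_write_iops(12, 150, level)))
# -> raid10 900, raid5 450, raid6 300
```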
  • Kyrra1234 - Wednesday, August 7, 2013

    The enterprise storage market is about the value-add you get from buying from the big-name companies (EMC, NetApp, HP, etc.). All of those come with support contracts for replacement gear and help fixing any problems you may run into with the storage system. I'd say the key reasons to buy from some of these big players are:

    * Let someone else worry about maintaining the systems (this is helpful for large datacenter operations where the customer has petabytes of data).
    * The data reporting tools you get from these companies will outshine any homegrown solution.
    * When something goes wrong, these systems will have extensive logs about what happened, and those companies will fly out engineers to rescue your data.
    * Hardware/Firmware testing and verification. The testing that is behind these solutions is pretty staggering.

    For smaller operations, rolling out an enterprise SAN is probably overkill. But if your data and uptime are important to you, enterprise storage will be less of a headache when compared to JBOD setups.
  • Adul - Wednesday, August 7, 2013

    We looked at the Fusion-IO ioDrive and decided not to go that route, as the workloads presented by the virtualized desktops we offer would have killed those units in a heartbeat. We opted instead for a product by GreenBytes for our VDI offering.
  • Adul - Wednesday, August 7, 2013

    See if you can get one of these devices for review :)

    http://getgreenbytes.com/solutions/vio/

    We have hundreds of VDI instances running on this.
  • Brutalizer - Sunday, August 11, 2013

    These GreenBytes servers are running ZFS and Solaris (illumos):
    http://www.virtualizationpractice.com/greenbytes-a...
  • Brutalizer - Sunday, August 11, 2013

    GreenBytes:
    http://www.theregister.co.uk/2012/10/12/greenbytes...

    Also, Tegile is using ZFS and Solaris:
    http://www.theregister.co.uk/2012/06/01/tegile_zeb...

    Who said ZFS is not the future?
  • woogitboogity - Sunday, August 11, 2013

    If there is one thing I absolutely adore about real capitalism, it is these moments where the establishment goes down in flames. Just the thought of their jaws dropping and stammering "but that's not fair!" when they themselves were making a mockery of fair prices with absurd profit margins... priceless. Working with computers gives you so very many of these wonderful moments of truth...

    On the software end it is almost as much fun as watching plutocrats and dictators alike try to "contain" or "limit" TCP/IP's ability to spread information.
  • wumpus - Wednesday, August 14, 2013

    There also seems to be a disconnect between what Reed-Solomon can do and what they are concerned about (while RAID 6 uses Reed-Solomon, it is a specific application and not a general limitation).

    It is almost impossible to scale rotating disks (presumably magnetic, but don't ignore optical forever) to the point where Reed-Solomon becomes an issue. The basic algorithm scales (easily) to 256 disks (or whatever you are striping across), of which you typically want about 16 (or fewer) parity disks. Any panic over "some byte of data was mangled while a drive died" just means you need to use more parity disks. Somehow using up all 256 is silly (for rotating media), as few applications access data in groups of 256 sectors at a time (currently 1MB, possibly more by the time somebody might consider it).

    All this goes out the window if you are using flash (and can otherwise deal with the large page-erase requirement), but I doubt that many are up to such large sizes yet. If extreme multilevel optical disks ever take over, things might get more interesting on this front (I will still expect Reed-Solomon to do well, but eventually things might reach the tipping point). (Some illustrative erasure-code numbers follow this comment.)
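A small sketch of the arithmetic behind the "more parity disks" argument, assuming a Reed-Solomon style erasure code with k data and m parity symbols; the stripe widths are illustrative only, not recommendations.

```python
# A Reed-Solomon style code with k data and m parity symbols survives any m
# erasures at a capacity cost of m/(k+m).
def erasure_profile(k, m):
    return {
        "stripe_width": k + m,
        "failures_tolerated": m,
        "capacity_overhead_pct": round(100.0 * m / (k + m), 1),
    }

# The last row is the ~256-wide stripe with ~16 parity disks from the comment
for k, m in [(10, 2), (16, 4), (240, 16)]:
    print((k, m), erasure_profile(k, m))
```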
  • equals42 - Saturday, August 17, 2013

    The author misunderstands how NetApp uses NVRAM. NVRAM is not a cache for the hottest data. Writes always go to DRAM. They are committed to NVRAM (which is mirrored to another controller) before being acknowledged to the host, but the write I/O and its commitment to disk or SSD via WAFL's sequential CP writes all happen from DRAM. While any data remains in DRAM, it can be considered cached, but the contents of NVRAM do not constitute, nor are they used as, a cache for host reads.

    NVRAM exists only to make sure that no writes are ever lost due to a controller failure. This is important to recognize, since most midrange systems (and all the low-end ones I've investigated) do NOT protect against write losses in the event of a failure. Data loss like this can lead to corruption in block-based scenarios and database corruption in nearly any scenario. (The acknowledgment ordering is sketched below.)
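A conceptual sketch of that acknowledgment ordering, assuming a simplified two-controller pair; this only illustrates the idea described in the comment and is not NetApp's implementation.

```python
class WritePath:
    """Buffer the write in DRAM, journal it to local and mirrored NVRAM,
    acknowledge the host, and destage to disk/SSD later. Purely illustrative."""

    def __init__(self, partner_nvram):
        self.dram = []                      # in-memory buffer; also serves reads
        self.nvram = []                     # local NVRAM journal
        self.partner_nvram = partner_nvram  # mirrored journal on the partner controller

    def write(self, block, data):
        self.dram.append((block, data))
        self.nvram.append((block, data))          # journal locally...
        self.partner_nvram.append((block, data))  # ...and on the partner controller
        return "ACK"                              # only now acknowledge the host

    def consistency_point(self, backend):
        # Destage buffered writes to stable storage, then release the journals.
        for block, data in self.dram:
            backend[block] = data
        self.nvram.clear()
        self.partner_nvram.clear()
        # DRAM contents may be kept around as a read cache

partner = []
ctrl = WritePath(partner)
disk = {}
ctrl.write("blk42", b"payload")
ctrl.consistency_point(disk)
```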
