The increase in compute density in servers over the past several years has significantly impacted form factors in the enterprise. Whereas you used to have to move to a 4U or 5U chassis if you wanted an 8-core machine, these days you can get there with a single socket in a 1U or 2U chassis (or smaller if you go the blade route). The transition from 3.5" to 2.5" hard drives helped maintain IO performance as server chassis shrank, but even then there's a limit to how many drives you can fit into a single enclosure. In architectures that don't rely on a beefy SAN, or that still demand high-speed local storage, PCI Express SSDs are very attractive. Since an SSD is ultimately just controllers, DRAM and NAND on a PCB, a 2.5" enclosure can be quite limiting. A PCIe card, on the other hand, can accommodate a good number of controllers, DRAM and NAND devices. Furthermore, unlike a single 2.5" SAS/SATA SSD, PCIe offers enough bandwidth headroom to scale performance with capacity. Instead of just adding more NAND to reach higher capacities, you can add more controllers along with the NAND, effectively increasing performance as you add capacity.

The first widely available PCIe SSDs implemented this simple scaling approach. Take the hardware you'd find on a 2.5" SSD, duplicate it multiple times and put it behind a RAID controller, all on a single PCIe card. The end user sees the, admittedly rough, illusion of a single SSD without much additional development work on the part of the SSD vendor. There are no new controllers to build, and the firmware isn't substantially different from that of the standalone 2.5" drives. PCIe SSDs with on-board RAID became a quick way of getting consumer SSD hardware into a soon-to-be-huge enterprise SSD market. Eventually we'll see native PCIe SSD controllers that won't need a pesky SATA/SAS to PCIe bridge on the card, and there's even a spec (NVMe) to help move things along. For now we're stuck with a bunch of controllers on a PCIe card.

It took surprisingly long for Intel to dip its toe in the PCIe SSD waters. In fact, Intel's SSD behavior post-2008 has been a bit odd. To date Intel still hasn't released a 6Gbps SATA controller based on its own IP. Despite the lack of any modern Intel controllers, its SSDs based on third party controllers with Intel firmware continue to be some of the most dependable and compatible on the market today. Intel hasn't been the fastest for quite a while, but it's still among the best choices. It shouldn't be a surprise that the market eagerly anticipated Intel's move into PCI Express SSDs.

When Intel first announced the 910, its first PCIe SSD, some viewed it as a disappointment. After all, the SSD 910 isn't bootable and is essentially a collection of Intel/Hitachi SAS SSD controllers behind an LSI SAS to PCIe bridge, just like most other PCIe SSDs on the market today. To make matters worse, it doesn't even have hardware RAID support: the 910 presents itself to the OS as multiple independent SSDs, so you have to rely on software RAID if you want a single large volume.

For its target market however, neither of these exclusions is a deal breaker. It's quite common for servers to have a dedicated boot drive. Physically decoupling data and boot drives remains a good practice in a server. For a virtualized environment, having a single PCIe SSD act as multiple drives can actually be a convenience. And if you're only running a single environment on your box, the lower software RAID levels (0/1) perform just as well as HBA RAID and remove the added point of hardware failure (and cost).

The 910 could certainly be more flexible if it added these two missing features, but I don't believe their absence is a huge issue for most of those who would be interested in the drive.
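To make the software RAID point concrete, here is a minimal sketch (Python driving mdadm on Linux) of how the four LUNs of an 800GB 910 might be striped into a single volume. The device names and array path are hypothetical placeholders, not anything Intel or LSI documents; check lsblk/lsscsi for what your system actually enumerates before doing anything like this.

    # Minimal sketch: build a software RAID-0 volume across the four 200GB
    # LUNs an 800GB SSD 910 presents to the OS. Device names below are
    # hypothetical placeholders - confirm them with lsblk/lsscsi first.
    import subprocess

    LUNS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # assumed names

    def create_stripe(array="/dev/md0", devices=LUNS):
        """Stripe the 910's LUNs into one md array (requires root + mdadm)."""
        cmd = [
            "mdadm", "--create", array,
            "--level=0",                          # RAID 0: full capacity and speed, no redundancy
            "--raid-devices=%d" % len(devices),
        ] + devices
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        create_stripe()

RAID 1 or 10 would trade capacity for redundancy; either way it's the md layer, not a hardware RAID controller on the card, that glues the LUNs together.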

The Controller

A few years back Intel announced a partnership with Hitachi to build SAS enterprise SSDs. Intel would contribute its own IP on the controller and firmware side, while Hitachi would help with the SAS interface and build/sell the drives themselves. The resulting controller looks a lot like Intel's X25-M G2/310/320 controller, but with some changes. The big architectural change is obviously support for the SAS interface. Intel also moved from a single-core design to a dual-core architecture with the Hitachi controller: one core is responsible for all host-side transactions while the other manages the NAND/FTL side of the equation. The Intel/Hitachi controller is still a 10-channel design like its consumer counterpart. Unlike the earlier Intel controllers, however, the SAS version does not support hardware-accelerated encryption.

Hitachi uses this controller on its Ultrastar SSD400M, but it's also found on the Intel SSD 910. Each controller manages a 200GB partition (more on the actual amount of NAND later). In other words, the 400GB 910 features two controllers while the 800GB 910 has four. As a result there's roughly a doubling of performance between the two drives.

As I mentioned earlier, all of the controllers on the 910 sit behind a single LSI SAS to PCIe bridge whose drivers are built into all modern versions of Windows, with Linux and VMware supported as well. By choosing a widely used SAS to PCIe bridge, Intel can deliver the illusion of a plug-and-play SSD even though it's a PCIe card with a third-party SAS controller.
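On Linux, one quick way to see this behavior for yourself is to read the SCSI model strings out of sysfs and confirm that the card shows up as several independent block devices. The sketch below is mine, not Intel's, and the "910" substring it matches on is an assumption; check lsblk -o NAME,MODEL for the exact string your card reports.

    # Sketch: list block devices whose SCSI model string suggests they belong
    # to the 910. The substring match is an assumption about the reported name.
    from pathlib import Path

    def find_910_luns(pattern="910"):
        luns = []
        for dev in sorted(Path("/sys/block").iterdir()):
            model_file = dev / "device" / "model"
            if model_file.exists():
                model = model_file.read_text().strip()
                if pattern in model:
                    luns.append((dev.name, model))
        return luns

    if __name__ == "__main__":
        for name, model in find_910_luns():
            print("/dev/%s: %s" % (name, model))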

Price and Specs

One benefit of Intel's relatively simple board design is the 910's remarkably competitive cost structure:

Intel SSD 910 Pricing

  Drive                  Capacity   Price     $ per GB
  Intel SSD 710          200GB      $790      $3.95
  Intel SSD 910          400GB      $2,000    $5.00
  Intel SSD 910          800GB      $4,000    $5.00
  OCZ Z-Drive R4 CM84    600GB      $3,500    $5.83

While you do pay a premium over Intel's SSD 710, the 910 is actually cheaper per GB than the SandForce-based OCZ Z-Drive R4. At $5/GB in etail, Intel's 910 is reasonably priced for an enterprise drive - particularly when you take into account the amount of NAND you're getting on board (1792GB for the 800GB drive).
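As a quick aside, that NAND figure makes it easy to work out how much spare area Intel sets aside; the numbers in the small sketch below come straight from the capacities quoted in this article:

    # Spare-area math for the 800GB 910, using the figures quoted above.
    raw_nand_gb = 1792   # total NAND on the 800GB card
    usable_gb = 800      # user-addressable capacity

    spare_gb = raw_nand_gb - usable_gb
    spare_fraction = spare_gb / raw_nand_gb

    print("Spare area: %d GB (%.0f%% of the raw NAND)" % (spare_gb, spare_fraction * 100))
    # Spare area: 992 GB (55% of the raw NAND)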

Intel Enterprise SSD Comparison

                                 Intel SSD 910        Intel SSD 710        Intel X25-E         Intel SSD 320
  Interface                      PCIe 2.0 x8          SATA 3Gbps           SATA 3Gbps          SATA 3Gbps
  Capacities                     400 / 800GB          100 / 200 / 300GB    32 / 64GB           80 / 120 / 160 / 300 / 600GB
  NAND                           25nm MLC-HET         25nm MLC-HET         50nm SLC            25nm MLC
  Max Sequential (Read/Write)    2000 / 1000 MBps     270 / 210 MBps       250 / 170 MBps      270 / 220 MBps
  Max Random (Read/Write)        180K / 75K IOPS      38.5K / 2.7K IOPS    35K / 3.3K IOPS     39.5K / 600 IOPS
  Endurance (Max Data Written)   7 - 14PB             500TB - 1.5PB        1 - 2PB             5 - 60TB
  Encryption                     -                    AES-128              -                   AES-128
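The endurance ratings are easier to compare across drives when expressed as full drive writes per day. The sketch below assumes a five-year service life and pairs the top of each endurance range with the largest capacity; both are my assumptions rather than figures Intel quotes:

    # Rough drive-writes-per-day (DWPD) from the rated endurance.
    # Assumes a 5-year service life and pairs the top endurance figure with
    # the largest capacity - assumptions, not Intel-quoted numbers.
    def drive_writes_per_day(endurance_tb, capacity_gb, years=5):
        total_writes_gb = endurance_tb * 1000
        return total_writes_gb / (capacity_gb * 365 * years)

    print("SSD 910 800GB: %.1f DWPD" % drive_writes_per_day(14000, 800))  # ~9.6
    print("SSD 710 300GB: %.1f DWPD" % drive_writes_per_day(1500, 300))   # ~2.7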

By default the 910 is rated for a 25W max TDP, regardless of capacity. At 25W the 910 requires 200 linear feet per minute (LFM) of airflow to keep its temperature below 55C. The 800GB drive can also run in a special performance mode that lets it dissipate up to 28W on average (38W peak). In its performance mode you get increased sequential write performance, but the drive needs added cooling (300 LFM) and obviously draws more power. The 400GB drive effectively always runs in its performance mode, but its power consumption and cooling requirements stay at 25W and 200 LFM, respectively.

Comments

  • lorribot - Thursday, August 9, 2012 - link

    I like the idea, but coming from a highly redundant arrays point of view, how do you set this all up in a safe and secure way? What are the points of failure? What happens if you lose the bridge chip - is all your data dead and buried?
    Would you be looking to put say 3 of these cards in a server and software RAID 5 across the cards for multiple disks?
    No hardware RAID solution will work across multiple PCIe cards, so some real work needs to be done on how to manage all this in a sensible way.

    I doubt anyone in an enterprise would stick one of these in a server and use it as primary storage for their SAP database; it is way too risky a proposition.

    What would be good is a 3.5" format drive with a fibre channel interface that could work in existing storage solutions.
  • FunBunny2 - Thursday, August 9, 2012 - link

    -- What would be good is a 3.5" format drive with a fibre channel interface that could work in existing storage solutions.

    If memory serves, that's what STEC made and hasn't been all that profitable.
  • Guspaz - Thursday, August 9, 2012 - link

    At the end of the first page, "performnace"
  • happycamperjack - Thursday, August 9, 2012 - link

    Wouldn't it be more fair to compare it to an 800GB CM88 R4, since it's around the same capacity and price as the Intel 910 and quite a bit faster?
  • Elixer - Thursday, August 9, 2012 - link

    What happens when one of these things is over 60% full? I am betting on a huge drop off in speed, just like the desktop parts.
  • MrSpadge - Sunday, August 12, 2012 - link

    Probably not, since they're >50% overprovisioned.
  • Jammrock - Thursday, August 9, 2012 - link

    I would like to see some Fusion-IO tests. They are generally considered the highest end in enterprise SSDs. I've played with some in the past and they were crazy fast and reliable.
  • puffpio - Friday, August 10, 2012 - link

    Agreed. Any thoughts on a head-to-head between this and a similar-capacity Fusion-io ioDrive2?
  • happycamperjack - Friday, August 10, 2012 - link

    http://hothardware.com/Reviews/Intel-SSD-910-PCI-E...
  • hmmmmmm - Saturday, August 11, 2012 - link

    Unfortunately, they are comparing the 910 to a discontinued Fusion-io card from 2009. I would like to see a newer card in the comparison, to see how the 910 stacks up against what's on the market today.
