The increase in compute density in servers over the past several years has significantly impacted form factors in the enterprise. Whereas you used to have to move to a 4U or 5U chassis if you wanted an 8-core machine, these days you can get there with just a single socket in a 1U or 2U chassis (or smaller if you go the blade route). The transition from 3.5" to 2.5" hard drives helped maintain IO performance as server chassis shrunk, but even then there's a limit to how many drives you can fit into a single enclosure. In network architectures that don't use a beefy SAN or still demand high-speed local storage, PCI Express SSDs are very attractive. As SSDs just need lots of PCB real estate, a 2.5" enclosure can be quite limiting. A PCIe card on the other hand can accommodate a good number of controllers, DRAM and NAND devices. Furthermore, unlike a single 2.5" SAS/SATA SSD, PCIe offers enough bandwidth headroom to scale performance with capacity. Instead of just adding more NAND to reach higher capacities you can add more controllers with the NAND, effectively increasing performance as you add capacity.
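To put that headroom in rough numbers, here's a quick back-of-the-envelope comparison using nominal link rates (8b/10b encoding assumed on both interfaces) rather than anything measured:

# Nominal usable bandwidth: a single SATA 3Gbps port vs. PCIe 2.0 links (8b/10b on both)
SATA_3G_MBS = 3000 * 0.8 / 8        # ~300 MB/s usable per SATA 3Gbps port
PCIE2_LANE_MBS = 500                # ~500 MB/s per PCIe 2.0 lane after encoding overhead

for lanes in (4, 8):
    total = lanes * PCIE2_LANE_MBS
    print(f"PCIe 2.0 x{lanes}: ~{total:.0f} MB/s, roughly {total / SATA_3G_MBS:.0f}x one SATA 3Gbps link")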

The first widely available PCIe SSDs implemented this simple scaling approach. Take the hardware you'd find on a 2.5" SSD, duplicate it multiple times and put it behind a RAID controller, all on a PCIe card. The end user sees the (admittedly rough) illusion of a single SSD without much additional development work on the part of the SSD vendor. There are no new controllers to build, and the firmware isn't substantially different from that of the standalone 2.5" drives. PCIe SSDs with on-board RAID became a quick way of getting consumer SSD hardware into a soon-to-be-huge enterprise SSD market. Eventually we'll see native PCIe SSD controllers that won't need the pesky SATA/SAS to PCIe bridge on the card, and there's even a spec (NVMe) to help move things along. For now we're stuck with a bunch of controllers on a PCIe card.

It took surprisingly long for Intel to dip its toe in the PCIe SSD waters. In fact, Intel's SSD behavior post-2008 has been a bit odd. To date Intel still hasn't released a 6Gbps SATA controller based on its own IP. Despite the lack of any modern Intel controllers, its SSDs based on third party controllers with Intel firmware continue to be some of the most dependable and compatible on the market today. Intel hasn't been the fastest for quite a while, but it's still among the best choices. It shouldn't be a surprise that the market eagerly anticipated Intel's SSD move into PCI Express.

When Intel first announced the 910, its first PCIe SSD, some viewed it as a disappointment. After all, the SSD 910 isn't bootable and is simply a collection of Intel/Hitachi SAS SSD controllers behind an LSI SAS-to-PCIe bridge, just like most other PCIe SSDs on the market today. To make matters worse, it doesn't even have hardware RAID support: the 910 presents itself as multiple independent SSDs, so you have to rely on software RAID if you want a single-drive volume.

For its target market however, neither of these exclusions is a deal breaker. It's quite common for servers to have a dedicated boot drive. Physically decoupling data and boot drives remains a good practice in a server. For a virtualized environment, having a single PCIe SSD act as multiple drives can actually be a convenience. And if you're only running a single environment on your box, the lower software RAID levels (0/1) perform just as well as HBA RAID and remove the added point of hardware failure (and cost).

The 910 could certainly be more flexible if it added these two missing features, but I don't believe their absence is a huge issue for most who would be interested in the drive.
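For those who do want a single volume, pooling the 910's drives with OS-level software RAID is trivial. The sketch below assumes Linux with mdadm; the device names are placeholders that depend on how the bridge enumerates on your system, and RAID 0 is shown purely as an example (no redundancy):

# Minimal sketch: stripe the 800GB 910's four LUNs into a single md RAID 0 volume on Linux.
# Device paths are hypothetical; substitute whatever the LSI bridge actually enumerates as.
import subprocess

DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # assumed LUN names

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=0",                           # striping only; use level 1/10 if you need redundancy
     f"--raid-devices={len(DEVICES)}",
     *DEVICES],
    check=True,  # raise if mdadm exits with an error
)
print(f"Created /dev/md0 across {len(DEVICES)} devices")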

The Controller

A few years back Intel announced a partnership with Hitachi to build SAS enterprise SSDs. Intel would contribute its own IP on the controller and firmware side, while Hitachi would help with the SAS interface and build/sell the drives themselves. The resulting controller looks a lot like Intel's X25-M G2/310/320 controller, but with some changes. The big architectural change is obviously support for the SAS interface. Intel also moved from a single core design to a dual-core architecture with the Hitachi controller. One core is responsible for all host side transactions while the other manages the NAND/FTL side of the equation. The Intel/Hitachi controller is still a 10-channel design like its consumer counterpart. Like the earlier Intel controllers, the SAS version does not support hardware accelerated encryption.

Hitachi uses this controller on its Ultrastar SSD400M, but it's also found on the Intel SSD 910. Each controller manages a 200GB partition (more on the actual amount of NAND later). In other words, the 400GB 910 features two controllers while the 800GB 910 has four. As a result there's roughly a doubling of performance between the two drives.

As I mentioned earlier, all of the controllers on the 910 sit behind a single LSI SAS-to-PCIe bridge with drivers that are built into all modern versions of Windows; Linux and VMware support is also guaranteed. By choosing a widely used SAS-to-PCIe bridge, Intel can deliver the illusion of a plug-and-play SSD even though it's a PCIe card with a third-party SAS controller.
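On Linux you can see that multi-drive behavior directly: each of the 910's controllers shows up as its own SCSI disk behind the bridge. Here's a small sketch using standard sysfs paths (the actual device names and model strings will vary):

# List block devices and their reported model strings via sysfs (Linux).
# On a 910 you'd expect one entry per on-board controller, alongside your other disks.
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    model = dev / "device" / "model"
    if model.exists():
        print(f"{dev.name}: {model.read_text().strip()}")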

Price and Specs

One benefit of Intel's relatively simple board design is the 910's remarkably competitive cost structure:

Intel SSD 910 Pricing

                    | Capacity | Price | $ per GB
Intel SSD 710       | 200GB    | $790  | $3.950
Intel SSD 910       | 400GB    | $2000 | $5.000
Intel SSD 910       | 800GB    | $4000 | $5.000
OCZ Z-Drive R4 CM84 | 600GB    | $3500 | $5.833

While you do pay a premium over Intel's SSD 710, the 910 is actually cheaper than the SandForce based OCZ Z-Drive R4. At $5/GB in e-tail, Intel's 910 is fairly reasonably priced for an enterprise drive, particularly when you take into account the amount of NAND you're getting on board (1792GB for the 800GB drive).
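Both the per-GB figures in the table above and the spare area implied by that 1792GB of NAND fall out of some quick arithmetic:

# Cost per usable GB, straight from the pricing table above.
drives = {
    "Intel SSD 710 200GB":       (790, 200),
    "Intel SSD 910 400GB":       (2000, 400),
    "Intel SSD 910 800GB":       (4000, 800),
    "OCZ Z-Drive R4 CM84 600GB": (3500, 600),
}
for name, (price_usd, usable_gb) in drives.items():
    print(f"{name}: ${price_usd / usable_gb:.3f} per GB")

# Effective spare area on the 800GB 910: 1792GB of raw NAND behind 800GB of user capacity.
raw_gb, user_gb = 1792, 800
print(f"Spare area: {raw_gb - user_gb}GB ({(raw_gb - user_gb) / raw_gb:.0%} of the raw NAND)")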

Intel Enterprise SSD Comparison

                                          | Intel SSD 910    | Intel SSD 710     | Intel X25-E     | Intel SSD 320
Interface                                 | PCIe 2.0 x8      | SATA 3Gbps        | SATA 3Gbps      | SATA 3Gbps
Capacities                                | 400 / 800 GB     | 100 / 200 / 300GB | 32 / 64GB       | 80 / 120 / 160 / 300 / 600GB
NAND                                      | 25nm MLC-HET     | 25nm MLC-HET      | 50nm SLC        | 25nm MLC
Max Sequential Performance (Reads/Writes) | 2000 / 1000 MBps | 270 / 210 MBps    | 250 / 170 MBps  | 270 / 220 MBps
Max Random Performance (Reads/Writes)     | 180K / 75K IOPS  | 38.5K / 2.7K IOPS | 35K / 3.3K IOPS | 39.5K / 600 IOPS
Endurance (Max Data Written)              | 7 - 14 PB        | 500TB - 1.5PB     | 1 - 2PB         | 5 - 60TB
Encryption                                | -                | AES-128           | -               | AES-128
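To put those endurance ratings in perspective, here's what they work out to as a sustained write budget. The five-year window below is my assumption for illustration, not part of Intel's spec:

# Convert total-bytes-written endurance ratings into an average daily write budget,
# assuming a 5-year service life (an illustrative assumption, not a rated figure).
DAYS = 5 * 365

ratings_pb = {
    "Intel SSD 910 (low end)":  7,
    "Intel SSD 910 (high end)": 14,
    "Intel SSD 710 (low end)":  0.5,
}
for drive, petabytes in ratings_pb.items():
    tb_per_day = petabytes * 1000 / DAYS
    print(f"{drive}: {petabytes} PB total -> ~{tb_per_day:.1f} TB per day over 5 years")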

By default the 910 is rated for a 25W max TDP, regardless of capacity. At 25W the 910 requires 200 linear feet per minute (LFM) of airflow to keep its temperature below 55C. The 800GB drive has the ability to run in a special performance mode that causes it to dissipate up to 28W on average (38W peak). In its performance mode you get increased sequential write performance, but the drive needs added cooling (300 LFM) and obviously draws more power. The 400GB drive effectively always runs in its performance mode, but its power consumption and cooling requirements remain at 25W and 200 LFM, respectively.

Comments

  • Lazarus52980 - Thursday, August 9, 2012 - link

    Wow, fantastic review. Thanks for the good work Anand.
  • quiksilvr - Thursday, August 9, 2012 - link

    Considering the hefty price on these devices, anything short of AES-256 in this day and age is unacceptable (yes I know you can do software encryption, but hardware accelerated is much more secure)
  • prime2515103 - Thursday, August 9, 2012 - link

    I'm confused. Isn't AES-256 the same whether it's done in hardware or software? I thought hardware encryption just performed better (encrypts/decrypts faster).
  • Rick83 - Thursday, August 9, 2012 - link

    AES-256 is no safer than AES-128, according to somewhat recent cryptanalysis on the algorithms.
  • madmilk - Thursday, August 9, 2012 - link

    Hardware acceleration is probably for performance, even AES-NI can't keep up with multiple PCIe SSDs.

    And yeah, AES-256 is easier to crack than AES-128 now. Not that it matters, with computational complexity still at what, 2^99?

    It's much easier to just kidnap the sysadmin than attack crypto.
  • JPForums - Friday, August 10, 2012 - link

    And yeah, AES-256 is easier to crack than AES-128 now. Not that it matters, with computational complexity still at what, 2^99?


    Since you didn't mention which attack you are referring to, I'm going out on a limb and assuming you are talking about the related key attacks on the full AES256 / AES192. I don't know of any other attacks that work on all 14 / 12 rounds. You should be aware that while such an attack is widely considered impractical, it isn't even possible under many circumstances. You need some up front data that you can't always get. To keep in relation to this article, I'll limit my scope to on the fly encryption software as it is closely related to the encryption implemented on this device. Several on the fly encryption packages are known to be immune to this kind of attack (I'll single out TrueCrypt as a popular open source package that isn't vulnerable to related key attacks). As long as you are using a properly programmed encryption package for your full disk encryption, AES-256 is still "more secure" than AES-128.

    That said, if you actually calculate the amount of time it takes to brute force AES-128, you'll find that PCs and possibly humanity will have become long-forgotten relics of the past. Of course, processing power changes, but not that quickly. A bigger concern would be attacks that successfully bring the set of keys down to a manageable level. For this issue, diversity is probably more important than bit length, assuming the brute force key set is sufficiently large (i.e. 2^128).
  • Troff - Thursday, August 9, 2012 - link

    I always end up needing to be able to communicate directly with each "drive", which is usually problematic through a RAID controller. Software RAID is invariably faster, and in every case I've tried so far, much, much faster.
  • Araemo - Thursday, August 9, 2012 - link

    Isn't this necessary for trim anyways? If the OS can talk to each 'drive', trim should work, and performance will stay good, right?
  • web2dot0 - Thursday, August 9, 2012 - link

    Hey Anand,

    Shouldn't FusionIO cards be part of this comparison? It will most likely destroy the 910.
  • happycamperjack - Thursday, August 9, 2012 - link

    http://hothardware.com/printarticle.aspx?articleid...

    Not according to this article. And the ioDrive compared in that article is actually about 2x the price of either the Intel 910 or the Z-Drive R4.
