The increase in compute density in servers over the past several years has significantly impacted form factors in the enterprise. Whereas you used to have to move to a 4U or 5U chassis if you wanted an 8-core machine, these days you can get there with a single socket in a 1U or 2U chassis (or smaller if you go the blade route). The transition from 3.5" to 2.5" hard drives helped maintain IO performance as server chassis shrank, but even then there's a limit to how many drives you can fit into a single enclosure. In architectures that don't rely on a beefy SAN, or that still demand high-speed local storage, PCI Express SSDs are very attractive. Since all an SSD really needs is PCB real estate, a 2.5" enclosure can be quite limiting. A PCIe card, on the other hand, can accommodate a good number of controllers, DRAM and NAND devices. Furthermore, unlike a single 2.5" SAS/SATA SSD, PCIe offers enough bandwidth headroom to scale performance with capacity. Instead of just adding more NAND to reach higher capacities, you can add more controllers along with the NAND, effectively increasing performance as you add capacity.

The first widely available PCIe SSDs implemented this simple scaling approach. Take the hardware you'd find on a 2.5" SSD, duplicate it multiple times and put it all behind a RAID controller on a PCIe card. The end user would see the, admittedly rough, illusion of a single SSD without much additional development work on the part of the SSD vendor. There are no new controllers to build, and the firmware isn't substantially different from that of the standalone 2.5" drives. PCIe SSDs with on-board RAID became a quick way of getting consumer SSDs into a soon-to-be-huge enterprise SSD market. Eventually we'll see native PCIe SSD controllers that won't need a pesky SATA/SAS to PCIe bridge on the card, and there's even a spec (NVMe) to help move things along. For now we're stuck with a bunch of controllers on a PCIe card.

It took surprisingly long for Intel to dip its toe in the PCIe SSD waters. In fact, Intel's SSD behavior post-2008 has been a bit odd. To date Intel still hasn't released a 6Gbps SATA controller based on its own IP. Despite the lack of any modern Intel controllers, its SSDs based on third party controllers with Intel firmware continue to be some of the most dependable and compatible on the market today. Intel hasn't been the fastest for quite a while, but it's still among the best choices. It shouldn't be a surprise that the market eagerly anticipated Intel's SSD move into PCI Express.

When Intel first announced the 910, its first PCIe SSD, some viewed it as a disappointment. After all, the SSD 910 isn't bootable and is essentially a collection of Intel/Hitachi SAS SSD controllers behind an LSI SAS to PCIe bridge, just like most other PCIe SSDs on the market today. To make matters worse, it doesn't even have hardware RAID support: the 910 presents itself as multiple independent SSDs, so you have to rely on software RAID if you want a single drive volume.

For its target market, however, neither of these omissions is a deal breaker. It's quite common for servers to have a dedicated boot drive, and physically decoupling data and boot drives remains good practice in a server. For a virtualized environment, having a single PCIe SSD act as multiple drives can actually be a convenience. And if you're only running a single environment on your box, the lower software RAID levels (0/1) perform just as well as HBA RAID and remove an added point of hardware failure (and cost).
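
To make the software RAID point concrete, here's a minimal sketch of how you might stripe the 910's LUNs into a single volume on Linux with mdadm, driven from Python purely for illustration. The device names and mount point are assumptions (an 800GB 910 exposes four LUNs, but how they enumerate depends on the system), not anything Intel documents:

    # Sketch: assemble one striped volume (software RAID 0) out of the four
    # LUNs an 800GB SSD 910 presents to the OS. Device names are hypothetical;
    # confirm them with lsblk/lsscsi before trying anything like this.
    import subprocess

    LUNS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # assumed names
    MD_DEVICE = "/dev/md0"
    MOUNT_POINT = "/mnt/ssd910"

    def run(cmd):
        """Echo and run a command, raising if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Stripe the 200GB LUNs together; RAID 0 here, RAID 1/10 work the same way.
    run(["mdadm", "--create", MD_DEVICE, "--level=0",
         "--raid-devices=%d" % len(LUNS)] + LUNS)

    # Create a filesystem on the md device and mount it as one ~800GB volume.
    run(["mkfs.ext4", MD_DEVICE])
    run(["mkdir", "-p", MOUNT_POINT])
    run(["mount", MD_DEVICE, MOUNT_POINT])

The same steps would apply to the 400GB model, just with two LUNs instead of four.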

The 910 could certainly be more flexible if it added these two missing features, but I don't believe their absence is a huge issue for most of those who would be interested in the drive.

The Controller

A few years back Intel announced a partnership with Hitachi to build SAS enterprise SSDs. Intel would contribute its own IP on the controller and firmware side, while Hitachi would help with the SAS interface and build/sell the drives themselves. The resulting controller looks a lot like Intel's X25-M G2/310/320 controller, but with some changes. The big architectural change is obviously support for the SAS interface. Intel also moved from a single-core design to a dual-core architecture with the Hitachi controller: one core is responsible for all host side transactions while the other manages the NAND/FTL side of the equation. The Intel/Hitachi controller is still a 10-channel design like its consumer counterpart. Unlike the SATA-based SSD 710 and 320 (see the comparison table below), however, the SAS controller does not support hardware accelerated encryption.

Hitachi uses this controller on its Ultrastar SSD400M, but it's also found on the Intel SSD 910. Each controller manages a 200GB partition (more on the actual amount of NAND later). In other words, the 400GB 910 features two controllers while the 800GB 910 has four. As a result there's roughly a doubling of performance between the two drives.

As I mentioned earlier, all of the controllers on the 910 sit behind a single LSI SAS to PCIe bridge whose drivers are built into all modern versions of Windows; Linux and VMware are supported as well. By choosing a widely used SAS to PCIe bridge, Intel can deliver the illusion of a plug-and-play SSD even though it's a PCIe card with a third-party SAS controller on board.

Price and Specs

One benefit of Intel's relatively simple board design is the 910's remarkably competitive cost structure:

Intel SSD 910 Pricing
Drive | Capacity | Price | $ per GB
Intel SSD 710 | 200GB | $790 | $3.950
Intel SSD 910 | 400GB | $2000 | $5.000
Intel SSD 910 | 800GB | $4000 | $5.000
OCZ Z-Drive R4 CM84 | 600GB | $3500 | $5.833

While you do pay a premium over Intel's SSD 710, the 910 is actually cheaper than the SandForce based OCZ Z-Drive R4. At $5/GB in etail, Intel's 910 is fairly reasonably priced for an enterprise drive - particularly when you take into account the amount of NAND you're getting on board (1792GB for the 800GB drive).
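
For a quick sanity check on those numbers, here is the arithmetic behind the $/GB figure and the implied spare area on the 800GB card, written out as a few lines of Python (the $4000 price and 1792GB raw NAND figure come from the text above; the rest is just division):

    # Back-of-the-envelope math for the 800GB SSD 910, using the figures
    # quoted above: $4000 street price, 800GB of user capacity, 1792GB of
    # raw NAND on the card.
    price_usd = 4000
    user_capacity_gb = 800
    raw_nand_gb = 1792

    cost_per_user_gb = price_usd / user_capacity_gb               # $5.00 per user GB
    cost_per_raw_gb = price_usd / raw_nand_gb                     # ~$2.23 per GB of NAND
    spare_area_pct = 100 * (1 - user_capacity_gb / raw_nand_gb)   # ~55% held in reserve

    print(f"${cost_per_user_gb:.2f}/GB of user capacity")
    print(f"${cost_per_raw_gb:.2f}/GB of raw NAND")
    print(f"{spare_area_pct:.0f}% of the NAND set aside as spare area")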

Intel Enterprise SSD Comparison
  | Intel SSD 910 | Intel SSD 710 | Intel X25-E | Intel SSD 320
Interface | PCIe 2.0 x8 | SATA 3Gbps | SATA 3Gbps | SATA 3Gbps
Capacities | 400 / 800 GB | 100 / 200 / 300GB | 32 / 64GB | 80 / 120 / 160 / 300 / 600GB
NAND | 25nm MLC-HET | 25nm MLC-HET | 50nm SLC | 25nm MLC
Max Sequential Performance (Reads/Writes) | 2000 / 1000 MBps | 270 / 210 MBps | 250 / 170 MBps | 270 / 220 MBps
Max Random Performance (Reads/Writes) | 180K / 75K IOPS | 38.5K / 2.7K IOPS | 35K / 3.3K IOPS | 39.5K / 600 IOPS
Endurance (Max Data Written) | 7 - 14 PB | 500TB - 1.5PB | 1 - 2PB | 5 - 60TB
Encryption | - | AES-128 | - | AES-128
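
One way to read the endurance row is to convert it into drive writes per day. The sketch below assumes the 800GB 910's 14PB upper bound and a hypothetical five-year service life; neither assumption comes from Intel's spec sheet:

    # Convert the 910's rated endurance into drive writes per day (DWPD).
    # The 14PB figure is the upper bound from the table above; the five-year
    # service life is an assumption, not an Intel rating.
    endurance_pb = 14
    capacity_gb = 800
    service_years = 5

    endurance_gb = endurance_pb * 1000 * 1000      # PB -> GB, decimal units
    days = service_years * 365
    gb_written_per_day = endurance_gb / days       # ~7.7TB of writes per day
    dwpd = gb_written_per_day / capacity_gb        # ~9.6 full drive writes/day

    print(f"{gb_written_per_day / 1000:.1f}TB/day, about {dwpd:.1f} drive writes per day")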

By default the 910 is rated for a 25W max TDP, regardless of capacity. At 25W the 910 requires 200 linear feet per minute (LFM) of airflow to keep its temperature below 55°C. The 800GB drive can also run in a special performance mode that lets it dissipate up to 28W on average (38W peak). In performance mode you get increased sequential write performance, but the drive needs more cooling (300 LFM) and obviously draws more power. The 400GB drive effectively always runs at full performance, and its power and cooling requirements stay at 25W and 200 LFM, respectively.

Comments

  • JellyRoll - Friday, August 10, 2012 - link

    WOW. Low QD testing on an enterprise PCIe storage card is ridiculous. End users of these SSDs will use them in datacenters, and the average QD will be ridiculously high. This evaluation shows absolutely nothing that will be encountered in this type of SSD's actual usage. No administrator in their right mind would purchase these for such ridiculously low workloads.
  • SanX - Friday, August 10, 2012 - link

    And if you do not need more than 16/32/64GB for your speedy needs, then consider an almost free RAMdisk with backup. It will be 4-8x faster than this card.
  • marcplante - Friday, August 10, 2012 - link

    It seems that there would be a market for a consumer desktop implementation.
  • Ksman - Friday, August 10, 2012 - link

    Given how well the 520s perform, perhaps a RAID of 520s on an LSI RAID adapter would be a very good solution, and a comparison vs. the 910 would be interesting. If RAID > 0, then one could pull drives and attach them directly for TRIM etc., which would eliminate the problem where SSDs in a RAID cannot be managed.
  • Pixelpusher6 - Friday, August 10, 2012 - link

    I was wondering the exact same thing. What are the advantages of offering a PCIe solution like this compared to say just throwing in a SAS RAID card and connecting a bunch of SSD SAS drives in a RAID 0? Is the Intel 910 mainly targeted at 1U/2U servers that might not have space available for a 2.5" drive? Is it possible to over-provision any 2.5" drive to increase endurance and reduce write amplification (I think the desktop Samsung 830 I have allows this)? Seeing the performance charts I wonder how 2 of those Toshiba 400GB SAS drives would compare against the Intel 910.

    Is the enterprise market moving towards MLC-HET NAND with tons of spare area vs. SLC NAND because of the low cost of MLC NAND now since fabs have ramped up production? I was under the impression that SLC NAND was preferable in the enterprise segment but I might be wrong. What are some usage scenarios where SLC would be better than MLC-HET and vice versa?

    I think lorribot brought up a good point:

    "I like the idea but coming from a highly redundant arrays point of view how do you set this all up in a a safe and secure way, what are the points of failure? what happens if you lose the bridge chip, is all your data dead and buried?"

    I wonder if it is possible to just swap the first PCB (the one with all the controllers and DRAM) in case of a failure of the bridge chip or a controller, so the data remains safe. Can SSD controllers fail? Is it likely that the Intel 910 will be used in RAID 0? I didn't think RAID 0 was used much in enterprise. Sorry for all the questions. I have been visiting this site for over 10 years and I just now registered an account.
  • FunBunny2 - Saturday, August 11, 2012 - link

    eMLC/MLC-HET/foo-MLC are all attempts to get cheaper parts into SSD chassis, even for enterprise companies such as Texas Memory. Part of the motivation is yet more sophisticated controllers, and, I suspect, the realization that enterprises understand duty life far better than consumers (who'll run a HDD forever if it survives infant mortality). The SSD survival curve (due to NAND failure) is more predictable than HDD, so with the very much faster operations, if 5 years remains the lifetime, the parts used don't matter. The part gets swapped out at 90% or 95% of duty life (or whatever %-age the shop decides); end of story. 5 years ago, SLC was the only way to 5 years. That's not true any longer.
  • GatoRat - Sunday, August 12, 2012 - link

    "the 800GB 910 is easily the fastest SSD we've ever tested."

    Yet the tests clearly show that it isn't. In fact, the Oracle tests show it's a dog. In other tests, it doesn't come up on top. The OCZ Z-Drive R4 CM84 600GB is clearly the faster overall drive.
  • Galcobar - Sunday, August 12, 2012 - link

    Grok!

    I'm impressed both to see the literary reference, correctly used, and that nobody has called it a typo in the comments. Not bad for a fifty-year-old novel once dismissed by the New York Times as a puerile mishmash.
  • a50505 - Thursday, August 30, 2012 - link

    So, has anyone heard of a workstation-class laptop with a PCIe-based SSD?
