The increase in compute density in servers over the past several years has significantly impacted form factors in the enterprise. Whereas you used to have to move to a 4U or 5U chassis if you wanted an 8-core machine, these days you can get there with just a single socket in a 1U or 2U chassis (or smaller if you go the blade route). The transition from 3.5" to 2.5" hard drives helped maintain IO performance as server chassis shrunk, but even then there's a limit to how many drives you can fit into a single enclosure. In network architectures that don't use a beefy SAN or still demand high-speed local storage, PCI Express SSDs are very attractive. As SSDs just need lots of PCB real estate, a 2.5" enclosure can be quite limiting. A PCIe card on the other hand can accommodate a good number of controllers, DRAM and NAND devices. Furthermore, unlike a single 2.5" SAS/SATA SSD, PCIe offers enough bandwidth headroom to scale performance with capacity. Instead of just adding more NAND to reach higher capacities you can add more controllers with the NAND, effectively increasing performance as you add capacity.

The first widely available PCIe SSDs implemented this simple scaling approach. Take the hardware you'd find on a 2.5" SSD, duplicate it multiple times and put it all behind a RAID controller on a PCIe card. The end user would see the, admittedly rough, illusion of a single SSD without much additional development work on the part of the SSD vendor. There are no new controllers to build, and the firmware isn't substantially different from that of the standalone 2.5" drives. PCIe SSDs with on-board RAID became a quick way for vendors to get their consumer SSDs into a soon-to-be-huge enterprise SSD market. Eventually we'll see native PCIe SSD controllers that won't need a pesky SATA/SAS to PCIe bridge on the card, and there's even a spec (NVMe) to help move things along. For now we're stuck with a bunch of controllers on a PCIe card.

It took surprisingly long for Intel to dip its toe in the PCIe SSD waters. In fact, Intel's SSD behavior post-2008 has been a bit odd. To date Intel still hasn't released a 6Gbps SATA controller based on its own IP. Despite the lack of any modern Intel controllers, its SSDs based on third party controllers with Intel firmware continue to be some of the most dependable and compatible on the market today. Intel hasn't been the fastest for quite a while, but it's still among the best choices. It shouldn't be a surprise that the market eagerly anticipated Intel's SSD move into PCI Express.

When Intel first announced the 910, its first PCIe SSD, some viewed it as a disappointment. After all, the SSD 910 isn't bootable, and like most other PCIe SSDs on the market today it's just a collection of Intel/Hitachi SAS SSD controllers behind an LSI SAS to PCIe bridge. To make matters worse, it doesn't even have hardware RAID support - the 910 presents itself to the host as multiple independent SSDs, so you have to rely on software RAID if you want a single-drive volume.

For its target market, however, neither of these omissions is a deal breaker. It's quite common for servers to have a dedicated boot drive, and physically decoupling data and boot drives remains good practice in a server. For a virtualized environment, having a single PCIe SSD act as multiple drives can actually be a convenience. And if you're only running a single environment on your box, the lower software RAID levels (0/1) perform just as well as HBA RAID and remove an added point of hardware failure (and cost).
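To make the software RAID point concrete, here's a minimal sketch of how you might stripe the 910's independently presented SSDs into a single volume on Linux with mdadm. The device names are assumptions for illustration; on a real system you would first confirm which block devices belong to the 910 (via lsblk or lsscsi) before building the array.

# Sketch: assemble the 910's LUNs into one software RAID-0 volume with mdadm.
# The device names below are hypothetical -- verify them on your system first.
import subprocess

LUNS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # 800GB 910: four 200GB SSDs
MD_DEVICE = "/dev/md0"

def create_stripe(md_device, luns):
    """Create a RAID-0 stripe across the given LUNs (RAID-1 would use --level=1)."""
    subprocess.run(
        ["mdadm", "--create", md_device,
         "--level=0",
         "--raid-devices=%d" % len(luns)] + luns,
        check=True,
    )

if __name__ == "__main__":
    create_stripe(MD_DEVICE, LUNS)
    # Afterwards, format and mount /dev/md0 as you would any single drive.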

The 910 could certainly be more flexible if it added these two missing features, but I don't believe their absence is a huge issue for most who would be interested in the drive.

The Controller

A few years back Intel announced a partnership with Hitachi to build SAS enterprise SSDs. Intel would contribute its own IP on the controller and firmware side, while Hitachi would help with the SAS interface and build/sell the drives themselves. The resulting controller looks a lot like Intel's X25-M G2/310/320 controller, but with a few changes. The big architectural difference is obviously support for the SAS interface. Intel also moved from a single core design to a dual-core architecture with the Hitachi controller. One core is responsible for all host side transactions while the other manages the NAND/FTL side of the equation. The Intel/Hitachi controller is still a 10-channel design like its consumer counterpart. Unlike the SATA-based SSD 320 and 710, however, the SAS version does not support hardware accelerated encryption.

Hitachi uses this controller on its Ultrastar SSD400M, but it's also found on the Intel SSD 910. Each controller manages a 200GB partition (more on the actual amount of NAND later). In other words, the 400GB 910 features two controllers while the 800GB 910 has four. As a result there's roughly a doubling of performance between the two drives.
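A quick back-of-the-envelope sketch of that scaling, using the 800GB drive's rated sequential figures from the spec table further down. The per-controller numbers are derived here purely for illustration; Intel doesn't publish per-controller specs.

# Rough per-controller scaling estimate for the 910 (illustrative only).
rated_800gb = {"controllers": 4, "seq_read_mbps": 2000, "seq_write_mbps": 1000}

per_ctrl_read = rated_800gb["seq_read_mbps"] / rated_800gb["controllers"]    # ~500 MBps
per_ctrl_write = rated_800gb["seq_write_mbps"] / rated_800gb["controllers"]  # ~250 MBps

# The 400GB drive has two controllers, so expect roughly half the 800GB figures.
print("400GB estimate: %.0f / %.0f MBps read/write" % (2 * per_ctrl_read, 2 * per_ctrl_write))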

As I mentioned earlier, all of the controllers on the 910 sit behind a single LSI SAS to PCIe bridge with drivers that are built into all modern versions of Windows; Linux and VMware support is also guaranteed. By choosing a widely used SAS to PCIe bridge, Intel can deliver the illusion of a plug-and-play SSD even though it's a PCIe card with a third-party SAS controller.

Price and Specs

One benefit of Intel's relatively simple board design is the 910's remarkably competitive cost structure:

Intel SSD 910 Pricing
Drive                  Capacity   Price    $ per GB
Intel SSD 710          200GB      $790     $3.95
Intel SSD 910          400GB      $2000    $5.00
Intel SSD 910          800GB      $4000    $5.00
OCZ Z-Drive R4 CM84    600GB      $3500    $5.83

While you do pay a premium over Intel's SSD 710, the 910 is actually cheaper per gigabyte than the SandForce based OCZ Z-Drive R4. At $5/GB in etail, Intel's 910 is fairly reasonably priced for an enterprise drive - particularly when you take into account the amount of NAND you're getting on board (1792GB for the 800GB drive).
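That NAND figure changes the cost picture a bit. A quick calculation using the numbers above (the 800GB model's price and its 1792GB of on-board NAND):

# Cost per gigabyte for the 800GB Intel SSD 910, usable capacity vs. raw NAND.
price_usd = 4000
usable_gb = 800
raw_nand_gb = 1792

print("Per usable GB: $%.2f" % (price_usd / usable_gb))              # $5.00
print("Per GB of NAND on board: $%.2f" % (price_usd / raw_nand_gb))  # ~$2.23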

Intel Enterprise SSD Comparison
                                           Intel SSD 910      Intel SSD 710       Intel X25-E       Intel SSD 320
Interface                                  PCIe 2.0 x8        SATA 3Gbps          SATA 3Gbps        SATA 3Gbps
Capacities                                 400 / 800GB        100 / 200 / 300GB   32 / 64GB         80 / 120 / 160 / 300 / 600GB
NAND                                       25nm MLC-HET       25nm MLC-HET        50nm SLC          25nm MLC
Max Sequential Performance (Reads/Writes)  2000 / 1000 MBps   270 / 210 MBps      250 / 170 MBps    270 / 220 MBps
Max Random Performance (Reads/Writes)      180K / 75K IOPS    38.5K / 2.7K IOPS   35K / 3.3K IOPS   39.5K / 600 IOPS
Endurance (Max Data Written)               7 - 14 PB          500TB - 1.5PB       1 - 2PB           5 - 60TB
Encryption                                 -                  AES-128             -                 AES-128
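The endurance row is easier to reason about as drive writes per day. A rough sketch below, assuming a 5-year service life (my assumption for illustration; the table doesn't specify a warranty period):

# Translate an endurance rating (total data written) into full drive writes per day.
def drive_writes_per_day(endurance_pb, capacity_gb, years=5.0):
    """Assumes a 5-year service life -- an illustrative assumption, not a spec."""
    endurance_gb = endurance_pb * 1_000_000
    return endurance_gb / (capacity_gb * 365 * years)

# Intel SSD 910 800GB, rated for 7 - 14 PB of writes:
print("%.1f - %.1f drive writes per day" %
      (drive_writes_per_day(7, 800), drive_writes_per_day(14, 800)))  # ~4.8 - 9.6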

By default the 910 is rated for a 25W max TDP regardless of capacity. At 25W the 910 requires 200 linear feet per minute (LFM) of airflow to keep its temperature below 55C. The 800GB drive can also run in a special performance mode that allows it to dissipate up to 28W on average (38W peak). In performance mode you get increased sequential write performance, but the drive needs more cooling (300 LFM) and obviously draws more power. The 400GB drive effectively always runs in its performance mode, yet its power consumption and cooling requirements stay at 25W and 200 LFM, respectively.
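Since LFM is an air velocity rather than a volume of air, the cooling requirement only turns into a fan spec once you account for the cross-section the air moves through. A small sketch of that conversion, using a hypothetical 4" x 1" opening in front of the card (not a figure from Intel's documentation):

# Convert an airflow velocity requirement (LFM) into volumetric flow (CFM)
# for a rectangular opening. The opening dimensions below are hypothetical.
def lfm_to_cfm(lfm, width_in, height_in):
    area_sq_ft = (width_in * height_in) / 144.0  # square inches -> square feet
    return lfm * area_sq_ft

print("Default mode:     %.1f CFM" % lfm_to_cfm(200, 4, 1))  # ~5.6 CFM
print("Performance mode: %.1f CFM" % lfm_to_cfm(300, 4, 1))  # ~8.3 CFM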

Comments

  • web2dot0 - Friday, August 10, 2012 - link

    That's why you need a comparison, buddy. Otherwise, why don't we just read off the spec sheet and declare a winner? Let's face it, the Z-Drive R4 is NO FusionIO, ok.

    FusionIO is a proven entity backed by a number of reputable companies (Dell, HP, etc...). Those companies didn't sign on because the cards are crap. Who's backing the Z-Drive?

    They are the standard by which enterprise SSDs are measured. At least, that's the general consensus.
  • happycamperjack - Friday, August 10, 2012 - link

    Spec sheet? Did you even read the benchmarks in that comparison? FusionIO's ioDrive clearly lost out there except in low queue depth situations.

    As for who's backing OCZ's enterprise SSDs, let's see: Microsoft, SAP, and eBay, just to name a few. I don't know where you get the idea that OCZ's enterprise products don't meet the standard, but OCZ is currently the 4th largest enterprise SSD provider. So you are either very misinformed, or just a clueless FusionIO fanboy.
  • web2dot0 - Sunday, August 12, 2012 - link

    Come on dude.

    You are clearly looking at the spec sheets. The feature sets offered by FusionIO cards are light years ahead of OCZ's cards.

    The toolset is also light years ahead. It's not always just about performance. Otherwise, everyone would be using Xen and nobody would be using VMware. Get it?

    I would like to see a direct comparison of FusionIO cards (on workloads that matter to enterprises), not how you THINK they will perform.

    You are either very much misinformed or you are a clueless kid.
  • happycamperjack - Thursday, August 16, 2012 - link

    What spec sheet? I'm comparing the benchmark charts on the later pages, which you obviously have not clicked through. There are enterprise comparisons too, ok kid?

    What's great about FIO is its software sets for big data, its low latency, and its high performance at low queue depths. But if you're just comparing single card performance and price per GB, FIO is overpriced IMO. And FIO's PCIe cards' lackluster performance at high queue depths highlights what could be the doom of FPGA PCIe cards as the cheap ASIC controllers mature and overthrow the FPGA cards with their sheer numbers on a board.

    My guess is that in 2 years, FPGA PCIe SSDs will only be used in some specialized Tier 0 storage for high performance computing that benefits from the FPGA's feature sets. Similar to the fate of Rambus's RDRAM.

    And if OCZ is good enough for MS's Azure cloud, I don't see why it's not good enough for other enterprises.
  • hmmmmmm - Saturday, August 11, 2012 - link

    Unfortunately, they are comparing the 910 to a discontinued Fusion-io card from 2009. I would like to see a newer card in the comparison, to see how the 910 stacks up against what's on the market today.
  • happycamperjack - Thursday, August 16, 2012 - link

    I'd love to see some ioDrive 2 comparisons too. Unfortunately I can't find any.
  • zachj - Thursday, August 9, 2012 - link

    Does the 910 have a capacitor to drain contents of DRAM to flash during a power outage?
  • FunBunny2 - Thursday, August 9, 2012 - link

    It looked like it, but I didn't read a mention. Could be bad eyesight.
  • erple2 - Thursday, August 9, 2012 - link

    For the market this targets, you should never have a power outage that affects your server. These are too expensive not to have some sort of redundant power source - at least a solid UPS, or better yet, a server room backup generator.

    That having been said, if you look at the main PCB, you can see 4 capacitors of some sort.
  • mike_ - Saturday, August 11, 2012 - link

    >>For the market that this targets, you should never have a power outage that affects your server.

    You'd wish it weren't so, but environments can and will fail. If it has capacitors and such, that's great; if it doesn't, this device is effectively useless. Surprised it didn't get mentioned :)
