Solid state storage has managed to saturate the SATA interface almost as quickly as new standards are introduced. The first generation of well-built MLC SSDs quickly bumped into the limits of 3Gbps SATA, as did the first generation of 6Gbps MLC SSDs. With hard drives nowhere near running out of headroom on a 6Gbps interface, it's clear that SSDs need to transition to an interface that can offer significantly higher bandwidth.

The obvious choice is PCI Express. A single PCIe 2.0 lane is good for 500MB/s of data upstream and downstream, for an aggregate of 1GB/s. Build a PCIe 2.0 x16 SSD and you're talking 8GB/s in either direction. The first PCIe 3.0 chipsets have already started shipping and they'll offer even higher bandwidth per lane (~1GB/s per lane, per direction).
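
To put the scaling in perspective, here's a minimal sketch of that bandwidth arithmetic in Python. The per-lane figures are the approximate numbers quoted above, not protocol-overhead-adjusted values:

    # Rough PCIe link bandwidth math based on the per-lane figures above.
    PCIE2_LANE_MBPS = 500    # PCIe 2.0: ~500 MB/s per lane, per direction
    PCIE3_LANE_MBPS = 1000   # PCIe 3.0: ~1 GB/s per lane, per direction

    def link_bandwidth(lanes, per_lane_mbps):
        """Return (per-direction, aggregate) bandwidth in MB/s for a PCIe link."""
        per_direction = lanes * per_lane_mbps
        return per_direction, 2 * per_direction

    print(link_bandwidth(16, PCIE2_LANE_MBPS))  # (8000, 16000): 8GB/s each way on a 2.0 x16 link
    print(link_bandwidth(16, PCIE3_LANE_MBPS))  # (16000, 32000): roughly double with PCIe 3.0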


OCZ's RevoDrive X2 PCIe SSD

PCI Express is easily scalable and just as ubiquitous as SATA in modern systems, making it a natural fit for ultra high performance SSDs. While SATA Express will hopefully merge the two in a manner that preserves backwards compatibility for existing SATA drives, the server market needs solutions today.

In the past you needed a huge chassis to deploy an 8-core server, but thanks to Moore's Law you can cram a dozen high-performance x86 cores into a single 1U or 2U chassis. These high density servers are great for compute performance, but they do significantly limit per-server storage capacity. With the largest eMLC drives topping out at 400GB and SLC drives well below that, if you have high performance needs in a small rackmount chassis you need to look beyond traditional 2.5" drives.

Furthermore, if all you're going to do is put a bunch of SAS/SATA drives behind a PCIe RAID controller, it makes more sense to cut out the middleman and combine the drives and the controller on a single card.


Micron's P320h

We've seen PCIe SSDs that do just that, including several from OCZ under the Z-Drive and RevoDrive brands. Although OCZ has delivered many iterations of PCIe SSDs at this point, they all still follow the same basic principle: combine independent SAS/SATA SSD controllers on a PCIe card with a SAS/SATA RAID controller of some sort. Eventually we'll see designs that truly cut out the middlemen and use native PCIe-to-NAND SSD controllers and a simple PCIe switch or lane aggregator. Micron has announced one such drive, the P320h. The NVMe specification is designed to support the creation of exactly this type of drive; however, we have yet to see any implementations of the spec.

Many companies have followed in OCZ's footsteps and built similar drives, and most share one thing in common: the use of SandForce controllers. If you're working with encrypted or otherwise incompressible data, SandForce isn't your best bet, and there are also concerns about the validation, compatibility and reliability of SandForce's controllers.

Intel's SSD 910

Similar to its move into the MLC SSD space, Intel is arriving late to the PCIe SSD game - but it hopes to gain market share on the back of good performance, competitive pricing and reliability.

The first member of the new PCIe family is the Intel SSD 910, consistent with Intel's 3-digit model number scheme.

Enterprise SSD Comparison
                                          | Intel SSD 910    | Intel SSD 710     | Intel X25-E     | Intel SSD 320
Interface                                 | PCIe 2.0 x8      | SATA 3Gbps        | SATA 3Gbps      | SATA 3Gbps
Capacities                                | 400 / 800 GB     | 100 / 200 / 300GB | 32 / 64GB       | 80 / 120 / 160 / 300 / 600GB
NAND                                      | 25nm MLC-HET     | 25nm MLC-HET      | 50nm SLC        | 25nm MLC
Max Sequential Performance (Reads/Writes) | 2000 / 1000 MBps | 270 / 210 MBps    | 250 / 170 MBps  | 270 / 220 MBps
Max Random Performance (Reads/Writes)     | 180K / 75K IOPS  | 38.5K / 2.7K IOPS | 35K / 3.3K IOPS | 39.5K / 600 IOPS
Endurance (Max Data Written)              | 7 - 14 PB        | 500TB - 1.5PB     | 1 - 2PB         | 5 - 60TB
Encryption                                | -                | AES-128           | -               | AES-128

The 910 is a single-slot, half-height, half-length PCIe 2.0 x8 card with either 896GB or 1792GB of Intel's 25nm MLC-HET NAND. Part of the high endurance formula is extra NAND for redundancy as well as larger than normal spare area on the drive itself. Once those two things are accounted for, what remains is either 400GB or 800GB of available storage.
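
As a quick sanity check on those numbers, the sketch below works out what fraction of the raw NAND is held back for redundancy and spare area, assuming the raw and usable figures quoted above and ignoring GB/GiB rounding:

    # Fraction of raw NAND reserved for redundancy and spare area on each model.
    def spare_fraction(raw_gb, usable_gb):
        return 1 - usable_gb / raw_gb

    print(f"{spare_fraction(896, 400):.1%}")   # ~55.4% held back on the 400GB model
    print(f"{spare_fraction(1792, 800):.1%}")  # ~55.4% held back on the 800GB model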

The 910's architecture is surprisingly simple. The solution is a layered design composed of two or three boards stacked on one another. The first PCB features either two or four SAS SSD controllers, jointly developed by Intel and Hitachi (the same controllers are used in Hitachi's Ultrastar SSD400M). These controllers are very similar to Intel's X25-M/G2/310/320 controller family, but with a couple of changes. The client controller features a single CPU core, while the Intel/Hitachi controller features two cores: one manages the NAND side of the drive while the other manages the SAS interface. Both are 10-channel designs, although the 910's implementation features 14 NAND packages per controller.

In front of the controllers sits an LSI 2008 SAS-to-PCIe bridge. There's no support for hardware RAID; each controller presents itself to the OS as a single drive with a 200GB (186GiB) capacity. You are free to use software RAID to aggregate the drives as you see fit, but by default you'll see either two or four physical drives appear.
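
For reference, here's a minimal sketch of the decimal-to-binary capacity conversion, plus the raw capacity a simple software RAID 0 stripe across all four members of the 800GB model would present (illustrative only; usable space after RAID metadata and filesystem overhead will be somewhat lower):

    # Decimal (GB) vs. binary (GiB) capacity, and the aggregate size of a 4-member stripe.
    GIB = 1024**3

    def gb_to_gib(gb):
        return gb * 1e9 / GIB

    print(f"{gb_to_gib(200):.0f} GiB")      # ~186 GiB per member drive
    print(f"{gb_to_gib(4 * 200):.0f} GiB")  # ~745 GiB striped across four members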

The second PCB is home to 896GB of Intel's 25nm MLC-HET NAND, spread across 28 TSOP packages. The third PCB is only present if you order the 800GB version, and it adds an extra 896GB of NAND (another 28 packages). Even in a fully populated three-board stack, the 910 only occupies a single PCIe slot.

The 910's TDP is set at 25W, and the card requires airflow of 200 linear feet per minute for proper cooling.

The use of LSI's 2008 SAS-to-PCIe controller makes sense as there's widespread OS support for it; in many cases you won't even need to supply a 3rd party driver. The 910 isn't bootable, but I don't believe that's much of an issue as you're more likely to deploy a server with a small boot drive anyway. There's also no support for hardware encryption, a more unfortunate omission.

Intel's performance specs for the 910 are understandably awesome:

Intel SSD 910 Performance Specs
                          | 400GB     | 800GB
Random 4KB Read (Up to)   | 90K IOPS  | 180K IOPS
Random 4KB Write (Up to)  | 38K IOPS  | 75K IOPS
Sequential Read (Up to)   | 1000 MB/s | 2000 MB/s
Sequential Write (Up to)  | 750 MB/s  | 1000 MB/s

Intel's specs come from aggregating performance across all controllers, but you're still looking at a great combination of performance and capacity. These numbers are applicable to both compressible and incompressible data.
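
Since the headline numbers are sums across the member controllers, it's worth dividing them back out. The arithmetic below is my own, not an Intel-published per-controller figure; note that per-controller read performance is essentially constant, while per-controller sequential write is lower on the 800GB model:

    # Per-controller share of the aggregate specs (2 controllers on 400GB, 4 on 800GB).
    specs = {  # model: (controllers, seq read MB/s, seq write MB/s, rand read IOPS, rand write IOPS)
        "400GB": (2, 1000, 750, 90_000, 38_000),
        "800GB": (4, 2000, 1000, 180_000, 75_000),
    }

    for model, (n, sr, sw, rr, rw) in specs.items():
        print(model, sr / n, sw / n, rr / n, rw / n)
    # 400GB: 500.0 MB/s read, 375.0 MB/s write, 45000.0 / 19000.0 IOPS per controller
    # 800GB: 500.0 MB/s read, 250.0 MB/s write, 45000.0 / 18750.0 IOPS per controller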

The 910 will ship with a software tool that allows you to get even more performance out of the drive (up to 1.5GB/s write speed) by increasing the board's operating power to 28W from 25W.

Intel SSD 910 Endurance Ratings
                  | 400GB      | 800GB
4KB Random Write  | Up to 5PB  | Up to 7PB
8KB Random Write  | Up to 10PB | Up to 14PB

The 910 is rated for up to 2.5PB of 4KB or 3.5PB of 8KB random writes per NAND module (200GB).

The pricing is also fairly reasonable. The 400GB model carries a $1929 MSRP while the 800GB will set you back $3859; both come in below $5/GB. Samples are available today, with production of Intel's SSD 910 shipping sometime in the first half of the year.

Intel SSD 910 Pricing
          | 400GB   | 800GB
MSRP      | $1929   | $3859
$ per GB  | $4.8225 | $4.8238
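
The per-gigabyte figures are simply MSRP divided by usable capacity; a quick back-of-the-envelope check:

    # Dollars per usable gigabyte from the MSRPs above.
    for price, gb in ((1929, 400), (3859, 800)):
        print(f"${price / gb:.2f}/GB")  # roughly $4.82/GB for both models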

I have to say that I'm pretty excited to see Intel's 910 in action. Intel's reputation as an SSD maker carries a lot of weight in the enterprise market already. The addition of a high-end PCIe solution will likely be well received by its existing customers and others who have been hoping for such a solution.

Comments

  • papapapapapapapababy - Thursday, April 12, 2012

    silly question: my GA-H61M-S2V-B3 has slow SATA 3Gb/s and only 3 x PCI Express 2.0 x1 slots ( hey it was a gift) ... can i plug something like this the single - vga- PCI Express x16 slot? ( using the om-board video of dat awesome G530 ) what about other PCI Express 2.0 x1 SSDs? are there any out here? what about a review of the Z-Drive R5, supertalent corestone? etc. THANKS.
  • ggathagan - Friday, April 13, 2012

    Cost of a new motherboard with two 6Gbps ports: $120-$140 (GA-Z68MA-D2H-B3 or GA-Z77MX-D3H)
    Cost of two Samsung 830 Series 256GB SSD's for RAID0 disk: $600

    Cost of 400GB Intel 910: $1929

    Intel 910 (400GB)
    Random 4KB Read (Up to) 90K IOPS
    Random 4KB Write (Up to) 38K IOPS
    Sequential Read (Up to) 1000 MB/s
    Sequential Write (Up to) 750 MB/s

    Samsung SSD 830
    Random 4KB Read (Up to) 80K IOPS
    Random 4KB Write (Up to) 36K IOPS
    Sequential Read (Up to) 520 MB/s
    Sequential Write (Up to) 400 MB/s

    Figure on a RAID0 setup roughly doubling your performance.
    I just saved you about $1200!
  • papapapapapapapababy - Friday, April 13, 2012

    Already have the motherboard, plus they cost twice as much or even more over here... so you didn't answer shit. also i already have the bandwidth, hate the sata BS limitations and legacy hdd form factor. and im not particularly interested in this monstrosity ( or any kind of raid silliness) just a good PCIe 2.0 x1 @ 88NV9145 silicon. any suggestions?
  • nexox - Thursday, April 12, 2012

    Looks like people are awfully hung up on the 4KB IOPS specification, which just isn't that important for the enterprise market. I've found that the manufacturer specs on a drive are almost useless in predicting relative performance for a given application. Enterprise customers benchmark, with their application-specific data patterns, for each application. Lower-spec Intel drives frequently come out ahead (in my tests, anyway,) because Intel seems to design drives that perform well in the real world, not just on synthetic benchmarks (4KB IOPS, for instance.)

    Obviously I can't tell from the specs here, and I haven't gotten a 910 to benchmark yet, but I would bet that latency figures (magnitude and consistency), especially after the drive has run out of erased blocks, are far nicer than similarly-priced alternatives. Most consumer drives (and low-end enterprise drives based on related controllers) suffer from (relatively) large pauses, and ugly interactions between write activity and read latency, which is quite a turn-off for many enterprise users.

    Compared to the FusionIO MLC devices, these look cheap enough to be almost disposable, and they shouldn't have any irritating user-space processes hogging CPU like the FusionIOs do.

    Use of the standard LSI controller is also quite nice, because when you've got a system that works, adding kernel modules and / or binary drivers can really be a huge pain, and incur lots of extra testing.

    And no, I don't work for Intel, and I'm not an Intel fanboy - off the top of my head I can't think of a single piece of Intel hardware that I own personally, though I'm sure I've got an old Pentium 4 or something lurking in my junk pile.
  • Makaveli - Thursday, April 12, 2012

    Your experience and testing mirrors something a co-worker of mine told me.

    In the testing he has done, the Intel drives usually do far better regardless of the benchmark numbers SandForce likes to throw around. Which easily sways consumers that are looking at bar graphs all day long.
  • rimsha - Friday, April 13, 2012

    check more detail
    http://www.gadget-mag.com/intel-ssd-910-series-wit...
  • blanarahul - Saturday, April 14, 2012

    Honestly i absolutely hate the idea of a RAID-on-card solution. I wish Intel would use the P320h controller, make its own firmware, validate it and create a super duper cool ssd.

    But i also want OCZ to use their Kilimanjaro platform for making consumer ssds like Revodrive or Vertex. I would absolutely love to have a PCIe-to-NAND ssd in my computer.
  • RaptorHunter - Sunday, April 22, 2012

    So the 400GB cost $1929 and only gets 1000MB/s ???

    Couldn't you just buy 4 128GB normal sata SSD drives and raid them together. That would give you almost 2000MB/s for only $600
  • x0rg - Tuesday, May 8, 2012

    910 is not bootable?? So sad.. The system hard drive with OS on it is a bottle neck. OK, I'll wait...
  • Jon Severinsson - Saturday, July 21, 2012

    Note that "not bootable" does not mean you can't use it for your system drive (/ on Linux, C:\ on Windows), only that you need to load the bootloader (and possible OS kernel) from somewhere else.

    Configuring this on Windows is tricky (but possible), but on Linux putting the bootloader and OS kernel on a separate disk is a standard install-time option, and installing just the bootloader on a separate disk (or usb stick or whatever) is also trivial (though getting it to read the kernel from a disk not supported by BIOS/EFI is a bit tricky).

    That is either approx. 20 kiB (bootloader) or approx. 15 MiB (bootloader + OS kernel) you need to load from somewhere else *once* on boot. Not exactly a performance bottleneck...
