The Drive and The Teardown

The 910 starts as a half-height PCIe 2.0 x8 card, although a full-height bracket comes in the box as well:

Intel sent us the 800GB 910, which features three PCBs sandwiched together; the 400GB model has only two. The top one or two PCBs (400GB and 800GB, respectively) are home exclusively to NAND packages, while the final PCB holds all of the controllers and DRAM. Each NAND PCB carries 28 NAND packages, for a total of 56 NAND devices on an 800GB Intel SSD 910. Here's a shot of the back of the topmost PCB:

Each PCB has 17 NAND packages on the front and 11 on the back. If you look closely (and remember Intel's NAND nomenclature) you'll realize that these are quad-die 25nm MLC-HET NAND packages with a total capacity of 32GB per package. Do the math and that works out to be 1792GB of NAND on an 800GB drive (I originally underestimated how much NAND Intel was putting on these things). Intel uses copious amounts of NAND as spare area in all of its enterprise class SSDs (the 2.5" 200GB Intel SSD 710 used 320GB of NAND). Having tons of spare area helps ensure write amplification remains low and keeps endurance high, allowing the 910 to hit Intel's aggressive 7-14 petabyte endurance target.

Intel SSD 910 Endurance Ratings
                      400GB         800GB
4KB Random Write      Up to 5PB     Up to 7PB
8KB Random Write      Up to 10PB    Up to 14PB
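The spare-area arithmetic behind those ratings is easy to verify. A quick sketch using the figures from the teardown (package counts and capacities are from the article; the code itself is just illustrative):

```python
# Spare-area math for the Intel SSD 910, using the teardown's figures.
# 28 packages per NAND PCB (17 front + 11 back), 32GB per quad-die package.

PACKAGES_PER_PCB = 28
GB_PER_PACKAGE = 32

def raw_nand_gb(nand_pcbs):
    """Total raw NAND on the card; the 800GB model has two NAND PCBs."""
    return nand_pcbs * PACKAGES_PER_PCB * GB_PER_PACKAGE

raw = raw_nand_gb(2)                    # 800GB model
spare_fraction = (raw - 800) / raw      # fraction of NAND held as spare area

print(raw)                              # 1792
print(round(spare_fraction, 3))         # 0.554 -- over half the NAND is spare
```

Over 55% of the card's NAND is spare area, which is how write amplification stays low enough to support multi-petabyte endurance ratings.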

Remove the topmost PCB on the 800GB drive and you'll see the middle layer with another 28 NAND packages totaling 896GB. The NAND is organized in the same 17 + 11 configuration as the top PCB:

This next shot is the middle PCB again, just removed from the stack completely:

and here's the back of the second PCB:

The final PCB in the stack is home to the four Intel/Hitachi controllers and half of the 2GB of DDR2-800:

Under the heatsink is LSI's SAS2008 SAS-to-PCIe bridge, responsible for connecting all of the Intel/Hitachi controllers to the outside world. Finally we have the back of the Intel SSD 910, which is home to the other half of the 2GB of DDR2-800 on the card:

The 910 is a very compact, well-assembled design. The whole thing, even in its half-height form factor, occupies only a single PCIe slot. Cooling the card isn't a problem for a conventional server; Intel claims you need just 200LFM of airflow to keep the 910 within comfortable temperatures.

Comments

  • JellyRoll - Friday, August 10, 2012 - link

    WOW. Low QD testing on an enterprise PCIe storage card is ridiculous. End users of these SSDs will use them in datacenters, where the average QD will be extremely high. This evaluation shows absolutely nothing that will be encountered in this type of SSD's actual usage. No administrator in their right mind would purchase these for such ridiculously low workloads.
  • SanX - Friday, August 10, 2012 - link

    If you do not need more than 16/32/64GB for your speedy needs, then consider an almost-free RAMdisk with backup. It will be 4-8x faster than this card.
  • marcplante - Friday, August 10, 2012 - link

    It seems that there would be a market for a consumer desktop implementation.
  • Ksman - Friday, August 10, 2012 - link

    Given how well the 520s perform, perhaps a RAID of 520s on an LSI RAID adapter would be a very good solution, and a comparison vs. the 910 would be interesting. If RAID>0, one could pull drives and attach them directly for TRIM etc., which would eliminate the problem where SSDs in a RAID cannot be managed.
  • Pixelpusher6 - Friday, August 10, 2012 - link

    I was wondering the exact same thing. What are the advantages of offering a PCIe solution like this compared to, say, just throwing in a SAS RAID card and connecting a bunch of SAS SSDs in a RAID 0? Is the Intel 910 mainly targeted at 1U/2U servers that might not have space available for a 2.5" drive? Is it possible to over-provision any 2.5" drive to increase endurance and reduce write amplification (I think the desktop Samsung 830 I have allows this)? Seeing the performance charts, I wonder how two of those Toshiba 400GB SAS drives would compare against the Intel 910.

    Is the enterprise market moving towards MLC-HET NAND with tons of spare area vs. SLC NAND because of the low cost of MLC NAND now since fabs have ramped up production? I was under the impression that SLC NAND was preferable in the enterprise segment but I might be wrong. What are some usage scenarios where SLC would be better than MLC-HET and vice versa?

    I think lorribot brought up a good point:

    "I like the idea but coming from a highly redundant arrays point of view how do you set this all up in a a safe and secure way, what are the points of failure? what happens if you lose the bridge chip, is all your data dead and buried?"

    I wonder if it is possible to just swap the first PCB (the one with all the controllers and DRAM) in case of a failure of the bridge chip or a controller, so the data remains safe. Can SSD controllers fail? Is it likely that the Intel 910 will be used in RAID 0? I didn't think RAID 0 was used much in enterprise. Sorry for all the questions. I have been visiting this site for over 10 years and I just now registered an account.
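On the over-provisioning question raised above: leaving part of a consumer drive unpartitioned does effectively increase spare area. A hedged sketch with hypothetical numbers, none of them from the article (a drive with 256GiB of raw NAND sold as 256GB, which is where the typical ~7% factory over-provisioning comes from):

```python
# Hypothetical over-provisioning example -- figures are illustrative only.
# Consumer drives ship with ~256GiB of NAND (~274.9GB) sold as 256GB.

def effective_spare(raw_gb, user_gb):
    """Fraction of raw NAND left over as spare area."""
    return (raw_gb - user_gb) / raw_gb

raw = 256 * 1.024**3            # 256GiB expressed in decimal GB, ~274.9
print(round(effective_spare(raw, 256), 3))  # ~0.069: the usual ~7% factory OP
print(round(effective_spare(raw, 206), 3))  # partition only 206GB: ~25% spare
```

The drive's garbage collection can use any never-written LBAs as de facto spare area, which is why simply leaving space unpartitioned helps endurance and write amplification on most controllers.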
  • FunBunny2 - Saturday, August 11, 2012 - link

    eMLC/MLC-HET/foo-MLC are all attempts to get cheaper parts into SSD chassis, even for enterprise companies such as Texas Memory. Part of the motivation is yet more sophisticated controllers, and, I suspect, the realization that enterprises understand duty life far better than consumers (who'll run a HDD forever if it survives infant mortality). The SSD survival curve (due to NAND failure) is more predictable than HDD, so with the very much faster operations, if 5 years remains the lifetime, the parts used don't matter. The part gets swapped out at 90% or 95% of duty life (or whatever %-age the shop decides); end of story. 5 years ago, SLC was the only way to 5 years. That's not true any longer.
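The swap-at-a-fraction-of-duty-life policy FunBunny2 describes reduces to simple division. A sketch, assuming the 800GB model's 7PB 4KB random-write rating from the table above; the 2TB/day workload is an invented example, not a figure from the article:

```python
# Back-of-envelope for a duty-life swap-out policy. The 7PB rating is the
# 910's 4KB random-write endurance; the daily write volume is hypothetical.

PB = 1000 ** 5
TB = 1000 ** 4

def days_until_swap(rated_bytes, bytes_per_day, retire_at=0.90):
    """Days of writing until the drive hits the retirement threshold."""
    return (rated_bytes * retire_at) / bytes_per_day

days = days_until_swap(7 * PB, 2 * TB)  # retire at 90% of the 7PB rating
print(round(days))                      # 3150 days, roughly 8.6 years
```

Even at 90% retirement and a heavy sustained workload, the rated endurance comfortably outlasts a five-year service life, which supports the comment's point that MLC-HET parts now suffice where SLC was once required.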
  • GatoRat - Sunday, August 12, 2012 - link

    "the 800GB 910 is easily the fastest SSD we've ever tested."

    Yet the tests clearly show that it isn't. In fact, the Oracle tests show it's a dog. In other tests, it doesn't come out on top. The OCZ Z-Drive R4 CM84 600GB is clearly the faster overall drive.
  • Galcobar - Sunday, August 12, 2012 - link

    I'm impressed both to see the literary reference, correctly used, and that nobody has called it a typo in the comments. Not bad for a fifty-year-old novel once dismissed by the New York Times as a puerile mishmash.
  • a50505 - Thursday, August 30, 2012 - link

    So, has anyone heard of a workstation-class laptop that comes with a PCIe-based SSD?
