The Drive and The Teardown

The 910 starts as a half-height PCIe 2.0 x8 card, although a full-height bracket comes in the box as well:

Intel sent us the 800GB 910, which features three PCB layers sandwiched together; the 400GB model only has two boards. The upper PCBs (one on the 400GB model, two on the 800GB) are home exclusively to NAND packages, while the final PCB is where all of the controllers and DRAM reside. Each NAND PCB carries 28 NAND packages, for a total of 56 NAND devices on an 800GB Intel SSD 910. Here's a shot of the back of the topmost PCB:

Each PCB has 17 NAND packages on the front and 11 on the back. If you look closely (and remember Intel's NAND nomenclature) you'll realize that these are quad-die 25nm MLC-HET NAND packages with a total capacity of 32GB per package. Do the math and that works out to 1792GB of NAND on an 800GB drive (I originally underestimated how much NAND Intel was putting on these things). Intel uses copious amounts of NAND as spare area in all of its enterprise class SSDs (the 2.5" 200GB Intel SSD 710 used 320GB of NAND). Having tons of spare area helps keep write amplification low and endurance high, allowing the 910 to hit Intel's aggressive 7 - 14 petabyte endurance targets.

Intel SSD 910 Endurance Ratings
                      400GB         800GB
4KB Random Write      Up to 5PB     Up to 7PB
8KB Random Write      Up to 10PB    Up to 14PB
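To put those numbers in perspective, here's a quick back-of-the-envelope sketch in Python. The package counts and capacities come straight from the teardown above; the five year service life used to translate the endurance rating into daily writes is our own assumption for illustration, not an Intel spec:

    # NAND budget on the 800GB Intel SSD 910 (figures from the teardown above)
    PACKAGES = 56                 # 28 packages per NAND PCB x 2 NAND PCBs
    GB_PER_PACKAGE = 32           # quad-die 25nm MLC-HET, 8GB per die
    USER_GB = 800                 # advertised user capacity

    raw_gb = PACKAGES * GB_PER_PACKAGE     # 1792GB of raw NAND
    spare_gb = raw_gb - USER_GB            # 992GB held back as spare area
    print(f"raw: {raw_gb}GB, spare: {spare_gb}GB ({spare_gb / USER_GB:.0%} of user capacity)")

    # Translate the 7PB 4KB random write rating into daily writes,
    # assuming a hypothetical five year service life
    ENDURANCE_PB = 7
    DAYS = 5 * 365
    gb_per_day = ENDURANCE_PB * 1e6 / DAYS          # PB -> GB (decimal units)
    print(f"~{gb_per_day:.0f}GB/day, or {gb_per_day / USER_GB:.1f} drive writes per day")

That works out to nearly 4TB of 4KB random writes per day, every day, for five years; the 992GB of spare area is a big part of how the drive sustains that.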

Remove the topmost PCB on the 800GB drive and you'll see the middle layer with another 28 NAND packages totaling 896GB. The NAND is organized in the same 17 + 11 configuration as the top PCB:

This next shot is the middle PCB again, just removed from the stack completely:

And here's the back of the second PCB:

The final PCB in the stack is home to the four Intel/Hitachi controllers and half of the 2GB of DDR2-800:

Under the heatsink is LSI's SAS2008 SAS-to-PCIe bridge, responsible for connecting all four of the Intel/Hitachi controllers to the outside world. Finally, we have the back of the Intel SSD 910, which is home to the other half of the 2GB of DDR2-800 on the card:

The 910 is a very compact design and is well assembled. The whole thing, even in its half-height form factor, occupies only a single PCIe slot. Cooling the card isn't a problem for a conventional server; Intel claims you need 200 LFM of airflow to keep the 910 within comfortable temperatures.
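For reference, LFM is linear feet per minute. A minimal Python conversion (nothing here beyond the 200 LFM figure comes from Intel) shows that's a fairly gentle airflow requirement:

    # Convert Intel's 200 LFM airflow requirement to SI units
    lfm = 200                        # linear feet per minute
    m_per_s = lfm * 0.3048 / 60      # feet -> meters, minutes -> seconds
    print(f"{lfm} LFM = {m_per_s:.2f} m/s")   # ~1.02 m/s of front-to-back airflow

Any conventional server with functioning chassis fans should clear that bar easily.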

39 Comments


  • web2dot0 - Friday, August 10, 2012

    That's why you need a comparison, buddy. Otherwise, why don't we just read off the spec sheet and declare a winner? Let's face it, the Z-Drive R4 is NO FusionIO, ok.

    FusionIO is a proven entity backed by a number of reputable companies (Dell, HP, etc...). Those companies didn't sign on because the cards are crap. Who's backing the Z-Drive?

    They are the standard by which enterprise SSDs are measured. At least, that's the general consensus.
  • happycamperjack - Friday, August 10, 2012

    Spec sheet? Did you even read the benchmarks in that comparison? FusionIO's ioDrive clearly lost out there except in low queue depth situations.

    As for who's backing OCZ's enterprise SSDs, let's see: Microsoft, SAP, and eBay, just to name a few. I don't know where you got the idea that OCZ's enterprise products don't meet the standard, but they are currently the 4th largest enterprise SSD provider. So you are either very misinformed, or just a clueless FusionIO fanboy.
  • web2dot0 - Sunday, August 12, 2012

    Come on dude.

    You are clearly looking at the spec sheets. The feature sets offered by FusionIO cards are light years ahead of OCZ's cards.

    The toolset is also light years ahead. It's not always just about performance. Otherwise, everyone would be using Xen and nobody would be using VMware. Get it?

    I would like to see a direct comparison of FusionIO cards (on workloads that matter to enterprises), not how you THINK they will perform.

    You are either very much misinformed or you are a clueless kid.
  • happycamperjack - Thursday, August 16, 2012

    What spec sheet? I'm comparing the benchmark charts on the later pages, which you obviously have not clicked through. There are enterprise comparisons too, ok kid?

    What's great about FIO is its software stack for big data, plus its low latency and high performance at low queue depths. But if you're just comparing single-card performance per GB price ratio, FIO is overpriced IMO. And FIO's PCIe cards' lackluster performance at high queue depths highlights what could be the doom of FPGA-based PCIe cards as cheap ASIC controllers mature and overthrow the FPGA cards through their sheer numbers on a board.

    My guess is that in two years, FPGA-based PCIe SSDs will only be used in some specialized Tier 0 storage for high performance computing that benefits from the FPGA's feature sets, similar to the fate of Rambus's RDRAM.

    And if OCZ is good enough for MS's Azure cloud, I don't see why it's not good enough for other enterprises.
  • hmmmmmm - Saturday, August 11, 2012

    Unfortunately, they are comparing the 910 to a 2009-era, discontinued card from Fusion-io. I would like to see a newer card in the comparison, to be able to gauge what's on the market today.
  • happycamperjack - Thursday, August 16, 2012

    I'd love to see some ioDrive 2 comparisons too. Unfortunately, I can't find any.
  • zachj - Thursday, August 9, 2012

    Does the 910 have a capacitor to flush the contents of DRAM to flash during a power outage?
  • FunBunny2 - Thursday, August 9, 2012

    It looked like it, but I didn't read a mention. Could be bad eyesight.
  • erple2 - Thursday, August 9, 2012

    For the market this targets, you should never have a power outage that affects your server. These are too expensive not to have some sort of redundant power source: at least a solid UPS, or better yet, a server room backup generator.

    That having been said, if you look at the main PCB, you can see 4 capacitors of some sort.
  • mike_ - Saturday, August 11, 2012

    >>For the market that this targets, you should never have a power outage that affects your server.

    You'd wish it weren't so, but environments can and will fail. If it has capacitors and such, that's great; if it doesn't, this device is effectively useless. Surprised it didn't get mentioned :)
