Update: Micron tells us that the P320h doesn't support NVMe; we are digging to understand how Micron's controller differs from the NVMe IDT controller with a similar part number.

Well over a year ago, Micron announced something unique in a sea of PCIe SSDs that were otherwise nothing more than SATA drives in RAID on a PCIe card. The drive Micron announced was the P320h, featuring a custom ASIC and a native PCIe interface. The vast majority of PCIe SSDs we've looked at thus far feature multiple SATA/SAS SSD controllers with their associated NAND behind a SATA/SAS RAID controller on a PCIe card. These PCIe SSDs basically deliver the performance of a multi-drive SSD RAID-0 on a single card instead of requiring multiple 2.5" bays. There's decent interest in these types of PCIe SSDs simply because of the form factor advantage, as many servers these days have moved to slimmer form factors (1U/2U) that don't have all that many 2.5" drive bays. Long term, however, this SATA/SAS RAID on a PCIe card solution is clunky at best. Ideally you'd want a native PCIe controller that could talk directly to the NAND, rather than going through an unnecessary layer of abstraction. That's exactly what Micron's P320h promised. Today, we have a review of that very drive.

Although it was publicly announced a long time ago (in SSD terms), the P320h's specifications are still very competitive:

Micron P320h
Capacity                                     350GB              700GB
Interface                                    PCIe 2.0 x8
NAND                                         34nm ONFI 2.1 SLC
Max Sequential Performance (Reads/Writes)    3.2 / 1.9 GB/s
Max Random Performance (Reads/Writes)        785K / 205K IOPS
Max Latency (QD=1, Read/Write)               47 µs / 311 µs (non-posted)
Endurance (Max Data Written)                 25PB               50PB
Encryption                                   No
TDP                                          25W
Form Factor                                  Half-Height, Half-Length PCIe
                                             68.9mm x 167.65mm x 18.71mm

In fact, the only indication of this product's age is that it is launching with 34nm SLC NAND. Most of the enterprise SSDs we review these days have shifted to 2x-nm eMLC or MLC-HET. Micron will be making a 25nm SLC version available, as well as eMLC/MLC-HET versions in the future, but the launch product uses 34nm SLC NAND. I don't have official pricing from Micron yet, but I would expect it to be pretty high given the amount of expensive SLC NAND on each of the drives (512GB for the 350GB drive, 1TB for the 700GB drive).

The obvious benefit of using SLC NAND is endurance. While Intel's MLC-HET based 910 SSD tops out at 14PB of writes over the life of the 800GB drive, the 350GB P320h is rated for 25PB. The 700GB drive doubles that to 50 petabytes of writes.
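For a sense of scale, here's that rating converted into full drive fills, and into fills per day over a 5-year service life (the 5-year figure is an illustrative assumption, not Micron's spec):

```python
# Back-of-the-envelope: what does 25PB of writes mean for the 350GB model?
capacity_gb = 350
endurance_gb = 25 * 10**6                      # 25PB expressed in GB (decimal)

full_drive_fills = endurance_gb / capacity_gb
fills_per_day = full_drive_fills / (5 * 365)   # assumed 5-year service life
print(f"{full_drive_fills:,.0f} total fills, ~{fills_per_day:.0f} fills/day")
# -> roughly 71,429 total fills, or about 39 full drive writes per day
```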

Micron is also quite proud of its low read/write latencies, enabled by its low overhead PCIe controller and driver stack.

As a native PCIe SSD, the P320h features a single controller on the card - a giant 1517-pin controller made by IDT. The huge pin count is needed to connect the controller to its 32 independent NAND channels, 4x what we see from most SATA SSD controllers.

There are no bridge chips or RAID controllers on-board; that single Micron-developed, IDT-manufactured controller is all that's needed. Talk about clean.

Each of the 32 channels can talk to up to 8 targets, for a maximum capacity of 4TB, although Micron only uses 1TB of NAND on-board. Twenty-two percent of the on-board NAND is set aside as spare area for garbage collection, bad block replacement and wear leveling. An additional 1/8 of the user capacity is reserved for parity data.
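Plugging those figures into a rough capacity budget for the 700GB model shows how they fit together (this is back-of-the-envelope accounting from the numbers above, not Micron's exact bookkeeping):

```python
# Rough capacity budget for the 700GB P320h, using the figures in the text
raw_nand_gb = 1024                        # 1TB of physical NAND on board
after_spare = raw_nand_gb * (1 - 0.22)    # 22% spare area -> ~799GB left
user_gb = 700                             # advertised user capacity
parity_gb = user_gb / 8                   # 1/8 of user capacity for parity
print(f"after spare: {after_spare:.0f}GB, user + parity: {user_gb + parity_gb:.1f}GB")
# -> after spare: 799GB, user + parity: 787.5GB -- it fits, with a little slack
```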

The IDT controller features a configurable hardware RAID-5 that stripes accesses across multiple logical units. The logical units are broken down into blocks and pages as is standard for NAND based SSDs. Blocks and pages are striped across logical units, with parity data calculated from every 7 blocks/pages.

Micron picked 7+1P as its preferred balance of performance, user capacity and failure protection.

Calculating parity over fewer blocks/pages would let the drive withstand more failures, but capacity and performance would suffer. As NAND failures should be far rarer and more predictable than mechanical storage failures, this tradeoff shouldn't be a problem.
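To make the 7+1P scheme concrete, here's a toy sketch of the generic RAID-5 idea it's built on (an illustration of the concept, not Micron's actual firmware): parity is the XOR of seven data pages, so any single lost unit in a stripe can be rebuilt from the remaining seven.

```python
# Toy illustration of 7+1P striping: parity = XOR of seven data pages,
# so one failed page per stripe can be reconstructed from the survivors.
from functools import reduce

PAGE_SIZE = 4096  # assumed page size, purely for illustration

def xor_pages(pages):
    # Byte-wise XOR across all pages: the RAID-5 parity operation
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*pages))

data = [bytes([i]) * PAGE_SIZE for i in range(7)]  # seven data pages
parity = xor_pages(data)                           # the "+1P" page

# Lose page 3, then rebuild it from the six survivors plus parity
survivors = data[:3] + data[4:] + [parity]
assert xor_pages(survivors) == data[3]
```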

The P320h is available in one form factor: a half-height, half-length PCIe 2.0 x8 card. In the box are both half and full height brackets, allowing the P320h to fit in both types of cases.

Unlike most 2.5" SATA/SAS SSDs, these PCIe SSDs are pretty interesting to look at. With much more bandwidth to saturate, the drive makers have become more creative in finding ways to cram as many NAND devices onto a half-height, half-length PCIe card as possible. While sticking to a single-slot profile, Micron uses two smaller daughterboards attached via high-density interface connectors to the main P320h card to double the amount of NAND on the drive.


Each daughtercard has sixteen 34nm 128Gb NAND packages for a total of 256GB of NAND. That's 512GB of NAND on the daughtercards, and then another 512GB on the main P320h card itself, for a total of 1TB of NAND for a 700GB drive. The 350GB drive keeps the daughtercards but moves to 64Gb NAND packages instead. Remember that these are 34nm SLC NAND die, so you're looking at only 2GB per die vs. the 8GB per die we get from 25nm MLC NAND (or 4GB per die from 25nm SLC NAND).
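The package math checks out as a quick sanity pass over the totals above:

```python
# NAND population of the 700GB model, per the figures above
gb_per_package = 128 // 8                 # 128Gb package = 16GB
per_daughtercard = 16 * gb_per_package    # 16 packages -> 256GB per card
total = 2 * per_daughtercard + 512        # two daughtercards + main board
print(f"{total}GB of NAND")               # -> 1024GB (1TB) behind 700GB usable
```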

Of course with a huge increase in the number of NAND devices, there's a correspondingly large increase in the number of DRAM devices to keep track of all of the LBAs and flash mapping tables. The P320h features nine 256MB DDR3-1333 devices (also made by Micron) for a total of 2.25GB of on-board DRAM. 
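A common rule of thumb for NAND mapping tables is on the order of 1GB of DRAM per 1TB of flash - one small pointer per mapped page. A quick estimate (assuming 4KB mapping granularity and 4-byte entries, which is a guess, not Micron's disclosed table format) shows why 2.25GB is a comfortable fit; the ninth DRAM device plausibly covers ECC on the other eight:

```python
# Rough flash-translation-layer table sizing for 1TB of NAND
nand_bytes = 1024 * 2**30      # 1TB of NAND on board
page_size = 4096               # assumed mapping granularity
entry_size = 4                 # assumed 4-byte entry per mapped page

table_bytes = (nand_bytes // page_size) * entry_size
print(f"mapping table ~= {table_bytes / 2**30:.2f}GB")   # -> 1.00GB
# Nine 256MB devices = 2.25GB total, well above this ~1GB estimate
```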

There's a relatively small heatsink on the custom PCIe controller itself. Micron claims it only needs 1.5m/s of airflow in order to maintain its operating temperature. Prying the heatsink off reveals IDT's NVMe (Non-Volatile Memory Express) controller. This is a native PCIe controller that supports up to 32 NAND channels, as well as a full implementation of the NVMe spec. Although the controller itself is PCIe Gen 3, Micron only certifies it for PCIe Gen 2 operation. With 8 PCIe lanes there's more than enough host bandwidth on PCIe 2.x, so this isn't an issue. Update: Micron tells us that the P320h doesn't support NVMe; we are digging to understand how Micron's controller differs from the NVMe IDT controller with a similar part number.
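The bandwidth claim is easy to verify: PCIe 2.0 runs at 5GT/s per lane with 8b/10b encoding, leaving 500MB/s of usable bandwidth per lane per direction.

```python
# Host bandwidth available to the P320h over its PCIe 2.0 x8 link
gt_per_s = 5.0                 # PCIe 2.0 signaling rate per lane
efficiency = 8 / 10            # 8b/10b line coding overhead
lanes = 8

gb_per_s = lanes * gt_per_s * efficiency / 8   # bits -> bytes
print(f"x8 PCIe 2.0 ~= {gb_per_s:.1f}GB/s")    # -> 4.0GB/s per direction
# Comfortably above the drive's 3.2GB/s peak sequential read rating
```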

The NVMe spec promises a lower overhead, more efficient command set for native PCIe SSDs. This is a transition that makes a lot of sense, as the current approach of just putting SATA/SAS controllers behind a PCIe switch is unnecessarily complex. With NVMe the NAND talks to a native PCIe controller, which can in turn deliver tons of bandwidth to the host vs. being bottlenecked by 6Gbps SATA or SAS. The NVMe host spec also scales the number of concurrent IOs supported all the way up to 64,000 (a max of 256 is currently supported under Windows vs. 32 for SATA-based SSDs), well beyond what most current workloads would be able to generate.
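To make those queue depths concrete, here's a minimal sketch of keeping many reads in flight at once. A real benchmark (Iometer, fio) uses native asynchronous I/O and bypasses the page cache; a thread pool is a simple stand-in here, and the path and constants are placeholders:

```python
# Minimal high-queue-depth random read generator (illustrative only)
import os, random
from concurrent.futures import ThreadPoolExecutor

PATH = "/path/to/test/device-or-file"   # placeholder target
IO_SIZE = 4096                          # 4KB random reads
QUEUE_DEPTH = 256                       # IOs in flight; SATA NCQ tops out at 32
NUM_IOS = 100_000

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)     # size of the file or block device

def random_read(_):
    # Issue one read at a random IO_SIZE-aligned offset
    offset = random.randrange(size // IO_SIZE) * IO_SIZE
    return os.pread(fd, IO_SIZE, offset)

with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
    for _ in pool.map(random_read, range(NUM_IOS)):
        pass
os.close(fd)
```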

As the NVMe spec defines the driver interface between the SSD and the host OS, it requires a new set of drivers to function. The goal is that, down the road, these drivers will be built into the OS, but in the short term you'd hopefully only need one NVMe driver that would work on all NVMe SSDs, rather than the current mess of having an individual driver for every PCIe SSD. Companies like Intel have gotten around the driver issue by simply using SATA/SAS to PCIe controllers whose drivers are already integrated into modern OSes (e.g. LSI's Falcon 2008 controller on the Intel SSD 910).

In the long run NVMe SSDs should enjoy the same plug and play benefits that SATA drives enjoy today. You never have to worry about installing a SATA driver to make your new SSD work (you shouldn't at least), and the same will hopefully be true for NVMe SSDs. The reality today is much more complicated than that.

Micron provided us with drivers for the P320h under the guidance that the driver was only tested/validated for certain server configurations. Even having other PCIe devices installed in the system could cause incompatibilities. In practice I found Micron's warnings accurate. While the P320h had no issues working on our X79 testbed, our H67 testbed wouldn't boot into Windows with the P320h installed. What was really strange about the P320h in the H67 system was that the simple presence of the card caused graphical corruption at POST. I noticed other incompatibilities with certain PCIe video cards installed in our X79 system. I eventually ended up with a stable configuration that let me run through our suite of tests, but even then I noticed the P320h would sometimes drop out of the system entirely - requiring a power cycle to come back again.

Micron made no attempt to hide the fact that the P320h is only validated on specific servers, but it's something worth considering if you're looking at this drive. Apparently the state of Linux drivers is much better than on Windows; unfortunately, most of our tests run under Windows, which forced us to deal with these compatibility issues head on.


Comments

  • zlyles - Wednesday, October 17, 2012 - link

    Because it wasn't meant to be... it makes me laugh to see how many people think Micron, Intel, and OCZ are developing these PCI-e SSDs for the consumer market. This is an enterprise class drive, and as such is not meant to be a bootable drive unless you are booting VMs on a hypervisor.
  • Cloakstar - Monday, October 15, 2012 - link

    I get the impression these tests did not stress this drive.

    Compare the relationship between disk busy time and average QD for the test to the other drives. The higher the QD, the lower the relative disk busy time compared to the competition.
    -In IOMeter tests with QD32, no disk busy time is recorded, but the drive is in a solid lead for random noncompressible data throughput.
    -The poorest numbers for this drive happen at the lowest QD.
    -The highest listed QD for any test here is 32.
    -"Micron claims much higher sequential read/write numbers under Linux at 256 concurrent IOs."
  • apmon2 - Tuesday, October 16, 2012 - link

    "I get the impression these tests did not stress this drive."

    Yes, that seems to be the case. thessdreview.com has what looks like a really nice review [1] including tests at QDs up to 512. There one can see that it achieves only about 1/3 of its peak performance at a QD of 32. Not until a QD of 128 or even 256 does it reach its full potential. Then, however, it seems to perform truly amazingly, and is able to completely saturate the 8x PCIe 2.1 bus with 4K random reads! It supposedly can sustain 3.3GB/s of 4KB random reads.

    Even at small 512B read requests it can, according to them, still achieve on the order of 600MB/s, delivering well in excess of 1.5 million IOPS. Even then the limiting factor was the CPU, not the device, despite using a Core i7 that was overclocked to 4.9GHz.

    So if those numbers are true, Anand didn't even come close to stressing this SSD to its limit (or intended purpose).

    [1] http://thessdreview.com/our-reviews/micron-p320h-h...
  • bthanos - Monday, October 15, 2012 - link

    Hi Anand,

    Nice article, however the Micron P320h is not an NVMe interface drive. It's PCIe Gen 2, AHCI.
  • Jaybus - Tuesday, October 16, 2012 - link

    As stated in the article, the drive is using IDT's 32-channel PCIe gen 3 x8 controller, but operating it in gen 2 mode. Since x8 gen 2 is sufficiently faster than the drive is capable of, it is a good choice, as it allows compatibility for use in systems without gen 3 slots. IDT claims full compliance with the NVM Express standard. See http://www.idt.com/products/interface-connectivity... for controller specs.

    Looks like an NVM Express drive to me. Why would you say it is not?
  • bthanos - Tuesday, October 16, 2012 - link

    Because the IDT controllers released at Flash Memory Summit are new SoCs; the Micron P320h drive is using an earlier, jointly developed SoC which is not NVMe. See the comment from Anand.
  • colonelpepper - Monday, October 15, 2012 - link

    On the issue of durability or whatever you want to call it...

    "50 petabytes of writes" is totally meaningless marketing intellect abuse.

    It only takes on a meaning if you were to fill up the entire drive at once, then erase it, then write to the entire volume again, and so on until you reached 50 petabytes of writes.

    Show me a hard drive that is ever used like that and I'll donate the pot of gold I've got stashed out back to your favorite charity.
  • DataC - Tuesday, October 16, 2012 - link

    Colonel Pepper, at Micron we spec TBW or total bytes written, but it’s closely related to another standard you’ll see in the enterprise industry, “X drive fills per day for 5 years.” The two specs are simply different ways to express the same number. The spec tracks the amount of bytes you can write to the SSD before the NAND exceeds its wear life and reverts to a write-protect (read only) mode. It includes any and every write ever made to the drive, not just the full drive fills and erases you’ve described.
  • rrohbeck - Monday, October 15, 2012 - link

    What I don't understand though is why SSD controllers don't have PHYs that can talk PCIe as well as SATA/SAS. Then manufacturers could leverage the high volume/high performance SATA/SAS designs for PCIe too. The firmware would probably be even simpler.
  • DanNeely - Monday, October 15, 2012 - link

    Because it would be adding additional complexity, die size, and cost, to mass produced consumer parts with very thin margins. All of which are good ways to go out of business.
