Final Words

 

Update: Micron tells us that the P320h doesn't support NVMe; we are digging to understand how Micron's controller differs from the NVMe IDT controller with a similar part number.

For Micron's first PCIe SSD, the P320h performs very well. Random read and write performance is unmatched by any non-SandForce architecture we've tested here. Average service times in our application-based workload traces are also class-leading, presumably a result of the IDT controller and the lightweight native PCIe interface. Sequential performance is also very good, and potentially even better under heavier workloads. The fact that there's no claimed performance difference between the 350GB and 700GB drives is good news for users who don't have giant workload footprints. Overall it's an impressive step forward. The native PCIe architecture makes a lot of sense and will hopefully soon supplant the current crop of SATA-RAID-on-a-PCIe-card solutions on the market today. Where things will get really interesting is when we start coupling multiple PCIe SSDs in a single system.

The downsides to the P320h are obvious. By using 34nm SLC NAND, Micron ensures wonderful endurance but prices the solution out of the reach of many customers who simply don't need that much endurance. Until Micron brings eMLC/MLC-HET NAND to the P320h, I suspect the more conventional PCIe SSDs (e.g. Intel's SSD 910) will remain better values. For the subset of users who do require SLC endurance, however, the P320h should definitely fit the bill.

The second downside is just as fundamental: the driver stack is still in its infancy. Although the ultimate goal is SATA-like compatibility with all systems, it will take some time to get there. Until that day comes, if you're considering the P320h you'll want to make sure that Micron has validated the drive on your platform.

PCIe is the future. I don't expect a smooth ride to get us there, but it's where solid state storage is headed, particularly in the enterprise market. The P320h is a good starting point, and I'm eager to see where Micron takes it.

Comments

  • JellyRoll - Monday, October 15, 2012 - link

    Of course you have absolutely no experience with virtualization, which would mean that for your archaic workloads you wouldn't need something of this nature.
    Users that purchase this will not be running one database at such low queue depths; that would be an insane waste of money.
    This is designed for high-load OLTP and virtualized environments, not to run the database of one website.
    You may be in IT at some small company, but you haven't seen anything at datacenter scale, apparently.
  • DataC - Tuesday, October 16, 2012 - link

    JellyRoll is correct. I work for Micron, and we developed the P320h’s controller and firmware through collaboration with enterprise OEMs—which is why we optimized for higher queue depths. When the P320h is run in these environments (which are common in datacenters), you’ll see significantly higher performance than what’s shown in the charts above.
  • jospoortvliet - Tuesday, October 16, 2012 - link

    Yup. And it should be tested on a proper enterprise platform - this test is like running a NASCAR vehicle with the handbrake on.

    Time for an upgrade to a real OS, Anand.
  • Denithor - Monday, October 15, 2012 - link

    Would have liked to see the fastest consumer-grade drive thrown in just to see exactly how much faster enterprise drives go. Also would like to see how this drive would perform in the standard Light and Heavy Bench tests.
  • FunBunny2 - Monday, October 15, 2012 - link

    Actually, against a Fusion-io part, the closest example.
  • jwilliams4200 - Monday, October 15, 2012 - link

    Right, enterprise drives should get all the standard consumer SSD tests run on them in addition to the enterprise tests.
  • mckirkus - Wednesday, October 17, 2012 - link

    And I'd argue a RAMDisk should be included just to get a sense of relative performance.
  • Kevin G - Monday, October 15, 2012 - link

    I'm kinda surprised that there wasn't more discussion of the effects of the native PCI-e controller. Lower latency results do crop up in various benchmarks here. I wonder if the impact is merely 'benchmark only' and not anything that'd be noticeable in more real-world tests.

    By going with 34 nm SLC, they have limited capacity, but this article seems to indicate that the controller is capable of supporting MLC in the 20 to 30 nm range. That would allow it to hit the 4 TB maximum capacity of the controller. I'm also curious how such a change would perform. The current P320h does need a PCI-e 2.0 x8 connection, as some of the benchmarks are (barely) exceeding what a PCI-e 2.0 x4 link can provide (see the link-bandwidth sketch after the comments). With faster NAND, a move to PCI-e 3.0 x8 or PCI-e 2.0 x16 may be warranted.

    I'm also curious if multiple P320h's can be used in a system behind a RAID. Overkill the overkill?

    Now for a few general comments about NVMe. I'd love to see NAND chips on DIMMs at the enterprise level. If the controller detects NAND failure or chips reaching their maximum endurance, they could potentially be swapped out. This is akin to current ECC DIMMs. Along those same lines, it would be nice to see a SAS or SATA port on the board so that it could fail over to a hard drive in the event of multiple impending NAND failures. The main reason I can see to avoid DIMMs would simply be physical space.

    This is also a good preview of what to expect with SATA Express drives next year. They won't reach such bandwidth figures, as they'll be limited to two PCI-e lanes, but the latency improvements should carry over with a good controller.
  • PCTC2 - Monday, October 15, 2012 - link

    You could probably just do an OS-level software stripe (like in Linux); a sketch of such a stripe appears after the comments. I think that would be more beneficial in terms of usable capacity than for the increase in performance, though the performance gain could be tangible depending on your workload.

    As for the link, I think performance is more constrained by the controller than by the NAND. I don't think we need PCIe 3.0 or PCIe 2.0 x16 links for this iteration of the controller; it wouldn't saturate them. As you said, some of the tests don't even saturate a PCIe x4 link, and that's before accounting for protocol overhead (there is overhead).

    Also, Anand did point out a 25nm eMLC version is coming out in the future.

    As for putting chips on DIMMs, for an HH/HL PCIe card that is a waste of space, as you said yourself. Between the controller, DRAM, and then the NAND, the sockets would just take up space. The daughterboard direction allows a much more compact, proprietary size depending on the board itself. If you wanted an FH/HL card, I'm sure DIMMs would be more possible.
  • FunBunny2 - Monday, October 15, 2012 - link

    Check out the Sun/Oracle flash appliance. Other niche enterprise flash storage products exist as well.
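
To put the link-width discussion in the comments above in perspective, here is a quick back-of-the-envelope calculation of theoretical PCIe link bandwidth before protocol overhead. These are raw line-rate figures, not measured numbers: a PCIe 2.0 x4 link tops out around 2GB/s, which is why the P320h's x8 connection matters once sequential transfers climb past that.

    # Theoretical PCIe link bandwidth, before packet/flow-control overhead.

    def lane_mb_per_s(transfer_rate_gt_s: float, encoding_efficiency: float) -> float:
        """Usable bandwidth of a single PCIe lane in MB/s."""
        # GT/s * encoding efficiency gives usable Gb/s; /8 for GB/s, *1000 for MB/s
        return transfer_rate_gt_s * encoding_efficiency / 8 * 1000

    links = {
        "PCIe 2.0 x4": 4 * lane_mb_per_s(5.0, 8 / 10),     # 5 GT/s, 8b/10b encoding
        "PCIe 2.0 x8": 8 * lane_mb_per_s(5.0, 8 / 10),
        "PCIe 3.0 x8": 8 * lane_mb_per_s(8.0, 128 / 130),  # 8 GT/s, 128b/130b encoding
    }

    for name, mbps in links.items():
        print(f"{name}: ~{mbps:.0f} MB/s")
    # PCIe 2.0 x4: ~2000 MB/s
    # PCIe 2.0 x8: ~4000 MB/s
    # PCIe 3.0 x8: ~7877 MB/s

Even the roughly 4GB/s ceiling of PCIe 2.0 x8 leaves headroom over the sequential results in this review, which is consistent with the controller, rather than the link, being the limiter.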
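And on the suggestion of an OS-level stripe across multiple cards, here is a minimal sketch of what that could look like using Linux's md RAID, wrapped in Python purely for illustration. The device nodes are hypothetical placeholders rather than the names any particular driver exposes; treat this as a sketch under those assumptions, not a validated recipe.

    # Minimal sketch: stripe two PCIe SSDs into one block device with Linux md RAID 0.
    # The device nodes below are hypothetical placeholders; substitute whatever block
    # devices your system actually exposes. Run as root, and only against empty drives.
    import subprocess

    DEVICES = ["/dev/pcie_ssd0", "/dev/pcie_ssd1"]  # hypothetical device nodes
    ARRAY = "/dev/md0"

    def create_stripe(array: str, devices: list, chunk_kib: int = 512) -> None:
        """Create a striped (RAID 0) md array across the given block devices."""
        subprocess.run(
            [
                "mdadm", "--create", array,
                "--level=0",
                f"--raid-devices={len(devices)}",
                f"--chunk={chunk_kib}",
                *devices,
            ],
            check=True,
        )

    if __name__ == "__main__":
        create_stripe(ARRAY, DEVICES)
        # Then format and mount as usual, e.g. mkfs.xfs /dev/md0 && mount /dev/md0 /mnt/fast

RAID 0 offers no redundancy, so a stripe like this only makes sense where the data is reproducible or replicated elsewhere; the payoff is a single larger volume and, workload permitting, higher sequential throughput.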
