Random Read/Write Speed

The four corners of SSD performance are random read, random write, sequential read, and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger, which is why we use these four Iometer tests in all of our reviews. For our enterprise suite we make a few changes to our usual tests.

Our first test writes 4KB in a completely random pattern over all LBAs on the drive (compared to an 8GB address space in our desktop reviews). We perform 32 concurrent IOs (compared to 3) and run the test until the drive being tested reaches its steady state. The results reported are the average MB/s over the entire run. We use both standard pseudo-randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
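
The results below come from Iometer, but for readers who want a feel for the shape of this workload, here is a minimal Python sketch of it. It is only an approximation: it issues one IO at a time rather than keeping 32 in flight, the device path is a placeholder, and running it against a real drive will destroy data.

```python
# Rough, single-threaded illustration of the workload described above; the review's
# numbers come from Iometer, which keeps 32 IOs in flight and runs to steady state.
# DEVICE is a placeholder. Running this destroys data and requires root.
import mmap
import os
import random
import time

DEVICE = "/dev/nvme0n1"   # assumed device path for the drive under test
BLOCK = 4096              # 4KB transfers
DURATION = 60             # seconds; the real run continues until steady state

fd = os.open(DEVICE, os.O_WRONLY | os.O_DIRECT)   # bypass the page cache
span = os.lseek(fd, 0, os.SEEK_END)               # span all LBAs on the drive

buf = mmap.mmap(-1, BLOCK)                        # page-aligned buffer, needed for O_DIRECT
buf[:] = os.urandom(BLOCK)                        # incompressible ("fully random") data

written = 0
start = time.time()
while time.time() - start < DURATION:
    offset = random.randrange(span // BLOCK) * BLOCK   # random, 4KB-aligned LBA
    os.pwrite(fd, buf, offset)
    written += BLOCK

os.close(fd)
print(f"Average: {written / (time.time() - start) / 1e6:.1f} MB/s")
```

A true steady-state run would also need to continue far longer than 60 seconds, typically until throughput flattens out.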

Enterprise Iometer - 4KB Random Write

Excluding the two SandForce data points using highly compressible data, the P320h is the new king here. At least in the 700GB configuration, the P320h is able to offer better steady-state 4KB random write performance than Intel's SSD 910. The drive also delivers over 6x the performance of Micron's 2.5" P400e.

Enterprise Iometer - 4KB Random Read

Random read performance is an even more impressive showing for the P320h at 758MB/s. This is where having 32 concurrently accessible NAND channels truly pays off: given a heavy enough workload, there's more than enough data to parallelize and stripe across all of the channels.
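
As a quick sanity check of that argument, the measured throughput can be converted into IOPS and spread across the controller's 32 channels. The figures below assume decimal megabytes and 4KiB transfers, so they are only approximate:

```python
# Back-of-the-envelope check of the parallelism argument above (illustrative only;
# assumes decimal megabytes and 4KiB transfers).
THROUGHPUT_MB_S = 758      # measured 4KB random read result
BLOCK_BYTES = 4096
CHANNELS = 32              # NAND channels behind the P320h controller

iops = THROUGHPUT_MB_S * 1_000_000 / BLOCK_BYTES
print(f"~{iops:,.0f} IOPS total, ~{iops / CHANNELS:,.0f} IOPS per channel")
# prints: ~185,059 IOPS total, ~5,783 IOPS per channel
```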

Sequential Read/Write Speed

As with our other Enterprise Iometer tests, queue depths are much higher in our sequential benchmarks. To measure sequential performance I ran a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 32. The results reported are the average MB/s over the entire test length.
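
Again, the charts below come from Iometer; purely for illustration, a rough Python approximation of the 128KB, queue depth 32 sequential read pass is sketched here. The thread-per-IO approach, the device path, and the run structure are assumptions for the sketch, not the actual methodology.

```python
# Rough approximation of a 128KB, QD32 sequential read pass; the review's numbers
# come from Iometer. Here 32 threads stand in for 32 outstanding IOs, each reading
# its own slice of the drive sequentially. DEVICE is a placeholder; requires root.
import mmap
import os
import time
from concurrent.futures import ThreadPoolExecutor

DEVICE = "/dev/nvme0n1"    # assumed device path for the drive under test
BLOCK = 128 * 1024         # 128KB transfers
QD = 32                    # queue depth, approximated with 32 worker threads
DURATION = 60              # one-minute run, as in the test described above

fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
span = os.lseek(fd, 0, os.SEEK_END)
slice_bytes = span // QD // BLOCK * BLOCK          # each worker's sequential region

def worker(i):
    buf = mmap.mmap(-1, BLOCK)                     # page-aligned buffer for O_DIRECT
    offset = i * slice_bytes
    end = offset + slice_bytes
    done = 0
    deadline = time.time() + DURATION
    while time.time() < deadline and offset + BLOCK <= end:
        done += os.preadv(fd, [buf], offset)       # positional read, no shared file offset
        offset += BLOCK
    return done

start = time.time()
with ThreadPoolExecutor(max_workers=QD) as pool:
    total = sum(pool.map(worker, range(QD)))
os.close(fd)
print(f"Average: {total / (time.time() - start) / 1e6:.1f} MB/s")
```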

Enterprise Iometer - 128KB Sequential Write

Peak sequential write performance is slightly behind Intel's SSD 910 operating in its 38W high-performance mode, but still very competitive. At 1357MB/s, workloads that need to move large blocks of data will enjoy great performance on the P320h. Micron claims much higher sequential read/write numbers under Linux at 256 concurrent IOs.

Enterprise Iometer - 128KB Sequential Read

Sequential read performance is also very strong at 1817MB/s, although both the 910 and OCZ's Z-Drive R4 manage better performance here.

57 Comments

  • JellyRoll - Monday, October 15, 2012 - link

    Of course you have absolutely no experience with virtualization, which would mean that for your archaic workloads you wouldn't need something of this nature.
    Users that purchase this will not be running one database at such low queue depths; that would be an insane waste of money.
    This is designed for high-load OLTP and virtualized environments, not to run the database of one website.
    You may be in IT at some small company, but you haven't seen anything on a datacenter scale, apparently.
  • DataC - Tuesday, October 16, 2012 - link

    JellyRoll is correct. I work for Micron, and we developed the P320h’s controller and firmware through collaboration with enterprise OEMs—which is why we optimized for higher queue depths. When the P320h is run in these environments (which are common in datacenters), you’ll see significantly higher performance than what’s shown in the charts above.
  • jospoortvliet - Tuesday, October 16, 2012 - link

    Yup. And it should be tested on a proper enterprise platform - this test is like running a NASCAR vehicle with the handbrake on.

    Time for an upgrade to a real OS, Anand.
  • Denithor - Monday, October 15, 2012 - link

    Would have liked to see the fastest consumer-grade drive thrown in just to see exactly how much faster enterprise drives go. Also would like to see how this drive would perform in the standard Light and Heavy Bench tests.
  • FunBunny2 - Monday, October 15, 2012 - link

    Actually, against a Fusion-io part, the closest example.
  • jwilliams4200 - Monday, October 15, 2012 - link

    Right, enterprise drives should get all the standard consumer SSD tests run on them in addition to the enterprise tests.
  • mckirkus - Wednesday, October 17, 2012 - link

    And I'd argue a RAMDisk should be included just to get a sense of relative performance.
  • Kevin G - Monday, October 15, 2012 - link

    I'm kinda surprised that there wasn't as much discussion about the effects of the native PCI-e controller. Lower latency results do crop up in various benchmarks here. I wonder if the impact is merely 'benchmark only' and not anything that'd be noticeable in more real world tests.

    By going with 34 nm SLC, they have limited capacity, but this article seems to indicate that the controller is capable of supporting MLC in the 20 to 30 nm range. That would allow it to hit the 4 TB maximum capacity of the controller. I'm also curious how such a change would perform. The current P320h does need a PCI-e 2.0 8x connection, as some of the benchmarks are (barely) exceeding what a PCI-e 2.0 4x link can provide. With faster NAND, a move to PCI-e 3.0 8x or PCI-e 2.0 16x may be warranted.

    I'm also curious if multiple P320h's can be used in a system behind a RAID. Overkill the overkill?

    Now for a few general comments about NVMe. I'd love to see NAND chips on DIMMs at the enterprise level. If the controller detects NAND failure or chips reaching their maximum endurance, they could potentially be swapped out. This is akin to current ECC DIMMs. Along those same lines it would be nice to see a SAS or SATA port on the board so that it could fail over to a hard drive in the event of multiple impending NAND failures. The main reasoning I can see to avoid DIMMs would simply be physical space.

    This is also a good preview of what to expect with SATA-Express drives next year. They won't reach such bandwidth figures as they'll be limited to two PCI-e lanes, but the latency improvements should carry over with a good controller.
  • PCTC2 - Monday, October 15, 2012 - link

    You could probably just do an OS-level software stripe (like in Linux). I think that would be more beneficial just in terms of usable capacity rather than the increase in performance. However, the increase in performance could be tangible, depending on your workload.

    As for the link, I think performance is more constrained by the controller than by the NAND. I don't think we need PCIe 3.0 or PCIe 2.0 x16 links for this iteration of the controller; I don't think it would saturate the link. As you said, some of the tests don't even saturate a PCIe x4 link, if you don't include overhead (there is overhead).

    Also, Anand did point out a 25nm eMLC version is coming out in the future.

    As for putting chips on DIMMs, for an HH/HL PCIe card that is a waste of space, as you said yourself. Between the controller, DRAM, and then the NAND, the sockets would just take up space. The daughterboard direction allows a much more compact, proprietary size depending on the board itself. If you wanted an FH/HL card, I'm sure DIMMs would be more possible.
  • FunBunny2 - Monday, October 15, 2012 - link

    Check out the Sun/Oracle flash appliance. Other niche enterprise flash storage products also exist.
