Random Read Performance

Although sequential performance is important, a true staple of any multi-user server is an IO load that appears highly random. For our small block random read test we first fill all drives sequentially, then perform one full drive write using a random 4K pass at a queue depth of 128. We then perform a 3 minute random read run at each queue depth, plotting bandwidth and latency along the way.
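For reference, this preconditioning and sweep could be expressed as an fio job file roughly like the following. The device path, IO engine, and job names are assumptions for illustration, not the article's actual test scripts:

```ini
# global options shared by all jobs (device path is an assumption)
[global]
ioengine=libaio
direct=1
filename=/dev/nvme0n1

# step 1: sequential fill of the whole drive
[seq-fill]
rw=write
bs=128k
stonewall

# step 2: one full drive write with random 4K IO at QD128
[precondition-rand]
rw=randwrite
bs=4k
iodepth=128
stonewall

# step 3: 3 minute random read run (repeat with iodepth=1..128)
[randread-qd32]
rw=randread
bs=4k
iodepth=32
runtime=180
time_based
stonewall
```

The `stonewall` option serializes the jobs so each stage completes before the next begins.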

Small block random read operations have inherent limits when it comes to parallelism. For all of the drives here, QD1 performance ends up around 20 - 40MB/s. The P3700 manages 36.5MB/s (~8900 IOPS) compared to 27.2MB/s (~6600 IOPS) for the SATA S3700. Even at a queue depth of 8 there's only a small advantage to the P3700 from a bandwidth perspective (~77000 IOPS vs. ~58400 IOPS). Performance does scale incredibly well with increasing queue depths though. By QD16 we see the P3700 pull away, and by QD32 the P3700 delivers roughly 3.5x the performance of the S3700. There's a 70% advantage over Intel's SSD 910 at QD32, and that advantage grows to 135% at QD128.
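The IOPS figures quoted alongside the bandwidth numbers follow directly from the 4KB transfer size. A quick sanity check in Python (assuming MB/s means 10^6 bytes/s and 4KB means 4096 bytes):

```python
def iops_from_bandwidth(mb_per_s: float, block_bytes: int = 4096) -> float:
    """Convert bandwidth in MB/s (10^6 bytes/s) to IOPS for a given block size."""
    return mb_per_s * 1_000_000 / block_bytes

# P3700 at QD1: 36.5 MB/s -> ~8900 IOPS
print(round(iops_from_bandwidth(36.5)))  # 8911
# S3700 at QD1: 27.2 MB/s -> ~6600 IOPS
print(round(iops_from_bandwidth(27.2)))  # 6641
```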

Micron's P420m is incredibly competitive, substantially outperforming the P3700 at the highest queue depth.

Random read latency is incredibly important for applications where response time matters. Even more important for these applications is keeping latency below a certain threshold; what we're looking for here is a flat curve across all queue depths:

And that's almost exactly what the P3700 delivers. While the average latency for Intel's SSD DC S3700 (SATA) skyrockets after QD32, the P3700 remains mostly flat throughout the sweep. It's only at QD128 that we see a bit of an uptick. Even the 910 shows bigger jumps at higher queue depths.

If we remove the SATA drive and look exclusively at PCIe solutions, we get a better idea of the P3700's low latencies:

In this next chart we'll look at some specific numbers. Here we've got average latency (expressed in µs) for 4KB reads at a queue depth of 32. This is the same data as in the charts above, just formatted differently:

Average Latency - 4KB Random Read QD32

The P3700's latency advantage over its SATA counterpart is huge. Compared to other PCIe solutions, the P3700 is still leading but definitely not by as large a margin. Micron's P420m comes fairly close.

Next up is average latency, but now at our highest tested queue depth: 128.

Average Latency - 4KB Random Read QD128

Micron's P420m definitely takes over here. Micron seems to have optimized the P420m for operation at higher queue depths while Intel focused the P3700 a bit lower. The SATA-based S3700 is just laughable here; its average completion latency is over 1.6ms.

Looking at maximum latency is interesting from a workload perspective, as well as from a drive architecture perspective. Latency sensitive workloads tend to have a max latency they can't exceed, but at the same time a high max latency paired with a low average latency implies that the drive sees these max latencies infrequently. From an architectural perspective, consistent max latencies across the entire QD sweep give us insight into how the drive works at a lower level. It's during these max latency events that the drive's controller can schedule cleanup and defragmentation routines. I recorded max latency at each queue depth and present an average of all max latencies across the QD sweep (from QD1 - QD128). In general, max latencies remained consistent across the sweep.
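The reported figure is simply the mean of the per-queue-depth maxima. A minimal sketch of that reduction (the latency values here are illustrative, not measured data):

```python
# max latency (ms) recorded at each queue depth in the sweep (QD1..QD128)
# NOTE: these values are made up for illustration, not the article's measurements
max_latency_ms = {1: 2.1, 2: 2.4, 4: 3.0, 8: 2.8, 16: 4.9, 32: 6.2, 64: 8.7, 128: 9.5}

avg_max = sum(max_latency_ms.values()) / len(max_latency_ms)
print(f"average of max latencies across sweep: {avg_max:.2f} ms")
```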

Max Latency - 4KB Random Read

The 910's max latencies never really get out of hand. Part of the advantage is that each of the 910's four controllers only ever sees a queue depth of 32, so no individual controller is ever stressed all that much. The S3700 is next up with remarkably consistent performance here. Its max latencies ranged from 2ms to 10ms, with no recognizable correlation to queue depth. Note the huge gap between max and average latency for the S3700 - it's an order of magnitude. These high latency events are fairly rare.

The P3700 sees two types of long latency events: one that takes around 3ms and another that takes around 15ms. The result is a higher max latency than the other two Intel drives, but with an average latency lower than both, these events are still fairly rare.

Micron's P420m runs the longest background task routine of anything here, averaging nearly 53ms. Whatever Micron is doing here, it seems consistent across all queue depths.

Random Write Performance

Now we get to the truly difficult workload: a steady state 4KB random write test. We first fill the drive to capacity, then perform a 4KB (QD128) random write workload until we fill the drive once more. We then run a 3 minute 4KB random write test across all queue depths, recording bandwidth and latency values. This gives us a good indication of steady state performance, which should be where the drives end up over days/weeks/months of continued use in a server.
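As with the read test, the steady-state write methodology could be sketched as an fio job file. The device path is an assumption, and the capacity-sized preconditioning pass relies on fio's default of writing the full device size per job:

```ini
[global]
ioengine=libaio
direct=1
filename=/dev/nvme0n1

# fill the drive to capacity
[fill]
rw=write
bs=128k
stonewall

# one more full capacity of 4K random writes at QD128 to reach steady state
[steady-state-precondition]
rw=randwrite
bs=4k
iodepth=128
stonewall

# 3 minute measured run (repeat for each queue depth)
[randwrite-qd8]
rw=randwrite
bs=4k
iodepth=8
runtime=180
time_based
stonewall
```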

Despite the more strenuous workload, the P3700 absolutely shines here. Peak performance is attained at a queue depth of 8 and sustained throughout the rest of the range.

Average latency is also class leading - it's particularly impressive when you compare the P3700 to its SATA counterpart.

Average Latency - 4KB Random Write QD32

Average Latency - 4KB Random Write QD128

The absolute average latency numbers are particularly impressive. The P3700 at a queue depth of 128 can sustain 4KB random writes with IOs completing in an average of 0.86ms.
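That figure lines up with the drive's throughput via Little's law (outstanding IOs = IOPS × latency, so IOPS ≈ QD / latency), assuming the queue stays full. A quick check:

```python
def iops_from_littles_law(queue_depth: int, avg_latency_s: float) -> float:
    """Little's law: outstanding IOs = IOPS * latency, so IOPS = QD / latency."""
    return queue_depth / avg_latency_s

# QD128 with IOs completing in 0.86ms on average
iops = iops_from_littles_law(128, 0.86e-3)
print(f"~{iops:,.0f} IOPS, ~{iops * 4096 / 1e6:.0f} MB/s at 4KB")
```

This implies roughly 149K IOPS, consistent with a high-end enterprise drive sustaining random writes at this queue depth.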

Max Latency - 4KB Random Write



Comments

  • extide - Tuesday, June 3, 2014 - link

    Yeah, this except more correctly it is SATA vs PCIe as the interface and AHCI vs NVMe as the protocol.

    M.2 --> Supports AHCI over SATA, AHCI over PCIe, and NVMe over PCIe
    SFF-8639 --> Supports AHCI over PCIe and NVMe over PCIe
    PCIe card --> AHCI over PCIe, and NVMe over PCIe

    Now the latter two (and even the first one, if you really wanted to...) could have a PCIe-based SATA controller on it, which would go PCIe --> SATA/SATA RAID controller --> SATA SSD controller(s). (For example, this is how the OCZ RevoDrive works.)
  • Galatian - Wednesday, June 4, 2014 - link

    That's not what I meant with my comment. I'm upset that besides ASRock on the Extreme 6 and 9 and ASUS on their Impact, no other manufacturer included a higher bandwidth M.2 connector. I guess all upcoming PCIe M.2 drives will already be bottlenecked by the lackluster M.2 speeds most mainboard manufacturers are building into their products.
  • hpvd - Tuesday, June 3, 2014 - link

    hmmm are you sure? no new mainboard needed? No new Bios? Should it work in all boards which could boot existing PCIe SSDs?
  • hpvd - Tuesday, June 3, 2014 - link

    I would really appreciate a short test of this. How should this work when AHCI is the standard on today's mainboards/BIOS/UEFI? There is already some work done before the Windows/Linux driver takes over the helm
    (which is of course already available: http://www.nvmexpress.org/blog/open-fabrics-allian...
  • TelstarTOS - Tuesday, June 3, 2014 - link

    404
  • j00d - Friday, June 6, 2014 - link

    just take off the trailing ) in the url
  • Ryan Smith - Tuesday, June 3, 2014 - link

    Since a couple of you asked, I threw it in our X79 testbed.

    Windows 8.1U1 sees the drive without issue; however, it is not bootable, as our motherboard cannot see the drive as a bootable device. I preface that with the fact that our X79 testbed is a consumer platform (ASRock X79 Professional) and X79 is a rather old chipset. So I can't speak for how this would behave on a brand spanking new Z97 board, or a server board for that matter.
  • hpvd - Wednesday, June 4, 2014 - link

    many thanks for giving this a try! Should be further investigated...
  • hpvd - Wednesday, June 4, 2014 - link

    PCIe booting may be a general problem with standard BIOS settings on these boards. I found a tiny BIOS settings guide on how to fix this (on a similar Asrock X97 board). Would be awesome if you could try this:
    You would be the very first on the web booting from an NVMe device :-)
  • hpvd - Wednesday, June 4, 2014 - link

    the other way around the question is:
    - your board
    - with this bios version
    - with this bios settings
    - in this PCIe slot
    see other bootable PCIe SSD devices?

    if so, this new Intel PCIe NVMe SSD behaves somehow differently
    If others couldn't be seen either - there is still hope for "normal" boot support :-)
    You just have the right board, bios settings...
