Random Read Performance

Although sequential performance is important, a true staple of any multi-user server is an IO load that appears highly random. For our small block random read test we first fill all drives sequentially, then perform one full drive write using a random 4K pass at a queue depth of 128. We then perform a 3 minute random read run at each queue depth, plotting bandwidth and latency along the way.
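To make the structure of that sweep concrete, below is a minimal Python sketch of a 4KB random read QD sweep. It is illustrative only: the review doesn't name its benchmarking tool, the device path is a placeholder, and threads merely approximate the outstanding queue depth (a real harness would use asynchronous O_DIRECT I/O to bypass the page cache).

    import os
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    DEVICE = "/dev/nvme0n1"   # hypothetical target device
    BLOCK = 4096              # 4KB transfers
    RUN_SECONDS = 180         # 3 minute run at each queue depth

    def worker(fd, dev_size, deadline):
        # Issue 4KB reads at random aligned offsets until the deadline, timing each one.
        latencies = []
        blocks = dev_size // BLOCK
        while time.time() < deadline:
            offset = random.randrange(blocks) * BLOCK
            start = time.perf_counter()
            os.pread(fd, BLOCK, offset)
            latencies.append(time.perf_counter() - start)
        return latencies

    def sweep(queue_depths=(1, 2, 4, 8, 16, 32, 64, 128)):
        fd = os.open(DEVICE, os.O_RDONLY)
        dev_size = os.lseek(fd, 0, os.SEEK_END)
        results = {}
        for qd in queue_depths:
            deadline = time.time() + RUN_SECONDS
            # Approximate an outstanding queue depth of qd with qd concurrent workers.
            with ThreadPoolExecutor(max_workers=qd) as pool:
                futures = [pool.submit(worker, fd, dev_size, deadline) for _ in range(qd)]
                latencies = [l for f in futures for l in f.result()]
            iops = len(latencies) / RUN_SECONDS
            avg_ms = sum(latencies) / len(latencies) * 1000
            results[qd] = (iops * BLOCK / 1e6, avg_ms)  # (MB/s, average latency in ms)
        os.close(fd)
        return results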

Small block random reads offer limited parallelism at low queue depths, and all of the drives here end up around 20-40MB/s at QD1. The P3700 manages 36.5MB/s (~8900 IOPS) compared to 27.2MB/s (~6600 IOPS) for the SATA S3700. Even at a queue depth of 8 there's only a modest advantage to the P3700 from a bandwidth perspective (~77000 IOPS vs. ~58400 IOPS). Performance does scale incredibly well with increasing queue depths though. By QD16 we see the P3700 pull away, and by QD32 it delivers roughly 3.5x the performance of the S3700. There's a 70% advantage at QD32 compared to Intel's SSD 910, but that advantage grows to 135% at QD128.
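For reference, the bandwidth and IOPS figures above are two views of the same measurement; at a fixed 4KB transfer size the conversion is simply bandwidth divided by transfer size (assuming 1MB = 10^6 bytes):

    def iops_from_bandwidth(mb_per_s, block_bytes=4096):
        return mb_per_s * 1e6 / block_bytes

    iops_from_bandwidth(36.5)   # P3700 at QD1: ~8,900 IOPS
    iops_from_bandwidth(27.2)   # S3700 at QD1: ~6,600 IOPS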

Micron's P420m is incredibly competitive, substantially outperforming the P3700 at the highest queue depth.

Random read latency is incredibly important for applications where response time matters. Even more important for these applications is keeping latency below a certain threshold; what we're looking for here is a flat curve across all queue depths:

And that's almost exactly what the P3700 delivers. While the average latency for Intel's SSD DC S3700 (SATA) skyrockets after QD32, the P3700 remains mostly flat throughout the sweep. It's only at QD128 that we see a bit of an uptick. Even the 910 shows bigger jumps at higher queue depths.

If we remove the SATA drive and look exclusively at PCIe solutions, we get a better idea of the P3700's low latencies:

In this next chart we'll look at some specific numbers. Here we've got average latency (expressed in µs) for 4KB reads at a queue depth of 32. This is the same data as in the charts above, just formatted differently:

Average Latency - 4KB Random Read QD32

The P3700's latency advantage over its SATA counterpart is huge. Compared to other PCIe solutions, the P3700 is still leading, but not by as large a margin. Micron's P420m comes fairly close.

Next up is average latency, but now at our highest tested queue depth: 128.

Average Latency - 4KB Random Read QD128

Micron's P420m definitely takes over here. Micron seems to have optimized the P420m for operation at higher queue depths while Intel focused the P3700 a bit lower. The SATA-based S3700 is just laughable here; average completion latency is over 1.6ms.

Looking at maximum latency is interesting from a workload perspective as well as from a drive architecture perspective. Latency sensitive workloads tend to have a max latency they can't exceed, while a high max latency paired with a low average latency implies that the drive hits these worst-case latencies only infrequently. From an architectural perspective, consistent max latencies across the entire QD sweep give us insight into how the drive works at a lower level. It's during these max latency events that the drive's controller can schedule cleanup and defragmentation routines. I recorded max latency at each queue depth and present an average of all max latencies across the QD sweep (from QD1 to QD128). In general, max latencies remained consistent across the sweep.
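As a trivial sketch of how that single figure is derived, here's the calculation with made-up per-QD numbers (purely illustrative, not measured data):

    # Hypothetical per-queue-depth maximum latencies (in ms) across a QD1 - QD128 sweep.
    max_latency_ms = {1: 2.1, 2: 2.4, 4: 9.8, 8: 2.6, 16: 3.1, 32: 10.2, 64: 2.9, 128: 9.5}

    # The reported figure is the mean of the per-QD maximums.
    reported_max = sum(max_latency_ms.values()) / len(max_latency_ms)
    print(f"Average of per-QD max latencies: {reported_max:.1f} ms")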

Max Latency - 4KB Random Read

The 910's max latencies never really get out of hand. Part of the advantage is that each of the 910's four controllers only ever sees a queue depth of 32, so no individual controller is ever stressed all that much. The S3700 is next up with remarkably consistent performance here; its max latencies ranged from 2ms to 10ms, with no recognizable correlation to queue depth. Note the huge gap between max and average latency for the S3700 - it's an order of magnitude. These high latency events are fairly rare.

The P3700 sees two types of long latency events: one that takes around 3ms and another that takes around 15ms. The result is a higher max latency than the other two Intel drives, but given that its average latency is lower than both, these events must still be fairly rare.

Micron's P420m runs the longest background task routine of anything here, averaging nearly 53ms. Whatever Micron is doing here, it seems consistent across all queue depths.

Random Write Performance

Now we get to the truly difficult workload: a steady state 4KB random write test. We first fill the drive to capacity, then perform a 4KB (QD128) random write workload until we fill the drive once more. We then run a 3 minute 4KB random write test across all queue depths, recording bandwidth and latency values. This gives us a good indication of steady state performance, which should be where the drives end up over days/weeks/months of continued use in a server.
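As a rough illustration of the preconditioning described above, here's a minimal Python sketch (single-threaded and destructive, with a placeholder device path; the actual random write pass runs at QD128):

    import os
    import random

    DEVICE = "/dev/nvme0n1"   # hypothetical target; this overwrites the entire drive
    BLOCK = 4096

    def precondition(fd, dev_size):
        blocks = dev_size // BLOCK
        # Pass 1: sequential fill so every LBA holds valid data.
        for i in range(blocks):
            os.pwrite(fd, os.urandom(BLOCK), i * BLOCK)
        # Pass 2: one full drive capacity of 4KB random writes to push the
        # controller into its steady-state garbage collection behavior.
        for _ in range(blocks):
            os.pwrite(fd, os.urandom(BLOCK), random.randrange(blocks) * BLOCK)

    fd = os.open(DEVICE, os.O_WRONLY)
    dev_size = os.lseek(fd, 0, os.SEEK_END)
    precondition(fd, dev_size)
    os.close(fd)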

Despite the more strenuous workload, the P3700 absolutely shines here. We see peak performance attained at a queue depth of 8 and it's sustained throughout the rest of the range.

Average latency is also class-leading - it's particularly impressive when you compare the P3700 to its SATA counterpart.

Average Latency - 4KB Random Write QD32

Average Latency - 4KB Random Write QD128

The absolute average latency numbers are particularly impressive. The P3700 at a queue depth of 128 can sustain 4KB random writes with IOs completing in 0.86ms on average.
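As a quick sanity check, Little's law (outstanding IOs = throughput x completion time) ties that latency back to throughput, assuming the 128-deep queue is kept full:

    queue_depth = 128
    avg_latency_s = 0.86e-3                    # 0.86ms average completion time
    iops = queue_depth / avg_latency_s         # Little's law: L = lambda * W
    bandwidth_mb_s = iops * 4096 / 1e6
    print(f"~{iops:,.0f} IOPS, ~{bandwidth_mb_s:.0f} MB/s implied at QD128")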

Max Latency - 4KB Random Write

Comments

  • 457R4LDR34DKN07 - Tuesday, June 3, 2014 - link

    No, they are 4x PCIe 2.5" SFF-8639 drives. Here is a good article describing the differences between SATA Express and 2.5" SFF-8639 drives:

    http://www.anandtech.com/show/6294/breaking-the-sa...
  • Qasar - Tuesday, June 3, 2014 - link

    ok.. BUT.. that's not what i asked.... will this type of drive, ie the NVMe type.. be on some other type of connection besides PCIe 4x ?? as i said :

    depending on ones usage... finding a PCIe slot to put a drive like this in.. may not be possible, specially in SLI/Crossfire... add the possibility of a sound card or raid card..

    cause one can quickly run out of PCIe slots, or have slots covered/blocked by other PCIe cards ... right now, for example. i have an Asus P6T and due to my 7970.. the 2nd PCIe 16 slot.. is unusable and the 3rd slot.. has a raid card in it.. on a newer board.. it may be different.. but still SLI/Crossfire.. can quickly cover up slots ... or block them ... hence.. will NVMe type drives also be on sata express ??
  • 457R4LDR34DKN07 - Wednesday, June 4, 2014 - link

    right and what I told you is that 2.5" SFF-8639 is also offered. You can probably plug it into a sata express connector but you will only realize 2x pci-e 3.0 speeds IE 10gb/s.
  • xdrol - Tuesday, June 3, 2014 - link

    It takes 5x 200 GB drives to match the performance of a 1.6 TB drive? That does not sound THAT good... Make it 8x and it's even.
  • Lonyo - Tuesday, June 3, 2014 - link

    Now make a motherboard with 8xPCIe slots to put those drives in.
  • hpvd - Tuesday, June 3, 2014 - link

    sorry only 7 :-(
    http://www.supermicro.nl/products/motherboard/Xeon...
    :-)
  • hpvd - Tuesday, June 3, 2014 - link

    some technical data for the lower capacity models could be found here:
    http://www.intel.com/content/www/us/en/solid-state...
    maybe this is interesting to be added to the article...
  • huge pile of sticks - Tuesday, June 3, 2014 - link

    but can it run crysis?
  • Homeles - Tuesday, June 3, 2014 - link

    It can run 1,000 instances of Crysis. A kilocrysis, if you will.
  • Shadowmaster625 - Tuesday, June 3, 2014 - link

    How is 200 uS considered low latency? What a joke. If intel had any ambitions besides playing second fiddle to apple and ARM, then they would put the SSD controller on the cpu and create a DIMM type interface for the NAND. Then they would have read latencies in the 1 to 10 uS range, and even less latency as they improve their caching techniques. It's true that you wouldn't be able to address more than a couple TB of NAND through such an interface, but it would be so blazing fast that it could be shadowed using SATA SSDs with very little perceived performance loss over the entire address space. Think big cache for NAND, call it L5 or whatnot. It would do for storage what L2 did for cpus.
