Checking Intel's Numbers

The product brief for the Optane SSD DC P4800X provides a limited set of performance specifications, omitting sequential throughput entirely. Latency and throughput targets are given for 4kB random reads, random writes, and a 70/30 mix of reads and writes.

This section presents our results for how the Optane SSD measures up to Intel's advertised specifications, and how the flash SSDs fare on the same tests. The rest of this review provides deeper analysis of how these drives perform across a range of queue depths, transfer sizes, and read/write mixes.

4kB Random Read at a Queue Depth of 1 (QD1)
                                     Throughput          Latency (µs)
Drive                                MB/s      IOPS      Mean    Median   99th   99.999th
Intel Optane SSD DC P4800X 375GB     413.0     108.3k    8.9     9        10     37
Intel SSD DC P3700 800GB             48.7      12.8k     77.9    76       96     2768
Micron 9100 MAX 2.4TB                35.3      9.2k      107.7   104      117    306

Intel's queue depth 1 specifications are expressed in terms of latency, as a throughput specification at QD1 would be redundant. Intel specifies a "typical" latency of less than 10µs, and most QD1 random reads on the Optane SSD take 8 or 9µs; even the 99th percentile latency is just 10µs.

The 99.999th percentile target is less than 60µs, which the Optane SSD beats by a wide margin. Overall, the Optane SSD passes with ease. The flash SSDs are 8-12x slower on average, and the 99.999th percentile latency of the Intel P3700 is far worse, at around 75x slower.
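
For illustration, per-read latencies like these are gathered by timing individual 4kB reads issued one at a time (queue depth 1). The sketch below is a simplified stand-in for a real tool such as fio: it reads from an ordinary temporary file, so it measures the page cache and OS path rather than an actual NVMe device, and the file size and read count are arbitrary choices of ours.

```python
import os
import random
import statistics
import tempfile
import time

BLOCK = 4096          # 4kB transfer size
NUM_BLOCKS = 256      # size of the scratch file, in blocks
READS = 1000          # number of timed reads

# Create a scratch file to read from (a stand-in for a raw device).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * NUM_BLOCKS))
    path = f.name

fd = os.open(path, os.O_RDONLY)
lat_us = []
for _ in range(READS):
    offset = random.randrange(NUM_BLOCKS) * BLOCK
    t0 = time.perf_counter_ns()
    os.pread(fd, BLOCK, offset)   # one outstanding request at a time: QD1
    lat_us.append((time.perf_counter_ns() - t0) / 1000)
os.close(fd)
os.unlink(path)

lat_us.sort()
mean = statistics.mean(lat_us)
p99 = lat_us[int(0.99 * (READS - 1))]
print(f"mean {mean:.1f} µs, 99th percentile {p99:.1f} µs")
```

A production benchmark would additionally use O_DIRECT against the raw block device and pin threads to cores, which is exactly the bookkeeping that fio handles.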

4kB Random Read at a Queue Depth of 16 (QD16)
                                     Throughput          Latency (µs)
Drive                                MB/s      IOPS      Mean    Median   99th   99.999th
Intel Optane SSD DC P4800X 375GB     2231.0    584.8k    25.5    25       41     81
Intel SSD DC P3700 800GB             637.9     167.2k    93.9    91       163    2320
Micron 9100 MAX 2.4TB                517.5     135.7k    116.2   114      205    1560

Our QD16 random read result of 584.8k IOPS exceeds Intel's official specification of 550k IOPS by a few percent. The 99.999th percentile latency comes in at 81µs, significantly under the 150µs target. The flash SSDs are 3-5x slower on most metrics, but 20-30 times slower at the 99.999th percentile for latency.
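
Throughput, queue depth, and mean latency are tied together by Little's Law (concurrency = throughput × latency), which makes a handy sanity check on the numbers above. This back-of-the-envelope sketch is ours, not part of Intel's specification or our test methodology:

```python
# Little's Law: queue depth = IOPS × mean latency, so IOPS ≈ QD / latency.
qd = 16
mean_latency_s = 25.5e-6              # measured mean latency: 25.5 µs
predicted_iops = qd / mean_latency_s
print(f"predicted ~{predicted_iops / 1e3:.0f}k IOPS")   # vs. 584.8k measured
```

The prediction (~627k) overshoots the measured 584.8k IOPS by about 7%, plausibly because the queue is not held at its full depth of 16 at every instant.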

4kB Random Write at a Queue Depth of 1 (QD1)
                                     Throughput          Latency (µs)
Drive                                MB/s      IOPS      Mean    Median   99th   99.999th
Intel Optane SSD DC P4800X 375GB     360.6     94.5k     8.9     9        10     64
Intel SSD DC P3700 800GB             350.6     91.9k     9.2     9        18     81
Micron 9100 MAX 2.4TB                160.9     42.2k     22.2    22       24     76

Intel's QD1 random write specification is again a typical latency of less than 10µs, while the 99.999th percentile target is relaxed from 60µs to 100µs. In our results, the QD1 random write throughput of the Optane SSD (360.6 MB/s) is a bit lower than its QD1 random read throughput (413.0 MB/s), but the latency is essentially the same (8.9µs mean, 10µs at the 99th percentile).

However, it is worth noting that the Optane SSD only manages a passing score when the application uses asynchronous I/O APIs. Issuing simple synchronous write() system calls pushes the average latency up to 11-12µs.
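
The synchronous pattern can be sketched as follows: each write must complete before the next is issued, so the system-call and completion overhead lands directly on the measured latency. This example writes to an ordinary file (opened with O_DSYNC so each write is durable before returning) rather than to an Optane device, and the counts are our own arbitrary choices.

```python
import os
import statistics
import tempfile
import time

BLOCK = 4096
WRITES = 200
buf = os.urandom(BLOCK)

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

# O_DSYNC: each write() returns only once the data is durable, which is
# the fully synchronous pattern described above.
fd = os.open(path, os.O_WRONLY | os.O_DSYNC)
lat_us = []
for i in range(WRITES):
    t0 = time.perf_counter_ns()
    os.pwrite(fd, buf, i * BLOCK)
    lat_us.append((time.perf_counter_ns() - t0) / 1000)
os.close(fd)
os.unlink(path)

print(f"mean synchronous write latency: {statistics.mean(lat_us):.1f} µs")
```

An asynchronous submitter would instead queue the next write before the previous one completes, hiding part of that per-call overhead.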

Thanks to their capacitor-backed DRAM write caches, the flash SSDs also handle QD1 random writes very well. The Intel P3700 keeps latency mostly below 10µs, and all three drives have 99.999th percentile latency below Intel's 100µs standard for the Optane SSD.

4kB Random Write at a Queue Depth of 16 (QD16)
                                     Throughput          Latency (µs)
Drive                                MB/s      IOPS      Mean    Median   99th   99.999th
Intel Optane SSD DC P4800X 375GB     2122.5    556.4k    27.0    23       65     147
Intel SSD DC P3700 800GB             446.3     117.0k    134.8   43       1336   9536
Micron 9100 MAX 2.4TB                1144.4    300.0k    51.6    34       620    3504

The Optane SSD DC P4800X is specified for 500k random write IOPS using four threads with a total queue depth of 16. In our tests, the Optane SSD scored 556.4k IOPS, exceeding the specification by more than 11%. This equates to a random write throughput of more than 2GB/s.

The flash SSDs depend far more on the parallelism that higher capacities provide, so the 2.4TB Micron 9100 fares much better here than the 800GB Intel P3700. The Micron 9100 hits its own 300k IOPS specification right on the nose, and the Intel P3700 comfortably exceeds its 90k IOPS specification while remaining the slowest of the three by far. The Optane SSD stays well below its 200µs limit for 99.999th percentile latency, scoring 147µs, while the flash SSDs have outliers of several milliseconds. Even at the 99th percentile the flash SSDs are 10-20x slower than the Optane SSD.
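
The IOPS-to-bandwidth conversion is straightforward arithmetic, since each 4kB operation transfers 4,096 bytes; a quick check of the claim above:

```python
iops = 556_400           # measured QD16 random write IOPS
block_bytes = 4096       # 4kB transfer size
gb_per_s = iops * block_bytes / 1e9
print(f"{gb_per_s:.2f} GB/s")   # ≈ 2.28 GB/s, i.e. "more than 2GB/s"
```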

4kB Random Mixed 70/30 Read/Write at a Queue Depth of 16 (QD16)
                                     Throughput          Latency (µs)
Drive                                MB/s      IOPS      Mean    Median   99th   99.999th
Intel Optane SSD DC P4800X 375GB     1929.7    505.9k    29.7    28       65     107
Intel SSD DC P3700 800GB             519.9     136.3k    115.5   79       1672   5536
Micron 9100 MAX 2.4TB                518.0     135.8k    116.0   105      1112   3152

On a 70/30 read/write mix, the Optane SSD DC P4800X scores 505.9k IOPS, beating the 500k IOPS specification by about 1%. Both flash SSDs deliver roughly the same throughput as each other, a little over a quarter of the Optane SSD's speed. Intel doesn't provide a latency specification for this workload, but the measurements unsurprisingly fall between the random read and random write results. While low-end consumer SSDs sometimes perform dramatically worse on mixed workloads than on pure read or write workloads, none of these enterprise drives has that problem.
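
For reference, a 70/30 mix of this kind is typically produced by deciding each operation's type independently at random, so the realized ratio only approximates 70/30 over a run. A minimal sketch (our own illustration, not the actual test harness):

```python
import random

random.seed(42)          # fixed seed so the sketch is reproducible
OPS = 100_000
ops = ["read" if random.random() < 0.7 else "write" for _ in range(OPS)]
read_frac = ops.count("read") / OPS
print(f"read fraction ≈ {read_frac:.3f}")
```

Over 100,000 operations the realized read fraction lands within a fraction of a percent of the 70% target.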

117 Comments

  • extide - Thursday, April 20, 2017 - link

    Queue depth is how many commands the computer has queued up for the drive. The computer can issue commands to the drive faster than it can service them -- so, for example, SATA can support a queue of up to 32 commands. Typical desktop use just doesn't generate enough traffic on the drives to queue up much data so you usually are in the low 1-2, maybe 4 QD. Some server workloads can be higher, but even on a DB server, if you are seeing QD's of 16 I would say your storage is not fast enough for what you are trying to do, so being able to get good performance at low queue depths is truly a breakthrough.
  • bcronce - Thursday, April 20, 2017 - link

    For file servers, it's not just the queue depth that's important, it's the number of queues. FreeBSD and OpenZFS have had a lot of blogs and videos about the issues of scaling up servers, especially in regards to multi-core.

    SATA only supports 1 queue. NVMe supports up to ~65,000 with a depth of ~65,000 each. They're actually having issues saturating high end SSDs because their IO stack can't handle the throughput.

    If you have a lot of SATA drives, then you effectively have many queues, but if you want a single/few super fast device(s), like say L2ARC, you need to take advantage of the new protocol.
  • tuxRoller - Friday, April 21, 2017 - link

    The answer is something like the Linux kernel's block multiqueue (ongoing, still not the default for all devices, but it shouldn't be more than a few more cycles). It's been a massive undertaking and involved rewriting many drivers.

    https://lwn.net/Articles/552904/
  • Shadowmaster625 - Thursday, April 20, 2017 - link

    It is a pity Intel doesn't make video cards, because 16GB of this would go very well with 4GB of RAM and a decent memory controller. It would lower the overall cost and not impact performance at all.
  • ddriver - Friday, April 21, 2017 - link

    "It would lower the overall cost and not impact performance at all."

    Yeah, I bet. /s
  • Mugur - Friday, April 21, 2017 - link

    I think I read something like this when i740 was launched... :-)

    Sorry, couldn't resist. But the analogy stands.
  • ridic987 - Friday, April 21, 2017 - link

    "It would lower the overall cost and not impact performance at all."

    What? This stuff is around 50x slower than DRAM, which is itself reaching its limits in GPUs, hence features like delta color compression... Right now, when your GPU runs out of RAM it uses your system RAM as extra space, which is a far better system.
  • anynigma - Thursday, April 20, 2017 - link

    "Intel's new 3D XPoint non-volatile memory technology, which has been on the cards publically for the last couple of years"

    I think you mean "IN the cards". In this context, "ON the cards" makes it sound like we've all been missing out on 3D xPoint PCI cards for a "couple of years" :)
  • SaolDan - Thursday, April 20, 2017 - link

    I think he means it like: it's been in the works publicly for a couple of years.
  • DrunkenDonkey - Thursday, April 20, 2017 - link

    A bit of a suggestion - can you divide (or provide in final thoughts) SSD reviews per consumer base? A desktop user absolutely does not care about sequential performance or QD16, or even writes for that matter (except the odd time installing something). A database server couldn't care less about sequential transfers or low QD, etc. Giving the tables is good for the odd few % of readers that actually know what to look for; the rest just take a look at the end of the graph and come away with a stunningly wrong idea. Just a few comparisons tailored per use case would make it so easy for the masses. It was Anand that fought for that during the early SandForce days - he forced OCZ to reconsider their ways and tweak SSDs for real-world performance, not graph-based, and got me as a follower. Let that not die in vain, and let those that lack the specific knowledge be informed. Just look at the comments and see how people interpret the results.
    I know this is enterprise grade SSD, but it is also a showcase for a new technology that will come in our hands soonish.
