Checking Intel's Numbers

The product brief for the Optane SSD DC P4800X provides a limited set of performance specifications, omitting any figures for sequential throughput entirely. Latency and throughput targets are provided for 4kB random reads, writes, and a 70/30 mix of reads and writes.

This section shows how the Optane SSD measures up to Intel's advertised specifications, and how the flash SSDs fare on the same tests. The rest of this review provides deeper analysis of how these drives perform across a range of queue depths, transfer sizes, and read/write mixes.

4kB Random Read at a Queue Depth of 1 (QD1)
| Drive | Throughput (MB/s) | Throughput (IOPS) | Mean (µs) | Median (µs) | 99th (µs) | 99.999th (µs) |
|-------|-------------------|-------------------|-----------|-------------|-----------|---------------|
| Intel Optane SSD DC P4800X 375GB | 413.0 | 108.3k | 8.9 | 9 | 10 | 37 |
| Intel SSD DC P3700 800GB | 48.7 | 12.8k | 77.9 | 76 | 96 | 2768 |
| Micron 9100 MAX 2.4TB | 35.3 | 9.2k | 107.7 | 104 | 117 | 306 |

Intel's queue depth 1 specifications are expressed in terms of latency; a separate throughput specification at QD1 would be redundant. Intel specifies a "typical" latency of less than 10µs, and most QD1 random reads on the Optane SSD complete in 8 or 9µs; even the 99th percentile latency is still only 10µs.

The 99.999th percentile target is less than 60µs, which the Optane SSD beats by a wide margin, so overall it passes with ease. The flash SSDs are 8-12x slower on average, and at the 99.999th percentile the Intel P3700 is far worse, at around 75x slower.
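The percentile figures quoted in these tables can be reproduced from raw per-I/O latency samples. A minimal sketch of the arithmetic (illustrative only, not the review's actual analysis tooling), using the common nearest-rank method:

```python
# Sketch: compute the latency summary statistics reported in the tables
# (mean, median, 99th, 99.999th percentile) from per-I/O latency samples.
import math

def percentile(samples_us, pct):
    """Nearest-rank percentile of a list of latency samples in µs."""
    ordered = sorted(samples_us)
    # Nearest-rank method: ceil(pct/100 * N), clamped to a valid index.
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(0, rank - 1)]

def summarize(samples_us):
    return {
        "mean": sum(samples_us) / len(samples_us),
        "median": percentile(samples_us, 50),
        "p99": percentile(samples_us, 99),
        "p99.999": percentile(samples_us, 99.999),
    }
```

Note that the 99.999th percentile of a run with ten million I/Os is determined by the slowest hundred or so operations, which is why a long test run is needed before that column means anything.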

4kB Random Read at a Queue Depth of 16 (QD16)
| Drive | Throughput (MB/s) | Throughput (IOPS) | Mean (µs) | Median (µs) | 99th (µs) | 99.999th (µs) |
|-------|-------------------|-------------------|-----------|-------------|-----------|---------------|
| Intel Optane SSD DC P4800X 375GB | 2231.0 | 584.8k | 25.5 | 25 | 41 | 81 |
| Intel SSD DC P3700 800GB | 637.9 | 167.2k | 93.9 | 91 | 163 | 2320 |
| Micron 9100 MAX 2.4TB | 517.5 | 135.7k | 116.2 | 114 | 205 | 1560 |

Our QD16 random read result of 584.8k IOPS is above Intel's official specification of 550k IOPS by a few percent. The 99.999th percentile latency of 81µs comes in significantly under the target of less than 150µs. The flash SSDs are 3-5x slower on most metrics, but 20-30 times slower at the 99.999th percentile for latency.
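The throughput and mean latency columns are not independent: they are tied together by Little's Law, which says average concurrency equals arrival rate times average latency. A quick consistency check of the Optane figures, with the table values hard-coded:

```python
# Sanity-check the tables with Little's Law:
# effective queue depth = IOPS * mean latency (in seconds).
def effective_queue_depth(iops, mean_latency_us):
    return iops * mean_latency_us * 1e-6

# Optane SSD DC P4800X figures from the QD1 and QD16 random read tables.
qd1 = effective_queue_depth(108_300, 8.9)    # close to the nominal QD of 1
qd16 = effective_queue_depth(584_800, 25.5)  # close to the nominal QD of 16
```

The QD16 figure works out to roughly 14.9 rather than 16, which suggests the host was not quite keeping sixteen commands in flight at all times; the data is self-consistent.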

4kB Random Write at a Queue Depth of 1 (QD1)
| Drive | Throughput (MB/s) | Throughput (IOPS) | Mean (µs) | Median (µs) | 99th (µs) | 99.999th (µs) |
|-------|-------------------|-------------------|-----------|-------------|-----------|---------------|
| Intel Optane SSD DC P4800X 375GB | 360.6 | 94.5k | 8.9 | 9 | 10 | 64 |
| Intel SSD DC P3700 800GB | 350.6 | 91.9k | 9.2 | 9 | 18 | 81 |
| Micron 9100 MAX 2.4TB | 160.9 | 42.2k | 22.2 | 22 | 24 | 76 |

Intel's QD1 random write specification again calls for a typical latency under 10µs, while the 99.999th percentile latency target is relaxed from 60µs (for reads) to 100µs. In our results, the QD1 random write throughput (360.6 MB/s) of the Optane SSD is a bit lower than the QD1 random read throughput (413.0 MB/s), but the latency is roughly the same (8.9µs mean, 10µs at the 99th percentile).

However, it is worth noting that the Optane SSD only manages a passing score when the application uses asynchronous I/O APIs; simple synchronous write() system calls push the average latency up to 11-12µs.
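The synchronous path can be timed directly by wrapping each write in a timer. The sketch below measures the full host-side software path (syscall, filesystem, and device) for 4kB synchronous writes to an ordinary file; it is illustrative of the method, not the review's test harness, and absolute numbers will vary with the kernel, filesystem, and drive:

```python
# Sketch: per-call latency of synchronous 4kB writes.
# Times write() + fsync() pairs, i.e. the synchronous software path
# that pushes QD1 write latency above the async-I/O numbers.
import os
import tempfile
import time

def sync_write_latencies(path, count=100, block=4096):
    buf = os.urandom(block)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    latencies_us = []
    try:
        for i in range(count):
            start = time.perf_counter()
            os.pwrite(fd, buf, i * block)  # synchronous write()
            os.fsync(fd)                   # force the data to media
            latencies_us.append((time.perf_counter() - start) * 1e6)
    finally:
        os.close(fd)
    return latencies_us

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    samples = sync_write_latencies(tmp.name)
os.unlink(tmp.name)
```

On a drive this fast, the extra handful of microseconds of per-call syscall overhead is visible in the results, whereas on a flash SSD with ~100µs latency it would disappear into the noise.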

Thanks to their capacitor-backed DRAM write caches, the flash SSDs also handle QD1 random writes very well. The Intel P3700 manages to keep latency mostly below 10µs, and all three drives have 99.999th percentile latency below Intel's 100µs standard for the Optane SSD.

4kB Random Write at a Queue Depth of 16 (QD16)
| Drive | Throughput (MB/s) | Throughput (IOPS) | Mean (µs) | Median (µs) | 99th (µs) | 99.999th (µs) |
|-------|-------------------|-------------------|-----------|-------------|-----------|---------------|
| Intel Optane SSD DC P4800X 375GB | 2122.5 | 556.4k | 27.0 | 23 | 65 | 147 |
| Intel SSD DC P3700 800GB | 446.3 | 117.0k | 134.8 | 43 | 1336 | 9536 |
| Micron 9100 MAX 2.4TB | 1144.4 | 300.0k | 51.6 | 34 | 620 | 3504 |

The Optane SSD DC P4800X is specified for 500k random write IOPS using four threads to provide a total queue depth of 16. In our tests, the Optane SSD scored 556.4k IOPS, exceeding the specification by more than 11%. This equates to a random write throughput of more than 2GB/s.
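The "more than 2GB/s" figure follows directly from the IOPS result, assuming 4096-byte transfers and decimal gigabytes:

```python
# 556.4k 4kB random writes per second, assuming 4096-byte blocks.
iops = 556_400
throughput_gb_s = iops * 4096 / 1e9   # just under 2.3 GB/s
spec_margin = iops / 500_000 - 1      # fraction above the 500k IOPS spec
```

The margin over specification works out to a bit over 11%, matching the figure quoted above.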

The flash SSDs depend more on the parallelism that comes with higher capacity, so in this case the 2.4TB Micron 9100 fares much better than the 800GB Intel P3700. The Micron 9100 hits its own specification right on the nose with 300k IOPS, and the Intel P3700 comfortably exceeds its own 90k IOPS specification while remaining the slowest of the three by far. The Optane SSD stays well below its 200µs limit for 99.999th percentile latency at 147µs, while the flash SSDs have outliers of several milliseconds. Even at the 99th percentile, the flash SSDs are 10-20x slower than the Optane SSD.
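A workload of this shape (four threads at QD4 each for a total queue depth of 16) is typically expressed in fio along these lines. This is an illustrative job file with example paths and runtimes, not the review's actual test configuration:

```ini
; Hypothetical fio job: 4 jobs x iodepth 4 = total queue depth 16.
[global]
; Asynchronous I/O engine, needed to sustain QD4 within each thread.
ioengine=libaio
; Bypass the page cache so the drive itself is measured.
direct=1
rw=randwrite
bs=4k
time_based
runtime=60
group_reporting

[qd16-randwrite]
; Destructive on the target; /dev/nvme0n1 is an example device path.
filename=/dev/nvme0n1
numjobs=4
iodepth=4
```

Splitting the queue depth across threads matters for latency measurement: a single thread at QD16 and four threads at QD4 present the same load to the drive, but exercise the host's submission path differently.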

4kB Random Mixed 70/30 Read/Write Queue Depth 16
| Drive | Throughput (MB/s) | Throughput (IOPS) | Mean (µs) | Median (µs) | 99th (µs) | 99.999th (µs) |
|-------|-------------------|-------------------|-----------|-------------|-----------|---------------|
| Intel Optane SSD DC P4800X 375GB | 1929.7 | 505.9k | 29.7 | 28 | 65 | 107 |
| Intel SSD DC P3700 800GB | 519.9 | 136.3k | 115.5 | 79 | 1672 | 5536 |
| Micron 9100 MAX 2.4TB | 518.0 | 135.8k | 116.0 | 105 | 1112 | 3152 |

On a 70/30 read/write mix, the Optane SSD DC P4800X scores 505.9k IOPS, beating the specification of 500k IOPS by about 1%. Both of the flash SSDs deliver roughly the same throughput, a little over a quarter of the Optane SSD's. Intel doesn't provide a latency specification for this workload, but the measurements unsurprisingly fall between the random read and random write results. While low-end consumer SSDs sometimes perform dramatically worse on mixed workloads than on pure read or write workloads, none of these enterprise drives suffers from that problem.
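As a rough cross-check, a naive estimate for the mixed-workload mean latency is the 70/30-weighted average of the pure read and write means from the QD16 tables above. The measured result sits a few microseconds higher, which is the cost of reads and writes contending in the same queue:

```python
# Naive 70/30 estimate from the Optane QD16 pure-workload means (µs).
read_mean_us, write_mean_us = 25.5, 27.0
naive_mix_us = 0.7 * read_mean_us + 0.3 * write_mean_us  # weighted average

# Measured mixed-workload mean from the table above.
measured_mix_us = 29.7
contention_penalty_us = measured_mix_us - naive_mix_us   # a few µs of overhead
```

That the penalty is only a few microseconds is itself notable; on the flash SSDs, writes interfering with in-flight reads is a much larger effect.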

117 Comments

  • ddriver - Sunday, April 23, 2017 - link

    It is not expensive because it is new, it is expensive because intel and micron wasted a crapload of money on RDing it and it turned out to be mediocre - significantly weaker than good old and almost forgotten SLC. So now they hype and lie about it and sell it significantly overpriced in hopes they will see some returns of the investment.

    Also, it seems like you are quite ignorant, ignorant enough to not know what "order of magnitude" means. You just heard someone smart using it and decided to imitate, following some brilliant logic that it will make you look smart. Well, it doesn't. It does exactly the opposite. Now either stop using it, or at the very least, look it up, understand and remember what it actually means, so the next time you use it, you don't end up embarrassing yourself.
  • factual - Sunday, April 23, 2017 - link

    "significantly weaker than good old and almost forgotten SLC"

    Seriously ?! You must be getting paid to spew this bs! no one can be this ignorant!! can you read numbers ?! what part of 8.9us latency don't you understand, this is at least 10x better than the latest and greatest NVMe SSDs (be it TLC, VNAND or whatever bs marketing terms they feed idiots like you nowadays).

    what part of 95K/108K QD1 IOPS don't you understand ?! This is 3-10x compared to this best SSDs on the market.

    So I repeat again, Xpoint is orders of magnitude better performing than the latest and greatest SSDs (from Samsung or whichever company) on the market. This is a fact.

    You don't even understand basic math, stop embarrassing yourself by posting these idiotic comments!
  • ddriver - Monday, April 24, 2017 - link

    LOL, your intellect is apparently equal to that of a parrot.
  • factual - Monday, April 24, 2017 - link

    Well if this fruitless exchange is any evidence my intellect is far superior to yours. So If my intellect is equal to that of a parrot, yours must be equal to that of a maggot ... lol
  • evilpaul666 - Saturday, April 22, 2017 - link

    So where are the 32gb client ones?
  • tomatus89 - Saturday, April 22, 2017 - link

Who is this ddriver troll? Hahaha you are hilarious. And the worst is that people keep feeding him instead of ignoring him.
  • peevee - Saturday, May 27, 2017 - link

    From your testing, looks like the drive offers real advantages on low QD, i.e. for desktop/small office server use. For these uses a normal SSD is also enough though.
    Given that modern Xeons have up to 28 cores (running 56 threads each) and server motherboards have 2 or more CPU slots, a properly loaded server will offer QD > 64 all day long, and certainly not just 4 active threads - where the Micron 9100 offers even higher performance, and if the performance is good enough there, it certainly good enough on lower QDs where it is even better PER REQUEST.
    And who cares what 99.999% latency is, as long as it is milliseconds and not seconds - network and other latencies on the accesses to these servers will be higher anyway.

    An incredibly good first attempt, but it really does not push the envelope in the market it is priced for - high-performance storage-bottlenecked servers.
