Random Read

Random read speed is the most difficult performance metric for flash-based SSDs to improve on. There is very limited opportunity for a drive to do useful prefetching or caching, and parallelism from multiple dies and channels can only help at higher queue depths. The NVMe protocol reduces overhead slightly, but even a high-end enterprise PCIe SSD can struggle to offer random read throughput that would saturate a SATA link.

Real-world random reads are often blocking operations for an application, such as when traversing the filesystem to look up which logical blocks store the contents of a file. Opening even a non-fragmented file can require the OS to perform a chain of several random reads, and since each depends on the result of the last, they cannot be queued.

Our first test of random read performance looks at the dependence on transfer size. Most SSDs focus on 4kB random access, as that is the most common page size for virtual memory systems and a common filesystem block size. Maximizing 4kB performance has become more difficult as NAND flash has moved to page sizes larger than 4kB, and some SSD vendors have started including 8kB random access specifications. It is worth noting that 3D XPoint memory does not impose any inherent block size restrictions on the Optane SSD, but for compatibility the P4800X by default exposes a 512B sector size.
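
For readers who want to verify what sector size a drive exposes on their own system, the following is a minimal sketch that reads the relevant sysfs attributes on Linux; the device name nvme0n1 is just a placeholder.

```python
# Minimal sketch (Linux only): read the sector sizes a block device advertises
# through sysfs. "nvme0n1" is a placeholder device name; adjust for your system.
from pathlib import Path

def sector_sizes(dev: str = "nvme0n1") -> dict:
    queue = Path("/sys/block") / dev / "queue"
    return {
        "logical_block_size": int((queue / "logical_block_size").read_text()),
        "physical_block_size": int((queue / "physical_block_size").read_text()),
    }

if __name__ == "__main__":
    print(sector_sizes())  # e.g. {'logical_block_size': 512, 'physical_block_size': 512}
```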

Queue Depth 1

For our test, each transfer size was tested for four minutes and the statistics exclude the first minute. The drives were preconditioned to steady state by filling them with 4kB random writes twice over.
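
As an illustration of that sequencing, here is a rough sketch of how such a sweep could be scripted around fio. The device path and job parameters are illustrative assumptions rather than the exact configuration used for this article, and running it will destroy the contents of the target drive.

```python
# Rough sketch of the QD1 transfer-size sweep, driven with fio from Python.
# The device path and fio job parameters are illustrative assumptions, not the
# article's exact configuration. WARNING: this overwrites the target drive.
import json
import subprocess

DEV = "/dev/nvme0n1"  # placeholder target device
SIZES = ["512", "1k", "2k", "4k", "8k", "16k", "32k",
         "64k", "128k", "256k", "512k", "1m"]

def run_fio(args):
    out = subprocess.run(["fio", "--output-format=json"] + args,
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

# Precondition to steady state: fill the drive with 4kB random writes, twice over.
# With fio's default random map, each pass touches every block once.
run_fio(["--name=precondition", f"--filename={DEV}", "--direct=1",
         "--ioengine=libaio", "--rw=randwrite", "--bs=4k",
         "--iodepth=32", "--loops=2"])

# Sweep transfer sizes at QD1. ramp_time=60 discards the first minute and the
# following 180 seconds are measured, for four minutes per transfer size in total.
for bs in SIZES:
    result = run_fio(["--name=randread-sweep", f"--filename={DEV}", "--direct=1",
                      "--ioengine=libaio", "--rw=randread", f"--bs={bs}",
                      "--iodepth=1", "--time_based", "--runtime=180",
                      "--ramp_time=60"])
    read = result["jobs"][0]["read"]
    print(f"{bs:>5}: {read['iops']:,.0f} IOPS, {read['bw'] / 1024:.1f} MiB/s")
```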

[Graph: Random Read throughput vs. transfer size]

The Optane SSD starts off with about eight times the throughput of the other drives for small random reads. As the transfer size grows past 16kB the Optane SSD's performance starts to level off and the flash SSDs begin to catch up, with the Micron 9100 overtaking the Intel P3700. At a 1MB transfer size the Optane SSD provides only about 50% higher throughput than the Micron 9100.

Queue Depth >1

Next, we consider 4kB random read performance at queue depths greater than one. A single-threaded process is not capable of saturating the Optane SSD DC P4800X with random reads, so this test is conducted with up to four threads. The queue depth of each thread is adjusted so that the total queue depth seen by the SSD varies from 1 to 64: every queue depth from 1 through 16, then 18 and 20, and multiples of four up to 64 (24, 28, 32, and so on). The timing is the same as for the other tests: four minutes for each tested queue depth, with the first minute excluded from the statistics.
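
Exactly how the total queue depth is divided among the worker threads is an implementation detail; the sketch below shows one plausible scheme (an assumption on our part, not necessarily the split used in these tests) along with the full list of tested queue depths.

```python
# One plausible way to split a target total queue depth across up to four worker
# threads; the actual split used for these tests is an assumption on our part.
def split_queue_depth(total_qd: int, max_threads: int = 4):
    """Return (threads, per_thread_qd) such that threads * per_thread_qd == total_qd."""
    for threads in range(min(max_threads, total_qd), 0, -1):
        if total_qd % threads == 0:
            return threads, total_qd // threads
    return 1, total_qd

# Tested queue depths: every depth from 1 through 16, then 18 and 20,
# then multiples of four up to 64.
tested_qds = list(range(1, 17)) + [18, 20] + list(range(24, 65, 4))

for qd in tested_qds:
    threads, per_thread = split_queue_depth(qd)
    print(f"QD{qd:>2}: {threads} thread(s) x iodepth {per_thread}")
```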

Looking just at the range of throughputs and latencies achieved, it is clear that the Optane SSD DC P4800X is in a different league entirely from the flash SSDs. The Optane SSD saturates partway through the test at a throughput about 30% higher than what the Micron 9100 can deliver even at QD64, and at the same time its 99.999th percentile latency is half of the Micron 9100's median latency.

Between the two flash SSDs, the Intel P3700 has better performance on average through most of the test, but its maximum achieved throughput is slightly lower than the Micron 9100's peak and the 9100 offers lower latency at the high end. The Micron 9100 also has much better 99.999th percentile latency across almost the entire range of queue depths.

[Graph: Random Read Throughput vs. queue depth (IOPS or MB/s)]

In absolute terms, the Optane SSD's performance is uncontested. Even though the Optane SSD's random read throughput saturates around QD8, by QD6 it is already outperforming what either flash SSD can deliver at any reasonable queue depth. Beyond QD8 the Optane SSD delivers no further improvement in throughput, and increasing the queue depth just adds latency. This test stops at QD64, which isn't enough to saturate either flash SSD (the Micron 9100 MAX is rated for a maximum of 750k random read IOPS), but clearly the Optane SSD delivers far better performance at the kinds of queue depths that are reasonably attainable.

[Graphs: Random Read Latency vs. queue depth (mean, median, 99th percentile, 99.999th percentile)]

All three SSDs show median latency growing slowly across a wide range of queue depths. At QD1 the 99th percentile curves are very close to the median latency curves, but at high queue depths the 99th percentile latency is around twice the median. For the Optane SSD and the Micron 9100 MAX, the 99.999th percentile latency is higher by another factor of two or so, but the Intel P3700 cannot deliver such tight regulation and its worst-case latencies are well over a millisecond.
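
For reference, all of these statistics are straightforward to compute from raw per-I/O completion latencies, as in the sketch below. The sample data is synthetic; the point is that a 99.999th percentile only becomes meaningful with millions of samples, which a multi-minute run at these IOPS rates easily provides.

```python
# Sketch of how the reported statistics can be derived from raw per-I/O
# completion latencies. The sample data below is synthetic, not measured.
import numpy as np

rng = np.random.default_rng(0)
latencies_us = rng.lognormal(mean=2.3, sigma=0.4, size=2_000_000)  # placeholder samples

stats = {
    "mean":     latencies_us.mean(),
    "median":   np.percentile(latencies_us, 50),
    "99th":     np.percentile(latencies_us, 99),
    "99.999th": np.percentile(latencies_us, 99.999),  # needs millions of samples to be stable
}
for name, value in stats.items():
    print(f"{name:>9}: {value:6.1f} us")
```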

Random Write

Flash memory write operations are far slower than read operations. This is not always reflected in the performance specifications of SSDs, because writes can be deferred and combined, allowing the SSD to signal completion before the data has actually moved from the drive's cache to the flash memory. The 3D XPoint memory used by the Optane SSD DC P4800X does have slower writes than reads, and it was noted when Optane was initially announced that Intel did not specify read latency, but our results show that the disparity is not nearly as large as it is for flash. With inherently fast writes and no page size or erase block limitations, the Optane SSD should be far less reliant on write combining and large spare areas to offer high random write throughput. The drive's translation layer is probably also far simpler than what flash SSDs require, potentially giving a latency advantage.

Queue Depth 1

As with random reads, we first examine QD1 random write performance across different transfer sizes. 4kB is usually the most important size, but some applications will make smaller writes when the drive has a 512B sector size. Larger transfer sizes make the workload somewhat less random, reducing the amount of bookkeeping the SSD controller needs to do and generally allowing for increased performance.

[Graph: Random Write throughput vs. transfer size]

The Micron 9100 really doesn't like random writes smaller than 4kB, but both Intel drives handle them relatively well. The Optane SSD DC P4800X delivers only about 30% higher throughput than the P3700 for transfer sizes of 4kB and smaller. The Intel P3700 (owing mainly to its relatively low capacity) doesn't benefit very much as transfer sizes grow beyond 4kB, as it saturates soon after. The Optane SSD maintains a clear lead for transfers of 8kB and larger, averaging about twice the throughput of the Micron 9100 as both show diminishing returns from increased transfer sizes.

Queue Depth >1

The test of 4kB random write throughput at different queue depths is structured identically to its random read counterpart above. Queue depths from 1 to 64 are tested, with up to four threads used to generate the workload. Each tested queue depth is run for four minutes and the first minute is ignored when computing the statistics.

The QD1 starting points for all three drives are somewhat close together, with the fastest drive (the Optane SSD, of course) offering only about twice the random write throughput of the Micron 9100, with less than half the average latency. From there, the gaps widen quickly. The Intel P3700 reaches its maximum throughput very quickly and then the latency just piles up. The Micron 9100 keeps its median and 99th percentile latency reasonably well controlled until reaching its maximum throughput, which is half of what the Optane SSD can deliver.

[Graph: Random Write Throughput vs. queue depth (IOPS or MB/s)]

While QD64 wasn't enough to completely saturate the flash SSDs with random reads, here with random writes, QD8 is enough for any of the drives, and the P3700 is done around QD2. The Micron 9100 starts out as the slowest of the three but soon overtakes the Intel P3700.

When examining the latency statistics, we should keep in mind that all three drives reached their full throughput by QD8. At queue depths higher than that, latency increases with no improvement to throughput. A well-tuned server will generally not be operating the drives in that regime, so the right half of these graphs can be mostly ignored.

[Graphs: Random Write Latency vs. queue depth (mean, median, 99th percentile, 99.999th percentile)]

Median latency for these drives is quite flat until they reach saturation. 99th percentile latency for the flash SSDs shoots up when they're operated at unnecessarily high queue depths. The 99.999th percentile latency of the Intel P3700 is never less than 1ms and actually exceeds 10ms at the end of the test. The Micron 9100's 99.999th percentile latency is fairly close to that of the Optane SSD until the 9100 hits QD4, where it spikes and surpasses 1ms shortly before the drive reaches full throughput. Meanwhile, the Optane SSD's 99.999th percentile latency only climbs up to a third of a millisecond even at QD64.



Comments

  • Ninhalem - Thursday, April 20, 2017 - link

    At last, this is the start of transitioning from hard drive/memory to just memory.
  • ATC9001 - Thursday, April 20, 2017 - link

    This is still significantly slower than RAM... maybe for some typical consumer workloads it can take over as an all in one storage solution, but for servers and power users, we'll still need RAM as we know it today... and the fastest "RAM" if you will is on die L1 cache... which has physical limits to its speed and size based on the speed of light!

    I can see SSD's going away depending on manufacturing costs but so many computers are shipping with spinning disks still I'd say it's well over a decade before we see SSD's become the replacement for all spinning disk consumer products.

    Intel is pricing this right between SSD's and RAM which makes sense, I just hope this will help the industry start to drive down prices of SSD's!
  • DanNeely - Thursday, April 20, 2017 - link

    Estimates from about 2 years back had the cost/GB price of SSDs undercutting that of HDDs in the early 2020's. AFAIK those were business as usual projections, but I wouldn't be surprised to see it happen a bit sooner as HDD makers pull the plug on R&D for the generation that would otherwise be overtaken due to sales projections falling below the minimums needed to justify the cost of bringing it to market with its useful lifespan cut significantly short.
  • Guspaz - Saturday, April 22, 2017 - link

    Hard drive storage cost has not changed significantly in at least half a decade, while ssd prices have continued to fall (albeit at a much slower rate than in the past). This bodes well for the crossover.
  • Santoval - Tuesday, June 6, 2017 - link

    Actually it has, unless you regard HDDs with double density at the same price every 2 - 2.5 years as not an actual falling cost. $ per GB is what matters, and that is falling steadily, for both HDDs and SSDs (although the latter have lately spiked in price due to flash shortage).
  • bcronce - Thursday, April 20, 2017 - link

    The latency specs include PCIe and controller overhead. Get rid of those by dropping this memory in a DIMM slot and it'll be much faster. Still not as fast as current memory, but it's going to be getting close. Normal system memory is in the range of 0.5us. 60us is getting very close.
  • tuxRoller - Friday, April 21, 2017 - link

    They also include context switching, isr (pretty board specific), and block layer abstraction overheads.
  • ddriver - Friday, April 21, 2017 - link

    PCIE latency is below 1 us. I don't see how subtracting less than 1 from 60 gets you anywhere near 0.5.

    All in all, if you want the best value for your money and the best performance, that money is better spent on 128 gigs of ecc memory.

    Sure, xpoint is non volatile, but so what? It is not like servers run on the grid and reboot every time the power flickers LOL. Servers have at the very least several minutes of backup power before they shut down, which is more than enough to flush memory.

    Despite intel's BS PR claims, this thing is tremendously slower than RAM, meaning that if you use it for working memory, it will massacre your performance. Also, working memory is much more write intensive, so you are looking at your money investment crapping out potentially in a matter of months. Whereas RAM will be much, much faster and work for years.

    4 fast NVME SSDs will give you like 12 GB/s bandwidth, meaning that in the case of an imminent shutdown, you can flush and restore the entire content of those 128 gigs of ram in like 10 seconds or less. Totally acceptable trade-off for tremendously better performance and endurance.

    There is only one single, very narrow niche where this purchase could make sense. Database usage, for databases with frequent low queue access. This is an extremely rare and atypical application scenario, probably less than 1/1000 in server use. Which is why this review doesn't feature any actual real life workloads, because it is impossible to make this product look good in anything other than synthetic benches. Especially if used as working memory rather than storage.
  • IntelUser2000 - Friday, April 21, 2017 - link

    ddriver: Do you work for the memory industry? Or hold a stock in them? You have a personal gripe about the company that goes beyond logic.

    PCI Express latency is far higher than 1us. There are unavoidable costs of implementing a controller on the interface and there's also software related latency.
  • ddriver - Friday, April 21, 2017 - link

    I have a personal gripe with lying, which is what intel has been doing ever since it announced hypetane. If you find having a problem with lying a problem with logic, I'd say logic ain't your strong point.

    Lying is also what you do. PCIE latency is around 0.5 us. We are talking PHY here. Controller and software overhead affect equally every communication protocol.

    Xpoint will see only minuscule latency improvements from moving to dram slots. Even if PCIE has about 10 times the latency of dram, we are still talking ns, while xpoint is far slower in the realm of us. And it ain't no dram either, so the actual latency improvement will be nowhere nearly the approx 450 us.

    It *could* however see significant bandwidth improvements, as the dram interface is much wider, however that will require significantly increased level of parallelism and a controller that can handle it, and clearly, the current one cannot even saturate a pcie x4 link. More bandwidth could help mitigate the high latency by masking it through buffering, but it will still come nowhere near to replacing dram without a tremendous performance hit.
