Mixed Read/Write Performance

Workloads consisting of a mix of reads and writes can be particularly challenging for flash-based SSDs. When a write operation interrupts a string of reads, it blocks access to at least one flash chip for a period substantially longer than a read operation takes. This hurts the latency of any read operations that were waiting on that chip, and with enough write operations, throughput can be severely impacted. If the write command triggers an erase operation on one or more flash chips, the traffic jam is many times worse.

The occasional read interrupting a string of write commands doesn't necessarily cause much of a backlog, because writes are usually buffered by the controller anyway. But depending on how much unwritten data the controller is willing to buffer and for how long, a burst of reads could force the drive to begin flushing outstanding writes before they've all been coalesced into optimally sized writes.

This mixed workload test is an extension of what Intel describes in their specifications for the Optane SSD DC P4800X. A total queue depth of 16 is achieved using four worker threads, each performing a mix of random reads and random writes. Instead of just testing a 70% read mixture, the full range from pure reads to pure writes is tested at 10% increments. These tests were conducted on the Optane Memory as a standalone SSD, not in any caching configuration. Client and consumer workloads do consist of a mix of reads and writes, but never at queue depths this high; this test is included primarily for comparison between the two Optane devices.
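For readers who want to approximate this workload themselves, the sweep can be scripted around fio. The Python sketch below is an illustration only: the 4kB block size, libaio engine, 60-second steps, and the /dev/nvme0n1 device name are assumptions, not the exact parameters behind these results.

# Hypothetical reproduction of the mixed random read/write sweep using fio.
# Assumptions (not from the article): 4kB blocks, libaio engine, 60s per step,
# and /dev/nvme0n1 as the target (this writes to the device; use a scratch drive).
import json
import subprocess

DEVICE = "/dev/nvme0n1"

def run_mix(read_pct):
    """Run one step of the sweep: 4 workers at QD4 each, for QD16 total."""
    cmd = [
        "fio", "--name=mixed", f"--filename={DEVICE}",
        "--ioengine=libaio", "--direct=1",
        "--rw=randrw", f"--rwmixread={read_pct}",
        "--bs=4k", "--iodepth=4", "--numjobs=4",
        "--time_based", "--runtime=60",
        "--group_reporting", "--output-format=json",
    ]
    result = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    job = result["jobs"][0]
    return job["read"]["iops"], job["write"]["iops"]

# Sweep from pure random reads to pure random writes in 10% increments.
for read_pct in range(100, -1, -10):
    r, w = run_mix(read_pct)
    print(f"{read_pct:3d}% reads: {r + w:,.0f} total IOPS")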

Mixed Random Read/Write Throughput (vertical axis: IOPS / MB/s)

At the beginning of the test, where the workload is purely random reads, the four drives almost form a geometric progression: the Optane Memory is a little under half as fast as the P4800X and a little under twice as fast as the Samsung 960 EVO, and the MX300 is about a third as fast as the 960 EVO. As the proportion of writes increases, the flash SSDs lose throughput quickly. The Optane Memory declines gradually across the entire test, ending up with a random write speed around one fourth of its random read speed. The P4800X has enough random write throughput to rebound during the final phases of the test, ending up with a random write throughput almost as high as its random read throughput.

Random Read Latency (Mean, Median, 99th Percentile, 99.999th Percentile)

The flash SSDs actually manage to deliver better median latency than the Optane Memory through a portion of the test, after they've shed most of their throughput. For the 99th and 99.999th percentile latencies, the flash SSDs perform much worse once writes are added to the mix, ending up almost 100 times slower than the Optane Memory.

Idle Power Consumption

There are two main ways that an NVMe SSD can save power when idle. The first is suspending the PCIe link through the Active State Power Management (ASPM) mechanism, analogous to SATA Link Power Management. Both define two power saving modes: an intermediate mode with strict wake-up latency requirements (e.g. 10µs for the SATA "Partial" state) and a deeper state with looser wake-up requirements (e.g. 10ms for the SATA "Slumber" state). SATA Link Power Management is supported by almost all SSDs and host systems, though it is commonly off by default on desktops. PCIe ASPM support, on the other hand, is a minefield: it is common to encounter devices that do not implement it or implement it incorrectly, especially among desktops. Forcing PCIe ASPM on for a system that defaults to disabling it may lead to the system locking up.
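On Linux it is easy to check whether ASPM is actually in effect before trusting it. The sketch below reads the kernel's ASPM policy from sysfs and dumps the link control status for one device; the PCI address is a placeholder, and root privileges may be needed for the full lspci capability dump.

# Minimal Linux sketch: report the kernel's ASPM policy and whether a given
# device's PCIe link actually has ASPM enabled. The sysfs path is standard on
# Linux; the 01:00.0 PCI address is a placeholder, not a specific drive.
import subprocess
from pathlib import Path

policy = Path("/sys/module/pcie_aspm/parameters/policy").read_text().strip()
print("Kernel ASPM policy:", policy)  # active choice is shown in [brackets]

# lspci -vv reports a "LnkCtl: ASPM ..." line per device (root is needed for
# the full capability dump on most systems).
out = subprocess.run(["lspci", "-s", "01:00.0", "-vv"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "LnkCtl:" in line or "ASPM" in line:
        print(line.strip())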

The NVMe standard also defines a drive power management mechanism that is separate from PCIe link power management. The SSD can define up to 32 different power states and inform the host of the time taken to enter and exit those states. Some of these power states can be operational states where the drive continues to perform I/O with a restricted power budget, while others are non-operational idle states. The host system can either directly set these power states, or it can declare rules for which power states the drive may autonomously transition to after being idle for different lengths of time. NVMe power management, including Autonomous Power State Transition (APST), fortunately does not depend on motherboard support the way PCIe ASPM does, so it should eventually reach the same widespread availability that SATA Link Power Management enjoys.
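The power states a drive advertises, and whether APST is enabled, can be inspected with nvme-cli on Linux. The following Python sketch assumes /dev/nvme0 as the device name; it lists the power state descriptors from Identify Controller and reads feature 0x0C, the Autonomous Power State Transition feature.

# Sketch of inspecting a drive's NVMe power states and APST setting with
# nvme-cli on Linux. /dev/nvme0 is an assumed device name; run as root.
import subprocess

def nvme(*args):
    return subprocess.run(["nvme", *args], capture_output=True,
                          text=True, check=True).stdout

# Identify Controller lists every power state the drive defines ("ps 0",
# "ps 1", ...) with its max power, entry/exit latency, and whether it is
# an operational state.
for line in nvme("id-ctrl", "/dev/nvme0").splitlines():
    if line.lstrip().startswith("ps "):
        print(line.strip())

# Feature 0x0C is Autonomous Power State Transition. Drives that do not
# support APST (like the Optane Memory) will return an error here.
print(nvme("get-feature", "/dev/nvme0", "-f", "0x0c", "-H"))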

We report two idle power values for each drive: an active idle measurement taken with none of the above power management states engaged, and an idle power measurement with either SATA LPM Slumber state or the lowest-power NVMe non-operational power state, if supported. These tests were conducted on the Optane Memory as a standalone SSD, not in any caching configuration.

Idle Power Consumption
Active Idle Power Consumption (No LPM)

With no support for NVMe idle power states, the Optane Memory draws its rated 1W at idle, while the SATA and flash-based NVMe drives drop to low power states with a tenth of the power draw or less. Even without using low power states, the Crucial MX300 uses a fraction of the Optane Memory's power, and the Samsung 960 EVO uses only 150mW more to keep twice as many PCIe lanes connected.

The Optane Memory is a tough sell for anyone concerned with power consumption. In a typical desktop it won't be enough to worry about, but Intel definitely needs to add proper power management to the next iteration of this product.


Comments

  • evilpaul666 - Thursday, April 27, 2017 - link

    Everyone presumes that technology will improve over time. Talking up 1000x improvements, making people wait for a year or more, and then releasing a stupid expensive small drive for the Enterprise segment, and a not particularly useful tiny drive for whoever is running a Core i3 7000 series or better CPU with a mechanical hard drive, for some reason, is slightly disappointing.

    We wanted better stuff now, after a year of waiting, not at some point in the future, which is where we've always been.
  • Lehti - Tuesday, April 25, 2017 - link

    Hmm... And how does this compare to regular SSD caching using Smart Response? So far I can't see why anyone would want an Optane cache as opposed to that or, even better, a boot SSD paired with a storage hard drive.
  • Calin - Tuesday, April 25, 2017 - link

    Did you bring the WD Caviar to steady state by filling it twice with random data in random files? Performance of magnetic media varies greatly based on drive fragmentation.
  • Billy Tallis - Wednesday, April 26, 2017 - link

    I didn't pre-condition any of the drives for SYSmark, just for the synthetic tests (which the hard drive wasn't included in). For the SYSmark test runs, the drives were all secure erased then imaged with Windows.
  • MrSpadge - Tuesday, April 25, 2017 - link

    "Queue Depth > 1

    When testing sequential writes at varying queue depths, the Intel SSD DC P3700's performance was highly erratic. We did not have sufficient time to determine what was going wrong, so its results have been excluded from the graphs and analysis below."

    Yes, the DC P3700 is definitely excluded from these graphs.. and the other ones ;)
  • Billy Tallis - Wednesday, April 26, 2017 - link

    Oops. I copied a little too much from the P4800X review...
  • MrSpadge - Tuesday, April 25, 2017 - link

    Billy, why is the 960 Evo performing so badly under Sysmark 2014, when it wins almost all synthetic benchmarks against the MX300? Sure, it's got fewer dies.. but that applies to the low level measurements as well.
  • Billy Tallis - Wednesday, April 26, 2017 - link

    I don't know for sure yet. I'll be re-doing the SYSmark tests with a fresh install of Windows 10 Creators Update, and I'll experiment with NVMe drivers and settings. My suspicion is that the 960 EVO was being held back by Microsoft's horrific NVMe driver default behavior, while the synthetic tests in this review were run on Linux.
  • MrSpadge - Wednesday, April 26, 2017 - link

    That makes sense, thanks for answering!
  • Valantar - Tuesday, April 25, 2017 - link

    Is there any reason why one couldn't stick this in any old NVMe-compatible motherboard regardless of platform and use a software caching system like PrimoCache on it? It identifies to the system as a standard NVMe drive, no? Or does it somehow have the system identify itself on POST and refuse to communicate if it provides the "wrong" identifier?
