Mixed Read/Write Performance

Workloads consisting of a mix of reads and writes can be particularly challenging for flash-based SSDs. When a write operation interrupts a string of reads, it blocks access to at least one flash chip for a period of time that is substantially longer than a read operation takes. This hurts the latency of any read operations that were waiting on that chip, and with enough write operations, throughput can be severely impacted. If the write command triggers an erase operation on one or more flash chips, the traffic jam is many times worse.

The occasional read interrupting a string of write commands doesn't necessarily cause much of a backlog, because writes are usually buffered by the controller anyway. But depending on how much unwritten data the controller is willing to buffer and for how long, a burst of reads could force the drive to begin flushing outstanding writes before they've all been coalesced into optimally sized writes.

Our first mixed workload test is an extension of the mixed-workload throughput test Intel describes in its specifications. A total of 16 threads are used, each performing a mix of random reads and random writes at a queue depth of 1. Instead of testing only the 70% read mixture, the full range from pure reads to pure writes is covered in 10% increments.
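
For readers who want to reproduce something similar, the sweep can be approximated with fio. The sketch below is a hypothetical reconstruction rather than our actual test script, and it assumes a 4kB transfer size, the libaio engine, and /dev/nvme0n1 as the drive under test.

    # Hypothetical sketch of the mixed random read/write sweep (assumptions noted above).
    import subprocess

    DEVICE = "/dev/nvme0n1"  # placeholder device path

    for read_pct in range(100, -1, -10):  # 100% reads down to 100% writes, in 10% steps
        subprocess.run([
            "fio",
            "--name=mixed-rw",
            "--filename=" + DEVICE,
            "--ioengine=libaio",
            "--direct=1",                    # bypass the page cache
            "--rw=randrw",                   # random mixed read/write
            "--rwmixread=" + str(read_pct),  # percentage of the mix that is reads
            "--bs=4k",                       # assumed transfer size
            "--numjobs=16",                  # 16 independent workers
            "--iodepth=1",                   # queue depth 1 per worker
            "--runtime=180", "--time_based",
            "--group_reporting",
        ], check=True)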

[Chart: Mixed Random Read/Write Throughput (vertical axis: IOPS or MB/s)]

The Intel Optane SSD DC P4800X is slightly faster than the Optane SSD 900p throughout this test, but both are far faster than the flash-based SSDs. Performance from the Optane SSDs isn't entirely flat across the test, but the minor decline in the middle is nothing to complain about. The Intel P3608 and Micron 9100 both show strong increases near the end of the test, as write caching and combining pay off once the workload is mostly writes.

[Chart: Random Read Latency (mean, median, 99th percentile, 99.999th percentile)]

The mean latency graphs are essentially the inverse of the throughput graphs above, but the latency percentile graphs reveal a bit more. The median latency of all of the flash SSDs drops significantly once the workload consists of more writes than reads, because the median operation is then a cacheable write instead of an uncacheable read. A graph of median write latency would likely show the flash SSDs to be competitive even during the read-heavy portion of the test.
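
To make that relationship concrete: with 16 threads each keeping one operation outstanding, roughly 16 operations are in flight at all times, so mean latency and aggregate throughput are tied together by Little's law. A back-of-the-envelope check (the throughput figure here is an arbitrary example, not a measurement):

    # Little's law: mean latency = outstanding I/Os / throughput.
    outstanding_ios = 16          # 16 threads at queue depth 1
    iops = 550_000                # example aggregate throughput, not a measured value
    mean_latency_us = outstanding_ios / iops * 1e6
    print(round(mean_latency_us, 1))  # ~29.1 microseconds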

The 99th percentile latency chart shows that the flash SSDs have much better QoS on pure read or pure write workloads than on mixed workloads, but they still cannot approach the stable low latency of the Optane SSDs.

Aerospike Certification Tool

Aerospike is a high-performance NoSQL database designed for use with solid state storage. The developers of Aerospike provide the Aerospike Certification Tool (ACT), a benchmark that emulates the typical storage workload generated by the Aerospike database. This workload consists of a mix of large-block 128kB reads and writes, and small 1.5kB reads. When ACT was first released in the early days of SATA SSDs, the baseline 1x workload was defined as 2000 reads per second and 1000 writes per second. A drive is considered to pass the test if it meets the following latency criteria:

  • fewer than 5% of transactions exceed 1ms
  • fewer than 1% of transactions exceed 8ms
  • fewer than 0.1% of transactions exceed 64ms

Drives can be scored based on the highest throughput they can sustain while satisfying the latency QoS requirements. Scores are normalized relative to the baseline 1x workload, so a score of 50 indicates 100,000 reads per second and 50,000 writes per second (50 times the baseline rates). We used the default settings for queue and thread counts and did not manually constrain the benchmark to a single NUMA node, so the test ran 64 threads sharing the 32 CPU cores split across the testbed's two sockets.
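
As a concrete illustration of the scoring described above, the short sketch below converts an ACT score into the implied read and write rates and applies the latency criteria from the list above; the percentile values in the example are made up for illustration, not measured results.

    # Illustrative ACT scoring math; the percentile inputs are invented examples.
    BASELINE_READS_PER_SEC = 2000    # the 1x workload
    BASELINE_WRITES_PER_SEC = 1000

    def act_rates(score):
        """Read/write rates implied by an ACT score (a multiple of the 1x baseline)."""
        return score * BASELINE_READS_PER_SEC, score * BASELINE_WRITES_PER_SEC

    def act_passes(pct_over_1ms, pct_over_8ms, pct_over_64ms):
        """Apply the latency QoS criteria listed above."""
        return pct_over_1ms < 5.0 and pct_over_8ms < 1.0 and pct_over_64ms < 0.1

    print(act_rates(50))                # (100000, 50000): 100k reads/s, 50k writes/s
    print(act_passes(0.82, 0.01, 0.0))  # True for these example percentages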

The usual runtime for ACT is 24 hours, which makes determining a drive's throughput limit a long process. In order to have results in time for this review, much shorter ACT runtimes were used. Fortunately, none of these SSDs take anywhere near 24 hours to reach steady state. Once the drives were in steady state, a series of 5-minute ACT runs was used to estimate each drive's throughput limit, and then ACT was run on each drive for two hours to confirm that performance remained stable under sustained load.

[Chart: Aerospike Certification Tool Throughput]

ACT Transaction Latency
Drive                                          % over 1ms   % over 2ms
Intel Optane SSD DC P4800X 750GB                  0.82         0.16
Intel Optane SSD 900p 280GB                       1.53         0.36
Micron 9100 MAX 2.4TB                             4.94         0.44
Intel SSD DC P3700 1.6TB                          4.64         2.22
Intel SSD DC P3608 (single controller) 800GB      4.51         2.29

When held to a specific QoS standard, the two Optane SSDs deliver more than twice the throughput of any of the flash-based SSDs. More significantly, even at their throughput limit they stay well below the QoS thresholds: at that rate the CPU is actually the bottleneck, so overall transaction times are far higher than the drives' actual I/O times. Somewhat higher throughput could be achieved by tweaking the thread and queue counts in the ACT configuration. Meanwhile, the flash SSDs are all close to the 5% limit for transactions over 1ms, but are so far under the limits at the longer latency thresholds that I've left those numbers out of the table above.

Comments

  • Lord of the Bored - Thursday, November 9, 2017 - link

    Me too. ddriver is most of why I read the comments.
  • mkaibear - Friday, November 10, 2017 - link

    He is always good for a giggle. I suppose he's busy directing hard drive manufacturers to make special hard drive platters for him solely out of hand-gathered sand from the Sahara. Or something.

    Still it's a shame to miss the laughs. It's always the second thing I do on SSD articles - first read the conclusion, then go and see what deedee has said. Ah well.
  • extide - Friday, November 10, 2017 - link

    Please.. don't jinx us!
  • rocky12345 - Thursday, November 9, 2017 - link

    Interesting drive, to say the least. Also a well-written review; thanks.
  • PeachNCream - Thursday, November 9, 2017 - link

    30 DWPD over the course of 5 years turns into a really large amount of data when you're talking about 750GB of capacity. Isn't the typical endurance rating more like 0.3 DWPD for enterprise solid state?

    So this thing about Optane on DIMMs is really interesting to me. Is the plan for it to replace RAM and storage all at once, or to act as a cache of some sort between faster DRAM and conventional solid state? Even with the endurance it's offering right now, it seems like it would need to be more durable still for it to replace RAM.

    Oh (sorry case of shinies) could this be like a DIMM behind HBM on the CPU package where HBM does more of the write heavy stuff and then Optane lives between HBM and SSD or HDD storage? Has Intel let much out of the bag about this sorta thing?
  • Billy Tallis - Thursday, November 9, 2017 - link

    Enterprise SSDs are usually sorted into two or three endurance tiers. Drives meant for mostly-read workloads typically have endurance ratings of 0.3 DWPD. High-endurance drives for write-intensive uses are usually 10, 25 or 30 DWPD, but the ratings of high-endurance drives have decayed somewhat in recent years as the market realized few applications really need that much endurance.
  • lazarpandar - Thursday, November 9, 2017 - link

    Can this be used to supplement addressable system memory? I remember Intel talking about that during the product launch.
  • Billy Tallis - Thursday, November 9, 2017 - link

    Yes. It makes for a great swap device, especially with a recent Linux kernel. Alternatively, Intel will sell it bundled with a hypervisor that presents the guest OS with a pool of memory equal in size to the system's DRAM plus about 85% of the Optane drive's capacity. The hypervisor manages memory placement, so from the guest OS's perspective the memory is a homogeneous pool, not x GB of DRAM and y GB of Optane.
  • tuxRoller - Friday, November 10, 2017 - link

    It's a bit odd Intel would go for the hypervisor solution since the kernel can handle tiered pmem and it's in a better position to know where to place data.
    I suppose it's useful because it's cross-platform?
  • xype - Friday, November 10, 2017 - link

    I’d guess a hypervisor solution would also allow any critical fixes to be propagated faster/easier than having to go through a 3rd party (kernel) provider?
