Mixed Read/Write Performance

Workloads consisting of a mix of reads and writes can be particularly challenging for flash-based SSDs. When a write operation interrupts a string of reads, it blocks access to at least one flash chip for a period substantially longer than a read operation takes. This hurts the latency of any read operations that were waiting on that chip, and with enough write operations, throughput can be severely impacted. If the write command triggers an erase operation on one or more flash chips, the traffic jam gets many times worse.

The occasional read interrupting a string of write commands doesn't necessarily cause much of a backlog, because writes are usually buffered by the controller anyway. But depending on how much unwritten data the controller is willing to buffer, and for how long, a burst of reads could force the drive to begin flushing outstanding writes before they have all been coalesced into optimally sized writes.

Our first mixed workload test is an extension of the mixed-workload throughput test Intel describes in its product specifications. A total of 16 threads are used, each performing a mix of random reads and random writes at a queue depth of 1. Instead of testing only a 70% read mixture, the full range from pure reads to pure writes is tested in 10% increments.
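To make the methodology concrete, here is a minimal sketch of how such a sweep could be scripted with fio. The device path is a hypothetical placeholder, and the block size, runtime, and other parameters are assumptions for illustration rather than our exact test configuration.

```python
# Sketch: sweep the read percentage from 100% to 0% in 10% steps using fio.
# Assumes fio is installed and /dev/nvme0n1 is a disposable test device
# (all data on it will be destroyed).
import json
import subprocess

DEVICE = "/dev/nvme0n1"  # hypothetical test device

for read_pct in range(100, -1, -10):
    result = subprocess.run(
        [
            "fio",
            "--name=mixed",
            f"--filename={DEVICE}",
            "--ioengine=libaio",
            "--direct=1",
            "--rw=randrw",
            f"--rwmixread={read_pct}",
            "--bs=4k",
            "--iodepth=1",      # queue depth 1 per thread
            "--numjobs=16",     # 16 independent threads
            "--group_reporting",
            "--time_based",
            "--runtime=60",
            "--output-format=json",
        ],
        capture_output=True, text=True, check=True,
    )
    job = json.loads(result.stdout)["jobs"][0]
    total_iops = job["read"]["iops"] + job["write"]["iops"]
    print(f"{read_pct}% reads: {total_iops:.0f} IOPS total")
```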

[Interactive graph: Mixed Random Read/Write Throughput (vertical axis selectable between IOPS and MB/s)]

The Intel Optane SSD DC P4800X is slightly faster than the Optane SSD 900p throughout this test, but both are far faster than the flash-based SSDs. Performance from the Optane SSDs isn't entirely flat across the test, but the minor decline in the middle is nothing to complain about. The Intel P3608 and Micron 9100 both show strong increases near the end of the test due to caching and combining writes.

[Interactive graphs: Random Read Latency (mean, median, 99th percentile, and 99.999th percentile)]

The mean latency graphs are simply the reciprocal of the throughput graphs above, but the latency percentile graphs reveal a bit more. The median latency of all of the flash SSDs drops significantly once the workload consists of more writes than reads, because the median operation is now a cacheable write instead of an uncacheable read. A graph of the median write latency would likely show writes to be competitive on the flash SSDs even during the read-heavy portion of the test.
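The reciprocal relationship follows from Little's law: with 16 threads at queue depth 1 there are at most 16 operations in flight, so mean latency is roughly the number of outstanding operations divided by throughput. A quick sanity check, using illustrative rather than measured numbers:

```python
# Little's law sanity check: outstanding I/Os = throughput * mean latency.
# The IOPS figure below is illustrative, not a measured value from this review.
threads = 16
queue_depth = 1
iops = 400_000  # hypothetical total throughput

mean_latency_s = (threads * queue_depth) / iops
print(f"mean latency ~= {mean_latency_s * 1e6:.1f} us")  # ~= 40.0 us
```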

The 99th percentile latency chart shows that the flash SSDs have much better QoS on pure read or write workloads than on mixed workloads, but they still cannot approach the stable low latency of the Optane SSDs.

Aerospike Certification Tool

Aerospike is a high-performance NoSQL database designed for use with solid state storage. The developers of Aerospike provide the Aerospike Certification Tool (ACT), a benchmark that emulates the typical storage workload generated by the Aerospike database. This workload consists of a mix of large-block 128kB reads and writes and small 1.5kB reads. When ACT was first released back in the early days of SATA SSDs, the baseline workload was defined as 2000 reads per second and 1000 writes per second. A drive is considered to pass the test if it meets the following latency criteria:

  • fewer than 5% of transactions exceed 1ms
  • fewer than 1% of transactions exceed 8ms
  • fewer than 0.1% of transactions exceed 64ms

Drives can be scored based on the highest throughput they can sustain while satisfying the latency QoS requirements. Scores are normalized relative to the baseline 1x workload, so a score of 50 indicates 100,000 reads per second and 50,000 writes per second. We used the default settings for queue and thread counts and did not manually constrain the benchmark to a single NUMA node, so this test produced 64 threads sharing the testbed's 32 CPU cores split across two sockets.
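To make the scoring concrete, here is a minimal sketch (with hypothetical helper names) that converts an ACT score into the implied request rates and applies the pass/fail latency criteria listed above:

```python
# ACT baseline (1x) workload: 2,000 reads/s and 1,000 writes/s.
BASE_READS_PER_SEC = 2000
BASE_WRITES_PER_SEC = 1000

def rates_for_score(score: float) -> tuple[float, float]:
    """Request rates implied by an ACT score (the workload multiplier)."""
    return score * BASE_READS_PER_SEC, score * BASE_WRITES_PER_SEC

def passes_act(pct_over_1ms: float, pct_over_8ms: float, pct_over_64ms: float) -> bool:
    """ACT pass/fail criteria (percentages of transactions exceeding each threshold)."""
    return pct_over_1ms < 5.0 and pct_over_8ms < 1.0 and pct_over_64ms < 0.1

reads, writes = rates_for_score(50)
print(reads, writes)               # 100000 50000
print(passes_act(0.82, 0.0, 0.0))  # True
```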

The usual runtime for ACT is 24 hours, which makes determining a drive's throughput limit a long process. In order to have results in time for this review, much shorter ACT runtimes were used. Fortunately, none of these SSDs take anywhere near 24 hours to reach steady state. Once the drives were in steady state, a series of 5-minute ACT runs was used to estimate each drive's throughput limit, and then ACT was run on each drive for two hours to ensure performance remained stable under sustained load.
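Conceptually, finding the limit is a simple step-up search. The sketch below assumes a hypothetical run_act() helper that performs one short ACT run at a given workload multiplier and reports whether the latency criteria were met:

```python
# Sketch: find the highest ACT multiplier that still meets the latency QoS.
# run_act(multiplier, duration_sec) is a hypothetical helper that launches a
# short ACT run at the given workload multiplier and returns True if all
# latency criteria pass.
def find_throughput_limit(run_act, start: int = 1, step: int = 1) -> int:
    multiplier = start
    while run_act(multiplier, duration_sec=300):  # 5-minute probe runs
        multiplier += step
    return multiplier - step  # last multiplier that still passed

# Example usage: limit = find_throughput_limit(run_act)
```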

[Graph: Aerospike Certification Tool Throughput]

ACT Transaction Latency
Drive                                          % over 1ms   % over 2ms
Intel Optane SSD DC P4800X 750GB                  0.82          0.16
Intel Optane SSD 900p 280GB                       1.53          0.36
Micron 9100 MAX 2.4TB                             4.94          0.44
Intel SSD DC P3700 1.6TB                          4.64          2.22
Intel SSD DC P3608 (single controller) 800GB      4.51          2.29

When held to a specific QoS standard, the two Optane SSDs deliver more than twice the throughput of any of the flash-based SSDs. More significantly, even at their throughput limit they are well below the QoS limits: the CPU is actually the bottleneck at that rate, leading to overall transaction times that are far higher than the actual drive I/O time. Somewhat higher throughput could be achieved by tweaking the thread and queue counts in the ACT configuration. Meanwhile, the flash SSDs are all close to the 5% limit for 1ms transactions, but are so far under the limits for longer latencies that I've left those numbers out of the table above.

Comments

  • Elstar - Thursday, November 9, 2017 - link

    I thought the whole point of 3D XPoint memory would be that it is DIMM friendly too.
    1) When will we see this in DIMM form?
    2) Would the DIMM version also need/have the reserved/spare capacity?
    3) Why is this spare capacity even needed? 1/6th seems like a potentially high internal failure rate (or other fundamental problem.)
  • PeachNCream - Thursday, November 9, 2017 - link

    It seems like current 3D XPoint doesn't have enough endurance yet to sit in a DIMM slot unless it's gonna be just a storage drive in DIMM form factor. That and because we're only just now seeing early enterprise and retail products, I bet that we're gonna need another generation or two before we get DIMM versions. :(
  • Billy Tallis - Thursday, November 9, 2017 - link

    Intel hasn't said much about 3D XPoint DIMMs this year, other than to promise we'll hear more next year.

It's not clear how much fault tolerance Intel is trying to build into the Optane SSDs with the extra capacity. A bit of it is necessary to support sector formats with protection information (e.g. 16 extra bytes per 512B sector, or 128 extra bytes per 4kB sector). Beyond that, there needs to be room for the drive's internal data structures, which aren't as complicated as a flash translation layer but still impose some space overhead. The rest is probably taken by a fairly simple erasure coding/ECC scheme, because it's almost impossible to do LDPC at the speed necessary for this drive. (That's also why DIMMs use simple ECC instead of more space-efficient codes.)
  • woggs - Thursday, November 9, 2017 - link

Almost all Intel SSDs have a parity die, so one full die likely provides internal RAID protection of data. The rest is for ECC, internal system information, media management and mapping out defects... Impossible to know which of these is driving the actual spare capacity implemented. I count 14 packages, so 1/14th (7%) is already the internal parity. 16% is big relative to NAND consumer SSDs but comparable to enterprise. Doesn't seem particularly out of line or indicative of something wrong.
  • CheapSushi - Thursday, November 9, 2017 - link

    Micron is probably working on the DIMM version.
  • woggs - Tuesday, November 14, 2017 - link

    Intel is working on DIMMs... "Now, of course, SSDs are important, but in the long run, Intel also wants to have Optane 3D XPoint memory slot into the same sockets as DDR4 main memory, and Krzanich brought a mechanical model of an Optane DIMM to show off." https://www.nextplatform.com/2015/10/28/intel-show...
  • MajGenRelativity - Thursday, November 9, 2017 - link

    I enjoyed the review. Keep up the good work!
  • melgross - Thursday, November 9, 2017 - link

    I’m curious as to how this will perform when PCI 4 is out next year. That is, one with a PCI 4 interface. How throughput limiting is PCI 3 for this right now?
  • MajGenRelativity - Thursday, November 9, 2017 - link

    It shouldn't be that limiting, as PCIe 3.0 x4 allows for a higher throughput than 2.4 GB/s. There could be some latency improvements (probably small), but I don't think throughput is the issue
  • woggs - Thursday, November 9, 2017 - link

If the question is "could a drive be made to saturate gen 4?" then yes, of course, if Intel chooses to do so. Will require a whole drive. Latency is a more interesting question because that is what 3dxp is really providing. QD1 latency is <10us (impressive!). I don't expect that to improve since it should be limited by the 3dxp itself. The PCIe and driver overhead is probably 5us of that. Maybe gen 4 will improve that part.
