Mixed Random Performance

Real-world storage workloads usually aren't pure reads or writes but a mix of both. It is impractical to test and graph the full range of possible mixed I/O workloads: varying the proportion of reads vs. writes, sequential vs. random access, and block sizes leads to far too many configurations. Instead, we focus on just a few scenarios that vendors most commonly refer to, when they provide a mixed I/O performance specification at all. We tested a range of 4kB random read/write mixes at queue depths of 32 and 128. This gives us a good picture of the maximum throughput these drives can sustain for mixed random I/O, but in many cases the queue depth will be far higher than necessary, so we can't draw meaningful conclusions about latency from this test. As with our tests of pure random reads or writes, we use 32 (or 128) threads, each issuing one read or write request at a time. This spreads the work over many CPU cores, and for NVMe drives it also spreads the I/O across the drive's multiple queues.
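A workload like the one described above is typically generated with a tool such as fio. The sketch below builds a plausible fio command line for a 4kB mixed random test at an effective queue depth of 32 (32 jobs, each with iodepth=1); the specific flag values and device path are our assumptions, not the exact configuration used for this article.

```python
# Sketch: assemble a fio command line for a 4kB mixed random read/write
# test. Effective queue depth = num_jobs x iodepth (32 x 1 here).
# The device path is a placeholder.

def fio_mixed_cmd(read_pct, num_jobs=32, dev="/dev/nvme0n1"):
    """Return a fio argument list for a 4kB random read/write mix."""
    return [
        "fio",
        "--name=mixed-random",
        "--filename=%s" % dev,
        "--ioengine=libaio",         # asynchronous I/O engine
        "--direct=1",                # bypass the page cache
        "--rw=randrw",               # mixed random reads and writes
        "--rwmixread=%d" % read_pct, # percentage of I/O that is reads
        "--bs=4k",
        "--iodepth=1",               # each job issues one request at a time
        "--numjobs=%d" % num_jobs,   # spread work across many threads/cores
        "--runtime=60", "--time_based",
        "--group_reporting",
    ]

# The common 70% read / 30% write case:
print(" ".join(fio_mixed_cmd(70)))
```

Raising `num_jobs` to 128 (still with `iodepth=1`) gives the higher queue depth tested here.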

The full range of read/write mixes is graphed below, but we'll primarily focus on the 70% read, 30% write case that is a fairly common stand-in for moderately read-heavy mixed workloads.

4kB Mixed Random Read/Write
(charts: Queue Depth 32, Queue Depth 128)

At the lower queue depth of 32, the PBlaze5 drives have a modest performance advantage over the other flash-based SSDs, and the latest PBlaze5 C916 is the fastest. At the higher queue depth, the PBlaze5 SSDs pull well ahead of the other flash-based drives, but the C916 no longer has a clear lead over the older models. The Intel P4510's performance increases slightly with the larger queue depth, but the Samsung drives are already saturated at QD32.

4kB Mixed Random Read/Write
(charts: Power Efficiency in MB/s/W and Average Power in W, at QD32 and QD128)

As usual, the latest PBlaze5 uses less power than its predecessors even before the 10W limit is applied, but on this test that doesn't translate to a clear win in overall efficiency. The Intel Optane SSD is the only drive that really stands out with great power efficiency on this test; by comparison, the TLC drives all score fairly close to each other, especially at the lower queue depth.

(charts: performance across the full read/write mix range, QD32 and QD128)

The 10W limit has a significant impact on the PBlaze5 C916 through almost all portions of the mixed I/O tests. With or without the power limit, the C916 performs lower than expected at the pure-read end of the test, but follows a more normal performance curve through the rest of the I/O mixes. Performance declines relatively gently as more writes are added to the mix, especially in the read-heavy half of the test. The older PBlaze5 drives, with their more extreme overprovisioning, hold up a bit better than the C916 on the write-heavy half of the test.

Aerospike Certification Tool

Aerospike is a high-performance NoSQL database designed for use with solid state storage. The developers of Aerospike provide the Aerospike Certification Tool (ACT), a benchmark that emulates the typical storage workload generated by the Aerospike database. This workload consists of a mix of large-block 128kB reads and writes, and small 1.5kB reads. When the ACT was initially released back in the early days of SATA SSDs, the baseline workload was defined to consist of 2000 reads per second and 1000 writes per second. A drive is considered to pass the test if it meets the following latency criteria:

  • fewer than 5% of transactions exceed 1ms
  • fewer than 1% of transactions exceed 8ms
  • fewer than 0.1% of transactions exceed 64ms
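Those pass/fail criteria are straightforward to express in code. A minimal sketch, where each argument is the measured percentage of transactions exceeding the corresponding latency threshold (the function name and example numbers are our own):

```python
def act_passes(pct_over_1ms, pct_over_8ms, pct_over_64ms):
    """Return True if a run meets the ACT latency criteria listed above.

    Each argument is the percentage of transactions whose latency
    exceeded the corresponding threshold.
    """
    return (pct_over_1ms < 5.0
            and pct_over_8ms < 1.0
            and pct_over_64ms < 0.1)

# Example: 3.2% over 1ms, 0.4% over 8ms, 0.02% over 64ms -> passes
print(act_passes(3.2, 0.4, 0.02))   # True
```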

Drives can be scored based on the highest throughput they can sustain while satisfying the latency QoS requirements. Scores are normalized relative to the baseline 1x workload, so a score of 50 indicates 100,000 reads per second and 50,000 writes per second. Since this test uses fixed IO rates, the queue depths experienced by each drive will depend on their latency, and can fluctuate during the test run if the drive slows down temporarily for a garbage collection cycle. The test will give up early if it detects the queue depths growing excessively, or if the large block IO threads can't keep up with the random reads.
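The score-to-throughput relationship is simple arithmetic: an Nx score means N times the 1x baseline of 2000 reads per second and 1000 writes per second. A quick helper to illustrate (the constant and function names are our own):

```python
BASELINE_READS_PER_SEC = 2000    # the original ACT "1x" workload
BASELINE_WRITES_PER_SEC = 1000

def act_rates(score):
    """Translate an ACT score (the Nx multiplier) into absolute I/O rates."""
    return (score * BASELINE_READS_PER_SEC,
            score * BASELINE_WRITES_PER_SEC)

# The example from the text: a score of 50 means
# 100,000 reads/s and 50,000 writes/s.
print(act_rates(50))   # (100000, 50000)
```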

We used the default settings for queue and thread counts and did not manually constrain the benchmark to a single NUMA node, so this test produced a total of 64 threads scheduled across all 72 virtual (36 physical) cores.

The usual runtime for ACT is 24 hours, which makes determining a drive's throughput limit a long process. For fast NVMe SSDs, this is far longer than necessary for drives to reach steady state. In order to find the maximum rate at which a drive can pass the test, we start at an unsustainably high rate (at least 150x) and incrementally reduce the rate until the test can run for a full hour, and then decrease the rate further if necessary to get the drive under the latency limits.
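The search procedure amounts to stepping the workload multiplier down until a full run passes. A sketch, with `run_act_for_an_hour` standing in for an actual one-hour ACT run; the function names, starting point, and step size are our illustration, not the article's exact methodology:

```python
def find_max_passing_rate(run_act_for_an_hour, start=150, step=5):
    """Step the ACT workload multiplier down from an unsustainably high
    starting point until a full run passes the latency limits.

    run_act_for_an_hour(rate) should return True if the drive completed
    a one-hour run at that rate while meeting the latency criteria.
    """
    rate = start
    while rate > 0:
        if run_act_for_an_hour(rate):
            return rate
        rate -= step
    return 0

# Example with a stub: pretend the drive can sustain up to 95x.
print(find_max_passing_rate(lambda rate: rate <= 95))   # 95
```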

Aerospike Certification Tool Score

The performance of the PBlaze5 C916 on the Aerospike test is a bit lower than the older C900 delivered, but is still well above what the lower-endurance SSDs can sustain. Even with a 10W limit, the C916 is still able to sustain higher throughput than the Intel P4510.

Aerospike ACT: Power Efficiency
(charts: Power Efficiency, Average Power in W)

The power consumption of the C916 is lower than the C900's, but the efficiency score isn't improved because the performance drop roughly matched the power savings. The C916 is still more efficient than the competing drives on this test when its power consumption is unconstrained, but with the 10W limit its efficiency advantage is mostly eliminated.

Comments

  • MrRuckus - Wednesday, March 13, 2019 - link

Because the PCIe lane count is dictated by the processor, and Intel has been notoriously light on the number of PCIe lanes for their mainstream products. So is AMD for that matter (Ryzen). Threadripper though has a large number of PCIe lanes, along with EPYC. Xeon is also more than standard desktop procs. From reading around it looks like cost is the main reason for the limited PCIe lanes.
  • DanNeely - Thursday, March 14, 2019 - link

And the reason for limited PCIe lanes is that the number of them is controlled by socket size, and socket size is constrained by cost. (And once you get up to truly enormous ones like LGA3647 or SP3, by the fact that they take up so much physical space that smaller form factors like ITX become nearly impossible and highly wasteful, because you're unable to use most of the CPU's I/O.)
  • mikmod - Tuesday, April 30, 2019 - link

It would be great to be able to buy such a drive for a high-end workstation at home, even if they're only for enterprise. Such write endurance and power loss protection caps... Is there any pricing revealed anywhere?
