Advanced Synthetic Tests

Our benchmark suite includes a variety of tests that are less about replicating real-world IO patterns and more about exposing a drive's inner workings through narrowly focused measurements. Many of these tests will show exaggerated differences between drives, and for the most part that should not be taken as a sign that one drive will be drastically faster for real-world usage. These tests are about satisfying curiosity, and are not good measures of overall drive performance.

Sequential Drive Fill

The main purpose of the sequential drive fill tests is to estimate the size of a drive's SLC write cache. This test is also one of the most likely to trigger thermal throttling, because it is the longest-running sustained IO test in our suite. This test performs two passes of writing to the drive. The first is conducted after erasing the drive and giving it a few minutes to cool down and finish any background work. This first pass of sequential writes shows us the best-case SLC cache capacity, since any variable-sized cache will be at its largest when starting on an empty drive. The second pass is conducted after giving the drive some idle time and performing some read performance tests. By the time the second write pass begins, the drive should have finished any background work and we should observe the worst-case SLC cache capacity for drives that have a variable-sized cache.

As the second sequential write pass continues, the SLC cache will eventually be filled and even drives that don't use SLC caching will usually show some performance drop. This is pushing the drive well beyond the limits of any real-world consumer workload, so aside from any SLC cache at the beginning, performance during the second pass is irrelevant. However, since this second pass is overwriting data that was also written sequentially, the drive's garbage collection during this process is quite straightforward. Overwriting the drive with random writes instead of sequential writes would be more likely to fill the drive's spare area and induce more severe performance drops.
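As a concrete illustration of how the SLC cache boundary shows up in this test, here is a minimal sketch that estimates cache size from a per-second write throughput trace. The trace format, the choice of baseline, and the 50% drop threshold are assumptions made for this example, not the actual tooling or criteria behind our published numbers.

def estimate_slc_cache_gb(throughput_mbps):
    """Estimate SLC cache size (GB) from per-second sequential write throughput samples (MB/s)."""
    if not throughput_mbps:
        return 0.0
    # Treat the fastest of the first few samples as the in-cache (SLC) write speed.
    cache_speed = max(throughput_mbps[:10])
    written_mb = 0.0
    for sample in throughput_mbps:
        # Once throughput falls well below the initial speed, assume the cache
        # is exhausted and writes are landing directly in native TLC/QLC.
        if sample < 0.5 * cache_speed:
            break
        written_mb += sample  # one second of writing at this rate
    return written_mb / 1024

# Example: a drive that writes at 2 GB/s for 100 seconds and then drops to
# 500 MB/s would be credited with roughly 195 GB of SLC cache.
trace = [2000.0] * 100 + [500.0] * 50
print(f"Estimated SLC cache: {estimate_slc_cache_gb(trace):.1f} GB")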

[Charts: Sustained 128kB Sequential Write (Pass 1 and Pass 2) showing Average Throughput for last 16 GB, Overall Average Throughput, and Power Efficiency]

After both passes of sequential writes are complete, the last 20% of the drive is TRIMed and the drive is given plenty of idle time. This prepares the drive for the battery of tests that are conducted on an 80%-full drive—full enough that SLC cache size is significantly reduced, but still leaving some empty space to avoid testing the absolute worst-case scenario of performance on a completely full drive.

Working Set Size

This test performs random 4kB reads at queue depth 1 while varying the working set size: the size of the dataset that the random reads are coming from. When the working set size is small, the access pattern has a high degree of spatial locality, and DRAMless drives should have no trouble caching the limited amount of NAND mapping information needed to handle the reads. As the working set size increases, drives with little or no RAM are likely to show reduced performance from an increasing number of FTL cache misses. Often there is a sharp drop in performance that suggests the size of any on-controller SRAM or HMB cache in use. Drives with some DRAM but not the full 1GB per 1TB ratio may be able to handle very large working set sizes with good performance, but typically still show reduced performance when random reads span the entire drive.
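For readers curious what this sweep looks like in practice, the following is a rough sketch of a QD1 working-set-size test, not the tooling used for the results in this article. The device path, read count, and working set sizes are placeholders, and a real test would use direct IO and far more reads so the OS page cache doesn't mask the drive's behavior.

import os, random, time

BLOCK = 4096  # 4kB reads, matching the test described above

def qd1_random_read_latency(path, ws_bytes, num_reads=10000):
    """Average latency (microseconds) of random 4kB QD1 reads confined to ws_bytes."""
    fd = os.open(path, os.O_RDONLY)
    try:
        blocks = ws_bytes // BLOCK
        start = time.perf_counter()
        for _ in range(num_reads):
            # Pick a random 4kB-aligned offset inside the working set.
            os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
        return (time.perf_counter() - start) / num_reads * 1e6
    finally:
        os.close(fd)

# Small working sets fit in a DRAMless drive's mapping cache; large ones force
# FTL lookups out to flash, which shows up as higher latency and lower IOPS.
for ws_gib in (1, 4, 16, 64):
    lat = qd1_random_read_latency("/dev/nvme0n1", ws_gib << 30)
    print(f"{ws_gib:>3} GiB working set: {lat:.1f} us average read latency")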

This test also provides an opportunity to verify that the TRIM command is working properly: when attempting to read data from a portion of the drive that is empty (or has been trimmed), the drive should return a bunch of zeros as soon as it has looked up the relevant LBAs in the FTL and determined that there isn't actually any real flash memory currently allocated to those addresses. So in addition to running the working set size test on a full drive, we also run it when the drive is 32GB full and 80% full, expecting to see substantially increased performance when many or most of the reads should be handled without actually touching the NAND flash memory.  These extra test runs aren't included in the graphs we publish, but we're keeping an eye out for drives that don't behave as expected.
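A quick way to spot-check that behavior is simply to read back a trimmed region and confirm it comes back as zeros. This is purely illustrative: the device path and offsets below are assumptions, and drives are only required to return deterministic zeros after TRIM when they advertise that capability.

import os

def region_reads_as_zero(fd, offset, length, chunk=4096):
    """Check whether every 4kB block in [offset, offset+length) reads back as zeros."""
    zero = bytes(chunk)
    for pos in range(offset, offset + length, chunk):
        if os.pread(fd, chunk, pos) != zero:
            return False
    return True

# Hypothetical usage on a 1TB drive whose last 20% has just been trimmed:
# fd = os.open("/dev/nvme0n1", os.O_RDONLY)
# print(region_reads_as_zero(fd, offset=800 * 2**30, length=1 * 2**30))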

Performance vs Block Size

Industry standard practice is to measure random IO performance using 4kB operations and sequential IO performance using 128kB operations. But SSDs permit IOs as small as 512 bytes, and real-world workloads include a wide variety of actual IO block sizes. Our trace-based tests subject drives to IOs of various sizes, but are ill-suited for analyzing how specific block sizes perform.

These tests perform 1GB of IO at each block size, at a queue depth of 1 and with the usual idle time after each step. Like our other synthetic tests, they're performed both with the drive 32GB full and 80% full, to capture any differences due to things like SLC caching. Regular readers may recognize these tests as based on ones we use as part of our enterprise SSD test suite. The principle is the same, but the configuration here has been adjusted to match the rest of our synthetic tests, and we're now testing up to block sizes of 2MB. As with some of the other tests, the fact that we're testing under Linux means that IOs larger than 128kB get split up by the OS and issued to the drive as a batch. For example, IO with a 1MB block size ends up looking to the drive like eight operations of 128kB issued at the same time.
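The splitting behavior is easy to picture with a small sketch. The 128kB limit below is an assumption standing in for whatever maximum transfer size the kernel reports for the device (visible as max_sectors_kb in sysfs); it is not a universal constant.

def split_io(offset, length, max_io=128 * 1024):
    """Split one logical IO into the smaller commands the drive actually receives."""
    commands = []
    while length > 0:
        chunk = min(length, max_io)
        commands.append((offset, chunk))
        offset += chunk
        length -= chunk
    return commands

# A single 1MB read submitted by the benchmark turns into eight 128kB commands
# that arrive at the drive essentially back to back.
print(len(split_io(0, 1024 * 1024)))  # 8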

[Charts: Performance vs Block Size for Random Read, Random Write, Sequential Read, and Sequential Write]

There are several interesting phenomena to keep an eye out for. With block sizes smaller than 4kB, we generally see roughly the same IOPS as with a 4kB block size, which means throughput drops in proportion to the block size. This is a consequence of the fact that virtually all flash-based SSDs manage the NAND flash memory in 4kB chunks, even when configured to expose a 512-byte LBA size. Some drives exhibit pathologically low performance with sub-4kB block sizes, especially for writes, where a read-modify-write cycle may be necessary for the drive to preserve the data in the rest of the 4kB block.
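Some rough arithmetic shows why flat IOPS below 4kB translates into much lower throughput. The IOPS figure here is an arbitrary assumption for illustration, not a measured result:

iops = 15000  # assumed QD1 random read IOPS, identical at 512B and 4kB
for block in (512, 1024, 2048, 4096):
    print(f"{block:>4} B blocks: {iops * block / 1e6:.1f} MB/s")
# 512B ends up at roughly 1/8 the throughput of 4kB despite identical IOPS.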

Sequential IO with small to medium block sizes can also reveal some surprises, such as drives that seem to assume any 4kB access will be a random access and choose not to read and cache the rest of the (typically ~16kB) NAND page. Quite a few drives also show little improvement in sequential throughput with the medium block sizes, but show significant throughput scaling once the block size is well past 128kB. This is part of why we changed our burst sequential IO tests to use 1MB block sizes instead of 128kB.

Comments

  • bobbaniak - Tuesday, February 2, 2021

    It's a bit strange that the fastest drive on the market (WD Black SN850) has not been tested :/
  • Oxford Guy - Tuesday, February 2, 2021

    I'd like an article that has all the Inland drives. All of them. Micro Center has an entire case devoted to them and they've been available on Amazon for quite some time.

    Why is it that, when I go to look for a review, I find just one TechPowerUp review of just one drive in just one size?

    It's really a silly situation, particularly considering that, unless I've missed something, the brand has never offered a firmware update + secure erase tool for its drives.
  • Billy Tallis - Tuesday, February 2, 2021

    It has, and the results are in Bench. I'll be writing up a full review for the SN850 soon, but for this article I wanted to use drives that had also been through the old test suite so I could know what to expect when debugging the new test suite.
  • PKShadow - Tuesday, February 2, 2021

    Great article! I would love to see a few more older drives added for reference (maybe a 970 Pro, WD Black, 970 Evo Plus), plus several of the other PCIe 4 drives now on the market - Sabrent Rocket/Plus/Q4, Corsair MP600/Pro/Core, Seagate Firecuda and Gigabyte Aorus. Thanks!
  • Billy Tallis - Tuesday, February 2, 2021

    The back catalog of drives will all be tested eventually. I'm currently prioritizing drives based on what comparison data I'll need for the next several reviews.

    Seagate FireCuda 510 and 520 are already in Bench with partial results. Sabrent Rocket Q4 is currently on the testbed, and the 8TB Rocket Q will get its turn at some point (it takes a long time to test large QLC drives). The Corsair MP600 CORE is in line, and the MP600 PRO will be when it arrives. I try not to spend all my time re-testing the same Phison hardware from different brands, so you won't be able to get results for all of the drives from all of your favorite brands. That makes for painfully boring reviews, and most brands aren't interested in sampling drives just to contribute to Bench without a full review.
  • PKShadow - Wednesday, February 3, 2021

    Awesome - thanks for all your hard work! Definitely helps!
  • jabber - Wednesday, February 3, 2021

    Be nice to see a roundup of all those ultra cheap SSD brands you see on Amazon all the time. I've bought one or two for test/pre builds and such and to be honest a few years later they are all still trucking.
  • wr3zzz - Wednesday, February 3, 2021

    PCIe4 SSD is nearly 2x the price of PCIe3 where I am. I am willing to pay for performance if it's worth it, but I really can't tell from this article.

    Also, I read that the upcoming DirectStorage will require PCIe4, but I have yet to find anything that explains why. If that were the case, then it's really no debate on what to get for a new build.
  • Billy Tallis - Thursday, February 4, 2021

    I really doubt DirectStorage will require PCIe 4 SSDs. There will probably be at least a subset of DirectStorage functionality that doesn't even require NVMe.
  • racerx_is_alive - Wednesday, February 24, 2021

    While it likely won't require PCIe 4 SSDs I'm assuming DirectStorage will finally be the thing that makes PCIe 4 SSDs worth the extra cost. I keep going back and forth on buying a 1TB Samsung 980 Pro or if I should pocket the difference and get a WD Black 750. Obviously the 980 is a better drive (just looking at the bench!) but I am hoping that the additional queues in the PCIe 4 SSDs will really show their value later with DirectStorage.
