Whole-Drive Fill

This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. This can allow us to estimate the size of any SLC write cache, and get a sense for how much performance remains on the rare occasions where real-world usage keeps writing data after filling the cache.
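For readers who want to reproduce the general shape of this test, a minimal sketch is below. It assumes a Linux machine with a disposable target drive (the device path is a placeholder), and it writes single-threaded rather than at QD32, so absolute numbers will not match fio-based results; only the per-1GB measurement logic is illustrated.

```python
import os
import mmap
import time

# Rough sketch of the fill test: sequential 128kB writes until the drive
# is full, reporting throughput for each 1GB segment written.
DEVICE = "/dev/nvme0n1"      # hypothetical target; all data on it is destroyed
CHUNK = 128 * 1024           # 128kB write size
SEGMENT = 1024 ** 3          # report every 1GB written

buf = mmap.mmap(-1, CHUNK)   # page-aligned buffer, required for O_DIRECT
buf.write(b"\xa5" * CHUNK)

fd = os.open(DEVICE, os.O_WRONLY | os.O_DIRECT)
written, seg_start = 0, time.monotonic()
try:
    while True:
        os.write(fd, buf)
        written += CHUNK
        if written % SEGMENT == 0:
            now = time.monotonic()
            print(f"{written // SEGMENT:5d} GB: {SEGMENT / (now - seg_start) / 1e6:.0f} MB/s")
            seg_start = now
except OSError:              # ENOSPC once the drive is completely full
    pass
finally:
    os.close(fd)
```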

The Sabrent Rocket Q takes the strategy of providing the largest practical SLC cache size, which in this case is a whopping 2TB. The Samsung 870 QVO takes the opposite (and less common for QLC drives) approach of limiting the SLC cache to just 78GB, the same as on the 2TB and 4TB models.

[Charts: Sustained 128kB Sequential Write and Power Efficiency — average throughput for the last 16GB and overall average throughput]

Both drives maintain fairly steady write performance after their caches run out, but the Sabrent Rocket Q's post-cache write speed is twice as high. The post-cache write speed of the Rocket Q is still a bit slower than a TLC SATA drive, and is just a fraction of what's typical for TLC NVMe SSDs.

On paper, Samsung's 92L QLC is capable of a program throughput of 18MB/s per die, and the 8TB 870 QVO has 64 of those dies, for an aggregate theoretical write throughput of over 1GB/s. SLC caching can account for some of the performance loss, but the lack of performance scaling beyond the 2TB model is a controller limitation rather than a NAND limitation. The Rocket Q is affected by a similar limitation, but also benefits from QLC NAND with a considerably higher program throughput of 30MB/s per die.
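As a quick sanity check on those figures, the arithmetic works out as follows (per-die program throughputs are the ones quoted above; the Rocket Q die count is an assumption):

```python
# Back-of-the-envelope check of the theoretical figures quoted above.
# Per-die program throughputs are from the text; the Rocket Q die count
# is an assumption (an 8TB drive built from the same 1Tbit-class dies).
qvo_per_die_mbps = 18        # Samsung 92L QLC
rocketq_per_die_mbps = 30    # QLC used in the Rocket Q
dies_per_8tb_drive = 64

print(f"870 QVO 8TB ceiling:  {qvo_per_die_mbps * dies_per_8tb_drive} MB/s")      # 1152 MB/s
print(f"Rocket Q 8TB ceiling: {rocketq_per_die_mbps * dies_per_8tb_drive} MB/s")  # 1920 MB/s
# Both drives' post-cache write speeds land far below these ceilings,
# which is why the bottleneck is attributed to the controllers, not the NAND.
```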

Working Set Size

Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.
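For a sense of scale, a flat logical-to-physical table with the common layout of one 4-byte entry per 4kB logical page works out to roughly 1GB of table per 1TB of flash; a rough calculation under that assumption:

```python
# Rough size of a full flat logical-to-physical mapping table, assuming
# the common layout of one 4-byte entry per 4kB logical page. This is
# why drives with "full" DRAM carry roughly 1GB of DRAM per 1TB of flash,
# while DRAMless and HMB designs cache only a small slice of the table.
def mapping_table_bytes(capacity_bytes, page_size=4096, entry_size=4):
    return (capacity_bytes // page_size) * entry_size

for tb in (1, 2, 8):
    size_gb = mapping_table_bytes(tb * 10**12) / 10**9
    print(f"{tb}TB drive -> ~{size_gb:.1f}GB mapping table")
# prints roughly 1GB of table per 1TB of capacity
```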

When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.
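A simple way to model that penalty (the flash read latency used here is a hypothetical figure, not a measured one):

```python
# Simple model of the cache-miss penalty: a mapping-cache miss costs one
# extra flash read (to fetch the mapping) before the data read itself.
def effective_read_latency(flash_read_us, miss_rate):
    return flash_read_us * (1 + miss_rate)

FLASH_READ_US = 60   # hypothetical QLC random read latency, in microseconds
for miss_rate in (0.0, 0.5, 1.0):
    latency = effective_read_latency(FLASH_READ_US, miss_rate)
    print(f"miss rate {miss_rate:.0%}: ~{latency:.0f} us per read")
# at 100% misses the latency roughly doubles, matching the worst case above
```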

We can see the effects of the size of any mapping buffer by performing random reads from different sized portions of the drive. When performing random reads from a small slice of the drive, we expect the mappings to all fit in the cache, and when performing random reads from the entire drive, we expect mostly cache misses.

When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.
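A stripped-down sketch of this test is below; it assumes a Linux machine, uses a placeholder device path, and issues reads one at a time rather than at the queue depth used in the real benchmark, so only the shape of the resulting curve is meaningful:

```python
import os
import mmap
import random
import time

# Sketch of the working-set-size test: 4kB random reads confined to the
# first N bytes of the device, timed to estimate IOPS.
DEVICE = "/dev/nvme0n1"      # hypothetical target drive
BLOCK = 4096

def random_read_iops(fd, working_set_bytes, seconds=10.0):
    buf = mmap.mmap(-1, BLOCK)                      # aligned for O_DIRECT
    blocks = working_set_bytes // BLOCK
    done, deadline = 0, time.monotonic() + seconds
    while time.monotonic() < deadline:
        os.preadv(fd, [buf], random.randrange(blocks) * BLOCK)
        done += 1
    return done / seconds

fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
for gib in (1, 4, 16, 64, 256):                     # extend up to the full drive
    print(f"{gib:4d} GiB working set: {random_read_iops(fd, gib * 1024**3):.0f} IOPS")
os.close(fd)
```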

The Sabrent Rocket Q's random read performance is unusually unsteady at small working set sizes, but levels out at a bit over 8k IOPS for working set sizes of at least 16GB. Reads scattered across the entire drive do show a substantial drop in performance, due to the limited size of the DRAM buffer on this drive.

The Samsung drive has the full 8GB of DRAM and can keep the entire drive's address mapping table in RAM, so its random read performance does not vary with working set size. However, it's clearly slower than the smaller capacities of the 870 QVO; there's some extra overhead in connecting this much flash to a 4-channel controller.

Comments

  • heffeque - Friday, December 4, 2020

    No worries on a NAS: BTRFS will take care of that in the background.
  • Billy Tallis - Friday, December 4, 2020

    Not sure if that's a joke about BTRFS RAID5/6 ensuring you lose your data.

    A BTRFS scrub isn't automatic; you need a cron job or similar to automate periodic scrubbing. But assuming you do that and stay away from the more dangerous/less tested RAID modes, you shouldn't have to worry about silent data loss. I've been using BTRFS RAID1 with various SSDs as my primary NAS ever since I amassed enough 1 and 2TB models, and it's worked well so far. ZFS would also work reasonably well, but it is less convenient when you're using a pile of mismatched drives.

    Getting back to the question of data retention of QLC itself: the write endurance rating of a drive is supposed to be chosen so that at the end of the rated write endurance the NAND is still healthy enough to provide 1 year unpowered data retention. (For client/consumer drives; for enterprise drives the standard is just 3 months, so they can afford to wear out the NAND a bit further, and that's part of why enterprise drives have higher TBW ratings.)
  • heffeque - Wednesday, December 9, 2020

    BTRFS background self-healing is automatic in Synology as of DSM 6.1 and above.
  • TheinsanegamerN - Saturday, December 5, 2020

    Long-term cold storage of any flash memory is terrible. QLC won't be any better than TLC in this regard.
  • Oxford Guy - Sunday, December 6, 2020

    How could it possibly be better (than 3D TLC)?

    It can only be worse unless the TLC is really shoddy quality. This is because it has 16 voltage states rather than 8.
  • TheinsanegamerN - Monday, December 7, 2020

    Hence why I said it won't be any better, because it can't be. That leaves the door open for it to be worse.

    Reeding iz hard.
  • Oxford Guy - Monday, December 7, 2020

    But your comment obviously wasn't clear enough, was it?

    QLC is worse than TLC. Next time write that since that's the clear truth, not that QLC and TLC are somehow equivalent.
  • joesiv - Friday, December 4, 2020

    I love the idea of 8TB SSD drives, it's the perfect size for a local data drive, I could finally be rid of my spinning rust! Just need the price to drop a bit, maybe next year!

    Thank you for the review. Though I wish reviews of SSDs would be clearer to consumers about what endurance really means to the end user. "DWPD" and TB/D are mentioned, but no one seems to highlight the fact that it's not the end user's writes that matter in these specifications; it's "writes to NAND", which can be totally different from user/OS writes. It depends on the firmware, and some firmwares do wonky things for data collection or speed, or even have bugs, which drastically drop the endurance of a drive.

    Of course I would love an exhaustive endurance test in the review, but at the bare minimum, if AnandTech could check the SMART data after the benchmark is done and verify two things, it would give you some useful information.

    Check:
    - NAND writes (an average block erase count is usually available)
    - OS writes (sometimes not easily available), but since you run a standardized bench suite, perhaps you have an idea of how many GBs you typically run through your drives anyway.

    You might need to do a bit of math on the block erase count to get it back to GBs, and you might need to contact the manufacturer for SMART attribute documentation; if they don't have good SMART attributes or documentation available, perhaps that's something to highlight in the review.
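    A minimal sketch of that kind of check (the attribute names and every number in it are made up for illustration; real SMART layouts and units vary by vendor):

    ```python
    # Sketch of the post-benchmark check described above: compare host
    # writes against NAND writes to estimate the write amplification
    # factor (WAF). All attribute names and numbers are illustrative.
    def write_amplification(host_lbas_written, lba_size,
                            avg_block_erases, block_bytes, num_blocks):
        host_bytes = host_lbas_written * lba_size
        nand_bytes = avg_block_erases * block_bytes * num_blocks
        return nand_bytes / host_bytes

    waf = write_amplification(
        host_lbas_written=4_000_000_000,   # e.g. SMART 241 "Total LBAs Written"
        lba_size=512,                      # ~2TB of host writes
        avg_block_erases=1,                # vendor-specific average erase count
        block_bytes=24 * 1024 ** 2,        # assumed erase-block size
        num_blocks=333_000,                # assumed block count for an 8TB drive
    )
    print(f"write amplification factor: ~{waf:.1f}")
    # a healthy drive/workload sits in the low single digits; 10x-100x
    # would point at the kind of firmware misbehavior described above
    ```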

    But then you could weed out, and flag for consumers, drives whose firmware has outrageously inefficient NAND write patterns.

    My company has had several failures, and because of that we've had to test potential drives for our products this way, and have had to outright skip drives whose specs were great but whose firmware was doing very inefficient drive writes, limiting their endurance.

    anyways, feedback, and fingers crossed!

    Keep up the good work, and thanks for the quality content!
  • heffeque - Friday, December 4, 2020

    Well... 2 TB per day every day seems like a lot of writes. Not sure it'll be a problem for normal use.
  • joesiv - Friday, December 4, 2020

    Well, firmware bugs can cause writes to be magnified 10x or 100x beyond what's expected. I've seen it. So with 100x amplification, your 2TB per day would really only cover 20GB of user writes... Of course we hope that firmwares don't have such bugs, but how would we know unless someone looked at the numbers?
