Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is impossible. Each burst consists of a total of 32MB of 4kB random reads, from a 16GB span of the disk. The total data read is 1GB.
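
To make the workload concrete, here is a rough sketch of what this burst test looks like at the system call level: unqueued 4kB reads issued one at a time with O_DIRECT, with idle sleeps pacing each burst to a 20% duty cycle. This is a simplified illustration rather than our actual test harness, and the device path is a placeholder.

```python
# Rough approximation of the burst QD1 random read test described above.
# Assumes Linux, Python 3.7+, and a raw block device you can safely read
# from (the path below is a placeholder); run with root privileges.
import mmap, os, random, time

DEV        = "/dev/nvme0n1"        # placeholder device path
BLOCK      = 4 * 1024              # 4kB reads
SPAN       = 16 * 1024**3          # reads land within a 16GB span
BURST      = 32 * 1024**2          # 32MB of reads per burst
TOTAL      = 1 * 1024**3           # 1GB of data read in total
DUTY_CYCLE = 0.20                  # idle between bursts for a 20% duty cycle

fd  = os.open(DEV, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache
buf = mmap.mmap(-1, BLOCK)                     # page-aligned buffer for O_DIRECT

latencies = []
for _ in range(TOTAL // BURST):                # 32 bursts
    burst_start = time.perf_counter()
    for _ in range(BURST // BLOCK):            # 8192 reads per burst
        off = random.randrange(SPAN // BLOCK) * BLOCK  # 4kB-aligned offset
        t0 = time.perf_counter()
        os.preadv(fd, [buf], off)              # one synchronous read: queue depth 1
        latencies.append(time.perf_counter() - t0)
    busy = time.perf_counter() - burst_start
    time.sleep(busy * (1 / DUTY_CYCLE - 1))    # sleep 4x the busy time

os.close(fd)
print(f"QD1 random read: {len(latencies) / sum(latencies):.0f} IOPS, "
      f"mean latency {1e6 * sum(latencies) / len(latencies):.1f} us")
```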

Burst 4kB Random Read (Queue Depth 1)

The burst random read performance of the 8TB Samsung 870 QVO is even worse than that of the smaller 870 QVOs; even though these drives have the full amount of DRAM necessary to hold the logical-to-physical address mapping tables, other significant sources of overhead are clearly affecting the higher-capacity models.

The Sabrent Rocket Q's burst random read performance doesn't quite reach the opposite end of the spectrum, but it does offer decent random read latency: comparable to other drives using the Phison E12(S) controller, and not far behind the NVMe drives using Silicon Motion controllers.

Our sustained random read performance is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.
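
To be explicit about how the headline numbers below are derived, here is a small sketch of the score reduction using invented per-queue-depth results. It assumes efficiency is computed as MB/s per watt at each queue depth and then averaged over QD1, QD2 and QD4, which is one straightforward reading of the description above.

```python
# How the primary scores for the sustained tests are put together, using
# invented per-queue-depth numbers purely for illustration.
results = {            # queue depth -> (throughput in MB/s, average power in W)
    1:  (40.0, 2.1),
    2:  (75.0, 2.4),
    4:  (140.0, 2.9),
    8:  (230.0, 3.5),
    16: (310.0, 4.0),
    32: (330.0, 4.2),
}

primary_qds = (1, 2, 4)   # only the low queue depths count toward the headline scores
perf_score = sum(results[qd][0] for qd in primary_qds) / len(primary_qds)
eff_score  = sum(results[qd][0] / results[qd][1] for qd in primary_qds) / len(primary_qds)

print(f"Primary performance score: {perf_score:.1f} MB/s")
print(f"Primary efficiency score:  {eff_score:.1f} MB/s per W")
```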

Sustained 4kB Random Read

The QLC drives almost all fare poorly on the longer random read test. The Sabrent Rocket Q falls to second-slowest NVMe drive in this batch, and ends up a bit slower than Samsung's TLC SATA drives. The 8TB Samsung 870 QVO is no longer the slowest capacity; while it is again a bit slower than the 4TB model, the 1TB 870 QVO takes last place in this test.

Sustained 4kB Random Read (Power Efficiency)
(Power efficiency in MB/s per W; average power in W)

The power efficiency scores are mostly in line with the performance scores, with the slower drives tending to also be less efficient. The QLC drives follow this pattern quite well. The outliers are the particularly efficient Mushkin Helix DRAMless TLC drive, and the enterprise NVMe SSDs that show poor efficiency because they are underutilized by the low queue depths tested here.

The Sabrent Rocket Q shows good performance scaling as queue depths increase during the random read test. The Samsung 870 QVO seems to be approaching saturation past QD16, even though the SATA interface is capable of delivering higher performance.

Random Reads - All Drives
(Sabrent Rocket Q 8TB, Samsung 870 QVO 8TB)

Comparing the 8TB drives against everything else we've tested, neither is breaking new ground. Both drives have power consumption that's on the high side but not at all unprecedented, and random read performance that doesn't push the limits of their respective interfaces.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.
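
A write-side version of the earlier read sketch follows, adjusted to this test's 4MB bursts and 128MB total; the 20% duty cycle is assumed to carry over from the read test, and the device path is again a placeholder.

```python
# Write-side variant of the read sketch: 4MB bursts of 4kB random writes,
# 128MB in total, over a 16GB span, still at queue depth 1.
# WARNING: this overwrites data on DEV; only point it at a disposable device.
import mmap, os, random, time

DEV, BLOCK, SPAN = "/dev/nvme0n1", 4 * 1024, 16 * 1024**3   # placeholder device
BURST, TOTAL, DUTY_CYCLE = 4 * 1024**2, 128 * 1024**2, 0.20

fd  = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
buf = mmap.mmap(-1, BLOCK)
buf.write(os.urandom(BLOCK))                 # incompressible 4kB payload

for _ in range(TOTAL // BURST):              # 32 bursts
    burst_start = time.perf_counter()
    for _ in range(BURST // BLOCK):          # 1024 writes per burst
        off = random.randrange(SPAN // BLOCK) * BLOCK
        os.pwritev(fd, [buf], off)           # one write at a time, no queuing
    busy = time.perf_counter() - burst_start
    time.sleep(busy * (1 / DUTY_CYCLE - 1))  # pace to a 20% duty cycle

os.close(fd)
```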

Burst 4kB Random Write (Queue Depth 1)

The two 8TB drives have opposite results on the burst random write performance test. The 8TB Sabrent Rocket Q is at the top of the chart with excellent SLC cache write latency, while the 8TB Samsung 870 QVO is a bit slower than the smaller capacities and turns in the worst score in this bunch.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive and giving the drive up to 1 minute of idle time between queue depths to allow for write caches to be flushed and for the drive to cool down.

Sustained 4kB Random Write

On the longer random write test, the 8TB Rocket Q is still relying mostly on its SLC cache and continues to hang with the high-end NVMe drives. The 8TB 870 QVO is only slightly slower than the other SATA SSDs, and faster than some of the low-end DRAMless TLC NVMe drives.

Sustained 4kB Random Write (Power Efficiency)
(Power efficiency in MB/s per W; average power in W)

Despite their dramatically different random write performance, the two 8TB QLC drives end up with similar power efficiency that's fairly middle of the road: better than the enterprise drives and the slow DRAMless TLC drives, but clearly worse than the better TLC NVMe drives.

The random write performance of the Rocket Q scales a bit unevenly, but seems to saturate around QD8. Power consumption actually drops after QD4, possibly because the drive is busy enough at that point with random writes that it cuts back on background cleanup work. The Samsung 870 QVO reaches full random write performance at QD4 and steadily maintains that performance through the rest of the test.

Random Writes - All Drives
(Sabrent Rocket Q 8TB, Samsung 870 QVO 8TB)

Unlike on the random read test, the Samsung 870 QVO comes across as having reasonably low power consumption on the random write test, especially at higher queue depths. The Sabrent Rocket Q's power consumption is still clearly on the high side, especially the spike at QD4 where it seemed to be doing a lot of background work instead of just directing writes to the SLC cache.

Comments

  • heffeque - Friday, December 4, 2020 - link

    No worries on a NAS: BTRFS will take care of that in the background.
  • Billy Tallis - Friday, December 4, 2020 - link

    Not sure if that's a joke about BTRFS RAID5/6 ensuring you lose your data.

    A BTRFS scrub isn't automatic; you need a cron job or similar to automate periodic scrubbing. But assuming you do that and stay away from the more dangerous/less tested RAID modes, you shouldn't have to worry about silent data loss. I've been using BTRFS RAID1 with various SSDs as my primary NAS ever since I amassed enough 1 and 2TB models, and it's worked well so far. ZFS would also work reasonably well, but it is less convenient when you're using a pile of mismatched drives.

    Getting back to the question of data retention of QLC itself: the write endurance rating of a drive is supposed to be chosen so that at the end of the rated write endurance the NAND is still healthy enough to provide 1 year unpowered data retention. (For client/consumer drives; for enterprise drives the standard is just 3 months, so they can afford to wear out the NAND a bit further, and that's part of why enterprise drives have higher TBW ratings.)
  • heffeque - Wednesday, December 9, 2020 - link

    BTRFS background self-healing is automatic in Synology as of DSM 6.1 and above.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    Long-term cold storage of any flash memory is terrible. QLC won't be any better than TLC in this regard.
  • Oxford Guy - Sunday, December 6, 2020 - link

    How could it possibly be better (than 3D TLC)?

    It can only be worse unless the TLC is really shoddy quality. This is because it has 16 voltage states rather than 8.
  • TheinsanegamerN - Monday, December 7, 2020 - link

    Hence why I said it won't be any better, because it can't be. That leaves the door open for it to be worse.

    Reeding iz hard.
  • Oxford Guy - Monday, December 7, 2020 - link

    But your comment obviously wasn't clear enough, was it?

    QLC is worse than TLC. Next time write that since that's the clear truth, not that QLC and TLC are somehow equivalent.
  • joesiv - Friday, December 4, 2020 - link

    I love the idea of 8TB SSD drives, it's the perfect size for a local data drive, I could finally be rid of my spinning rust! Just need the price to drop a bit, maybe next year!

    Thank you for the review. Though I wish reviews of SSDs would be clearer to consumers about what endurance really means to the end user. "DWPD" and TB/D are mentioned, but no one seems to highlight the fact that it's not the end user's writes that matter in these specifications, it's writes to NAND, which can be totally different from user/OS writes. That depends on the firmware, and some firmwares do wonky things for data collection or speed, or even have bugs, which drastically drop the endurance of a drive.

    Of course I would love an exhaustive endurance test in the review, but at a bare minimum, if AnandTech could check the SMART data after the benchmark suite is done and verify two things, it would give you some useful information.

    Check:
    - NAND writes (an average block erase count is usually available)
    - OS writes (sometimes not easily available; but since you run a standardized bench suite, perhaps you already have an idea of how many GBs you typically run through your drives anyway)

    You might need to do a bit of math on the block erase count to get it back to GBs, and you might need to contact the manufacturer for SMART attribute documentation, but if they don't have good SMART attributes or documentation available, perhaps that's something to highlight in the review.

    But then you could weed out, and present to the consumer, the drives whose firmwares have outrageously inefficient NAND write patterns.

    My company has had several failures, and because of that we've had to test potential drives for our products in this way, and have had to outright skip drives whose specs were great but whose firmwares were doing very inefficient drive writes, limiting their endurance.

    Anyways, that's my feedback. Fingers crossed!

    Keep up the good work, and thanks for the quality content!
  • heffeque - Friday, December 4, 2020 - link

    Well... 2 TB per day every day seems like a lot of writes. Not sure it'll be a problem for normal use.
  • joesiv - Friday, December 4, 2020 - link

    Well, firmware bugs can cause writes to be magnified 10x or 100x beyond what is expected. I've seen it. So your 2TB would really be just 20GB... Of course we hope that firmwares don't have such bugs, but how would we know unless someone looked at the numbers?
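
For readers who want to run the kind of sanity check joesiv describes, the sketch below shows the arithmetic. Every value in it is an invented placeholder, and the conversion from an average block erase count to total NAND writes depends on vendor-specific SMART attributes and block geometry, so treat it as a template rather than a recipe.

```python
# Back-of-the-envelope write amplification check along the lines joesiv
# suggests. Every value below is an invented placeholder; real attribute
# names, units and block geometry vary by vendor (smartctl -a or
# nvme smart-log will show what your drive actually reports).

erase_block_mib  = 16         # physical erase block size (vendor-specific)
blocks_on_drive  = 524_288    # number of erase blocks (~8 TiB of raw NAND here)
avg_erase_count  = 50         # SMART "average block erase count"
host_writes_tib  = 160        # host/OS writes as reported by the drive

# Total NAND writes ~= erase block size x block count x average erase count.
nand_writes_tib = erase_block_mib * blocks_on_drive * avg_erase_count / 1024**2
waf = nand_writes_tib / host_writes_tib
print(f"NAND writes ~{nand_writes_tib:.0f} TiB, write amplification ~{waf:.1f}x")

# Write amplification eats into the rated endurance proportionally: with a
# 2 TB/day rating, a 100x WAF (joesiv's worst case) leaves only ~20 GB/day
# of real user writes.
rated_tb_per_day = 2.0
print(f"Usable host writes per day at {waf:.1f}x WAF: "
      f"{1000 * rated_tb_per_day / waf:.0f} GB")
```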
