Burst IO Performance

Our burst IO tests operate at queue depth 1 and perform several short data transfers interspersed with idle time. The random read and write tests consist of 32 bursts of up to 64MB each. The sequential read and write tests use eight bursts of up to 128MB each. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
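
For readers who want a more concrete picture of the workload, below is a minimal Python sketch of the QD1 random read burst pattern. It is illustrative only: the file path, burst pacing, and the use of ordinary buffered reads are placeholder simplifications, and the real tests are run with a dedicated IO benchmarking tool rather than a script like this.

```
import os
import random
import time

# Illustrative sketch of a QD1 burst random-read workload (not the actual
# test harness). Assumes a large pre-created test file at TEST_FILE; a real
# benchmark would use O_DIRECT with aligned buffers to bypass the page cache.
TEST_FILE = "testfile.bin"    # placeholder path
BURSTS = 32                   # the random read/write tests use 32 bursts
BURST_BYTES = 64 * 1024**2    # up to 64MB per burst
BLOCK = 4096                  # 4kB random reads
IDLE_SECONDS = 1.0            # idle gap between bursts (placeholder value)

fd = os.open(TEST_FILE, os.O_RDONLY)
size = os.fstat(fd).st_size

for burst in range(BURSTS):
    start = time.perf_counter()
    done = 0
    while done < BURST_BYTES:
        # queue depth 1: each read completes before the next one is issued
        offset = random.randrange(size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)
        done += BLOCK
    mbps = done / (time.perf_counter() - start) / 1e6
    print(f"burst {burst + 1}: {mbps:.0f} MB/s")
    time.sleep(IDLE_SECONDS)

os.close(fd)
```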

QD1 Burst IO Performance
[Interactive charts: Random Read, Random Write, Sequential Read, and Sequential Write]

The random read performance from the SLC caches on the Rocket Q4 and MP600 CORE is plenty fast, albeit still not able to match the Intel SSD 670p. Outside the cache, the two Phison E16 QLC drives are definitely slower than mainstream TLC drives, but their performance is fine by QLC standards.

For short bursts of writes that don't overflow the SLC cache, these drives are very fast and their caches are still useful even when the drive as a whole is 80% full.

As is often the case for Phison-based drives, the low queue depth sequential read performance is still not great, but sequential writes are very fast and do reach PCIe Gen4 speeds.

Sustained IO Performance

Our sustained IO tests exercise a range of queue depths and transfer more data than the burst IO tests, but still have limits to keep the duration somewhat realistic. The primary scores we report are focused on the low queue depths that make up the bulk of consumer storage workloads. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
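
As a rough illustration of what sweeping queue depths involves, the sketch below (again illustrative, not our actual harness) approximates queue depth by running that many synchronous reader threads in parallel and caps the amount of IO at each step; the file path and limits are placeholders.

```
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

# Illustrative queue-depth sweep for 4kB random reads (not the actual test
# harness). Queue depth is approximated with qd synchronous reader threads,
# so roughly qd IOs are in flight at once.
TEST_FILE = "testfile.bin"    # placeholder path to a large pre-created file
BLOCK = 4096
IOS_PER_STEP = 20_000         # cap per queue depth to bound the run time

def reader(fd, size, count):
    for _ in range(count):
        offset = random.randrange(size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)

fd = os.open(TEST_FILE, os.O_RDONLY)
size = os.fstat(fd).st_size

for qd in (1, 2, 4, 8, 16, 32, 64, 128):
    per_thread = IOS_PER_STEP // qd
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=qd) as pool:
        for _ in range(qd):
            pool.submit(reader, fd, size, per_thread)
    elapsed = time.perf_counter() - start
    iops = per_thread * qd / elapsed
    print(f"QD{qd}: {iops:.0f} IOPS, {iops * BLOCK / 1e6:.1f} MB/s")

os.close(fd)
```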

Sustained IO Performance
[Interactive charts: Random Read, Random Write, Sequential Read, and Sequential Write, each with throughput and power efficiency results]

The random read performance from the Rocket Q4 and MP600 CORE is greatly improved over the earlier Phison E12 QLC SSDs, at least when the test is hitting a narrow slice of the drive that should be entirely within the SLC cache. Random and sequential write results are both decent given that these are in many ways still low-end drives. These two drives are large enough (and have enough SLC cache) to handle much larger bursts of writes than 1TB models, even when the drives are 80% full.

[Interactive charts: throughput vs. queue depth for Random Read, Random Write, Sequential Read, and Sequential Write]

The E16-based QLC drives are able to continue scaling up random read throughput (from the SLC cache, at least) long past the points where other QLC drives hit a performance wall. The 4TB Rocket Q4 scales better than the 2TB MP600 CORE, and by QD128 it is handling random reads at 2.5GB/s and still has headroom for more performance. For random writes, the Rocket Q4 and MP600 CORE saturate around QD4 or a bit later, which is fairly typical behavior.

When testing sequential reads, we get the characteristic behavior from Phison controllers: poor throughput until around QD16 or so, at which point there's enough data in flight at any given time to allow the SSD controller to stay properly busy. The sequential write results are messy since both drives run out of SLC cache frequently during this test, and that makes it hard to identify what the performance limits would be in more favorable conditions. But it appears that sequential write speed is saturating around QD2, and stays flat with increasing queue depth except when the caching troubles come up.

Random Read Latency

This test illustrates how drives with higher throughput don't always offer better IO latency and Quality of Service (QoS), and that latency often gets much worse when a drive is pushed to its limits. This test is more intense than real-world consumer workloads and the results can be a bit noisy, but large differences that show up clearly on a log scale plot are meaningful. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
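
To make the link between the throughput and latency numbers concrete, here is a minimal sketch (illustrative only, with a placeholder file path) that times each individual QD1 random read and reports the mean and 99th percentile, the kind of statistics a throughput-versus-latency plot is built from.

```
import os
import random
import statistics
import time

# Illustrative QD1 random-read latency measurement (not the actual test
# harness): record per-IO latency, then report mean and 99th percentile.
TEST_FILE = "testfile.bin"    # placeholder path to a large pre-created file
BLOCK = 4096
SAMPLES = 50_000

fd = os.open(TEST_FILE, os.O_RDONLY)
size = os.fstat(fd).st_size

latencies_us = []
for _ in range(SAMPLES):
    offset = random.randrange(size // BLOCK) * BLOCK
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, offset)
    latencies_us.append((time.perf_counter() - t0) * 1e6)

latencies_us.sort()
mean = statistics.fmean(latencies_us)
p99 = latencies_us[int(0.99 * len(latencies_us))]
print(f"mean: {mean:.1f} us, 99th percentile: {p99:.1f} us")

os.close(fd)
```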

The Sabrent Rocket Q4 and Corsair MP600 CORE show better latency on this test than the other low-end NVMe drives, but all the mainstream TLC drives with DRAM have clear advantages. Even the Samsung 870 EVO has lower latency until it gets close to saturating the SATA link. Between the 2TB MP600 CORE and the 4TB Rocket Q4, the larger drive unsurprisingly sustains higher random read throughput and its latency climbs more gradually, though the Rocket Q4 does have more transient latency spikes along the way.

Comments

  • ZolaIII - Friday, April 9, 2021

    Actually 5.6 years, but compared to the same MP600 in TLC that's 8x as much, or 44.8 years, for just a little more money. But seriously: buying a 1TB MP600, which will be enough capacity and which will last 22.4 years by the same math (vs. 2.8 for the Core), makes a hell of a difference.
  • WaltC - Saturday, April 10, 2021

    In far less than 22 years your entire system will have been replaced...;) i.e., for the use-life of the drive you will never wear it out. The importance some people place on "endurance" is really weird.

    I have a 960 EVO NVMe with an endurance estimate of 75TB: the drive is three years old this month, served as my boot drive for two of those three years, and I've written 19.6TB as of today. Rounding off, I have 55TB of write endurance remaining. That makes for an average of 6.5TB written per year--but the drive is no longer my boot/Win10-build install drive, so 5TB per year as strictly a data drive is probably an overestimate; just for fun, let's call it 5TB of writes per year. That means I have *at least* 11 years of write endurance remaining for this drive--which would mean the drive would have lasted at least 14 years in daily use before wearing out. Anyone think that 11 years from now I'll still be using that drive on a daily basis? I don't...;)

    The fact is that people worry needlessly about write endurance unless they are using these drives in some kind of mega heavy-use commercial setting. Write endurance estimates of 20-30 years are absurd, and when choosing a drive for your personal system such estimates should be ignored as meaningless--the drive will be obsolete long before it wears out. So buy the performance you want at the price you want to pay and don't worry about write endurance, as even 75TB is plenty for personal systems.
  • GeoffreyA - Sunday, April 11, 2021

    It would be interesting to put today's drives to an endurance experiment and see if their actual and advertised ratings square.
  • ZolaIII - Sunday, April 11, 2021

    I have 2TB of writes per month, using the PC for productivity, gaming, and transcoding, and that's still not too much. If I used it professionally for video, that number would be much higher (high-bandwidth mastering codecs). Hell, transcoding a single Blu-ray movie quickly (with the GPU, for the sake of making it HLG10+) will eat up to 150GB of writes, and that's not a rocket-science task. By the way, it's not like the PCIe interface is going anywhere, and you can mount an old NVMe drive in a new machine.
  • Oxford Guy - Sunday, April 11, 2021

    One can't choose performance with QLC. It's inherently slower.

    It's also inherently reduced in longevity.

    Remember, it has twice as many voltage states (causing a much bigger issue with drift) for just a 30% density increase.

    That's diminishing returns.
  • haukionkannel - Friday, April 9, 2021

    Well, soon QLC may be seen only in high-end top models, while the midrange and low end go to PLC or whatever...
    For SSD manufacturers it makes a lot of sense, because they save money that way. Profit!
  • nandnandnand - Saturday, April 10, 2021

    5/6/8 bits per cell might be ok if NAND manufacturers found some magic sauce to increase endurance. There was research to that effect going on a decade ago: https://ieeexplore.ieee.org/abstract/document/6479...

    TLC is not going away just yet, and they can just increase drive capacities to make it unlikely an average user will hit the limits.
  • Samus - Sunday, April 11, 2021

    When you consider how well perfected TLC is now that it has gone full 3D, and that the SLC cache plus overprovisioning eliminate most of the performance/endurance issues, it makes you wonder if MLC will ever come back. It's almost completely disappeared, even in enterprise.
  • Oxford Guy - Sunday, April 11, 2021

    3D manufacturing killed MLC. It made TLC viable.

    There is no such magic bullet for QLC.
  • FunBunny2 - Sunday, April 11, 2021

    "There is no such magic bullet for QLC."

    well... the same bullet, ver. 2, might work. that would require two steps:
    - moving 'back' to an even larger node, assuming there's sufficient machinery at such a node available at scale
    - getting two or three times as many layers as TLC currently uses

    I've no idea whether either is feasible, but willing to bet both gonads that both, at least, are required.
