Mixed IO Performance

For details on our mixed IO tests, please see the overview of our 2021 Consumer SSD Benchmark Suite.

Charts: Mixed Random IO and Mixed Sequential IO (Throughput, Power Efficiency)

The Sabrent Rocket Q4 and Corsair MP600 CORE deliver excellent performance on the mixed sequential IO test, leading to above-average power efficiency as well. Their performance on the mixed random IO test is not great, and is actually slower overall than what we saw with Phison E12 QLC drives like the original Rocket Q and the MP400.

Charts: Mixed Random IO and Mixed Sequential IO (phase-by-phase results)

The earlier E12+QLC drives outperform these new E16+QLC drives across almost all phases of the mixed random IO test, despite using the same Micron 96L QLC NAND. On the other hand, the newer QLC drives turn in surprisingly fast and steady results throughout the mixed sequential IO test, though the 2TB MP600 CORE does get off to a bit of a slow start.


Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Sabrent Rocket Q4 4TB - NVMe Power and Thermal Management Features
Controller: Phison E16
Firmware: RKT40Q.2 (EGFM52.3)

NVMe Version | Feature | Status
1.0 | Number of operational (active) power states | 3
1.1 | Number of non-operational (idle) power states | 2
    | Autonomous Power State Transition (APST) | Supported
1.2 | Warning Temperature | 75 °C
    | Critical Temperature | 80 °C
1.3 | Host Controlled Thermal Management | Supported
    | Non-Operational Power State Permissive Mode | Supported

Our samples of the Sabrent Rocket Q4 and Corsair MP600 CORE use the same firmware from Phison (though Sabrent has re-branded the version numbering). As a result, they support the same full range of power management features. The 4TB Rocket Q4 reports higher maximum power draws for its active power states than the 2TB MP600 CORE, but both drives report the same idle behaviors.

The advertised maximum of 10.58 W for the 4TB Rocket Q4 is alarming and definitely supports Sabrent's suggestion that the drive not be used without a heatsink. However, during our testing the drive never went much above 7W for sustained power draw, which is more in line with the maximum power claimed by the 2TB MP600 CORE (which also tended to stay well below its supposed maximum). In practice, these drives can get by just fine without a big heatsink as long as they have some decent airflow, because real-world workloads will almost never push these drives to their maximum power levels for long.

Sabrent Rocket Q4 4TB - NVMe Power States
Controller: Phison E16
Firmware: RKT40Q.2 (EGFM52.3)

Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency
PS 0 | 10.58 W | Active | - | -
PS 1 | 7.14 W | Active | - | -
PS 2 | 5.43 W | Active | - | -
PS 3 | 49 mW | Idle | 2 ms | 2 ms
PS 4 | 1.8 mW | Idle | 25 ms | 25 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
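
As a rough illustration of that decision, the sketch below (Python, with the power state data transcribed from the table above) picks the deepest idle state whose combined entry and exit latency fits within a host-defined limit. The latency budget and selection logic are simplified assumptions for illustration, not any particular driver's implementation.

# Power state data transcribed from the table above, as reported by the
# 4TB Rocket Q4. The selection logic is a simplified illustration only.
POWER_STATES = [
    # (state, max_power_W, operational, entry_latency_ms, exit_latency_ms)
    ("PS 0", 10.58,   True,  0,  0),
    ("PS 1",  7.14,   True,  0,  0),
    ("PS 2",  5.43,   True,  0,  0),
    ("PS 3",  0.049,  False, 2,  2),
    ("PS 4",  0.0018, False, 25, 25),
]

def deepest_idle_state(latency_budget_ms):
    """Pick the lowest-power non-operational state whose entry + exit
    latency fits within the host's latency budget."""
    candidates = [ps for ps in POWER_STATES
                  if not ps[2] and ps[3] + ps[4] <= latency_budget_ms]
    return min(candidates, key=lambda ps: ps[1]) if candidates else None

# A latency-sensitive policy stops at PS 3; a more tolerant one reaches PS 4.
print(deepest_idle_state(5)[0])    # PS 3
print(deepest_idle_state(100)[0])  # PS 4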

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks, and depending on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).
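
To make the tradeoff concrete, here is a minimal sketch of how a host might build an APST-style table mapping idle time to progressively deeper states. It assumes the idle-state latencies from the table above and an arbitrary rule that the drive should dwell 50x the transition cost before dropping into a state; real drivers use their own heuristics.

# Hypothetical sketch of building an APST-style table: each allowed idle
# state is paired with the idle time after which the drive should
# autonomously drop into it. The 50x dwell multiplier is an assumption,
# not the value used by any particular NVMe driver.
IDLE_STATES = [
    # (state, entry_latency_ms, exit_latency_ms)
    ("PS 3", 2, 2),
    ("PS 4", 25, 25),
]

def build_apst_table(max_exit_latency_ms, dwell_multiplier=50):
    table = []
    for state, enlat, exlat in IDLE_STATES:
        if exlat <= max_exit_latency_ms:          # respect the host's wake-up limit
            dwell_ms = (enlat + exlat) * dwell_multiplier
            table.append((dwell_ms, state))
    return table

# A desktop policy capped at 10 ms of wake-up latency only uses PS 3;
# a laptop policy tolerating 100 ms also schedules PS 4 after a longer idle.
print(build_apst_table(10))   # [(200, 'PS 3')]
print(build_apst_table(100))  # [(200, 'PS 3'), (2500, 'PS 4')]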

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but rarely achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.

Charts: Idle Power Consumption (No PM, Desktop, Laptop)

Both the Sabrent Rocket Q4 and Corsair MP600 CORE show quite high active idle power draw, a consequence of their use of a PCIe Gen4 controller made on 28nm rather than something newer like 12nm. However, there's no problem with the low-power idle states except for Phison's usual sluggish wake-up from the deepest sleep. This seems to be worse for higher capacity drives, with the 4TB Rocket Q4 taking about an eighth of a second to wake up.

Chart: Idle Wake-Up Latency
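
For context, measuring wake-up latency boils down to letting the drive sit idle long enough to reach its deepest power state and then timing a single small read. The sketch below only illustrates that idea; the device path and idle duration are placeholders, and it is not the harness used for the results in this review.

import os
import time

DEVICE = "/dev/nvme0n1"   # placeholder path; opening it directly requires root
IDLE_SECONDS = 10         # assumed long enough for the drive to reach deep idle

def wake_up_latency_ms():
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        time.sleep(IDLE_SECONDS)              # leave the drive untouched
        start = time.perf_counter()
        os.pread(fd, 4096, 0)                 # one small read forces a wake-up
        return (time.perf_counter() - start) * 1000.0
    finally:
        os.close(fd)

# A real harness would bypass the page cache (O_DIRECT with an aligned buffer)
# and repeat the measurement many times; this is only an illustration.
if __name__ == "__main__":
    print(f"wake-up latency: {wake_up_latency_ms():.2f} ms")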

Comments

  • ZolaIII - Friday, April 9, 2021 - link

    Actually 5.6 years, but the TLC MP600 gets 8x that much, or 44.8 years, for just a little more money. But seriously, buying a 1 TB MP600, which will be enough capacity and which will last 22.4 years by the same reasoning (vs 2.8 for the Core), makes a hell of a difference.
  • WaltC - Saturday, April 10, 2021 - link

    In far less than 22 years your entire system will have been replaced...;) IE, for the use-life of the drive you will never wear it out. The importance some people place on "endurance" is really weird. I have a 960 EVO NVMe with endurance estimates of 75TB: the drive is three years old this month and served as my boot drive for two of those three years, and I've used 19.6TB of write as of today. Rounding off, I have 55TB of write endurance remaining. That makes for an average of 6.5 TBs written per year--but the drive is no longer my boot/Win10-build install drive, so an average of 5TBs per year as strictly a data drive is probably overestimating, but just for fun, let's call it 5 TBs write per year. That means I have *at least* 11 years of write endurance remaining for this drive--which would mean the drive would have lasted at least 14 years in daily use before wearing out. Anyone think that 11 years from now I'll still be using that drive on a daily basis? I don't...;) The fact is that people worry needlessly about write endurance unless they are using these drives in some kind of mega heavy-use commercial setting. Write endurance estimates of 20-30 years are absurd and when choosing a drive for your personal system such estimates should be ignored as they have no meaning--they will be obsolete long before they wear out. So, buy the drive performance at the price you want to pay and don't worry about write endurance as even 75TB is plenty for personal systems.
  • GeoffreyA - Sunday, April 11, 2021 - link

    It would be interesting to put today's drives to an endurance experiment and see if their actual and advertised ratings square.
  • ZolaIII - Sunday, April 11, 2021 - link

    I have 2 TB of writes per month, using the PC for productivity, gaming and transcoding, and that's still not too much. If I used it professionally for video that number would be much higher (high-bandwidth mastering codecs). Hell, transcoding a single Blu-ray movie quickly (with the GPU, for the sake of making it HLG10+) will eat up to 150GB of writes, and that's not a rocket science task to perform. By the way, it's not like the PCIe interface will go anywhere, and you can mount an old NVMe drive in a new machine.
  • Oxford Guy - Sunday, April 11, 2021 - link

    One can't choose performance with QLC. It's inherently slower.

    It's also inherently reduced in longevity.

    Remember, it has twice as many voltage states (causing a much bigger issue with drift) for just a 30% density increase.

    That's diminished returns.
  • haukionkannel - Friday, April 9, 2021 - link

    Well, soon QLC may be seen only in high-end top models, when the middle range and low end go to PLC or whatever...
    For SSD manufacturers it makes a lot of sense because they save money that way. Profit!
  • nandnandnand - Saturday, April 10, 2021 - link

    5/6/8 bits per cell might be ok if NAND manufacturers found some magic sauce to increase endurance. There was research to that effect going on a decade ago: https://ieeexplore.ieee.org/abstract/document/6479...

    TLC is not going away just yet, and they can just increase drive capacities to make it unlikely an average user will hit the limits.
  • Samus - Sunday, April 11, 2021 - link

    When you consider how well perfected TLC is now that it has gone full 3D and the SLC cache + overprovisioning eliminate most of the performance/endurance issues, it makes you wonder if MLC will ever come back. It's almost completely disappeared even in enterprise.
  • Oxford Guy - Sunday, April 11, 2021 - link

    3D manufacturing killed MLC. It made TLC viable.

    There is no such magic bullet for QLC.
  • FunBunny2 - Sunday, April 11, 2021 - link

    "There is no such magic bullet for QLC."

    well... the same bullet, ver. 2, might work. that would require two steps:
    - moving 'back' to an even larger node, assuming there's sufficient machinery at such a node available at scale
    - getting two or three times as many layers as TLC currently uses

    I've no idea whether either is feasible, but willing to bet both gonads that both, at least, are required.
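
The endurance math being debated in the comments above reduces to dividing the remaining rated writes by an estimated yearly write volume. A minimal sketch of that arithmetic, using the figures WaltC quotes (75 TB rated endurance, 19.6 TB already written, roughly 5 TB of writes per year); these are the commenter's inputs, not measurements from this review:

def years_of_endurance_left(rated_tbw, written_tb, tb_per_year):
    """Years of writes remaining before the rated TBW figure is reached."""
    return (rated_tbw - written_tb) / tb_per_year

# Figures quoted in the comment above, not measurements from this review.
print(f"{years_of_endurance_left(75, 19.6, 5):.1f} years")  # ~11.1 years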
