Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Sabrent Rocket Q 8TB
NVMe Power and Thermal Management Features
Controller: Phison E12S
Firmware: RKT30Q.2 (ECFM52.2)

NVMe Version | Feature                                       | Status
1.0          | Number of operational (active) power states   | 3
1.1          | Number of non-operational (idle) power states | 2
1.1          | Autonomous Power State Transition (APST)      | Supported
1.2          | Warning Temperature                           | 75°C
1.2          | Critical Temperature                          | 80°C
1.3          | Host Controlled Thermal Management            | Supported
1.3          | Non-Operational Power State Permissive Mode   | Supported

The Sabrent Rocket Q claims support for the full range of NVMe power and thermal management features. However, the table of power states includes frighteningly high maximum power draw numbers for the active power states: over 17 W is really pushing it for an M.2 drive. Fortunately, we never measured consumption getting that high. The idle power states look typical, including the promise of quick transitions into and out of idle.

Sabrent Rocket Q 8TB
NVMe Power States
Controller: Phison E12S
Firmware: RKT30Q.2 (ECFM52.2)

Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency
PS 0        | 17.18 W       | Active      | -             | -
PS 1        | 10.58 W       | Active      | -             | -
PS 2        | 7.28 W        | Active      | -             | -
PS 3        | 49 mW         | Idle        | 2 ms          | 2 ms
PS 4        | 1.8 mW        | Idle        | 25 ms         | 25 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
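For readers who want to see what their own drive advertises, the sketch below pulls the same power state descriptors out of nvme-cli's JSON output. The device path and the JSON field names ("psds", "max_power", "entry_lat", "exit_lat", "flags") are assumptions based on common nvme-cli builds, so treat this as a starting point rather than a definitive tool.

```python
# Minimal sketch: dump the power state descriptors an NVMe drive reports,
# using nvme-cli's JSON output. Requires nvme-cli and root privileges.
import json
import subprocess

DEVICE = "/dev/nvme0"  # hypothetical device path; adjust for your system

raw = subprocess.check_output(["nvme", "id-ctrl", DEVICE, "--output-format=json"])
ctrl = json.loads(raw)

for idx, psd in enumerate(ctrl.get("psds", [])):
    # NOPS (bit 1 of the flags byte) marks non-operational, i.e. idle, states.
    is_idle = bool(psd.get("flags", 0) & 0x2)
    # Max power is reported in 0.01 W units when the max-power-scale bit is clear.
    watts = psd.get("max_power", 0) / 100
    print(f"PS {idx}: {watts:.2f} W  {'Idle' if is_idle else 'Active'}  "
          f"entry {psd.get('entry_lat', 0)} us, exit {psd.get('exit_lat', 0)} us")
```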

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks, and depending on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).
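As a rough illustration of how such a policy can be derived from the drive-reported numbers, the sketch below picks the deepest idle state whose round-trip latency fits within a latency budget and assigns each usable state an idle timeout. The latency budget and the timeout multiplier are illustrative assumptions, not the exact values any particular NVMe driver uses.

```python
# Sketch of an APST-style policy: given the idle (non-operational) power
# states a drive advertises and the host's latency tolerance, pick the
# states whose round-trip latency fits and derive an idle timeout for each.
# The 50x timeout multiplier and 100 ms latency budget are illustrative.
from dataclasses import dataclass

@dataclass
class PowerState:
    index: int
    max_power_mw: float
    non_operational: bool
    entry_lat_us: int
    exit_lat_us: int

# Idle states as reported by the Rocket Q 8TB (from the table above).
ROCKET_Q_IDLE_STATES = [
    PowerState(3, 49.0, True, 2_000, 2_000),
    PowerState(4, 1.8, True, 25_000, 25_000),
]

def build_apst_table(states, max_latency_us=100_000, timeout_multiplier=50):
    """Return (power state index, idle timeout in ms) pairs, deepest first."""
    table = []
    for ps in sorted(states, key=lambda s: s.max_power_mw):
        if not ps.non_operational:
            continue
        total_latency = ps.entry_lat_us + ps.exit_lat_us
        if total_latency > max_latency_us:
            continue  # too slow to wake up within the host's tolerance
        idle_timeout_ms = total_latency * timeout_multiplier // 1000
        table.append((ps.index, max(idle_timeout_ms, 1)))
    return table

print(build_apst_table(ROCKET_Q_IDLE_STATES))
# [(4, 2500), (3, 200)] -> drop to PS4 after 2.5 s idle, PS3 after 0.2 s
```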

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but not always achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.
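On a Linux system you can get a rough idea of which of these scenarios your own configuration resembles by inspecting a couple of kernel parameters. The sysfs paths in the sketch below are assumptions based on common kernel builds and may not be present on every system.

```python
# Rough sketch: check whether a Linux box is configured more like our
# "Desktop Idle" or "Laptop Idle" scenario by reading the PCIe ASPM policy
# and the NVMe APST latency tolerance from sysfs (paths are assumptions).
from pathlib import Path

def read_param(path):
    p = Path(path)
    return p.read_text().strip() if p.exists() else "unavailable"

aspm_policy = read_param("/sys/module/pcie_aspm/parameters/policy")
apst_latency = read_param("/sys/module/nvme_core/parameters/default_ps_max_latency_us")

print(f"PCIe ASPM policy:           {aspm_policy}")
print(f"APST max latency tolerance: {apst_latency} us")
# A powersave/powersupersave ASPM policy plus a non-zero APST tolerance is
# roughly what our Laptop Idle configuration looks like; a performance policy
# or a zero tolerance (APST effectively off) is closer to Active Idle.
```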

Note: Last year we upgraded our power measurement equipment and switched to measuring idle power on our Coffee Lake desktop, our first SSD testbed to have fully-functional PCIe power management. The measurements below are therefore not directly comparable to the older idle power numbers in reviews published before that switch.

[Charts: Idle Power Consumption - No PM; Idle Power Consumption - Desktop; Idle Power Consumption - Laptop]

The Samsung 870 QVO SSDs have lower active idle power consumption than the NVMe competition, though our measurements of the 4TB model did catch it while it was still doing some background work. With SATA link power management enabled the 8TB 870 QVO draws more power than the smaller models, but is still very reasonable.

The Sabrent Rocket Q's idle power numbers are all decent but unremarkable. Its desktop idle power draw is significantly higher than the 49 mW the drive claims for power state 3, but at 87 mW it is still low enough not to be a problem.

Idle Wake-Up Latency

The Samsung 870 QVO takes 1 ms to wake up from sleep. The Sabrent Rocket Q has almost no measurable wake-up latency from the intermediate desktop idle state, but takes a remarkably long 108 ms to wake up from its deepest sleep state. That is one of the slowest wake-up times we've measured from an NVMe drive, and considerably worse than the 25 ms exit latency the drive itself promises to the OS.
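For reference, a host-side approximation of this kind of wake-up test looks something like the sketch below: let the drive sit idle long enough for APST to drop it into its deepest state, then time a single small read. The device path, idle wait, and read size are assumptions, and timing a single read from the host's clock only approximates the methodology used for the numbers above.

```python
# Rough sketch of an idle wake-up measurement: wait out the APST idle
# timeout, then time one small O_DIRECT read from the block device.
# Requires root; device path, wait time and read size are assumptions.
import mmap
import os
import random
import time

DEVICE = "/dev/nvme0n1"   # hypothetical device path
IDLE_WAIT_S = 10          # long enough to pass any APST idle timeout
READ_SIZE = 4096

fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
try:
    # O_DIRECT needs an aligned buffer; an anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, READ_SIZE)
    disk_size = os.lseek(fd, 0, os.SEEK_END)

    time.sleep(IDLE_WAIT_S)  # let APST drop the drive into deep idle

    offset = random.randrange(0, disk_size - READ_SIZE, READ_SIZE)
    os.lseek(fd, offset, os.SEEK_SET)
    start = time.perf_counter()
    os.readv(fd, [buf])  # the first command after idle pays the wake-up cost
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Wake-up read latency: {elapsed_ms:.2f} ms")
finally:
    os.close(fd)
```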

Comments

  • Oxford Guy - Monday, December 7, 2020 - link

    I have three OCZ 240 GB Vertex 2 drives. They're all bricked. Two of them were replacements for bricked drives. One of them bricked within 24 hours of being used. They bricked in four different machines.

    Pure garbage. OCZ pulled a bait and switch, where it substituted 64Gbit NAND for the 32Gbit NAND the drives were reviewed/tested with and rated for on the box. The horrendously bad Sandforce controller choked on 64Gbit NAND and OCZ never stabilized it with its plethora of firmware spew. The company also didn't include the 240 GB model in its later exchange program even though it was the most expensive in the lineup. Sandforce was more interested in protecting the secrets of its garbage design than protecting users from data loss, so the drives would brick as soon as the tiniest problem was encountered and no tool was ever released to the public to retrieve the data. It was designed to make that impossible for anyone who wasn't in spycraft/forensics or working for a costly drive recovery service. I think there was even an announced partnership between OCZ and a drive recovery company for Sandforce drives, which isn't at all suspicious.
  • Oxford Guy - Monday, December 7, 2020 - link

    The Sandforce controller also was apparently incompatible with the TRIM command but customers were never warned about that. So, TRIM didn't cause performance to rebound as it should.
  • UltraWide - Saturday, December 5, 2020 - link

    AMEN for silence. I have a 6 x 8TB NAS and even with 5,400rpm hdds it's quite loud.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    I really want to like the slim, and would love one that I could load up with 2TB SATA SSDs in RAID, but they've dragged their feet on a 10G version. 1G or even 2.5G is totally pointless for SSD NASes.
  • bsd228 - Friday, December 4, 2020 - link

    sequential transfer speed isn't all that matters.

    two mirrored SSDs on a 10G connection can get you better read performance than any SATA ssd locally. But it can be shared across all of the home network.
  • david87600 - Friday, December 4, 2020 - link

    My thoughts exactly. SSD rarely makes sense for NAS.
  • Hulk - Friday, December 4, 2020 - link

    What do we know about the long term data retention of these QLC storage devices?
  • Oxford Guy - Friday, December 4, 2020 - link

    16 voltage states to deal with for QLC. 8 voltage states for TLC. 4 for 2-bit MLC. 2 for SLC.

    More voltage states = bad. The only good thing about QLC is density. Everything else is worse.
  • Spunjji - Monday, December 7, 2020 - link

    It's not entirely. More voltage states is more difficult to read, for sure, but they've also begun implementing more robust ECC systems with each new variant of NAND to counteract that.

    I'd trust one of these QLC drives more than I'd trust my old 120GB 840 drive in that regard.
  • Oxford Guy - Tuesday, December 8, 2020 - link

    Apples and oranges. More robust things to try to work around shortcomings are not the shortcomings not existing.
