Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Sabrent Rocket Q 8TB
NVMe Power and Thermal Management Features
Controller: Phison E12S
Firmware: RKT30Q.2 (ECFM52.2)

NVMe Version  Feature                                        Status
1.0           Number of operational (active) power states    3
1.1           Number of non-operational (idle) power states  2
              Autonomous Power State Transition (APST)       Supported
1.2           Warning Temperature                            75°C
              Critical Temperature                           80°C
1.3           Host Controlled Thermal Management             Supported
              Non-Operational Power State Permissive Mode    Supported

The Sabrent Rocket Q claims support for the full range of NVMe power and thermal management features. However, the table of power states includes frighteningly high maximum power draw numbers for the active power states—over 17 W is really pushing it for an M.2 drive. Fortunately, we never measured consumption getting that high. The idle power states look typical, including the promise of quick transitions in and out of idle.

Sabrent Rocket Q 8TB
NVMe Power States
Controller: Phison E12S
Firmware: RKT30Q.2 (ECFM52.2)

Power State  Maximum Power  Active/Idle  Entry Latency  Exit Latency
PS 0         17.18 W        Active       -              -
PS 1         10.58 W        Active       -              -
PS 2         7.28 W         Active       -              -
PS 3         49 mW          Idle         2 ms           2 ms
PS 4         1.8 mW         Idle         25 ms          25 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
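For readers who want to see what their own drive advertises, this information can be pulled straight from the controller's identify data. The following is a minimal sketch assuming a Linux system with nvme-cli installed; the device path and the JSON field names ("psds", "max_power", "flags", "entry_lat", "exit_lat") match recent nvme-cli builds but may need adjusting for other versions.

```python
#!/usr/bin/env python3
"""Print an NVMe drive's advertised power state table, as in the table above.

Minimal sketch assuming Linux + nvme-cli; JSON field names may vary by version.
"""
import json
import subprocess

def print_power_states(dev="/dev/nvme0"):
    # 'nvme id-ctrl' returns the controller identify data, including the
    # power state descriptors the OS uses for its APST decisions.
    out = subprocess.run(["nvme", "id-ctrl", dev, "-o", "json"],
                         check=True, capture_output=True, text=True).stdout
    ctrl = json.loads(out)
    for idx, ps in enumerate(ctrl.get("psds", [])):
        non_operational = bool(ps.get("flags", 0) & 0x2)  # NOPS bit of the flags byte
        max_power_w = ps.get("max_power", 0) * 0.01       # MP field, 0.01 W units when MXPS=0
        print(f"PS {idx}: {max_power_w:6.2f} W  "
              f"{'Idle  ' if non_operational else 'Active'}  "
              f"entry {ps.get('entry_lat', 0)} us  exit {ps.get('exit_lat', 0)} us")

if __name__ == "__main__":
    print_power_states()
```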

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks, and depending on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).
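As a concrete illustration of how such a policy works, the sketch below applies a simplified APST-style rule to the Rocket Q's advertised idle states: pick a total latency budget and only allow automatic transitions into states whose entry plus exit latency fits within it. This is a toy model, not the actual Linux driver logic; the 100 ms default budget echoes the kernel's nvme_core.default_ps_max_latency_us parameter, but everything else is simplified.

```python
# Toy model of an APST-style policy (not the actual Linux nvme driver logic):
# allow automatic transitions only into idle states whose round-trip latency
# fits within a chosen budget, preferring the deepest allowed state.

# Idle power states as advertised by the Sabrent Rocket Q 8TB (see table above).
IDLE_STATES = [
    {"ps": 3, "max_power_mw": 49.0, "entry_ms": 2,  "exit_ms": 2},
    {"ps": 4, "max_power_mw": 1.8,  "entry_ms": 25, "exit_ms": 25},
]

def allowed_idle_states(latency_budget_ms=100.0):
    """Return the idle states an APST policy would enable, deepest first."""
    allowed = [s for s in IDLE_STATES
               if s["entry_ms"] + s["exit_ms"] <= latency_budget_ms]
    return sorted(allowed, key=lambda s: s["max_power_mw"])

if __name__ == "__main__":
    for s in allowed_idle_states():
        print(f"PS {s['ps']}: allowed "
              f"({s['entry_ms'] + s['exit_ms']} ms round trip, {s['max_power_mw']} mW max)")
    # With a tight 10 ms budget only PS 3 qualifies; PS 4's 50 ms round trip is too slow.
    print([s["ps"] for s in allowed_idle_states(latency_budget_ms=10.0)])
```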

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but not always achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.
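On a Linux system it is easy to check which of these layers are actually enabled before trusting any idle power figures. The snippet below is a rough helper, assuming a typical kernel that exposes the standard pcie_aspm and nvme_core module parameters through sysfs.

```python
#!/usr/bin/env python3
# Rough check of the PCIe/NVMe power-saving knobs discussed above, assuming a
# typical Linux kernel exposing the standard module parameters via sysfs.
from pathlib import Path

def read_param(path):
    try:
        return Path(path).read_text().strip()
    except OSError:
        return "unavailable"

if __name__ == "__main__":
    # Active ASPM policy is shown in [brackets], e.g. "default [powersave] ...".
    print("PCIe ASPM policy:     ",
          read_param("/sys/module/pcie_aspm/parameters/policy"))
    # APST latency budget in microseconds; 0 means APST is disabled.
    print("NVMe APST budget (us):",
          read_param("/sys/module/nvme_core/parameters/default_ps_max_latency_us"))
```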

Note: Last year we upgraded our power measurement equipment and switched to measuring idle power on our Coffee Lake desktop, our first SSD testbed to have fully-functional PCIe power management. The below measurements are not a perfect match for the older measurements in our reviews from before that switch.

[Charts: Idle Power Consumption - No PM; Idle Power Consumption - Desktop; Idle Power Consumption - Laptop]

The Samsung 870 QVO SSDs have lower active idle power consumption than the NVMe competition, though our measurements of the 4TB model did catch it while it was still doing some background work. With SATA link power management enabled the 8TB 870 QVO draws more power than the smaller models, but is still very reasonable.

The Sabrent Rocket Q's idle power numbers are all decent but unremarkable. The desktop idle power draw is significantly higher than the 49 mW the drive claims for power state 3, but at 87 mW it is still not a problem.

Idle Wake-Up Latency

The Samsung 870 QVO takes 1ms to wake up from sleep. The Sabrent Rocket Q has almost no measurable wake-up latency from the intermediate desktop idle state, but takes a remarkably long 108ms to wake up from the deepest sleep state. This is one of the slowest wake-up times we've measured from an NVMe drive and considerably worse than the 25ms latency the drive itself promises to the OS.
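Readers who want a ballpark figure for their own drive can approximate this kind of measurement by letting the device sit idle long enough to drop into its deepest state and then timing a small uncached read. The sketch below is only a rough illustration, not our test procedure; it assumes Linux, root access, and a hypothetical device node of /dev/nvme0n1, and uses O_DIRECT with a page-aligned buffer to bypass the page cache.

```python
#!/usr/bin/env python3
# Rough wake-up latency probe (not our actual methodology): time one 4 KiB
# O_DIRECT read while the drive is awake, then again after an idle period long
# enough for APST to reach the deepest power state. Assumes Linux and root.
import mmap
import os
import time

DEVICE = "/dev/nvme0n1"   # hypothetical device node, adjust as needed
IDLE_SECONDS = 30         # long enough for the drive to reach deep idle
READ_SIZE = 4096          # one 4 KiB block, aligned as O_DIRECT requires

def timed_read_ms():
    fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
    with os.fdopen(fd, "rb", buffering=0) as f:
        buf = mmap.mmap(-1, READ_SIZE)   # anonymous mmap is page-aligned
        start = time.perf_counter()
        f.readinto(buf)
        return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    timed_read_ms()                      # throwaway read to wake the drive
    awake = timed_read_ms()
    time.sleep(IDLE_SECONDS)             # let the drive drop into deep idle
    woken = timed_read_ms()
    print(f"read while awake:  {awake:.2f} ms")
    print(f"read after idling: {woken:.2f} ms")
```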

Comments

  • Great_Scott - Sunday, December 6, 2020 - link

    QLC remains terrible and the price delta between the worst and good drives remains $5.

    The most interesting part of this review is how insanely good the performance of the DRAMless Mushkin drive is.
  • ksec - Friday, December 4, 2020 - link

    I really wish a segment of the market would move towards high capacity and low speed like the QVO. This is going to be useful for things like NAS, where the speed is limited to 1Gbps or 2.5Gbps Ethernet.

    The cheapest SSD I saw for 2TB was a one off deal from Sandisk at $159. I wonder when we could see that being the norm if not even lower.
  • Oxford Guy - Friday, December 4, 2020 - link

    I wish QLC wouldn't be pushed on us because it ruins the economy of scale for 3D TLC. 3D TLC drives could have been offered in better capacities but QLC is attractive to manufacturers for margin. Too bad for us that it has so many drawbacks.
  • SirMaster - Friday, December 4, 2020 - link

    People said the same thing when they moved from SLC to MLC, and again from MLC to TLC.
  • emn13 - Saturday, December 5, 2020 - link

    There is an issue of decreasing returns, however.

    SLC -> MLC allowed for 2x capacity (minus some overhead). I don't remember anybody gnashing their teeth too much at that.
    MLC -> TLC allowed for 1.5x capacity (minus some overhead). That's not a bad deal, but it's not as impressive anymore.
    TLC -> QLC allows for 1.33x capacity (minus some overhead). That's starting to get pretty slim pickings.

    Would you rather have a 4TB QLC drive, or a 3TB TLC drive? That's the trade-off - and I wish sites would benchmark drives at higher fill rates, so it'd be easier to see more real-world performance.
  • at_clucks - Friday, December 11, 2020 - link

    @SirMaster, "People said the same thing when they moved from SLC to MLC, and again from MLC to TLC."

    You know you're allowed to change your mind and say no, right? Especially since some transitions can be acceptable, and others less so.

    The biggest thing you're missing is that the theoretical difference between TLC and QLC is bigger than the difference between SLC and TLC. Where SLC has to discriminate between 2 levels of charge, TLC has to discriminate between 8, and QLC between 16.

    Doesn't this sound like a "you were ok with me kissing you so you definitely want the D"? When TheinsanegamerN insists ATers are "techies" and they "understand technology" I'll have this comment to refer him to.
  • magreen - Friday, December 4, 2020 - link

    Why is that useful for NAS? A hard drive will saturate that network interface.
  • RealBeast - Friday, December 4, 2020 - link

    Yup, my eight drive RAID 6 runs about 750MB/sec for large sequential transfers over SFP+ to my backup array. No need for SSDs and I certainly couldn't afford them -- the 14TB enterprise SAS drives I got were only $250 each in the early summer.
  • nagi603 - Friday, December 4, 2020 - link

    Not if it's a 10G link
  • leexgx - Saturday, December 5, 2020 - link

    If you have enough drives in RAID6 you can come close to saturating a 10Gb link (see the post above: 750MB/s with 8 HDDs in RAID6).
