Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Sabrent Rocket Q 8TB
NVMe Power and Thermal Management Features
Controller: Phison E12S
Firmware: RKT30Q.2 (ECFM52.2)

NVMe Version  Feature                                        Status
1.0           Number of operational (active) power states    3
1.1           Number of non-operational (idle) power states  2
1.1           Autonomous Power State Transition (APST)       Supported
1.2           Warning Temperature                            75°C
1.2           Critical Temperature                           80°C
1.3           Host Controlled Thermal Management             Supported
1.3           Non-Operational Power State Permissive Mode    Supported

The Sabrent Rocket Q claims support for the full range of NVMe power and thermal management features. However, the table of power states includes frighteningly high maximum power draw numbers for the active power states: over 17 W is really pushing it for an M.2 drive. Fortunately, we never measured consumption getting that high. The idle power states look typical, including the promise of quick transitions in and out of idle.

Sabrent Rocket Q 8TB
NVMe Power States
Controller: Phison E12S
Firmware: RKT30Q.2 (ECFM52.2)

Power State  Maximum Power  Active/Idle  Entry Latency  Exit Latency
PS 0         17.18 W        Active       -              -
PS 1         10.58 W        Active       -              -
PS 2         7.28 W         Active       -              -
PS 3         49 mW          Idle         2 ms           2 ms
PS 4         1.8 mW         Idle         25 ms          25 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
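
Readers who want to check these numbers on their own drive can decode them from the Identify Controller data structure. Below is a minimal Python sketch that parses a raw dump, for example one produced with nvme-cli's "nvme id-ctrl /dev/nvme0 -b"; the byte offsets follow the NVMe specification, and the input file name is just a placeholder.

```python
import struct

# Minimal sketch: decode the NVMe power state table from a raw Identify
# Controller structure (4096 bytes), e.g. dumped with
#   nvme id-ctrl /dev/nvme0 -b > id-ctrl.bin
# "id-ctrl.bin" is a hypothetical file name.

def decode_power_states(id_ctrl: bytes):
    npss = id_ctrl[263]  # NPSS is zero-based: npss + 1 states supported
    states = []
    for ps in range(npss + 1):
        d = id_ctrl[2048 + 32 * ps : 2048 + 32 * (ps + 1)]
        mp = struct.unpack_from("<H", d, 0)[0]
        flags = d[3]
        scale = 0.0001 if flags & 0x01 else 0.01  # MXPS bit selects units
        non_operational = bool(flags & 0x02)      # NOPS bit: idle state
        enlat, exlat = struct.unpack_from("<II", d, 4)  # microseconds
        states.append({
            "state": ps,
            "max_power_w": mp * scale,
            "idle": non_operational,
            "entry_latency_us": enlat,
            "exit_latency_us": exlat,
        })
    return states

if __name__ == "__main__":
    with open("id-ctrl.bin", "rb") as f:
        for s in decode_power_states(f.read()):
            print(s)
```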

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks, and may depend on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).
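
To make that tradeoff concrete, here is a small Python sketch of the kind of decision APST automates: given a latency budget from the host, pick the deepest idle state whose entry-plus-exit latency still fits. This mirrors the spirit of the Linux driver's default_ps_max_latency_us policy rather than its exact logic, and the state numbers are the Rocket Q's from the table above.

```python
# Sketch of an APST-style policy decision, using the Rocket Q's advertised
# idle (non-operational) power states from the table above.

IDLE_STATES = [
    # (state, max_power_w, entry_latency_us, exit_latency_us)
    (3, 0.049, 2_000, 2_000),
    (4, 0.0018, 25_000, 25_000),
]

def deepest_allowed(latency_budget_us: int):
    """Return the lowest-power idle state whose entry + exit latency
    fits within the host's latency budget, or None if none fits."""
    candidates = [s for s in IDLE_STATES
                  if s[2] + s[3] <= latency_budget_us]
    return min(candidates, key=lambda s: s[1]) if candidates else None

print(deepest_allowed(100_000))  # generous laptop-style budget -> PS 4
print(deepest_allowed(5_000))    # tight budget -> only PS 3 qualifies
```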

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but not always achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.
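
On a Linux machine, two of the settings that separate these configurations can be inspected from sysfs. The sketch below simply reads them; both paths exist on mainline kernels, but their presence depends on the kernel build, so treat this as illustrative rather than a guaranteed interface.

```python
# Minimal Linux-only sketch: report two settings that distinguish the
# "active idle" vs "desktop/laptop idle" configurations described above.
# Missing files are reported as "unknown" rather than treated as errors.

from pathlib import Path

def read_param(path: str) -> str:
    try:
        return Path(path).read_text().strip()
    except OSError:
        return "unknown"

# System-wide PCIe ASPM policy (default/performance/powersave/...)
aspm = read_param("/sys/module/pcie_aspm/parameters/policy")

# Worst-case latency (us) the nvme driver will accept when building its
# APST table; 0 disables APST transitions entirely.
apst = read_param("/sys/module/nvme_core/parameters/default_ps_max_latency_us")

print(f"PCIe ASPM policy: {aspm}")
print(f"NVMe APST max latency: {apst} us")
```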

Note: Last year we upgraded our power measurement equipment and switched to measuring idle power on our Coffee Lake desktop, our first SSD testbed to have fully-functional PCIe power management. The measurements below are not directly comparable to the older measurements in our reviews from before that switch.

[Charts: Idle Power Consumption - No PM; Idle Power Consumption - Desktop; Idle Power Consumption - Laptop]

The Samsung 870 QVO SSDs have lower active idle power consumption than the NVMe competition, though our measurements of the 4TB model did catch it while it was still doing some background work. With SATA link power management enabled the 8TB 870 QVO draws more power than the smaller models, but is still very reasonable.

The Sabrent Rocket Q's idle power numbers are all decent but not surprising. The desktop idle power draw is significantly higher than the 49 mW the drive claims for power state 3, but at just 87 mW it is still not a problem.

Idle Wake-Up Latency

The Samsung 870 QVO takes 1 ms to wake up from sleep. The Sabrent Rocket Q has almost no measurable wake-up latency from the intermediate desktop idle state, but takes a remarkably long 108 ms to wake up from the deepest sleep state. This is one of the slowest wake-up times we've measured from an NVMe drive and considerably worse than the 25 ms latency the drive itself promises to the OS.
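
Conceptually, a wake-up latency test is simple, even though our actual methodology is more careful than this: time a small direct read while the drive is awake, let it idle long enough to reach its deepest state, then time another read. A rough Linux-only Python sketch follows; the device path and idle time are assumptions, and it needs root.

```python
# Simplified illustration of an idle wake-up latency test (not the exact
# methodology used in this review). Requires root and a raw NVMe block
# device path; /dev/nvme0n1 below is an assumption.

import mmap
import os
import time

DEV = "/dev/nvme0n1"
IDLE_SECONDS = 30  # assumed long enough for APST to reach the deepest state

# Anonymous mmap gives a page-aligned buffer, which O_DIRECT requires.
buf = mmap.mmap(-1, 4096)

def timed_read(fd: int) -> float:
    start = time.perf_counter()
    os.preadv(fd, [buf], 0)  # one 4 KiB read from the start of the device
    return (time.perf_counter() - start) * 1000  # milliseconds

# O_DIRECT bypasses the page cache so the read actually hits the drive.
fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
timed_read(fd)                 # warm-up: drive is definitely awake
awake = timed_read(fd)
time.sleep(IDLE_SECONDS)       # let the drive drop into deep idle
wakeup = timed_read(fd)
os.close(fd)

print(f"awake read: {awake:.3f} ms, read after idle: {wakeup:.3f} ms")
```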

Comments

  • heffeque - Friday, December 4, 2020

    "Of course we hope that firmwares don't have such bugs, but how would we know unless someone looked at the numbers?"
    Well, on a traditional HDD you also have to hope that they put helium in it and not mustard gas by mistake. It "can" happen, but how would we know if nobody opens every single HDD?

    On a serious note, if a drive has such a serious firmware bug, rest assured that someone will notice, that it will go public quite fast, and that it will end up getting fixed (like it has in the past).
  • Spunjji - Monday, December 7, 2020

    Thanks for responding to that "how do you know unless you look" post appropriately. That kind of woolly thinking really gets my goat.
  • joesiv - Monday, December 7, 2020

    Well, I for one would rather not be the one that discovers the bug, and lose my data.

    I didn't experience this one, but it's an example of a firmware bug:
    https://www.engadget.com/2020-03-25-hpe-ssd-bricke...

    Where I work, I'm involved in SSD evaluation. A drive we used in the field had a nasty firmware bug that took out dozens of our SSDs after a couple years of operation (well within their specs). The manufacturer fixed it in a firmware update, but not until a year+ after release, so we shipped hundreds of product.

    Knowing that, I evaluate them now. But for my personal use, where my needs are different, I'd love it if at least a very simple check was done in the reviews. It's not that hard: review the SSD, then check whether the writes to NAND are reasonable given the workload you gave it. It's right there in the SMART data; it'll be in block-sized units, so you might have to multiply by the block size, but it'll tell you a lot. (A rough sketch of this kind of check appears after the comment thread.)

    Just by doing something similar, we were able to vet a drive that was writing 100x more to NAND than it should have been; essentially it was using up its life expectancy at 1% per day! Working with the manufacturer, they eventually decided we should just move to another product; they weren't much into firmware fixes.

    Anyways, someone should keep the manufacturers honest; why not start with the reviews?

    Also, no offence, but what is the "woolly thinking" you are talking about? I'm just trying to protect my investment and data.
  • heffeque - Tuesday, December 8, 2020

    As if HDDs didn't have their share of problems, both firmware and HW (especially the HW). I've seen loads of HDDs die in the first 48 hours, then a huge percentage of them no later than a year afterwards.

    My experience is that SSDs last A LOT longer and are A LOT more reliable than HDDs.
    While HDDs had been breaking every 1-3 years (and changing them was costly due to the remote location and the high wages of Scandinavian countries), since we changed to SSDs we have had literally ZERO replacements to perform, so... can't say that the experience of hundreds of SSDs not failing vs hundreds of HDDs that barely last a few years goes in favor of HDDs in any kind of measure.

    In the end, paying to send a slightly more expensive device (the SSD) to those countries has paid for itself several-fold in just a couple of years.
  • MDD1963 - Friday, December 4, 2020

    I've only averaged 0.8 TB per *month* over 3.5 years....
  • joesiv - Monday, December 7, 2020

    Out of curiosity, how did you come to this number?

    Just be aware that SMART data tracks different things. You're probably right, but SMART data is manufacturer- and model-dependent, and sometimes they'll use the attributes differently. You really have to look up the SMART documentation for your drive to be sure they are calculating and using the attributes the way your SMART utility labels them. Some manufacturers also don't track writes to NAND.

    I would look at:
    "Writes to NAND" or "lifetime writes to flash" - for some Kingston drives this is attribute 233
    "SSD Life Left" - for some ADATA drives this is attribute 232, and for Micron/Crucial it might be 202; it is usually calculated from the average block erase count against the erase count the NAND is rated for (around 3,000 for MLC, much less for 3D NAND)

    A lot of manufacturers haven't included the actual NAND writes in their SMART data, so it'd be hard to get to, and they should be called out for it (Delkin, Crucial).

    "Total host writes" is what the OS wrote, and what most viewers assume manufacturers are stating when they talk about drive writes per day or TB per day. That's the amount of data fed to the SSD, not what is actually written to NAND.

    Also realize that wear-leveling routines can eat up SSD life as well. I'm not sure how the SLC caching modes in newer firmware affect life expectancy/NAND writes, actually.
  • stanleyipkiss - Friday, December 4, 2020

    Honestly, if the prices of these QLC high-capacity drives would drop a bit, I would be all over them -- especially for NAS use. I just want to move away from spinning mechanical drives, but when I can get an 18 TB drive at the same price as a 4-8 TB SSD, I will choose the larger drive.

    Just make them cheaper.

    Also: I would love HIGHER capacity, and I WOULD pay for it... Micron had some drives, and I'm sure some mainstream drives could be made available -- if you can squeeze 8 TB onto M.2 then you could certainly put 16 TB in a 2.5-inch drive.
  • DigitalFreak - Monday, December 7, 2020

    Ask and ye shall receive.

    https://www.pcgamer.com/sabrent-is-close-to-launch...
  • Xex360 - Friday, December 4, 2020

    The prices don't make any sense: you can get multiple smaller drives with the same total capacity for less money, with more performance and reliability, and if anything the multiple drives should cost more because they use more material.
  • inighthawki - Friday, December 4, 2020

    At least for the Sabrent drive, M.2 slots can be at a premium, so it makes perfect sense for a single drive to cost more than two smaller ones. On many systems, hooking up that many drives would require a PCIe expansion card, and if you're not just bifurcating an existing x8 or x16 slot you would need a PCIe switch, which is going to cost hundreds of dollars at minimum.
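
For anyone who wants to try the NAND-write sanity check joesiv describes above, here is a rough Python sketch built on smartmontools' JSON output. The attribute IDs and the 32 MiB raw-counter unit are drive-specific assumptions (the 233/241 pairing matches the Kingston examples mentioned in the thread); confirm them against your drive's SMART documentation before trusting the math.

```python
# Rough sketch of the NAND-write sanity check described in the comments:
# compare host writes against NAND writes from SMART data and report the
# write amplification factor. Attribute IDs and the raw-counter unit are
# assumptions; they vary by vendor and model.

import json
import subprocess

DEV = "/dev/sda"
HOST_WRITES_ID = 241   # "Total host writes" on many SATA SSDs (assumed)
NAND_WRITES_ID = 233   # "Writes to NAND" on some Kingston drives (assumed)
UNIT_BYTES = 32 * 1024 * 1024  # raw counter unit; drive-specific (assumed)

# smartctl's -j flag emits machine-readable JSON (smartmontools 7.0+).
out = subprocess.run(["smartctl", "-A", "-j", DEV],
                     capture_output=True, text=True, check=True)
attrs = {a["id"]: a["raw"]["value"]
         for a in json.loads(out.stdout)["ata_smart_attributes"]["table"]}

host_tb = attrs[HOST_WRITES_ID] * UNIT_BYTES / 1e12
nand_tb = attrs[NAND_WRITES_ID] * UNIT_BYTES / 1e12
print(f"host writes: {host_tb:.2f} TB, NAND writes: {nand_tb:.2f} TB")
print(f"write amplification: {nand_tb / host_tb:.2f}")
```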
