Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Sabrent Rocket Q 8TB
NVMe Power and Thermal Management Features
Controller: Phison E12S
Firmware: RKT30Q.2 (ECFM52.2)

NVMe Version | Feature                                        | Status
1.0          | Number of operational (active) power states    | 3
1.1          | Number of non-operational (idle) power states  | 2
1.1          | Autonomous Power State Transition (APST)       | Supported
1.2          | Warning Temperature                            | 75°C
1.2          | Critical Temperature                           | 80°C
1.3          | Host Controlled Thermal Management             | Supported
1.3          | Non-Operational Power State Permissive Mode    | Supported

The Sabrent Rocket Q claims support for the full range of NVMe power and thermal management features. However, the table of power states includes frighteningly high maximum power draw numbers for the active power states—over 17 W is really pushing it for an M.2 drive. Fortunately, we never measured consumption getting that high. The idle power states look typical, including the promise of quick transitions in and out of idle.

Sabrent Rocket Q 8TB
NVMe Power States
Controller: Phison E12S
Firmware: RKT30Q.2 (ECFM52.2)

Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency
PS 0        | 17.18 W       | Active      | -             | -
PS 1        | 10.58 W       | Active      | -             | -
PS 2        | 7.28 W        | Active      | -             | -
PS 3        | 49 mW         | Idle        | 2 ms          | 2 ms
PS 4        | 1.8 mW        | Idle        | 25 ms         | 25 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
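
For readers who want to see what their own drive advertises, the descriptors can be dumped from the Identify Controller data. Below is a minimal sketch, assuming Linux, root privileges, the nvme-cli package, and a controller at /dev/nvme0 (adjust the device path for your system):

```python
#!/usr/bin/env python3
"""Dump the power state table an NVMe drive reports to the OS.

A rough sketch: assumes Linux, root privileges, the nvme-cli package,
and a controller at /dev/nvme0 (adjust the path for your system).
"""
import subprocess

# 'nvme id-ctrl' prints the Identify Controller structure; its trailing
# "ps 0", "ps 1", ... entries are the power state descriptors shown in
# the table above (max power, entry/exit latency, operational flag).
output = subprocess.run(
    ["nvme", "id-ctrl", "/dev/nvme0"],
    capture_output=True, text=True, check=True,
).stdout

printing = False
for line in output.splitlines():
    if line.startswith("ps "):
        printing = True  # power state descriptors are the last section
    if printing:
        print(line)
```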

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks, and depending on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).
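
On Linux, both of these knobs are visible through sysfs: the nvme_core driver exposes the maximum latency it will tolerate when building a drive's APST table, and the pcie_aspm module reports the active link power policy. A minimal sketch follows; the paths are the stock locations on recent kernels, but availability depends on kernel configuration:

```python
#!/usr/bin/env python3
"""Show the Linux knobs that govern NVMe APST and PCIe ASPM.

A minimal sketch: paths are the stock sysfs locations on recent
kernels, but availability depends on kernel configuration.
"""
from pathlib import Path

KNOBS = {
    # APST: the kernel excludes idle states whose combined entry+exit
    # latency exceeds this threshold (in microseconds) when it builds
    # the autonomous transition table for a drive.
    "APST max latency (us)": Path(
        "/sys/module/nvme_core/parameters/default_ps_max_latency_us"),
    # ASPM: the active PCIe link power policy, shown in brackets,
    # e.g. "[default] performance powersave powersupersave".
    "ASPM policy": Path("/sys/module/pcie_aspm/parameters/policy"),
}

for name, path in KNOBS.items():
    value = path.read_text().strip() if path.exists() else "(not available)"
    print(f"{name}: {value}")
```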

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but not always achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.

Note: Last year we upgraded our power measurement equipment and switched to measuring idle power on our Coffee Lake desktop, our first SSD testbed to have fully-functional PCIe power management. The measurements below are therefore not directly comparable to those in reviews published before that switch.

[Charts: Idle Power Consumption - No PM; Idle Power Consumption - Desktop; Idle Power Consumption - Laptop]

The Samsung 870 QVO SSDs have lower active idle power consumption than the NVMe competition, though our measurements of the 4TB model did catch it while it was still doing some background work. With SATA link power management enabled the 8TB 870 QVO draws more power than the smaller models, but is still very reasonable.

The Sabrent Rocket Q's idle power numbers are all decent but not surprising. Its desktop idle power draw is significantly higher than the 49 mW the drive claims for power state 3, but at 87 mW it is still not a problem.

Idle Wake-Up Latency

The Samsung 870 QVO takes 1 ms to wake up from sleep. The Sabrent Rocket Q has almost no measurable wake-up latency from the intermediate desktop idle state, but takes a remarkably long 108 ms to wake up from the deepest sleep state. This is one of the slowest wake-up times we've measured from an NVMe drive, and considerably worse than the 25 ms latency the drive itself promises to the OS.
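
A crude way to reproduce this kind of measurement is to let a drive sit idle long enough to reach its deepest state, then time the first read that hits it. The sketch below is not our actual test procedure, just an illustration; it assumes Linux, root privileges, a namespace at /dev/nvme0n1 (hypothetical; adjust to your system), and that APST is enabled so the drive really does drop into an idle state during the pause:

```python
#!/usr/bin/env python3
"""Rough idle wake-up latency check for an NVMe drive.

Not our actual test procedure, just an illustration. Assumes Linux,
root privileges, a namespace at /dev/nvme0n1 (hypothetical; adjust),
and that APST is enabled so the drive can fall asleep during the pause.
"""
import mmap
import os
import time

DEVICE = "/dev/nvme0n1"
IDLE_SECONDS = 30  # long enough to fall into the deepest idle state

# O_DIRECT bypasses the page cache so the timed read hits the drive;
# it requires a block-aligned buffer, which an anonymous mmap provides.
fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
buf = mmap.mmap(-1, 4096)
try:
    os.preadv(fd, [buf], 0)           # warm-up read while awake
    time.sleep(IDLE_SECONDS)          # let the drive go to sleep
    start = time.perf_counter()
    os.preadv(fd, [buf], 0)           # first read wakes the drive
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"first read after {IDLE_SECONDS}s idle: {elapsed_ms:.2f} ms")
finally:
    buf.close()
    os.close(fd)
```

The result folds the read itself into the number, but at roughly 100 µs for a 4 kB read, that is noise next to a multi-millisecond wake-up.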

Comments

  • Kevin G - Friday, December 4, 2020 - link

    At 1 Gbit, sure, easily; but 2.5 Gbit is taking off in the consumer space, and 10 Gbit has been here for a while, albeit at a price premium. There is also NIC bonding, which can increase throughput further if the NAS has multiple active users.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    A single Seagate IronWolf can push over 200 MB/s read speeds. 2.5 Gbit will still bottleneck even the most basic RAID 5 arrays.
  • heffeque - Friday, December 4, 2020 - link

    I want a silent NAS.
    Also, SSDs last longer than HDDs.
    I'm hoping for a Synology DS620Slim but with AMD Zen inside (like the DS1621+), and I'll fill it up with 4TB QVO drives in SHR1 with BTRFS.
  • david87600 - Friday, December 4, 2020 - link

    Re: SSDs lasting longer than HDDs:

    Not necessarily, especially with high volumes of writes. We've had more problems with our SSDs dying than our HDDs. We have several servers, but the main application runs on an HDD. We replace our servers every four years, but the old servers go into use as backup servers or as client machines. Some of those have been running their HDDs for 15 years now. None of our SSDs have lasted more than 2 years under load.
  • heffeque - Saturday, December 5, 2020 - link

    The Synology DS620Slim is not even near an enterprise server. Trust me, the SSDs won't die from a home user's write volumes.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    Completely different use case. Home users fall more under the WORM (write once, read many) style of usage; they are not writing large data sets constantly.

    I also have no clue what you are doing to your poor SSDs. We have had our SQL databases serving thousands of users reading and writing daily on SSDs for 3 years now without a single failure. Of course, we use enterprise SSDs instead of consumer drives, so that makes a huge difference.
  • Deicidium369 - Saturday, December 5, 2020 - link

    I have far more dead HDDs than dead SSDs. The first SSD I bought was a midrange OCZ, 120GB - that drive was used continuously for several years. About a year ago I wiped it and checked it - only a few worn cells. On the other hand, I have had terrible luck with anything over 8TB mechanical - out of close to 300 14TB Seagates, over 10% failed; about half of those died during the 48-hour burn-in, and the rest soon after.

    The Intel Optane U.2 drives we used in the flash array have had no issues at all over the 3-year period - we had one that developed a power connector failure, but no issues with the actual media.

    As with most things tech, YMMV.
  • GeoffreyA - Sunday, December 6, 2020 - link

    Just a question. Between Seagate and WD, who would you say is worse when it comes to failures? Or are they about the same?
  • Deicidium369 - Sunday, December 6, 2020 - link

    I have not used WD in some time, so I can't comment. I tend to use Backblaze failure rates: https://www.backblaze.com/blog/backblaze-hard-driv...
  • GeoffreyA - Monday, December 7, 2020 - link

    Thanks
