Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Corsair Force MP510
NVMe Power and Thermal Management Features
Controller: Phison PS5012-E12    Firmware: ECFM11.0

NVMe Version  Feature                                        Status
1.0           Number of operational (active) power states    3
1.1           Number of non-operational (idle) power states  2
              Autonomous Power State Transition (APST)       Supported
1.2           Warning Temperature                            70 °C
              Critical Temperature                           90 °C
1.3           Host Controlled Thermal Management             Supported
              Non-Operational Power State Permissive Mode    Not Supported

The Corsair Force MP510 implements all the usual NVMe power management features and has a comfortably high thermal throttling point. The drive's declared maximum power levels for the active power states look alarmingly high, but our testing didn't push the MP510 anywhere near the claimed 10.73W peak. The idle power states advertise low idle power with quick transition latencies, and our testing of power state 3 shows that the drive isn't exaggerating at all.

Corsair Force MP510
NVMe Power States
Controller: Phison PS5012-E12    Firmware: ECFM11.0

Power State  Maximum Power  Active/Idle  Entry Latency  Exit Latency
PS 0         10.73 W        Active       -              -
PS 1         7.69 W         Active       -              -
PS 2         6.18 W         Active       -              -
PS 3         49 mW          Idle         2 ms           2 ms
PS 4         1.8 mW         Idle         25 ms          25 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
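The way an OS turns those advertised numbers into a policy can be sketched roughly as follows. This is an illustrative toy, not Linux's actual APST logic; the function name and latency budget are hypothetical, while the power and latency figures are the MP510's advertised ones from the table above.

```python
# Toy sketch: pick the lowest-power idle state whose advertised wake-up
# latency fits a responsiveness budget. Real OS policies are more involved.

IDLE_STATES = [
    # (state, power_mW, entry_ms, exit_ms) -- the MP510's advertised figures
    ("PS3", 49.0, 2, 2),
    ("PS4", 1.8, 25, 25),
]

def pick_idle_state(max_exit_latency_ms):
    """Return the lowest-power idle state that wakes within the budget."""
    best = None
    for name, power_mw, entry_ms, exit_ms in IDLE_STATES:
        if exit_ms <= max_exit_latency_ms and (best is None or power_mw < best[1]):
            best = (name, power_mw)
    return best[0] if best else None
```

With a 5 ms wake-up budget only PS3 qualifies; relax the budget past 25 ms and PS4 becomes the better choice.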

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
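A back-of-the-envelope way to see that tradeoff, using the MP510's advertised figures: a deeper state saves power while resident but costs energy to enter and leave. The transition power draw below is an assumed illustrative value, since drives don't advertise it.

```python
# Sketch of the idle-state break-even calculation. Power and latency
# numbers come from the MP510's advertised power state table; the
# transition power is an ASSUMPTION for illustration, not a measured figure.

ASSUMED_TRANSITION_MW = 500.0  # assumed power draw while entering/exiting

def break_even_ms(shallow_mw, deep_mw, entry_ms, exit_ms,
                  transition_mw=ASSUMED_TRANSITION_MW):
    """Idle duration beyond which the deeper state uses less total energy.

    Staying shallow costs shallow_mw * t; going deep costs
    transition_mw * (entry + exit) plus deep_mw for the remaining time.
    """
    overhead_ms = entry_ms + exit_ms
    return (transition_mw - deep_mw) * overhead_ms / (shallow_mw - deep_mw)

# PS3 (49 mW) vs PS4 (1.8 mW, 25 ms in + 25 ms out):
t = break_even_ms(49.0, 1.8, 25, 25)  # roughly 528 ms under these assumptions
```

Under these assumptions, dropping from PS3 to PS4 only pays off for idle periods longer than about half a second, which is why an OS waits before stepping down to the deepest state.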

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled if supported.

Active Idle Power Consumption (No LPM)
Idle Power Consumption

The Corsair Force MP510 has the best active idle power consumption we've seen from a high-end NVMe SSD, and with APST enabled it is second only to Silicon Motion's latest generation of NVMe controllers in deep sleep power savings on our desktop testbed.

Idle Wake-Up Latency

This is complemented by a nice quick wakeup from sleep, so aggressive power management settings won't hurt system responsiveness.


42 Comments


  • imaheadcase - Thursday, October 18, 2018 - link

    Wow, I had no idea how cheap SSDs have become. You know, they're getting to price points where home servers could easily use SSDs vs mechanical drives.
  • bill.rookard - Thursday, October 18, 2018 - link

    If a 4TB drive becomes somewhat more affordable, then yes, they can. I guess it depends on how big of a server array you have. Personally, I have about 30TB in a 2U server using 4x4TB ZFS + 4x3TB ZFS for 20TB effective. Even a bargain-basement setup of a similar size using the cheapest Micron 1100 2TB SSDs you could find would need 11 of them @ $280 each.

    Or - just a stitch over $3000.00. Meanwhile, the drives I used were factory-refurbed enterprise drives, and all 8 of them cost around $500.00.
  • nathanddrews - Thursday, October 18, 2018 - link

    I'm definitely waiting for larger SSDs to come down. I think if we ever get to $100/TB, I'll start to swap out more drives. 2TB for $199 would be great.

    I only recently started to experiment with "hybrid" storage on my home server. I've got about 40TB of rust with about 800GB of SSDs (older SSDs that didn't have a home anymore), using software to manage what folders/files are stored/backed up on which drives. UHD Blu-ray and other disc backups on the slow hard drives (still fast enough to saturate 1GbE) and documents/photos, etc. on the SSD array. My server doesn't have anything faster than SATA6Gbps, but the SSDs are still much quicker for smaller files/random access.
  • Lolimaster - Thursday, October 18, 2018 - link

    I would upgrade to a cheap 2.5-5Gbit NIC
  • nathanddrews - Thursday, October 18, 2018 - link

    I've already got a couple 10GbE NICs, just waiting on an affordable switch...
  • leexgx - Thursday, October 18, 2018 - link

    Use a PC :) There's a YouTube video of a person doing it. You do need to make sure you have the right mobo so it can handle 10Gb speeds between the PCI-E 10Gb cards, or you'll be getting low speeds between cards (still far cheaper than an actual 10Gb switch).

    https://www.youtube.com/watch?v=p39mFz7ORco
  • Valantar - Friday, October 19, 2018 - link

    You're recommending running a PC 24/7 as a switch to provide >GbE speeds from a NAS? Really?
  • nathanddrews - Friday, October 19, 2018 - link

    LOL that's a good joke! I mean, it's creative, but there's no way I'm doing that. I can wait a little longer to get a proper switch(es).
  • rrinker - Thursday, October 18, 2018 - link

    I'm at the point of contemplating a new server for home, and hybrid was the way I was going to go, since 16TB or so of all SSD is just too expensive still. But 1-2TB of SSD as fast cache for a bunch of 4TB spinny drives would be relatively inexpensive and offer most of the benefits. And SSD for the OS drive of course.
  • DominionSeraph - Monday, October 22, 2018 - link

    Yup, I picked up 24TB for $240. SSDs really can't compete.
