Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Samsung 970 EVO
NVMe Power and Thermal Management Features
Controller: Samsung Phoenix | Firmware: 1B2QEXE7

NVMe Version | Feature                                       | Status
1.0          | Number of operational (active) power states   | 3
1.1          | Number of non-operational (idle) power states | 2
1.1          | Autonomous Power State Transition (APST)      | Supported
1.2          | Warning Temperature                           | 85°C
1.2          | Critical Temperature                          | 85°C
1.3          | Host Controlled Thermal Management            | Supported
1.3          | Non-Operational Power State Permissive Mode   | Not Supported

The Samsung 970 EVO bumps the supported NVMe spec version to 1.3, compared to the 1.2 feature set supported by the PM981 and 960 series. The 970 EVO implements the Host Controlled Thermal Management feature, allowing operating systems to configure the drive to throttle at a lower temperature than it normally would. The (optional) non-operational power state permissive mode feature is not included, so the 970 EVO is not supposed to do background tasks like garbage collection when it is in idle power states (unless they can be done within the power constraints of the idle states, which is unrealistic).
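For readers who want to experiment with Host Controlled Thermal Management, the sketch below shows how the feature value is laid out according to the NVMe 1.3 definition of Feature ID 0x10, which carries two thermal limits (TMT1 and TMT2) in Kelvin. The temperature values and the nvme-cli command in the example are illustrative assumptions, not settings used for any of the testing in this review.

```python
# Minimal sketch (assumptions noted above): packing a Host Controlled
# Thermal Management (HCTM) feature value. Per NVMe 1.3, Feature ID 0x10
# holds TMT2 (heavy throttle threshold) in bits 15:0 and TMT1 (light
# throttle threshold) in bits 31:16, both in Kelvin.

def hctm_feature_value(tmt1_celsius: float, tmt2_celsius: float) -> int:
    """Pack the TMT1/TMT2 limits (given in degrees C) into the HCTM dword."""
    tmt1_k = round(tmt1_celsius + 273.15)  # light-throttle threshold, Kelvin
    tmt2_k = round(tmt2_celsius + 273.15)  # heavy-throttle threshold, Kelvin
    if not 0 < tmt1_k <= tmt2_k <= 0xFFFF:
        raise ValueError("TMT1 must not exceed TMT2 and both must fit in 16 bits")
    return (tmt1_k << 16) | tmt2_k

if __name__ == "__main__":
    # Example: ask the drive to start throttling earlier than its 85C default.
    value = hctm_feature_value(75, 80)
    # Illustrative nvme-cli invocation; verify the syntax for your version.
    print(f"nvme set-feature /dev/nvme0 -f 0x10 -v 0x{value:08x}")
```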

Samsung 970 EVO
NVMe Power States
Controller: Samsung Phoenix | Firmware: 1B2QEXE7

Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency
PS 0        | 6.2 W         | Active      | -             | -
PS 1        | 4.3 W         | Active      | -             | -
PS 2        | 2.1 W         | Active      | -             | -
PS 3        | 0.04 W        | Idle        | 0.21 ms       | 1.2 ms
PS 4        | 0.005 W       | Idle        | 2 ms          | 8 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
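To make that concrete, here is a minimal Python sketch of how an operating system could turn the reported table into an idle-state policy: only the non-operational states are eligible, states whose combined entry and exit latency exceeds a tolerance are skipped, and the idle timeout before dropping into a state is scaled from that latency. The tolerance and multiplier below are illustrative assumptions, not the values any particular driver uses.

```python
# Simplified sketch of an APST-style idle policy built from the drive's
# reported power state table (values from the table above).

POWER_STATES = [
    # (state, max power in W, operational?, entry latency ms, exit latency ms)
    ("PS 0", 6.2,   True,  0.0,  0.0),
    ("PS 1", 4.3,   True,  0.0,  0.0),
    ("PS 2", 2.1,   True,  0.0,  0.0),
    ("PS 3", 0.04,  False, 0.21, 1.2),
    ("PS 4", 0.005, False, 2.0,  8.0),
]

def apst_plan(max_latency_ms: float = 100.0, idle_multiplier: int = 50):
    """Return (state, idle timeout in ms) pairs for the usable idle states."""
    plan = []
    for name, _power, operational, entry_ms, exit_ms in POWER_STATES:
        if operational:
            continue                       # APST only targets idle states
        total = entry_ms + exit_ms
        if total > max_latency_ms:
            continue                       # too slow to wake for this system
        plan.append((name, total * idle_multiplier))
    return plan

if __name__ == "__main__":
    # With the 970 EVO's numbers: PS 3 after ~70 ms idle, PS 4 after 500 ms.
    print(apst_plan())
```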

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
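On Linux, whether APST is actually enabled for a given drive can be checked by reading NVMe Feature ID 0x0C, for example through nvme-cli. The sketch below assumes nvme-cli is installed, that the drive is /dev/nvme0, and that the tool's output contains a "Current value" field; the parsing is a best-effort assumption, since the output format varies between nvme-cli versions.

```python
# Sketch: query the Autonomous Power State Transition feature (0x0C) via
# nvme-cli and test the APSTE bit (bit 0 of the current value).
# Typically requires root. Output parsing is an assumption; adjust as needed.
import re
import subprocess

def apst_enabled(dev: str = "/dev/nvme0") -> bool:
    out = subprocess.run(
        ["nvme", "get-feature", dev, "--feature-id=0x0c"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Current value\s*:?\s*(?:0x)?([0-9a-fA-F]+)", out)
    if not match:
        raise RuntimeError(f"Could not find a current value in:\n{out}")
    return bool(int(match.group(1), 16) & 0x1)  # bit 0 = APSTE

if __name__ == "__main__":
    print("APST enabled:", apst_enabled())
```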

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled if supported.
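For reference, the sketch below shows one way to check whether a Linux system is configured the way the low-power idle test assumes: the PCIe ASPM policy permits power saving and the NVMe driver's APST latency cap has not been set to zero. The two sysfs paths are standard module parameters on recent kernels, but treat them as assumptions for your particular distribution and kernel build.

```python
# Sketch: read the platform knobs that gate NVMe idle power savings on Linux.
from pathlib import Path

ASPM_POLICY = Path("/sys/module/pcie_aspm/parameters/policy")
APST_LATENCY = Path("/sys/module/nvme_core/parameters/default_ps_max_latency_us")

def read_param(path: Path) -> str:
    return path.read_text().strip() if path.exists() else "<not available>"

if __name__ == "__main__":
    # The active ASPM policy is shown in brackets, e.g. "default [powersave] performance".
    print("PCIe ASPM policy:     ", read_param(ASPM_POLICY))
    # 0 disables APST entirely; the default is large enough to allow the deepest state.
    print("APST max latency (us):", read_param(APST_LATENCY))
```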

[Chart: Active Idle Power Consumption (No LPM)]
[Chart: Idle Power Consumption]

Active idle power draw of the 970 EVO seems to be about 20% higher than that of the preceding generation of Samsung drives, but the low-power idle we measured is about the same as most other high-end NVMe drives.

Idle Wake-Up Latency

The idle wake-up latency of the 970 EVO is more than twice that of its predecessors and also significantly higher than that of the Samsung PM981. This ~14ms latency exceeds the 8ms that the drive itself claims as its latency to wake up from its deepest sleep state.

Comments

  • cfenton - Tuesday, April 24, 2018 - link

    I've been meaning to ask about this for a while, but why do you order the performance charts based on the 'empty' results? In most of my systems, the SSDs are ~70% full most of the time. Does performance only degrade significantly if they are 100% full? If not, it seems to me that the 'full' results would be more representative of the performance most users will see.
  • Billy Tallis - Tuesday, April 24, 2018 - link

    At 70% full you're generally going to get performance closer to fresh out of the box than to 100% full. Performance drops steeply as the last bits of space are used up. At 70% full, you probably still have the full dynamic SLC cache size usable, and there's plenty of room for garbage collection and wear leveling.

    When it comes to manual overprovisioning to prevent full-drive performance degradation, I don't think I've ever seen someone recommend reserving more than 25% of the drive's usable space unless you're trying to abuse a consumer drive with a very heavy enterprise workload.
  • cfenton - Tuesday, April 24, 2018 - link

    Thanks for the reply. That's really helpful to know. I didn't even think about the dynamic SLC cache.
  • imaheadcase - Tuesday, April 24, 2018 - link

    So I'm wondering: I've got a small 8TB server I use for media/backup. While I know I'm limited by network bandwidth, would replacing the drives with SSDs make any impact at all?
  • Billy Tallis - Tuesday, April 24, 2018 - link

    It would be quieter and use less power. For media archiving over GbE, the sequential performance of mechanical drives is adequate. Incremental backups may make more random accesses, and retrieving a subset of data from your backup archive can definitely benefit from solid state performance, but it's probably not something you do often enough for it to matter.

    Even with the large pile of SSDs I have on hand, my personal machines still back up to a home server with mechanical drives in RAID.
  • gigahertz20 - Tuesday, April 24, 2018 - link

    @Billy Tallis Just out of curiosity, what backup software are you using?
  • enzotiger - Tuesday, April 24, 2018 - link

    With the exception of sequential write, there are some significant gaps between your numbers and Samsung's specs. Any clue?
  • anactoraaron - Tuesday, April 24, 2018 - link

    Honest question here. Which of these tests do more than just test the SLC cache? That's a big thing to test, as some of these other drives are MLC and won't slow down when used beyond any SLC caching.
  • RamGuy239 - Tuesday, April 24, 2018 - link

    So these are sold and marketed with IEEE1667 / Microsoft eDrive support from the get-go, unlike the Samsung 960 EVO and Pro, which had this promised only to get it at the end of their life-cycle (the latest firmware update).

    That's good and all. But does it really work? The current implementation on the Samsung 960 EVO and Pro has a major issue: it doesn't work when the disk is used as a boot drive. Samsung keeps claiming this is due to an NVMe module bug in most UEFI firmwares and that it will require motherboard manufacturers to provide a UEFI firmware update including a fix.

    Whether this is indeed true or not is hard for me to say, but that's what Samsung themselves claims over at their own support forums.

    All I know is that I can't get either my Samsung 960 EVO 1TB or my Samsung 960 Pro 1TB to use hardware encryption with BitLocker on Windows 10 when used as a boot drive on either my Asus Maximus IX Apex or my Asus Maximus X Apex, both running the latest BIOS/UEFI firmware update.

    When used as a secondary drive hardware encryption works as intended.

    With this whole mess around BitLocker/IEEE1667/Microsoft eDrive on the Samsung 960 EVO and Pro, how does it all fare with these new ones? Is it all indeed an issue with NVMe and most UEFI firmwares, requiring new UEFI firmware releases with fixes from motherboard manufacturers, or do the 970 EVO and Pro suddenly work with BitLocker as a boot drive without new UEFI firmware releases?
  • Palorim12 - Tuesday, April 24, 2018 - link

    Seems to be an issue with BIOS vendors like American Megatrends, Phoenix, etc., and Samsung has stated they are working with them to resolve the issue.
