Power Measurement Sanity Check

Our current SSD test suite is almost completely automated. There are only about a dozen points where manual intervention is required to go from plugging in the drive to having a directory full of graphs ready to be analyzed and uploaded for the review. This helps ensure the tests are highly repeatable, and makes it easier to run a drive through the 24+ hours of testing without losing too much sleep. But things can still occasionally go wrong, and that's what I assumed had happened when I first looked at the test results for the SK hynix Gold P31. The power efficiency scores were way out of the normal range for high-end NVMe drives, and I worried that something was amiss with our very fancy and expensive Quarch HD Programmable Power Module (PPM).

After completing the initial round of testing for the P31, I took several steps to validate the surprising results. First, I checked that the PPM wasn't reporting any error codes, and that the graphs were generated from the right data (rather than something like plotting measurements from the 12V supply rail for a 3.3V-only M.2 drive). Then I verified that the PPM still produced reasonable results for a drive we've previously tested, since we've been using this instrument since April 2019 without any recalibration. I put the Samsung 970 EVO Plus 1TB back on the testbed and re-ran some idle and load power measurements, which produced virtually identical results to our original measurements, confirming that the PPM is still accurately reporting instantaneous power draw.

At this point, it was looking pretty certain that the record-setting efficiency scores from the Gold P31 were genuine, but there were still a few semi-plausible failure modes that could have affected at least some of the tests. For example, on the ATSB tests we report the total Watt-hours of energy used by the drive over the course of the test. If the power log was truncated before the test finished, that could quite easily lead to a much lower total energy usage number—but the power log for The Destroyer contained a normal 7h20m of data (and most of our scripts to process the logs and generate graphs try to detect a truncated log).
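That kind of check is simple in principle. The sketch below shows the general idea rather than our actual processing script: it assumes a hypothetical headerless two-column CSV power log of (seconds, watts), integrates it into watt-hours, and raises an error if the log is shorter than expected.

    import csv

    EXPECTED_SECONDS = 7 * 3600 + 20 * 60  # The Destroyer: about 7h20m of data

    def energy_wh(path, expected_s=EXPECTED_SECONDS, tolerance=0.02):
        # Assumes a well-formed, headerless CSV of (elapsed_seconds, watts)
        # samples; the real Quarch log format is different.
        times, watts = [], []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                times.append(float(row[0]))
                watts.append(float(row[1]))
        duration = times[-1] - times[0]
        if duration < expected_s * (1 - tolerance):
            raise ValueError(f"truncated log: {duration:.0f}s of data, "
                             f"expected about {expected_s}s")
        # Trapezoidal integration gives joules; divide by 3600 for watt-hours.
        joules = sum((times[i + 1] - times[i]) * (watts[i + 1] + watts[i]) / 2
                     for i in range(len(times) - 1))
        return joules / 3600.0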

To make sure there wasn't anything really strange going on behind the scenes, I re-ran all of the Linux-based synthetic benchmarks with the Quarch Power Studio application open, graphing the power measurements in real time:

Ignoring the labels on the vertical axis, this all looks as expected. The different phases of the tests are very distinct, with the drive dropping down to reasonable idle power levels between phases, and load power steadily increasing with queue depth. The tests that involve writing data show that the drive's power consumption jumps up after the host system is finished writing, when the SSD flushes the SLC cache in the background. (Our current synthetic tests don't directly measure this phenomenon, but it's typical behavior.) But even so, the largest spikes visible at this scale (samples averaged into 131ms chunks) only hit 4W. It was at this point that I started pressing SK hynix for more details about the controller and NAND used in the P31, to see how they pulled off such impressive efficiency.
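As an aside, the chunk averaging behind that view is easy to approximate offline. This is a minimal NumPy sketch, not anything from our pipeline; the fixed sample rate and the 131 ms bin width are assumptions for illustration.

    import numpy as np

    def downsample(samples: np.ndarray, sample_rate_hz: float,
                   chunk_s: float = 0.131) -> np.ndarray:
        # Average consecutive samples into chunk_s-second bins for plotting.
        n = int(round(sample_rate_hz * chunk_s))    # samples per bin
        usable = len(samples) - (len(samples) % n)  # drop the ragged tail
        return samples[:usable].reshape(-1, n).mean(axis=1)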

[Charts: Sustained IO Performance - Random Read, Random Write, Mixed Random I/O, Sequential Read, Sequential Write, Mixed Sequential I/O]
Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

SK hynix Gold P31 1TB
NVMe Power and Thermal Management Features
Controller: SK hynix ACNT038
Firmware: 41060C20

NVMe Version | Feature                                       | Status
1.0          | Number of operational (active) power states   | 3
1.1          | Number of non-operational (idle) power states | 2
1.1          | Autonomous Power State Transition (APST)      | Supported
1.2          | Warning Temperature                           | 83°C
1.2          | Critical Temperature                          | 84°C
1.3          | Host Controlled Thermal Management            | Supported
1.3          | Non-Operational Power State Permissive Mode   | Not Supported

The power management feature set of the SK hynix Gold P31 is fairly typical. The warning and critical temperature thresholds are only a degree apart, but realistically, this SSD isn't getting anywhere near those temperatures without a lot of outside assistance. The power state transition times claimed by the P31 are pretty quick.

SK hynix Gold P31 1TB
NVMe Power States
Controller: SK hynix ACNT038
Firmware: 41060C20

Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency
PS 0        | 6.3 W         | Active      | -             | -
PS 1        | 2.4 W         | Active      | -             | -
PS 2        | 1.9 W         | Active      | -             | -
PS 3        | 50 mW         | Idle        | 1 ms          | 1 ms
PS 4        | 4 mW          | Idle        | 1 ms          | 9 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
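On Linux, you can see exactly what a drive reports by dumping its identify-controller data with nvme-cli (nvme id-ctrl /dev/nvme0 -H). Below is a rough sketch of pulling the power state table out of that output; it needs root and nvme-cli installed, and the exact text format can vary between nvme-cli versions.

    import re
    import subprocess

    def power_states(dev="/dev/nvme0"):
        # Human-readable id-ctrl output contains lines like:
        #   ps    4 : mp:0.0040W non-operational enlat:1000 exlat:9000 ...
        out = subprocess.run(["nvme", "id-ctrl", dev, "-H"],
                             capture_output=True, text=True, check=True).stdout
        pat = re.compile(r"ps\s+(\d+)\s*:\s*mp:([\d.]+)W\s+(\S+)\s+"
                         r"enlat:(\d+)\s+exlat:(\d+)")
        # Returns (state, max power in W, operational/non-operational,
        # entry latency in us, exit latency in us) tuples.
        return pat.findall(out)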

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks, and depending on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).
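To make that tradeoff concrete: the Linux NVMe driver builds its APST table by checking each non-operational state's claimed latencies against a configurable budget (nvme_core.default_ps_max_latency_us, 100,000 µs by default). The sketch below is a heavily simplified illustration of that selection logic, not the actual driver code.

    def deepest_usable_state(idle_states, max_latency_us=100_000):
        # idle_states: (name, entry_us, exit_us) tuples, ordered from
        # shallowest to deepest non-operational state.
        best = None
        for name, entry_us, exit_us in idle_states:
            if entry_us + exit_us <= max_latency_us:
                best = name  # deeper states come later, so keep the last fit
        return best

    # The P31's idle states from the table above fit easily: PS4 costs
    # 1 ms entry + 9 ms exit, well under the 100 ms default budget.
    print(deepest_usable_state([("PS3", 1000, 1000), ("PS4", 1000, 9000)]))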

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but rarely achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.
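On a Linux system, it only takes a couple of sysfs reads to see roughly which of those configurations a machine is in. This is a quick sketch assuming standard paths on a recent kernel, not a portable diagnostic tool:

    from pathlib import Path

    def read_param(path):
        p = Path(path)
        return p.read_text().strip() if p.exists() else "(not available)"

    # The active ASPM policy is the bracketed entry, e.g. "default [powersave] ..."
    print("PCIe ASPM policy:",
          read_param("/sys/module/pcie_aspm/parameters/policy"))
    # The APST latency budget; setting it to 0 disables APST entirely.
    print("NVMe APST latency budget (us):",
          read_param("/sys/module/nvme_core/parameters/default_ps_max_latency_us"))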

Note: We recently upgraded our power measurement equipment and switched to measuring idle power on our Coffee Lake desktop, our first SSD testbed to have fully-functional PCIe power management. The below measurements are all new, and are not a perfect match for the older measurements in our previous reviews and the Bench database.

[Charts: Idle Power Consumption - No PM, Desktop, and Laptop]

The SK hynix Gold P31 has fairly low active idle power consumption: even with PCIe link power management disabled, it doesn't take much to keep this controller awake. The intermediate idle power level that would be typical for many desktop systems is unimpressive, but 87mW is by no means a problem. With all the power management features turned on, as should be the case for any properly-configured laptop, the P31's 3mW is competitive (and differences of one or two mW really don't matter here).

Idle Wake-Up Latency

The SK hynix Gold P31 takes just under 6 ms to wake up from its deepest idle state, which is one of the fastest wake-up times we've measured from a drive that successfully enters a deep sleep state when instructed to. Wake-up latencies an order of magnitude higher are still common on many drives, especially those with Silicon Motion NVMe controllers. We were unable to measure any significant latency difference between our "desktop idle" settings and the active idle settings that disable all PCIe link power management; on other drives, that difference has amounted to mere tens of microseconds.
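For anyone who wants a rough software-side version of this measurement: let the drive sit idle long enough for APST to kick in, then time a single small uncached read. The sketch below is an approximation of the idea rather than our methodology; it needs root, the device path and idle period are placeholders, and the timing includes software stack overhead on top of the drive's own exit latency.

    import mmap
    import os
    import time

    def wake_latency_ms(dev="/dev/nvme0n1", idle_s=30, size=4096):
        buf = mmap.mmap(-1, size)  # page-aligned buffer, required for O_DIRECT
        fd = os.open(dev, os.O_RDONLY | os.O_DIRECT)
        try:
            time.sleep(idle_s)  # give APST time to drop into a deep idle state
            t0 = time.perf_counter()
            os.preadv(fd, [buf], 0)  # one uncached 4 KiB read from LBA 0
            return (time.perf_counter() - t0) * 1000
        finally:
            os.close(fd)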

Comments
  • vladx - Thursday, August 27, 2020 - link

    I have a SX8200 Pro on my laptop, do I need to enable the laptop Power Management state or is it detected automatically by the firmware?
  • Billy Tallis - Thursday, August 27, 2020 - link

    That really depends on what combination of firmware and driver bugs the laptop vendor gave you. But in theory, if the machine originally came with a M.2 NVMe drive, it should have been configured for proper power management and should continue to work well with an aftermarket SSD that doesn't bring any new power management bugs. I think the SX8200 Pro is okay on that score; the slow wake-up times shouldn't prevent the system from trying to use the deep idle states because the drive still promises the OS that it will have reasonable wake-up times.
  • vladx - Thursday, August 27, 2020 - link

    My laptop is an MSI Creator 17 that came with a Samsung PM981 drive. Could HWiNFO offer any help in identifying the active power states?
  • Billy Tallis - Thursday, August 27, 2020 - link

    I'm not sure. I think you can figure out what PCIe power management settings are being used by digging through the PCI configuration space, but I'm not sure how easy it is to get that info while running Windows. As for the NVMe power management settings, my understanding is that it's impossible or very nearly impossible to access that information under Windows, at least with the usual NVMe drivers. The only reliable way I know of to confirm that everything is working correctly to get your SSD idling below 10mW is to have expensive power measurement equipment.
  • vladx - Thursday, August 27, 2020 - link

    Ok thanks, Billy. I was going to install Fedora anyways as secondary OS so I guess I'll try the Linux route then.
  • MrCommunistGen - Thursday, August 27, 2020 - link

    vladx, I'm really interested in how you go about trying to tease the NVMe power management info out of the drive. I did some internet searches a while back and didn't find anything definitive that I was able to follow and get results from. I've only ever used Debian-based distros, but if you're able to figure it out in Fedora then at least I'll know it is possible.
  • Foeketijn - Thursday, August 27, 2020 - link

    Did it happen? Did Samsung finally get an actual competitor? It doesn't really beat the 970 EVO that much, so the 970 Pro would still be better, but not at this price point, and definitely not with this power usage.
    Last time Intel did that, Samsung suddenly woke up and beat them down again to a place where they've stayed since.
    Interesting to see what the new EVO and Pro lines will bring.
    Not high-margin prices this time around, I guess.
  • LarsBolender - Thursday, August 27, 2020 - link

    This has to be one of the most positive AnandTech articles I have read in years. Good job SK Hynix!
  • Luminar - Thursday, August 27, 2020 - link

    No recommendation sticker, though.
  • Zan Lynx - Thursday, August 27, 2020 - link

    It would be handy if you could add a power loss consistency test. I have a Dell with an older hynix NVMe and one time the battery ran down in the bag, and on reboot its btrfs was corrupt.

    Imagine these are sequence numbers in metadata blocks.
    Correct: 10 12 22 30
    Actual: 10 12 11 30

    The hynix had committed writes for SOME of the blocks but a few in the middle of the update chain were old versions of the data. According to btrfs flush rules that is un-possible. Which means that the drive reported a successful write for 22 and for 30 but after powerloss recovery it lost that write for 22 and reverted to an older block.

    I mean, that's better than some of the older flash drives that would trash the entire FTL and lose all the data. But it is not exactly GOOD.

    I'm pretty sure Samsung consumer drives will also lose the data but at least they will revert all of the writes following the lost data, so in my example it would revert write 30 also. That would at least leave things in a logically consistent state.
