Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Crucial P1 NVMe Power and Thermal Management Features
Controller: Silicon Motion SM2263 | Firmware: P3CR010

NVMe Version | Feature                                       | Status
1.0          | Number of operational (active) power states   | 3
1.1          | Number of non-operational (idle) power states | 2
             | Autonomous Power State Transition (APST)      | Supported
1.2          | Warning Temperature                           | 70 °C
             | Critical Temperature                          | 80 °C
1.3          | Host Controlled Thermal Management            | Supported
             | Non-Operational Power State Permissive Mode   | Not Supported

The Crucial P1 includes a fairly typical feature set for a consumer NVMe SSD, with two idle states that should both be quick to get in and out of. The three different active power states probably make little difference in practice, because even in our synthetic benchmarks the P1 seldom draws more than 3-4W.

Crucial P1 NVMe Power States
Controller: Silicon Motion SM2263 | Firmware: P3CR010

Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency
PS 0        | 9 W           | Active      | -             | -
PS 1        | 4.6 W         | Active      | -             | -
PS 2        | 3.8 W         | Active      | -             | -
PS 3        | 50 mW         | Idle        | 1 ms          | 1 ms
PS 4        | 4 mW          | Idle        | 6 ms          | 8 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
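
As a rough illustration of how a host could weigh these advertised figures, the sketch below estimates which of the P1's idle states costs the least energy over idle gaps of various lengths. The power and latency values come from the tables above, but the 2 W active-idle figure and the assumption that the drive draws that same power during state transitions are purely illustrative guesses, not measurements from this review.

```python
# Break-even sketch for the Crucial P1's advertised idle states.
# Idle power and entry/exit latencies come from the power state table above;
# the 2 W "active idle" figure and the assumption that transitions draw that
# same power are illustrative guesses, not measured values.

ACTIVE_IDLE_MW = 2000.0  # assumed draw while awake and during transitions (mW)

# (name, idle power in mW, entry latency in ms, exit latency in ms)
IDLE_STATES = [
    ("PS 3", 50.0, 1.0, 1.0),
    ("PS 4", 4.0, 6.0, 8.0),
]

def energy_mws(gap_ms: float, idle_mw: float, entry_ms: float, exit_ms: float) -> float:
    """Energy (mW*ms) spent over an idle gap if the drive enters this state."""
    transition_ms = entry_ms + exit_ms
    if gap_ms <= transition_ms:          # gap too short to finish the round trip
        return ACTIVE_IDLE_MW * gap_ms   # effectively stays at active idle
    return ACTIVE_IDLE_MW * transition_ms + idle_mw * (gap_ms - transition_ms)

for gap in (5, 50, 500, 5000):           # candidate idle gap lengths in ms
    best = min(IDLE_STATES, key=lambda s: energy_mws(gap, *s[1:]))
    print(f"{gap:>5} ms idle gap -> cheapest state: {best[0]}")
```

Under these assumptions PS 3 is the cheaper choice for short gaps, and PS 4 only pays off once an idle gap stretches to roughly half a second or more, which is one reason an OS typically waits far longer than the advertised latencies before dropping to the deepest state.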

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
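
For those curious about the mechanics, APST is configured by handing the drive a table that pairs each non-operational power state with an idle time before the transition (Feature Identifier 0Ch in the NVMe spec). The sketch below packs such a table; the bit layout reflects my reading of the spec, and the 100 ms and 2000 ms timeouts are arbitrary example values rather than what any real driver programs.

```python
import struct

# Sketch: build the 256-byte Autonomous Power State Transition table that a
# host would hand to Set Features (Feature ID 0x0C). Field layout follows my
# reading of the NVMe spec: bits 7:3 = idle transition power state,
# bits 31:8 = idle time prior to transition in milliseconds.

def apst_entry(power_state: int, idle_time_ms: int) -> int:
    """Pack one 64-bit APST table entry."""
    return ((idle_time_ms & 0xFFFFFF) << 8) | ((power_state & 0x1F) << 3)

# Example policy for the Crucial P1's table above: drop to PS 3 after 100 ms
# of idleness, then to PS 4 after 2000 ms (illustrative values only).
entries = [apst_entry(3, 100), apst_entry(4, 2000)]
entries += [0] * (32 - len(entries))          # the table always holds 32 entries

apst_table = struct.pack("<32Q", *entries)    # 256-byte little-endian payload
assert len(apst_table) == 256
```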

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled if supported.
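
For readers who want to check a Linux system's configuration themselves, the following sketch reads the module parameters that commonly govern ASPM and APST behavior. The sysfs paths are what I would expect on a recent kernel, and the description of the APST latency cap reflects my understanding of the driver's heuristic, so treat this as illustrative rather than a description of our test procedure.

```python
from pathlib import Path

# Sketch: inspect the knobs that commonly govern PCIe ASPM and NVMe APST on
# Linux. Paths assume a recent kernel; if they are absent, "not present" is
# reported rather than raising an error.

def read_param(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else "not present"

# Global PCIe ASPM policy (e.g. default / performance / powersave);
# L1.2 generally requires a policy that leaves ASPM enabled.
print("ASPM policy:", read_param("/sys/module/pcie_aspm/parameters/policy"))

# APST latency cap in microseconds: as I understand the driver, idle states
# with a larger total transition latency are left out of the APST table.
print("APST latency cap (us):",
      read_param("/sys/module/nvme_core/parameters/default_ps_max_latency_us"))
```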

Active Idle Power Consumption (No LPM)
Idle Power Consumption

The idle power consumption numbers from the Crucial P1 match the pattern seen with other recent Silicon Motion platforms. The active idle draw is a bit higher for the P1 than the 660p due to the latter having less DRAM, but both do very well when put to sleep.

Idle Wake-Up Latency

The measured wake-up latency of over 73 ms for the Crucial P1 is fairly high, and much worse than the latencies the drive advertises to the operating system. This could lead to responsiveness problems if the OS is misled into choosing an overly aggressive power management strategy.
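
Our wake-up latency numbers come from a dedicated test harness, but the basic idea can be sketched in a few lines: let the drive sit idle long enough to reach its deepest state, then time how long the first small read takes. The device path, idle delay, and cache handling below are illustrative assumptions, not our actual methodology.

```python
import os, time

# Rough sketch of an idle wake-up latency check on Linux: let the drive go
# idle long enough to (hopefully) reach its deepest power state, then time a
# small read. Device path, delay, and read size are illustrative assumptions.

DEV = "/dev/nvme0n1"      # hypothetical device node; requires root
IDLE_SECONDS = 30         # long enough to pass any plausible APST timeout
READ_BYTES = 4096

fd = os.open(DEV, os.O_RDONLY)
try:
    os.pread(fd, READ_BYTES, 0)                        # warm-up access
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED) # try to drop cached copy
    time.sleep(IDLE_SECONDS)                           # let the drive idle down

    start = time.perf_counter()
    os.pread(fd, READ_BYTES, 0)                        # first command after idle
    wake_ms = (time.perf_counter() - start) * 1000
    print(f"wake-up read latency: {wake_ms:.1f} ms")
finally:
    os.close(fd)
```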

Comments

  • DanNeely - Thursday, November 8, 2018 - link

    When DDR2 went mainstream, they stopped making DDR1 DIMMs. The DIMMs you could still find for sale a few years later were old ones where you were paying not just the original cost of making them, but the cost of keeping them in a warehouse for several years before you bought them. Individual RAM chips continued to be made for a while longer on legacy processes for embedded use, but because the same old mature processes were still being used there was no scope for newer tech to cut costs, and lower volumes meant a loss of economies of scale, so the embedded world also had to pay more until it upgraded to newer standards.
  • Oxford Guy - Thursday, November 8, 2018 - link

    The point was:

    "QLC may lead to higher TLC prices, if TLC volume goes down and/or gets positioned as a more premium product as manufacturers try to sell us QLC."

    Stopping production leads to a volume drop, eh?
  • romrunning - Thursday, November 8, 2018 - link

    "There is a low-end NVMe market segment with numerous options, but they are all struggling under the pressure from more competitively priced high-end NVMe SSDs."

    I really wish all NVMe drives kept a higher base performance level. QLC should have died on the vine. I get the technical advances, but I prefer advances that increase performance, not ones that are worse than their predecessors. The price savings, when they're actually there, aren't worth the trade-offs.
  • Flunk - Thursday, November 8, 2018 - link

    In a year or two there are going to be QLC drives faster than today's TLC drives. It just takes time to develop a new technology.
  • Oxford Guy - Thursday, November 8, 2018 - link

    Faster to decay, certainly.

    As I understand it, it's impossible, due to physics, to make QLC faster than TLC, just as it's impossible to make TLC faster than MLC. Just as it's impossible to make MLC faster than SLC.

    Workarounds to mask the deficiencies aren't the same thing. The only benefit to going beyond SLC is density, as I understand it.
  • Billy Tallis - Thursday, November 8, 2018 - link

    Other things being equal, MLC is faster than TLC and so on. But NAND flash memory has been evolving in ways other than changing the number of bits stored per cell. Micron's 64L TLC is faster than their 32L MLC, not just denser and cheaper. I don't think their 96L or 128L QLC will end up being faster than 64L TLC, but I do think it will be faster than their 32L or 16nm planar TLC. (There are some ways in which increased layer count can hurt performance, but in general those effects have been offset by other performance increases.)
  • Oxford Guy - Thursday, November 8, 2018 - link

    "Other things being equal, MLC is faster than TLC and so on"

    So, other than density, there is no benefit to going beyond SLC, correct?
  • Billy Tallis - Thursday, November 8, 2018 - link

    Pretty much. If you can afford to pay for SLC and a controller with enough channels and chip enable lines, then you could have a very nice SSD for a very unreasonable price. When you're constrained to a SATA interface there's no reason not to store at least three bits per cell, and even for enterprise NVMe SSDs there are only a few workloads where the higher performance of SLC is cost-effective.
  • Great_Scott - Monday, November 12, 2018 - link

    They should drop the SLC emulation and just sell the drive as an SLC drive. Sure, there may be some performance left on the table due to the limits of the NVMe interface, but the longevity would be hugely attractive to some users.

    They'd make more money too, since they could better justify higher costs that way. In fact, with modern Flash they might be able to get much the same benefit from MLC organization and have roughly half the drive space instead of 25%.
  • Lolimaster - Friday, November 9, 2018 - link

    Don't confuse better algorithms for the simulated SLC cache and DRAM with actual "performance"; start crushing that simulated cache and the TLC goes to trash.
