Power Management

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review account for only a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.
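For a sense of what that configuration looks like in practice, the SATA link power management policy is exposed per host through sysfs on a Linux system. Below is a minimal sketch for checking and changing it; the helper names are illustrative, and writing a new policy requires root.

```python
# Minimal sketch: read (and optionally set) the per-host SATA link power
# management policy exposed by the Linux AHCI driver. Typical values are
# max_performance, medium_power, med_power_with_dipm and min_power;
# whether min_power actually reaches the drive's deepest state depends on
# the platform.
from pathlib import Path

SCSI_HOSTS = Path("/sys/class/scsi_host")

def sata_lpm_policies():
    """Return {host: policy} for every host that exposes the attribute."""
    return {
        attr.parent.name: attr.read_text().strip()
        for attr in SCSI_HOSTS.glob("host*/link_power_management_policy")
    }

def set_sata_lpm(host, policy="min_power"):
    """Write a new policy for one host (requires root)."""
    (SCSI_HOSTS / host / "link_power_management_policy").write_text(policy)

if __name__ == "__main__":
    for host, policy in sorted(sata_lpm_policies().items()):
        print(f"{host}: {policy}")
```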

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
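On Linux, for example, that policy boils down to a single latency budget: the driver only enables autonomous transitions into non-operational power states whose combined entry and exit latency fits under the nvme_core default_ps_max_latency_us module parameter. A rough sketch of how to inspect that budget and the power states a controller advertises is below; the nvme-cli call is optional, needs root, and its output format can differ between versions.

```python
# Rough sketch: inspect the knobs behind APST on a Linux system.
# default_ps_max_latency_us is the kernel's latency budget -- only
# non-operational power states whose entry + exit latency fits under it
# are used for autonomous transitions.
import subprocess
from pathlib import Path

APST_BUDGET = Path("/sys/module/nvme_core/parameters/default_ps_max_latency_us")

def apst_latency_budget_us():
    """Latency budget (microseconds) the kernel allows for APST transitions."""
    return int(APST_BUDGET.read_text())

def advertised_power_states(dev="/dev/nvme0"):
    """Power states reported by `nvme id-ctrl` (requires nvme-cli and root);
    the line format varies slightly between nvme-cli versions."""
    out = subprocess.run(["nvme", "id-ctrl", dev],
                         capture_output=True, text=True, check=True).stdout
    return [line.strip() for line in out.splitlines() if line.strip().startswith("ps ")]

if __name__ == "__main__":
    print(f"APST latency budget: {apst_latency_budget_us()} us")
    for ps in advertised_power_states():
        print(ps)
```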

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled (when supported).
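On a Linux machine the PCIe half of that low-power configuration can be checked directly: the ASPM policy is a module parameter, and powersupersave is the setting that permits the L1 substates, including L1.2. A short sketch with illustrative function names:

```python
# Short sketch: check whether the platform policy allows the PCIe L1
# substates (L1.1/L1.2). The active policy is shown in [brackets], e.g.
# "default performance powersave [powersupersave]".
from pathlib import Path

ASPM_POLICY = Path("/sys/module/pcie_aspm/parameters/policy")

def aspm_policy():
    return ASPM_POLICY.read_text().strip()

def l1_substates_allowed():
    # powersupersave is the policy that enables the L1 substates.
    return "[powersupersave]" in aspm_policy()

if __name__ == "__main__":
    print(f"{aspm_policy()} -> L1 substates allowed: {l1_substates_allowed()}")
```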

[Chart: Active Idle Power Consumption (No LPM)]
[Chart: Idle Power Consumption]

[Chart: Idle Wake-Up Latency]

Comments

  • Notmyusualid - Sunday, December 17, 2017 - link

    So, when you are at gunpoint, in a corner, you finally concede defeat?

    I think you need professional help.
  • tuxRoller - Friday, December 15, 2017 - link

    If you are staying with a single-threaded submission model, Windows may well have a decent-sized advantage with both IOCP and RIO. Linux kernel AIO is just such a crapshoot that it's really only useful if you run big databases and set it up properly.
  • IntelUser2000 - Friday, December 15, 2017 - link

    "Lower power consumption will require serious performance compromises.

    Don't hold your breath for a M.2 version of the 900p, or anything with performance close to the 900p. Future Optane products will require different controllers in order to offer significantly different performance characteristics"

    Not necessarily. Optane Memory devices show that random performance is on par with the 900P. It's the sequential throughput that limits top-end performance.

    While it's plausible that load power consumption might be impacted by performance, that isn't necessarily true for idle. Idle power consumption can be cut significantly (to tens of mW) by using a new controller. It's reasonable to assume the 900P uses a controller derived from the 750's, which is also power hungry.
  • p1esk - Friday, December 15, 2017 - link

    Wait, I don't get it: the operation is much simpler than with flash (no garbage collection, no caching, etc.), so the controller should be simpler. Then why does it consume more power?
  • IntelUser2000 - Friday, December 15, 2017 - link

    You are still confusing load power consumption with idle power consumption. What you said makes sense for load, when it's active, not for idle.

    Optane Memory devices having 1/3rd the idle power demonstrates that it's due to the controller. They likely wanted something with a short TTM (time to market), so they chose whatever controller they had and retrofitted it.
  • rahvin - Friday, December 15, 2017 - link

    Optane's very nature as a heat-based phase-change material is always going to result in higher power use than NAND, because it's always going to take more energy to heat a material up than it would to create a magnetic or electric field.
  • tuxRoller - Saturday, December 16, 2017 - link

    That same nature also means that it will require less energy per reset as the process node shrinks (roughly E ~ 1/F).
    In general, PCM is much more amenable to process scaling than NAND.
  • CheapSushi - Friday, December 15, 2017 - link

    Keep in mind a big part of the sequential throughput limit is the fact that the Optane M.2s use two PCIe lanes (x2), while this add-in card is x4. Most NAND M.2 sticks are x4 as well.
  • twotwotwo - Friday, December 15, 2017 - link

    I'm curious whether it's possible to get more IOPS doing random 512B reads, since that's the sector size this advertises.

    When the description of the memory tech itself came out, bit addressability--not having to read any minimum block size--was a selling point. But it may be that the controller isn't actually capable of reading any more 512B blocks/s than 4KB ones, even if the memory and the bus could handle it.

    I don't think any additional IOPS you get from smaller reads would help most existing apps, but if you were, say, writing a database you wanted to run well on this stuff, it'd be interesting to know whether small reads help.
  • tuxRoller - Friday, December 15, 2017 - link

    Those latencies seem pretty high. Was this with Linux or Windows? The table on page one indicates both were used.
    Can you run a few of these tests against a loop-mounted RAM block device? I'm curious to see what the min, average, and standard deviation of latency look like when the block layer is involved.
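A minimal sketch of the kind of probe that comment describes: random 4 KiB O_DIRECT reads against a RAM-backed block device, reporting the min, mean, and standard deviation of latency. The /dev/ram0 path is an assumption; substitute whatever loop or brd device you set up, and run it as root.

```python
# Minimal sketch: measure block-layer read latency against a RAM-backed
# block device using O_DIRECT (bypasses the page cache). /dev/ram0 is an
# assumed device path -- use your own loop/brd device.
import mmap, os, random, statistics, time

DEV = "/dev/ram0"   # assumed RAM-backed block device
BLOCK = 4096        # O_DIRECT needs an aligned size, offset and buffer
SAMPLES = 10_000

def probe(dev=DEV, block=BLOCK, samples=SAMPLES):
    fd = os.open(dev, os.O_RDONLY | os.O_DIRECT)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # block device size in bytes
        buf = mmap.mmap(-1, block)            # anonymous mapping: page-aligned
        lat_us = []
        for _ in range(samples):
            offset = random.randrange(size // block) * block
            t0 = time.perf_counter_ns()
            os.preadv(fd, [buf], offset)
            lat_us.append((time.perf_counter_ns() - t0) / 1000)
        return min(lat_us), statistics.mean(lat_us), statistics.pstdev(lat_us)
    finally:
        os.close(fd)

if __name__ == "__main__":
    lo, avg, sd = probe()
    print(f"min {lo:.1f} us  mean {avg:.1f} us  stdev {sd:.1f} us")
```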
