Power Management

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
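The tradeoff described above can be made concrete with a simple energy model: entering a sleep state only pays off if the drive stays idle long enough for the power saved while asleep to outweigh the energy spent transitioning. The sketch below is illustrative only; the power and latency figures are hypothetical, not specifications of any particular drive.

```python
def break_even_idle_s(active_mw, sleep_mw, transition_mw, entry_ms, exit_ms):
    """Minimum idle duration (seconds) for which entering the sleep state
    saves energy, under a simple model: the drive draws transition_mw during
    the entry and exit transitions, sleep_mw while asleep, and active_mw if
    it stays awake the whole time.
    """
    t_tr = (entry_ms + exit_ms) / 1000.0
    # Break-even when: active_mw * t == transition_mw * t_tr + sleep_mw * (t - t_tr)
    return t_tr * (transition_mw - sleep_mw) / (active_mw - sleep_mw)

# Hypothetical NVMe drive: 1.2 W active idle, 10 mW asleep, ~1 W during
# transitions, 100 ms entry latency and 50 ms exit latency.
print(break_even_idle_s(1200, 10, 1000, 100, 50))  # ~0.125 s
```

Under these assumed numbers, idle periods longer than roughly an eighth of a second favor sleeping; an APST policy would set its idle timeout somewhere above that point.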

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled.
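On Linux, the features involved in the second measurement can be inspected from userspace. This is a config-inspection sketch, not part of our test procedure; it assumes a Linux system with nvme-cli installed and a drive at /dev/nvme0.

```shell
# PCIe ASPM policy; L1 substates such as L1.2 generally require
# "powersave" or "powersupersave" plus platform support.
cat /sys/module/pcie_aspm/parameters/policy

# The drive's advertised power states and their entry/exit latencies
# (lines beginning with "ps" in the controller identify data).
nvme id-ctrl /dev/nvme0 | grep -i '^ps'

# The APST table the kernel programmed (APST is NVMe feature ID 0x0c).
nvme get-feature /dev/nvme0 -f 0x0c -H
```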

Active Idle Power Consumption (No LPM)

Idle Power Consumption

Idle Wake-Up Latency

The Optane SSD 800p has an unusual suite of power management capabilities. Previous Optane products have not implemented any low-power sleep states, giving them quite high idle power consumption but entirely avoiding the latency of waking up from sleep. The 800p implements a single low-power sleep state, whereas most NVMe SSDs offer at least two or three idle states with progressively lower power consumption in exchange for higher latency to enter and leave them. On the other hand, the 800p has three tiers of active power levels, so devices with strict power or thermal limits can constrain the 800p when properly configured.

Unfortunately, our usual idle power testing method didn't work with the 800p, leading it to show only a modest reduction in power rather than a reduction of multiple orders of magnitude. This may be related to the fact that the Optane SSD 800p indicates that it may take over a full second to enter its idle state. This is an unusually high entry latency, and something in our system configuration is likely preventing the 800p from fully transitioning to idle. We will continue to investigate this issue. However, based on the specifications alone, it looks like the 800p could benefit from an intermediate idle state that can be accessed more quickly.

(I should mention here that the last Intel consumer SSD we reviewed, the 760p, also initially showed poor power management on our test. We were eventually able to track this down to an artifact of our test procedure, and determined that the 760p's power management was unlikely to malfunction during real-world usage. The 760p now ranks as the NVMe SSD with the lowest idle power we've measured.)


116 Comments

  • MrSpadge - Friday, March 9, 2018 - link

    Did you ever have an SSD run out of write cycles? I've personally only witnessed one such case (old 60 GB drive from 2010, old controller, being almost full all the time), but numerous other SSD deaths (controller, Sandforce or whatever).
  • name99 - Friday, March 9, 2018 - link

    I have an SSD that SMART claims is at 42%. I'm curious to see how this plays out over the next three years or so.

    But yeah, I'd agree with your point. I've had two SSDs so far fail (many fewer than HDs, but of course I've owned many more HDs and for longer) and both those failures were inexplicable randomness (controller? RAM?) but they certainly didn't reflect the SSD running out of write cycles.

    I do have some very old (heavily used) devices that are flash based (iPod nano 3rd gen) and they are "failing" in the expected SSD fashion --- getting slower and slower, and can be goosed with some speed for another year by giving them a bulk erase. Meaning that it does seem that SSD "wear-out" failure (when everything else is reliable) happens as claimed --- the device gets so slow that at some point you're better off just moving to a new one --- but it takes YEARS to get there, and you get plenty of warning, not unexpected medium failure.
  • MonkeyPaw - Monday, March 12, 2018 - link

    The original Nexus 7 had this problem, I believe. Those things aged very poorly.
  • 80-wattHamster - Monday, March 12, 2018 - link

    Was that the issue? I'd read/heard that Lollipop introduced a change to the cache system that didn't play nicely with Tegra chips.
  • sharath.naik - Sunday, March 11, 2018 - link

    The endurance listed here is barely better than MLC's. It is nowhere close to even SLC's.
  • Reflex - Thursday, March 8, 2018 - link

    https://www.theregister.co.uk/2016/02/01/xpoint_ex...

    I know ddriver can't resist continuing to use 'hypetane' but seriously looking at this article, Optane appears to be a win nearly across the board. In some cases quite significantly. And this is with a product that is constrained in a number of ways. Prices also are starting at a much better place than early SSD's did vs HDD's.

    Really fantastic early results.
  • iter - Thursday, March 8, 2018 - link

    You need to lay off whatever you are abusing.

    Fantastic results? None of the people who can actually benefit from its few strong points are rushing to buy. And for everyone else Intel is desperately flogging it at, it is a pointless waste of money.

    Due to its failure to deliver on expectations and promises, it is doubtful Intel will any time soon allocate the manufacturing capacity it would require to make it competitive with NAND, especially given its awful density. At this time Intel is merely trying to make up for the money they put into making it. Nobody denies the strong low queue depth reads, but that ain't enough to make it into a money maker. Especially not when a more performant alternative has been available since before Intel announced XPoint.
  • Alexvrb - Thursday, March 8, 2018 - link

    Most people ignore or gloss over the strong low QD results, actually. Which is ironic given that most of the people crapping all over them for having the "same" performance (read: bars in extreme benchmarks) would likely benefit from improved performance at low QD.

    With that being said capacity and price are terrible. They'll never make any significant inroads against NAND until they can quadruple their current best capacity.
  • Reflex - Thursday, March 8, 2018 - link

    Alex - I'm sure they are aware of that. I just remember how consumer NAND drives launched, the price/perf was far worse than this compared to HDD's, and those drives still lost in some types of performance (random read/write for instance) despite the high prices. For a new tech, being less than 3x while providing across the board better characteristics is pretty promising.
  • Calin - Friday, March 9, 2018 - link

    SSDs never had a random R/W problem compared to magnetic disks, not even if you compared them by price to RAIDs and/or SCSI server drives. What problem they might have had at the beginning was in sequential read (and especially write) speed. Current sequential write speeds for hard drives are limited by the rpm of the drive, and they reach around 150MB/s for a 7200 rpm 1TB desktop drive. Meanwhile, the Samsung 480 EVO SSD at 120GB (a good second or third generation SSD) reaches some 170MB/s sequential write.
    Where magnetic rotational disk drives suffer a 100 times reduction in performance is random write, while the SSD hardly cares. This is due to the awful access time of hard drives (move the heads and wait for the rotation of the disks to bring the data below the read/write heads) --- that's 5-10 milliseconds of wait time for each new operation.
