Power Management

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff: lower-power states take longer to enter and wake up from, so the best choice of power states may differ between desktops and notebooks.
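The tradeoff described above can be put in rough quantitative terms: a deeper idle state only saves energy if the drive stays idle long enough to repay the energy spent transitioning in and out. The sketch below illustrates that break-even calculation with hypothetical figures (none of these numbers are measurements from this review, and the simple model assumes the transition time is spent at a single fixed power level).

```python
# Sketch of the APST tradeoff: a deeper idle state only pays off when
# the drive stays idle long enough to amortize the transition cost.
# All numbers here are illustrative assumptions, not measured values.

def break_even_idle_s(p_active_idle_w, p_sleep_w,
                      t_enter_s, t_exit_s, p_transition_w):
    """Minimum idle duration for the sleep state to save energy.

    During t_enter + t_exit the drive draws p_transition_w instead of
    p_sleep_w, so the savings rate (p_active_idle - p_sleep) must run
    long enough to repay that extra transition energy.
    """
    extra_j = (t_enter_s + t_exit_s) * (p_transition_w - p_sleep_w)
    savings_w = p_active_idle_w - p_sleep_w
    return extra_j / savings_w

# Hypothetical NVMe drive: 1.2 W active idle, 5 mW in deep sleep,
# 2 ms to enter, 10 ms to wake, ~1 W drawn while transitioning.
t = break_even_idle_s(1.2, 0.005, 0.002, 0.010, 1.0)
print(f"break-even idle duration: {t * 1000:.1f} ms")
```

With these assumed figures the deep state pays for itself after roughly ten milliseconds of idle time, which is why an aggressive APST timeout is reasonable for a notebook but a desktop policy may simply leave the drive at active idle.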

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled.

Active Idle Power Consumption (No LPM)

Idle Power Consumption

Idle Wake-Up Latency

The Optane SSD 800p has an unusual set of power management capabilities. Previous Optane products have not implemented any low-power sleep states, giving them quite high idle power consumption but entirely avoiding the latency of waking up from a sleep state. The 800p implements a single low-power sleep state, while most NVMe SSDs with multiple power states offer at least two or three idle states with progressively lower power consumption in exchange for higher latency to enter or leave the sleep state. On the other hand, the 800p has three tiers of active power levels, so devices with strict power or thermal limits can constrain the 800p when properly configured.

Unfortunately, our usual idle power testing method didn't work with the 800p, leading it to show only a modest reduction in power rather than a reduction of multiple orders of magnitude. This may be related to the fact that the Optane SSD 800p indicates that it may take over a full second to enter its idle state. This is an unusually high entry latency, and something in our system configuration is likely preventing the 800p from fully transitioning to idle. We will continue to investigate this issue. However, based on the specifications alone, it looks like the 800p could benefit from an intermediate idle state that can be accessed more quickly.
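A transition latency of over a second changes the economics of the 800p's lone sleep state considerably. The sketch below compares the energy spent across an idle gap when the drive attempts the sleep state versus simply staying at active idle; the power and latency figures are illustrative assumptions, not measurements of the 800p.

```python
# Why a >1 s entry latency hurts: with hypothetical figures, compare
# the energy over an idle gap when the drive transitions into a deep
# sleep state versus staying at active idle the whole time.
# Assumed numbers: 1.0 W active idle, 10 mW asleep, 1.2 s to enter,
# 0.1 s to wake. None of these are measured 800p values.

def gap_energy_j(gap_s, p_active_w, p_sleep_w, t_enter_s, t_exit_s):
    """Energy over an idle gap if the drive attempts the sleep state.

    Transition time is assumed to be spent at active power; if the gap
    is shorter than the transition overhead, the drive never sleeps.
    """
    overhead = t_enter_s + t_exit_s
    if gap_s <= overhead:
        return gap_s * p_active_w
    return overhead * p_active_w + (gap_s - overhead) * p_sleep_w

for gap in (0.5, 2.0, 10.0):
    sleep_j = gap_energy_j(gap, p_active_w=1.0, p_sleep_w=0.01,
                           t_enter_s=1.2, t_exit_s=0.1)
    stay_j = gap * 1.0
    print(f"{gap:5.1f} s gap: sleep {sleep_j:.3f} J vs stay awake {stay_j:.3f} J")
```

Under these assumptions, idle gaps shorter than the transition overhead save nothing at all, which is consistent with the argument that the 800p would benefit from an intermediate idle state it can reach much faster.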

(I should mention here that the last Intel consumer SSD we reviewed, the 760p, also initially showed poor power management on our test. We were eventually able to track this down to an artifact of our test procedure, and determined that the 760p's power management was unlikely to malfunction during real-world usage. The 760p now ranks as the NVMe SSD with the lowest idle power we've measured.)

Comments

  • Reflex - Saturday, March 10, 2018 - link

    I also think people forget how crappy & expensive gen1 and gen2 SSDs were.
  • Drazick - Friday, March 9, 2018 - link

    We really need these in U.2 / SATA Express form.
    Desktop users shouldn't use M.2 with all its thermal limitations.
  • jabber - Friday, March 9, 2018 - link

    Whichever connector you use or whatever the thermals, once you go above 600MBps the real world performance difference is very hard to tell in most cases. We just need SATA4 and we can dump all these U2/SATA Express sockets. M.2 for compactness and SATA4 for everything else non Enterprise. Done.
  • Reflex - Friday, March 9, 2018 - link

    U.2 essentially is next-gen SATA. There is no SATA4 on the way. SATA is at this point an 18-year-old specification ripe for retirement. There is also nothing wrong with M.2 even in desktops. Heat spreaders aren't a big deal in that scenario. All that's inside a SATA drive is the same board you'd see in M.2 form factor, more or less.
  • leexgx - Saturday, March 10, 2018 - link

    Apart from that, you're limited to 0-2 slots per board (most come with 6 SATA ports).

    I agree that a newer SATA revision supporting NVMe would be nice, but U.2 would be nice too if anyone would adopt it, make the ports standard, and ship U.2 SSDs.
  • jabber - Friday, March 9, 2018 - link

    I am amazed that no one has decided to just do the logical thing and slap a 64GB Flash cache in a 4TB+ HDD and be done with it. One unit and done.
  • iter - Friday, March 9, 2018 - link

    They have; Seagate has a hybrid drive. Not all that great, really.

    The reason is that caching algorithms suck. They are usually FIFO - first in, first out - and don't take into account actual usage patterns. Meaning you get good performance only if your working set doesn't exceed the cache. If you exceed it, it starts bringing in garbage, wearing down the flash over nothing. Go watch a movie that you're only gonna watch once - it will cache it, because you accessed it. And now you have gigabytes of pointless writes to the cache, displacing data that actually made sense to be cached.

    Which is why I personally prefer to have separate drives rather than cache. Because I know what can benefit from flash and what makes no sense there. Automatic tiering is pathetic, even in crazy expensive enterprise software.
  • jabber - Friday, March 9, 2018 - link

    Yeah I was using SSHD drives when they first came out but 8GB of flash doesn't really cut it. I'm sure after all this time 64GB costs the same as 8GB did back then (plus it would be space enough for several apps and data sets to be retained) and the algorithms will have improved. If Intel thinks caches for HDDs have legs then why not just combine them in one simple package?
  • wumpus - Friday, March 9, 2018 - link

    Presumably, there's no market. People who buy spinning rust are either buying capacity (for media, and using SSD for the rest) or cheaping out and not buying SSDs.

    What surprises me is that drives still include 64MB of DRAM, you would think that companies who bothered to make these drives would have switched to TLC (and pseudo-SLC) for their buffer/caches (writing on power off must be a pain). Good luck finding someone who would pay for the difference.

    Intel managed to shove this tech into the chipsets (presumably a software driver that looked for the hardware flag, similar to RAID) in 2011-2012, but apparently dropped that soon afterward. Too bad; reserving 64GB of flash to cache a hard drive (no idea if you could do this with a RAID array) sounds like something that is still useful (not that you need the performance, just that the flash is so cheap). Just make sure the cache is set to "write through" [if this kills performance it shouldn't be on rust] to avoid doubling your chances of drive loss. Apparently the support costs weren't worth the bother.
  • leexgx - Saturday, March 10, 2018 - link

    8GB should be plenty for an SSHD, and their current generation has cache eviction protection (which I think is 3rd gen). So say an LBA block is read 10 times: it will assume that's something you open often, or a system file, or a startup item, so 2-3GB of data will not get evicted easily (so Windows, Office, browsers, and other startup items will always be in the NAND cache). The rest of the caching is dynamic: if a block has had more than 2-4 reads, it gets cached to the NAND.

    The current generation SSHDs by Seagate (don't know how others do it) split the cache into 3 sections - easy, a bit harder, and very hard to evict from the read cache. On the first-gen SSHDs from Seagate, just defragmenting the drive would end up evicting your normally used stuff, as any 2 reads would be cached right away; that does not happen any more.

    If you expect it to make your games load faster you need to look elsewhere, as they are meant to boost commonly used applications, the OS, and startup programs, while still having the space for storage.

    That said, I really dislike HDDs as boot drives; if a 250GB SSD did not cost £55 I would put them in for free.
