Power Management

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review account for only a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.
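On a Linux testbed, the equivalent toggle is exposed through sysfs. The sketch below shows the general idea; it assumes root access and the standard ahci policy names, and is an illustration rather than our actual test automation.

```python
# Sketch: toggle SATA link power management on a Linux system.
# Assumes root access and an AHCI controller; the sysfs path and
# policy names are standard, but support varies by kernel/controller.
from pathlib import Path

POLICY_ACTIVE_IDLE = "max_performance"  # LPM off: the active idle scenario
POLICY_LOW_POWER = "min_power"          # LPM on: allows slumber states

def set_sata_lpm(policy: str) -> None:
    for host in Path("/sys/class/scsi_host").glob("host*"):
        node = host / "link_power_management_policy"
        if node.exists():
            node.write_text(policy)
            print(f"{host.name}: set to {policy}")

if __name__ == "__main__":
    set_sata_lpm(POLICY_ACTIVE_IDLE)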

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff: lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
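For readers curious about what their own drive advertises, the power state table and APST configuration can be read with the nvme-cli utility on Linux. The following is a minimal sketch, assuming nvme-cli is installed, root access, and /dev/nvme0 as an example device path; feature ID 0x0C is APST in the NVMe specification.

```python
# Sketch: dump an NVMe drive's power state table and APST settings
# via nvme-cli (assumes Linux, nvme-cli installed, root access).
import subprocess

DEV = "/dev/nvme0"  # example device path

def nvme(subcmd: str, *args: str) -> str:
    out = subprocess.run(["nvme", subcmd, DEV, *args],
                         capture_output=True, text=True, check=True)
    return out.stdout

# The Identify Controller data lists each power state (ps 0..N) with
# its max power and entry/exit latencies, plus the apsta capability bit.
for line in nvme("id-ctrl").splitlines():
    if line.startswith("ps ") or line.startswith("apsta"):
        print(line)

# Feature 0x0C is Autonomous Power State Transition; -H decodes the
# idle-time thresholds the OS has programmed for each transition.
print(nvme("get-feature", "-f", "0x0c", "-H"))
```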

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with the PCIe Active State Power Management L1.2 state and NVMe APST both enabled.
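On a Linux machine, both settings can be verified through standard module parameters. A short sketch, assuming a kernel built with ASPM support and the nvme driver loaded:

```python
# Sketch: check the two knobs behind our idle power configurations
# (standard Linux module parameters; presence depends on kernel config).
from pathlib import Path

# The active ASPM policy is shown in brackets, e.g. "[default] performance ..."
aspm = Path("/sys/module/pcie_aspm/parameters/policy").read_text().strip()
print("PCIe ASPM policy:", aspm)

# APST only uses power states whose entry+exit latency fits under this
# limit; setting it to 0 disables APST (the active idle configuration).
apst = Path("/sys/module/nvme_core/parameters/default_ps_max_latency_us")
print("APST max latency (µs):", apst.read_text().strip())
```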

[Chart: Active Idle Power Consumption (No LPM)]
[Chart: Idle Power Consumption]

Like most NVMe SSDs, the WD Black has a fairly high active idle power draw: the cost of keeping a PCIe 3.0 x4 link active. The active idle power is a bit higher than that of the previous WD Black SSD, but in line with drives from Samsung, Toshiba and Phison.

Enabling all the advanced PCIe and NVMe power management features doesn't have the desired effect on the WD Black SSD. The idle power draw drops by almost half, but it should have dropped by at least an order of magnitude. The original WD Black SSD used aggressive power management whether or not the operating system requested it. The new WD Black seems unable to save much power when used on our desktop testbed, no matter what NVMe power states are requested. We will work with Western Digital to try to isolate the cause of this poor behavior. In the meantime, the WD Black is hardly the only NVMe drive where power management has problems out of the box, but Intel and Samsung have managed to produce drives that achieve very low idle power on our testbed with little or no tuning required.

Idle Wake-Up Latency

Since the WD Black is clearly unable to engage its full array of power management capabilities on our testbed, it is unsurprising to see that its wake-up latency is quite short. It is not the minimal ~15µs we usually observe from drives that aren't enabling any power savings at all, but ~230µs is still a very quick wake-up from sleep.
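The concept behind a wake-up latency measurement is simple: let the drive go idle, then time how long the first I/O takes. The sketch below illustrates the idea on Linux with a single direct read; the device path and idle interval are illustrative assumptions, and a real test would collect many samples rather than one.

```python
# Sketch: time the first read after an idle period (single sample).
# Assumes Linux, root access, and /dev/nvme0n1 as an example device;
# O_DIRECT bypasses the page cache so the read actually hits the drive.
import mmap, os, time

DEV = "/dev/nvme0n1"
BLOCK = 4096

buf = mmap.mmap(-1, BLOCK)  # anonymous mapping: page-aligned for O_DIRECT
fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)

time.sleep(10)  # give the drive time to drop into its deepest idle state

start = time.perf_counter_ns()
os.preadv(fd, [buf], 0)  # read one aligned block from LBA 0
elapsed_us = (time.perf_counter_ns() - start) / 1000
print(f"wake-up read latency: {elapsed_us:.0f} µs")
os.close(fd)
```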

Comments

  • Chaitanya - Thursday, April 5, 2018 - link

    Nice to see some good competition for Samsung products in the SSD space. Would like to see durability testing on these drives.
  • HStewart - Thursday, April 5, 2018 - link

    Yes, it's nice to have competition in this area, and the important thing to notice here is that a long-time disk drive manufacturer is changing its technology to keep pace with changes in storage technology.
  • Samus - Thursday, April 5, 2018 - link

    Looks like WD's purchase of SanDisk is showing some payoff. If only Toshiba had taken advantage of OCZ's in-house talent (OCZ purchased Indilinx). The Barefoot controller showed a lot of promise and could easily have been updated to support low power states and TLC NAND. But they shelved it. I don't really know why Toshiba bought OCZ.
  • haukionkannel - Friday, April 6, 2018 - link

    Indeed! Samsung held the performance lead for too long, and that let the company push prices up (a natural development, though).
    Hopefully this improved competition helps us customers within a reasonable time frame. There has been too much bad news for consumers in recent years where prices are concerned.
  • XabanakFanatik - Thursday, April 5, 2018 - link

    Whatever happened to performance consistency testing?
  • Billy Tallis - Thursday, April 5, 2018 - link

    The steady state QD32 random write test doesn't say anything meaningful about how modern SSDs will behave on real client workloads. It used to be a half-decent test before everything was TLC with SLC caching and the potential for thermal throttling on M.2 NVMe drives. Now, it's impossible to run a sustained workload for an hour and claim that it tells you something about how your drive will handle a bursty real world workload. The only purpose that benchmark can serve today is to tell you how suitable a consumer drive is for (ab)use as an enterprise drive.
  • iter - Thursday, April 5, 2018 - link

    Most of the tests don't say anything meaningful about "how modern SSDs will behave on real client workloads". You can spend 400% more money on storage that will only get you a 4% performance improvement in real-world tasks.

    So why not omit synthetic tests altogether while you are at it?
  • Billy Tallis - Thursday, April 5, 2018 - link

    You're alluding to the difference between storage performance and whole system/application performance. A storage benchmark doesn't necessarily give you a direct measurement of whole system or application performance, but done properly it will tell you about how the choice of an SSD will affect the portion of your workload that is storage-dependent. Much like Amdahl's law, speeding up storage doesn't affect the non-storage bottlenecks in your workload.

    That's not the problem with the steady-state random write test. The problem with the steady state random write test is that real world usage doesn't put the drive in steady state, and the steady state behavior is completely different from the behavior when writing in bursts to the SLC cache. So that benchmark isn't even applicable to the 5% or 1% of your desktop usage that is spent waiting on storage.

    On the other hand, I have tried to ensure that the synthetic benchmarks I include actually are representative of real-world client storage workloads: focusing primarily on low queue depths, limiting the benchmark duration to realistic quantities of data transferred, and giving the drive idle time instead of running everything back to back. Synthetic benchmarks don't have to be the misleading marketing tests designed to produce the biggest numbers possible.
  • MrSpadge - Thursday, April 5, 2018 - link

    Good answer, Billy. It won't please everyone here, but that's impossible anyway.
  • iter - Thursday, April 5, 2018 - link

    People do want to see how much time it takes before cache gives out. Don't presume to know what all people do with their systems.

    As I mentioned 99% of the tests are already useless when it comes to indicating overall system performance. 99% of the people don't need anything above mainstream SATA SSD. So your point on excluding that one test is rather moot.

    All in all, it seems you are intentionally hiding the weaknesses of certain products. Not cool. Run the tests, post the numbers; that's what you get paid for, and I don't think it is unreasonable to expect that you do your job. Two people pointed out the absence of that test, which is two more than those who explicitly stated they don't care about it, much less have anything against it. Statistically speaking, the test is of interest, and I highly doubt it will kill you to include it.
