Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review account for only a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

WD Black SN750
NVMe Power and Thermal Management Features
Controller: SanDisk 20-82-007011 | Firmware: 102000WD

NVMe Version | Feature                                       | Status
1.0          | Number of operational (active) power states   | 3
1.1          | Number of non-operational (idle) power states | 2
1.1          | Autonomous Power State Transition (APST)      | Supported
1.2          | Warning Temperature                           | 80 °C
1.2          | Critical Temperature                          | 85 °C
1.3          | Host Controlled Thermal Management            | Supported
1.3          | Non-Operational Power State Permissive Mode   | Not Supported

The WD Black SN750 supports the typical power and thermal management features we expect from recent drives, including a reasonably high warning threshold of 80 °C. The idle power states declare fairly quick transition times, especially for the PS3 intermediate idle state.

WD Black SN750
NVMe Power States
Controller: SanDisk 20-82-007011 | Firmware: 102000WD

Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency
PS 0        | 6 W           | Active      | -             | -
PS 1        | 3.5 W         | Active      | -             | -
PS 2        | 3 W           | Active      | -             | -
PS 3        | 100 mW        | Idle        | 4 ms          | 10 ms
PS 4        | 2.5 mW        | Idle        | 4 ms          | 45 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
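
For readers who want to see what their own drive declares, the same power state table can be dumped on Linux with nvme-cli. The sketch below is one way to do it, assuming nvme-cli is installed, the drive is /dev/nvme0, and root access is available; the JSON field names follow typical nvme-cli output and may differ between versions.

```python
import json
import subprocess

# Dump the Identify Controller data as JSON (requires nvme-cli and root).
# The power state descriptors the drive declares to the OS are in the "psds" array;
# field names here follow common nvme-cli JSON output and may vary by version.
raw = subprocess.run(
    ["nvme", "id-ctrl", "/dev/nvme0", "--output-format=json"],
    check=True, capture_output=True, text=True,
).stdout

for idx, ps in enumerate(json.loads(raw).get("psds", [])):
    # max_power is normally reported in centiwatts; latencies are in microseconds.
    print(f"PS{idx}: max {ps['max_power'] / 100:.2f} W, "
          f"entry {ps['entry_lat']} us, exit {ps['exit_lat']} us")
```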

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.
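
As a point of reference, on a Linux system SATA link power management is controlled per host adapter through sysfs. The following is a rough sketch of how to inspect (and, commented out, change) that policy, not a description of our exact test procedure; the path is the standard one for AHCI hosts, and the accepted policy strings depend on the kernel version.

```python
from pathlib import Path

# Each AHCI/SATA host exposes its link power management policy through sysfs.
# Typical values are "max_performance" (LPM off), "medium_power",
# "med_power_with_dipm", and "min_power"; availability depends on the kernel.
for policy in sorted(Path("/sys/class/scsi_host").glob("host*/link_power_management_policy")):
    print(policy.parent.name, "->", policy.read_text().strip())
    # To disable LPM for an active-idle measurement (root required):
    # policy.write_text("max_performance")
```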

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
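
To make that tradeoff concrete, here is a hypothetical sketch of the kind of decision APST automates: given the latencies the drive declares and a host-side latency tolerance, pick the deepest idle state whose wake-up cost is still acceptable. This illustrates the concept only; it is not the algorithm any particular OS driver actually uses.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PowerState:
    name: str
    max_power_mw: float
    entry_latency_us: int
    exit_latency_us: int

# The SN750's declared idle states, taken from the table above.
IDLE_STATES = [
    PowerState("PS3", 100.0, 4_000, 10_000),
    PowerState("PS4", 2.5, 4_000, 45_000),
]

def deepest_allowed_state(latency_tolerance_us: int) -> Optional[PowerState]:
    """Pick the lowest-power idle state whose round-trip transition cost
    (entry + exit latency) fits within the host's latency tolerance."""
    candidates = [ps for ps in IDLE_STATES
                  if ps.entry_latency_us + ps.exit_latency_us <= latency_tolerance_us]
    return min(candidates, key=lambda ps: ps.max_power_mw) if candidates else None

# A latency-sensitive desktop profile might only tolerate PS3,
# while a battery-focused notebook profile can accept PS4's slower wake-up.
print(deepest_allowed_state(15_000))   # PS3 (PS4's 49 ms round trip is too slow)
print(deepest_allowed_state(100_000))  # PS4
```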

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled if supported.
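
On a Linux testbed, a quick way to sanity-check whether these features are actually in play is to look at the kernel's ASPM policy and the NVMe driver's APST latency cap. The sketch below reads the usual sysfs locations on recent kernels; exact paths and defaults can vary by distribution and kernel version.

```python
from pathlib import Path

def read_if_present(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else "not available"

# Kernel-wide PCIe ASPM policy; the active choice is shown in [brackets].
print("ASPM policy:", read_if_present("/sys/module/pcie_aspm/parameters/policy"))

# The NVMe driver's APST latency cap in microseconds: power states with a higher
# total latency are never used. 0 disables APST; the usual default is 100000 (100 ms).
print("APST latency cap (us):",
      read_if_present("/sys/module/nvme_core/parameters/default_ps_max_latency_us"))
```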

[Chart: Active Idle Power Consumption (No LPM)]
[Chart: Idle Power Consumption]

The WD Black SN750 has slightly higher idle power consumption than its predecessor, both with idle states enabled and with them disabled. The SN750 is one of a few NVMe drives that cannot even come close to a reasonably deep sleep state on our desktop testbed, a problem that seems to be hard for the industry to eliminate for good.

Idle Wake-Up Latency

Given the relatively poor idle power savings achieved with our desktop testbed, it's good to see the SN750's wake-up latency is so low, at under a quarter of a millisecond.

Comments

  • joesiv - Friday, January 18, 2019 - link

    Micron was the manufacturer I was referring to.
    Other brands we've used which didn't exhibit the same poor endurance: ADATA, Kingston, Swissbit, Crucial.

    Some of them probably even use Micron NAND. I bet the NAND is fine on the Micron model we were using; perhaps the hardware is good but the software (firmware) isn't? Of course we haven't tested every brand/model as our requirements were very specific, so I am sure there are other Micron models that are totally fine (kind of why I'd love to see AnandTech include some endurance results, to help weed out the outliers).
  • sdsdv10 - Friday, January 18, 2019 - link

    Interesting you write that Micron has problems and Crucial doesn't, as Crucial is just a consumer brand name for Micron Technology Inc.
  • joesiv - Friday, January 18, 2019 - link

    Well they were different models. The Crucial was an older model that we were replacing with something new, since the old Crucial drives were no longer available. It would be interesting to compare an equivalent Crucial model though; I wonder if they share firmware.
  • sovking - Friday, January 18, 2019 - link

    Of course, these improvements will be welcome, and I would like to see the steady state behaviour shown more clearly too.

    Regarding the endurance, we should take into account that most of these reviews are about consumer products. An NVMe SSD for the enterprise market has totally different characteristics: e.g. consistent steady state performance, higher endurance, higher reliability and so on. Sometimes it's possible to find lightly used enterprise NVMe drives at bargain prices, or at the cost of a consumer drive: when that happens I prefer those drives.
  • joesiv - Friday, January 18, 2019 - link

    I think the role of a "consumer" is not perfectly defined these days. Are they the same as a "power user"? It would seem that more and more consumers are doing increasingly serious workloads on their PCs. Obviously this is anecdotal, but with all the processing power at our disposal these days ("consumer" CPUs having 16 threads), people probably don't even know what the applications or services they are running on their PC are doing.

    For example, a lot of commonly used applications run with a database system as their backend, whether a simple SQLite database or something more serious. Those can be very write heavy, and they're often configured by the application without the user even knowing it. I'll bet that a lot of users even have web services running on their PCs without thinking about it, with all these APIs that let you connect to your mobile devices and streaming appliances.

    I'll bet a lot of people reading AnandTech reviews even have their PCs running as a file server, or have a dedicated machine for such duties.

    A lot of this stuff would have been considered "enterprise" computing in years past. Why does AnandTech run transcoding, rendering and "Destroyer" style tests in their "consumer" reviews? Because it's relevant to some portion of the purchasing community.
  • Oxford Guy - Friday, January 18, 2019 - link

    Considering how consumer parts have had endurance problems...

    Examples: OCZ Vertex 2 (with 64-bit NAND), Samsung 840 128 (terrible steady state performance, too), Samsung 840 and 840 EVO series (read speed loss), etc.

    Endurance isn't just a matter of whether or not the drive dies or it has a lot of cell death. It's also a matter of performance consistency over time.
  • joesiv - Friday, January 18, 2019 - link

    I agree; I have bad memories of the early days of SSDs. I purchased a first generation Intel SSD for $1000 (CAD); the speeds tested as amazing compared to anything else on the market, but given the early learning curves with NAND controllers and the like, performance was terrible in the real world. I wasn't even able to upgrade the firmware since it was a first generation product, and only the subsequent versions supported updates.

    Things have gotten better, but in my experience it's been a rough road. Some manufacturers are a lot better than others at firmware development, and believe it or not a firmware bug can tank performance, or even reliability, since the firmware controls wear leveling and the other newfangled features that deliver the maximum performance.

    There are MLC drives that dynamically use SLC mode to aid performance, and other drives with MLC NAND running permanently in SLC mode which have an endurance somewhere between the two. Some older drives did controller-level compression to reduce NAND writes, which, while theoretically great, can cause reliability problems if the data doesn't get committed correctly, especially under poor power conditions. Firmware bugs are rarely talked about, but a firmware bug could cause garbage collection to run too often, which hurts both performance and reliability.
  • gglaw - Friday, January 18, 2019 - link

    With current gen 3D NAND, it would take an incredible amount of writes to test endurance, and regional wholesaler RMA data averaged over hundreds of thousands of SSDs sold is much more representative than AT testing endurance on the 1 drive they receive as a sample. It appears most SSD RMAs are NOT from using up the endurance cycles, so that would make a 1-sample test by AT even less meaningful. If they happen to get a dud when 99% of that same model has a very good reliability history in the broader market, it would just lead thousands of AT readers to base their purchasing decisions on a sample size of 1.
  • Billy Tallis - Friday, January 18, 2019 - link

    P/E ratings are highly dependent on what kind of error correction the NAND is used with. Even under pressure, the NAND manufacturers won't be able to give us more than just ballpark figures that would be tough to fairly compare between manufacturers.

    Last year (I think around when the first QLC drives showed up) I started recording SMART data before and after each phase of testing. I haven't written any code to parse and analyze that information yet, but it's on my to-do list.

    I don't think the usual consumer SSD test suite does enough total drive writes to move the SMART indicators enough to form meaningful projections about write endurance and drive lifetime. To do that, I would have to set up another system to do long-term endurance testing on several drives at once. That's also on our wishlist, but it's a relatively low priority given the extra equipment and time requirements.
  • joesiv - Friday, January 18, 2019 - link

    @gglaw, @Billy Tallis, you guys are right, it's hard to get firm reliability numbers from a short, small-sample test. But to be honest, it's better than nothing. And as I said, seeing one example of an outlier that performs badly on the bench would validate the test's usefulness.

    gglaw, you are totally right, there is more to reliability than P/E cycles. I gave the example of a drive that failed under our testing with a projected life expectancy under a year, while the same test scenario (a heavy real-world workload for our product) on other similarly rated drives did not fail. But I didn't mention that we had huge reliability issues with our previous drives (Kingston), where they were nowhere near the end of their endurance ratings but were failing for other causes. Kingston attributed a lot of the failures to firmware bugs that weren't traceable in SMART data, and in some cases pure hardware failure.

    Billy, yes, in general you're right that it's hard to get meaningful projections over a short period of time, especially if you use percent of life used (1-100) as the metric. However, it's not too bad if you can get the P/E cycles, which are typically 3000 for MLC and in some cases 2500 for 3D NAND. Instead of waiting months for a single change in percent life, we have seen drives go through 1 P/E cycle a day, which would give us around 8 years of product life (barring other failures); we were going through 5-6 P/E cycles a day on the Micron drive, which was a huge warning sign (that arithmetic is sketched below). That would be a great case for AnandTech finding the poor-endurance outliers.

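The back-of-the-envelope lifetime math in the last comment is easy to reproduce. A small sketch using the numbers mentioned above (a nominal 3000 P/E cycle rating, and observed wear rates of roughly 1 and 5-6 cycles per day); the function is purely illustrative and ignores every failure mode other than write exhaustion.

```python
def years_of_life(rated_pe_cycles: int, cycles_per_day: float) -> float:
    """Estimate drive lifetime from a NAND P/E rating and an observed wear rate,
    ignoring every failure mode other than write exhaustion."""
    return rated_pe_cycles / cycles_per_day / 365

print(f"{years_of_life(3000, 1.0):.1f} years at 1 P/E cycle per day")     # ~8.2 years
print(f"{years_of_life(3000, 5.5):.1f} years at 5-6 P/E cycles per day")  # ~1.5 years
```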