Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review account for only a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Intel SSD 660p 1TB
NVMe Power and Thermal Management Features
Controller: Silicon Motion SM2263   Firmware: NHF034C

NVMe Version   Feature                                         Status
1.0            Number of operational (active) power states     3
1.1            Number of non-operational (idle) power states   2
               Autonomous Power State Transition (APST)        Supported
1.2            Warning Temperature                             77°C
               Critical Temperature                            80°C
1.3            Host Controlled Thermal Management              Supported
               Non-Operational Power State Permissive Mode     Not Supported

The Intel SSD 660p's power and thermal management feature set is typical for current-generation NVMe SSDs. The rated exit latency from the deepest idle power state is quite a bit faster than what we have measured in practice from this generation of Silicon Motion controllers, but otherwise the drive's claims about its idle states seem realistic.

Intel SSD 660p 1TB
NVMe Power States
Controller: Silicon Motion SM2263   Firmware: NHF034C

Power State   Maximum Power   Active/Idle   Entry Latency   Exit Latency
PS 0          4.0 W           Active        -               -
PS 1          3.0 W           Active        -               -
PS 2          2.2 W           Active        -               -
PS 3          30 mW           Idle          5 ms            5 ms
PS 4          4 mW            Idle          5 ms            9 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
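
For readers who want to see what their own drive reports, the same power state table can be dumped from the NVMe Identify Controller data. Below is a minimal sketch using the Linux nvme-cli tool; the device path /dev/nvme0 is an assumption for illustration, and root privileges are required.

# Dump the power state descriptors an NVMe drive reports to the OS.
# Assumes Linux with nvme-cli installed and a drive at /dev/nvme0 (illustrative).
import subprocess

def print_power_states(dev="/dev/nvme0"):
    # 'nvme id-ctrl -H' prints a decoded Identify Controller dump, including
    # one "ps N" entry per power state with max power and entry/exit latencies.
    out = subprocess.run(["nvme", "id-ctrl", dev, "-H"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line.lstrip().startswith("ps "):
            print(line.rstrip())

if __name__ == "__main__":
    print_power_states()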

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.
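
On a Linux system, the equivalent switch is the per-host link power management policy exposed by libata; the short sketch below checks it (attribute availability depends on the kernel and AHCI controller, and this is offered only as an illustration, not our test procedure).

# Report the SATA link power management policy for each AHCI host on Linux.
# Uses the standard libata sysfs attribute; not every controller exposes it.
import glob
from pathlib import Path

for attr in sorted(glob.glob("/sys/class/scsi_host/host*/link_power_management_policy")):
    policy = Path(attr).read_text().strip()
    # 'max_performance' keeps the link fully active (the "active idle" case);
    # 'min_power' or 'med_power_with_dipm' allow the low-power link states.
    print(f"{attr}: {policy}")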

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
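
The host side of that tradeoff can be illustrated with a simple policy: only idle states whose combined entry and exit latency fits within a latency tolerance are eligible, and the deeper a state is, the longer the drive must sit idle before dropping into it. The sketch below uses the 660p's reported numbers; the tolerance and timeout multiplier are illustrative assumptions, not the exact values any particular OS driver uses.

# Sketch of an APST-style policy: pick usable idle states under a latency
# tolerance and assign each one an idle timeout. The tolerance and multiplier
# are illustrative assumptions, not values from any specific OS driver.
IDLE_STATES = [
    # (name, idle power in mW, entry latency ms, exit latency ms) as reported by the 660p
    ("PS 3", 30, 5, 5),
    ("PS 4", 4, 5, 9),
]

def build_policy(latency_tolerance_ms=100, timeout_multiplier=50):
    policy = []
    for name, power_mw, entry_ms, exit_ms in IDLE_STATES:
        total_latency = entry_ms + exit_ms
        if total_latency > latency_tolerance_ms:
            continue  # waking from this state would exceed what the host tolerates
        # Deeper states cost more wake-up latency, so require a longer idle
        # period before the drive is allowed to drop into them.
        policy.append((name, power_mw, total_latency * timeout_multiplier))
    return policy

for name, power_mw, timeout_ms in build_policy():
    print(f"{name}: enter after {timeout_ms} ms idle, ~{power_mw} mW while idle")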

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled if supported.
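
For anyone reproducing these two scenarios on a Linux machine, the relevant controls are the PCIe ASPM policy and the NVMe driver's APST latency limit; the sketch below checks them (the module parameter paths are the standard Linux ones, but availability varies by kernel and platform).

# Check the Linux knobs governing the two idle scenarios described above.
# Paths are the standard pcie_aspm / nvme_core module parameters; they may be
# absent on some kernels or platforms.
from pathlib import Path

KNOBS = {
    # 'powersave' or 'powersupersave' allow the deeper PCIe link states (L1.x).
    "ASPM policy": Path("/sys/module/pcie_aspm/parameters/policy"),
    # Maximum transition latency (us) the NVMe driver accepts when building the
    # APST table; 0 disables APST, a large value allows the deepest states.
    "APST latency limit": Path("/sys/module/nvme_core/parameters/default_ps_max_latency_us"),
}

for label, path in KNOBS.items():
    try:
        print(f"{label}: {path.read_text().strip()}")
    except OSError as err:
        print(f"{label}: unavailable ({err})")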

[Chart: Active Idle Power Consumption (No LPM)]
[Chart: Idle Power Consumption]

The Intel 660p has a slightly lower active idle power draw than the SM2262-based drives we've tested, thanks to the smaller controller and reduced DRAM capacity. It isn't the lowest active idle power we've measured from an NVMe SSD, but it is definitely better than most high-end NVMe drives. In the deepest idle state our desktop testbed can use, we measure an excellent 10mW draw.

Idle Wake-Up Latency

The Intel 660p's idle wake-up time of about 55ms is typical for Silicon Motion's current generation of controllers, and much better than their first-generation NVMe controller used in the Intel SSD 600p. The Phison E12 can wake up in under 2ms from a sleep state of about 52mW, but otherwise the NVMe SSDs that wake up quickly were saving far less power than the 660p's deep idle.
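
A rough way to approximate this kind of measurement in software is to leave the drive untouched long enough for it to reach its deepest usable idle state, then time a single small uncached read; the sketch below assumes Linux, root privileges, and a drive at /dev/nvme0n1 (the device path and idle delay are illustrative, and this is not the methodology behind the numbers above).

# Rough sketch of an idle wake-up latency check: let the drive go idle, then
# time one small direct read that forces it to wake up.
import mmap
import os
import time

DEV = "/dev/nvme0n1"   # illustrative device path
IDLE_SECONDS = 30      # assumed long enough to reach the deepest idle state
READ_SIZE = 4096

def timed_wakeup_read():
    time.sleep(IDLE_SECONDS)                      # leave the drive untouched
    buf = mmap.mmap(-1, READ_SIZE)                # page-aligned buffer for O_DIRECT
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache
    try:
        start = time.perf_counter()
        os.preadv(fd, [buf], 0)                   # the drive must wake up to serve this
        return (time.perf_counter() - start) * 1000.0
    finally:
        os.close(fd)

if __name__ == "__main__":
    print(f"wake-up read latency: {timed_wakeup_read():.1f} ms")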

92 Comments

  • jjj - Tuesday, August 7, 2018

    Not bad, at least for now when there are no QLC competitors.
    The pressure QLC will put on HDDs is gonna be interesting too.
  • npz - Tuesday, August 7, 2018

    Well, at least the price is reflected in the performance, with the MX500 beating the 660p when both are full. As for scenarios where you'd go from SLC to QLC, I would be much more cautious about generalizing too much. A lot of people use SSDs as scratch drives for their work (DAW, video editing, recording, etc.), and it seems more than likely they'd hit it in those usage scenarios.
  • StrangerGuy - Tuesday, August 7, 2018

    "A lot of people use SSDs as scratch drives for their work (DAW, video editing, recording, etc)"

    A lot of people relative to the entire market? No.
    Is this drive intended for power users/professionals? No.
    Is QLC bringing a lot more GB/$ at MSRP prices for 90%+ of the market? Yes.
    Is the worst case performance even remotely applicable to its intended market? No.
    So did you just post a dumb comment disguised as concern trolling? Yes.
  • npz - Wednesday, August 8, 2018

    A lot of people who would bother perusing sites like Anandtech, yes. The people who would run more comprehensive benchmarks, as opposed to just buying a cheap SSD, are the "lot of people" I refer to. Of course you just disregarded the rest of my statement acknowledging the fact that it's cheap, didn't you? Just so you could go on being a smart ass here.
  • npz - Wednesday, August 8, 2018

    And I specifically refer to "worst case" because I argue it is NOT the worst case, but rather becomes a typical case for certain uses: going out of SLC into QLC, which would NOT be seen in the quick benchmarks a lot of people cite in Amazon reviews via CrystalDiskMark.
  • Valantar - Wednesday, August 8, 2018

    a) If you're enough of a power user to need a scratch disk and use it heavily enough to fill its SLC cache, you really ought to be buying proper equipment and not low-end drives.
    b) If you're -"- you really ought to educate yourself about your needs, or employ someone with this knowledge.
    c) If you're not -"-, stop worrying and enjoy the cheap SSDs.

    Tl;dr: workstation parts for workstation use; cheapo parts for basic use.
  • damianrobertjones - Tuesday, August 7, 2018

    These drives will fill the bottom end... allowing the mid and high tiers to increase in price. As usual.
  • Valantar - Wednesday, August 8, 2018

    Only if the performance difference is large enough to make them worth it - which it isn't, at least in this case. While the advent of TLC did push MLC prices up (mainly due to reduced production and sales volume), it seems unlikely for the same to happen here, as these drives aim for a market segment that has so far been largely unoccupied. (It's also worth mentioning that silicon prices have been rising for quite a while, which also affects this.) There are a few TLC drives in the same segment, but those are also quite bad. This, on the other hand, competes with faster drives unless you fill it or its SLC cache. In other words, higher-end drives will have to either aim for customers with heavier workloads (which might imply higher prices, but would also mean optimizations for non-consumer usage scenarios) or push prices lower to compete.
  • romrunning - Wednesday, August 8, 2018

    Well, QLC will slowly push out TLC, which was already pushing out MLC. It's not just that the prices of MLC/TLC are being pushed up; manufacturers are slowly phasing those lines out entirely. So even if I want a specific type, I may not be able to purchase it in the consumer space (maybe in enterprise, with the resultant price hit).

    I hate that we're getting lower-performing items for the cheaper price - I'd rather get higher-performing at cheaper prices! :)
  • rpg1966 - Tuesday, August 7, 2018

    "In the past year, the deployment of 64-layer 3D NAND flash has allowed almost all of the SSD industry to adopt three bit per cell TLC flash"

    What does this mean? n-layer NAND isn't a requirement for TLC, is it?
