Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.
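
For readers who want to replicate the SATA side of this setup on a Linux box, link power management is exposed per host adapter through sysfs. The snippet below is a minimal sketch, assuming the standard libata link_power_management_policy attribute; the host enumeration and the two policies shown are illustrative rather than a description of our exact test scripts.

```python
# Minimal sketch: switch the SATA link power management policy on Linux via sysfs.
# Assumes the libata link_power_management_policy attribute is present; requires root.
from pathlib import Path

POLICIES = {"max_performance", "medium_power", "med_power_with_dipm", "min_power"}

def set_sata_lpm(policy: str) -> None:
    if policy not in POLICIES:
        raise ValueError(f"unknown SATA LPM policy: {policy!r}")
    for attr in Path("/sys/class/scsi_host").glob("host*/link_power_management_policy"):
        attr.write_text(policy)
        print(f"{attr.parent.name}: {attr.read_text().strip()}")

# Active idle runs: LPM off.  Deeper idle / wake-up latency runs: LPM on.
# set_sata_lpm("max_performance")
# set_sata_lpm("min_power")
```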

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower-power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
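
As a rough illustration of how these states can be inspected (this is not our measurement tooling), the nvme-cli utility can dump a drive's power state table and report whether APST is currently enabled. The sketch below assumes nvme-cli is installed and that /dev/nvme0 is the drive of interest; feature ID 0x0C is the Autonomous Power State Transition feature.

```python
# Minimal sketch: inspect NVMe power states and APST status with nvme-cli.
# Assumes nvme-cli is installed and /dev/nvme0 is the drive under test.
import subprocess

DEV = "/dev/nvme0"

def nvme(*args: str) -> str:
    return subprocess.run(["nvme", *args], capture_output=True, text=True, check=True).stdout

# The controller identify data lists each power state ("ps 0", "ps 1", ...)
# with its maximum power and entry/exit latencies.
print(nvme("id-ctrl", DEV, "-H"))

# Feature 0x0C is Autonomous Power State Transition; the low bit of the
# current value indicates whether the OS has enabled APST on this controller.
print(nvme("get-feature", DEV, "-f", "0x0c", "-H"))
```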

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power-saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with the PCIe Active State Power Management (ASPM) L1.2 state enabled and NVMe APST enabled where supported.
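
On a Linux host, a quick sanity check before a deep-idle measurement is to confirm that the kernel's ASPM policy allows power saving at all. The sketch below simply reads the standard pcie_aspm policy attribute; it is an illustrative check only, and showing a power-saving policy does not by itself prove that L1.2 is actually reached on a given link.

```python
# Minimal sketch: check the kernel-wide PCIe ASPM policy before a deep-idle run.
# Assumes a Linux system exposing /sys/module/pcie_aspm/parameters/policy;
# the bracketed entry is the policy currently in effect.
from pathlib import Path

raw = Path("/sys/module/pcie_aspm/parameters/policy").read_text().strip()
print("ASPM policies:", raw)   # e.g. "default performance [powersave] powersupersave"
if "[" in raw:
    print("Current policy:", raw[raw.index("[") + 1 : raw.index("]")])
```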

[Chart: Active Idle Power Consumption (No LPM)]
[Chart: Idle Power Consumption]

It appears that the 1TB Samsung 860 QVO was still busy with background processing several minutes after the test data was written to the drive, so our automated idle power measurement caught it still drawing 2W. The 4TB model was much quicker to flush its SLC cache and turned in a respectable active idle power consumption score. Both drives show good idle power consumption when put into the slumber state, though we measured slightly more than the official spec of 30mW.

Idle Wake-Up Latency

The wake-up latency for the 860 QVO is the same as Samsung's other SATA SSDs, hovering around a reasonable 1.2 ms. It's not the best that can be achieved over SATA, but it's nothing to complain about.
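
The principle behind this test is straightforward: let the drive sit idle long enough to drop into its low-power state, then time how much longer the first small read takes compared with a read issued while the drive is awake. The following is a minimal sketch of that idea, not our actual methodology; the device path, idle period, and read size are all assumptions for illustration.

```python
# Minimal sketch of an idle wake-up latency probe: idle long enough for the
# drive to enter a low-power state, then time one small uncached read.
# The device path, idle period, and read size are illustrative assumptions.
import mmap, os, time

DEV = "/dev/sda"            # drive under test (requires root)
IDLE_SECONDS = 30           # long enough to enter slumber with LPM enabled
buf = mmap.mmap(-1, 4096)   # page-aligned buffer, as required by O_DIRECT

def timed_read_ms(fd: int) -> float:
    start = time.perf_counter()
    os.preadv(fd, [buf], 0)             # 4 KiB read from LBA 0, bypassing the page cache
    return (time.perf_counter() - start) * 1000

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
try:
    awake = timed_read_ms(fd)           # drive already active
    time.sleep(IDLE_SECONDS)            # let the drive drop into its idle state
    wakeup = timed_read_ms(fd)          # first command after idle pays the wake-up cost
    print(f"awake: {awake:.3f} ms   after idle: {wakeup:.3f} ms")
finally:
    os.close(fd)
```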

Comments

  • Impulses - Thursday, November 29, 2018 - link

    Bleh, googling revealed that WAS on Amazon... It never triggered my price alerts, hrm, even tho it's definitely showing on Camelcamel's price history now, weird. Maybe I glossed over it entirely, oh well.
  • The_Assimilator - Thursday, November 29, 2018 - link

    https://www.pcgamer.com/crucials-2tb-mx500-ssd-is-...
  • hojnikb - Tuesday, November 27, 2018 - link

    @Billy

    Can you do a data retention test? Like filling the drive with data (H2testw seems to be the pick of the bunch) and disconnecting it. I'm interested in how data retention holds up over time.
  • JoeyJoJo123 - Tuesday, November 27, 2018 - link

    Don't use SSDs for cold storage. Testing this is absurdly stupid and a waste of a reviewer's time.

    Disconnect drive for 10 days: Replug it in and verify data's still all there. Yep.
    Disconnect drive for 15 days: Replug it in and verify data's still all there. Yep.
    Disconnect drive for 20 days: Replug it in and verify data's still all there. Yep.
    (We're already at 45 days, or 1.5 months, and chances are the data's still all there, are you getting how and why this kind of testing is stupid?)
    By the time you get to sufficiently long time periods to see some change in data, you'd have wasted over a year trying to find a time period (within 5 days) where you know the drive will start losing data. This is just as absurd as asking reviewers to test NAND flash endurance by hammering the drive 24/7 for years until it dies. By the time it DOES die, something better will have already arrived on the market. And disregarding all that, testing with a sample size of ONE is not indicative of any relevant performance characteristics for your ONE drive.

    If you want to use cold storage backups, either use a mechanical hard drive, a tape drive, or invest in cloud storage and encrypt your data before uploading it.

    Testing the cold storage capabilities of a sample size of ONE QLC SSD does nothing but prove that it's a less satisfactory use case for the technology than using a mechanical hard drive.
  • HollyDOL - Tuesday, November 27, 2018 - link

    I _think_ you can find Google or Facebook statistics on their SSDs, error rates, etc., in representative volumes. But of course that is enterprise-grade hardware.
  • hojnikb - Wednesday, November 28, 2018 - link

    Not for QLC.
  • hojnikb - Wednesday, November 28, 2018 - link

    You're not thinking very far ahead. As QLC is very cheap, it will be used in the likes of flash drives, SD cards, and portable SSDs. With these, it's not expected that they will be powered all the time, and in the case of SD cards and flash drives, it's not likely the controller would do any kind of data rewriting.

    So this is very much a relevant test that might give us an idea of how it performs. And no, you don't have to wait a year to see something; data degradation can show up a lot sooner in the form of read errors and generally slower read speeds. A more extreme approach is to heat the drive, which accelerates the process.
  • eddieobscurant - Tuesday, November 27, 2018 - link

    Horrible performance by Samsung's 64-layer QLC NAND. Way slower than Intel's/Micron's. Especially the 4K random reads, which are slower than my first SSD from 10 years ago, the Intel X25-M.

    In order for the Samsung 860 QVO to make sense, it should be priced below $0.10/GB.
  • stargazera5 - Tuesday, November 27, 2018 - link

    Great, now that we have QLC down, we can move on to PLC (5 bits per cell) and 1 drive write per week.
  • CheapSushi - Wednesday, November 28, 2018 - link

    And from the current trend, it still won't be much cheaper. =P
