Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.
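
On a Linux testbed, the SATA link power management policy can be inspected and switched through sysfs, which is handy for reproducing both configurations at home. A minimal sketch, assuming a stock kernel (the host numbers and the exact set of policy names depend on the kernel and AHCI controller):

```python
from pathlib import Path

# Each SATA port appears as a scsi_host; its LPM policy is a sysfs file.
# Common values: max_performance (LPM off), medium_power, min_power.
for host in sorted(Path("/sys/class/scsi_host").glob("host*")):
    policy = host / "link_power_management_policy"
    if policy.exists():
        print(host.name, policy.read_text().strip())

# Writing the file (as root) switches policy, e.g. for an active-idle run:
#   echo max_performance > /sys/class/scsi_host/host0/link_power_management_policy
```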

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
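
Both halves of that tradeoff are visible from Linux with the stock nvme-cli tool: the controller identify data lists each power state's maximum draw and entry/exit latencies, and feature 0x0c holds the APST transition table the OS has programmed. A rough sketch, assuming a controller at the hypothetical node /dev/nvme0:

```python
import subprocess

DEV = "/dev/nvme0"  # hypothetical controller node; adjust for your system

def run(*args):
    return subprocess.run(args, capture_output=True, text=True).stdout

# Power state descriptors: maximum power plus entry/exit latency per state.
print(run("nvme", "id-ctrl", DEV))

# APST is NVMe feature 0x0c; -H asks nvme-cli to decode the table of
# idle-time thresholds and the power states the driver mapped them to.
print(run("nvme", "get-feature", DEV, "-f", "0x0c", "-H"))
```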

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with the PCIe Active State Power Management (ASPM) L1.2 state enabled, and with NVMe APST enabled if supported.
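
On a Linux host, the two knobs separating those configurations can be sanity-checked before a measurement run. A sketch, assuming the stock pcie_aspm and nvme_core module parameters:

```python
from pathlib import Path

# ASPM policy: 'powersupersave' enables the L1 substates (including L1.2)
# that the deeper idle measurement depends on.
print(Path("/sys/module/pcie_aspm/parameters/policy").read_text().strip())

# The NVMe driver builds its APST table from this latency budget; idle
# states whose entry+exit latency exceeds it are never used (0 disables APST).
print(Path("/sys/module/nvme_core/parameters/default_ps_max_latency_us").read_text().strip())
```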

[Chart: Active Idle Power Consumption (No LPM)]
[Chart: Idle Power Consumption]

It appears that the 1TB Samsung 860 QVO was still busy with background processing several minutes after the test data was written to the drive, so our automated idle power measurement caught it still drawing 2W. The 4TB model was much quicker to flush its SLC cache and turned in a respectable active idle power consumption score. Both drives have good idle power consumption when put into the slumber state, though we measured slightly more than the official spec of 30mW.

Idle Wake-Up Latency

The wake-up latency for the 860 QVO is the same as Samsung's other SATA SSDs, hovering around a reasonable 1.2 ms. It's not the best that can be achieved over SATA, but it's nothing to complain about.
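
Wake-up latency can be approximated by letting a drive sit idle long enough to drop into a low-power state and then timing one small read against it. A rough sketch, assuming a hypothetical device node and root privileges; O_DIRECT keeps the page cache out of the measurement, and since the figure includes the ordinary read latency it should be compared against a read issued while the drive is still active:

```python
import mmap
import os
import time

DEV = "/dev/sda"   # hypothetical device node; adjust for your system
IDLE_SECONDS = 10  # long enough for the link to drop into slumber

buf = mmap.mmap(-1, 4096)  # page-aligned buffer, required for O_DIRECT

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
try:
    os.preadv(fd, [buf], 0)      # warm-up read while the link is active
    time.sleep(IDLE_SECONDS)     # let the drive fall asleep
    start = time.perf_counter()
    os.preadv(fd, [buf], 4096)   # this read forces the wake-up
    elapsed_us = (time.perf_counter() - start) * 1e6
    print(f"wake-up read latency: {elapsed_us:.0f} us")
finally:
    os.close(fd)
```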

Comments

  • CheapSushi - Wednesday, November 28, 2018 - link

    Yeah, the premise was cheaper NAND for bulk storage, with compromises. That was all okay in my mind. But as shown, there's just no good value proposition here yet. I figured QLC would inherently be 33% cheaper than TLC, and that mass production and higher-density stacking would bring that down further. But... I guess not.
  • nagi603 - Friday, November 30, 2018 - link

    At $400 I'd toss out all the current HDDs in my NAS. Maybe in a few years... or not, as HDD prices/capacities keep moving too.
  • azazel1024 - Friday, November 30, 2018 - link

    My price point is roughly 8 cents per GB, with at least 250MB/sec sustained read/write performance.

    These drives aren't at that point yet. It might take a few more generations of TLC drives to get there; I don't know. My use case is replacing the spinning rust in my desktop and server. I mirror storage between them and I am running dual 1GbE interfaces with SMB Multichannel, so I can push about 235MB/sec across.

    Sometimes I am just shoving over a few-GB file (sometimes one or two single-digit-MB files, but more often larger ones). On rare occasions I am backing up completely from one machine to the other, because reasons. So when tossing 2-3TB of data, I don't need my transfer "stalling" at 80MB/sec, or even 160MB/sec. Hopefully networking prices on 2.5/5/10GbE will drop enough soon that I will upgrade there. I don't necessarily need to saturate a 2.5GbE, let alone a 5 or 10GbE interface, with big transfers.

    So my benchmark is 250MB/sec sustained on full-disk transfers. That way I don't need to set up drives in RAID to accommodate higher speeds. One of the things I am looking forward to/hoping for with SSDs is being able to move to a storage pool/JBOD type setup, so that as I start pushing my capacity limits I can just add a new drive rather than needing to replace an entire array. And those sustained speeds had better hold up with the disk 80-90% full. One of the things that makes me shy away from using HDD arrays that full is that performance suffers a lot once you get that far in on the tracks (my current 2x3TB RAID0 arrays can push about 320MB/sec on the outer tracks, but only about 190MB/sec on the inner ones).

    One of these days I could justify dishing out $600-800 to replace my 2x3TB arrays with 5-6TB of storage in each machine. Especially if it is storage that I can potentially keep using for a long time (I don't know, call it a decade or so) by just adding a new disk once capacity starts getting low, rather than replacing all of them. But I need/want good performance while I am at it.

    For my server I can still comfortably live with a 60GB system drive. When I upgrade it, I will likely FINALLY replace the old SATA II SSD in there with a newer SATA III 120GB SSD, or get a 120GB M.2 drive, depending on what the board will support. Basically the smallest capacity I can get; it doesn't need heaps of performance. For my desktop I will likely get a 500GB M.2 TLC drive once I finally upgrade it (currently running a last-generation 256GB TLC SATA III drive as the system disk). There I'd like some nice performance, but frankly a good 512GB M.2 TLC drive is a big enough jump in performance that I don't care to spend the money on an MLC drive for the system disk.
  • TheCurve - Tuesday, November 27, 2018 - link

    Another great review from Billy. Love reading your stuff!
  • rocky12345 - Tuesday, November 27, 2018 - link

    This is great and a step closer to getting rid of spinning drives, but they are not there yet. The prices of these Samsung drives are far better at the 4TB range, but I just picked up a 4TB WD Blue for $99 Canadian in the Black Friday sales. Granted, that same drive was $209.99 Canadian before the sale, but even then it's far cheaper than $599.99 US for the Samsung 4TB almost-SSD.

    With all of that said, I am very happy to see large-capacity SSDs coming into a lower price range. It's probably going to be another 5-6 years before prices match between spinning drives and SSDs. And if Seagate and WD totally stop making spinning drives, then of course we have nothing to fall back on if we want large-capacity hard drives in our systems for data storage, and we will have to pay the price of these types of SSDs.

    On a side note, my fear is that when Seagate and WD stop making spinning drives, SSD prices might skyrocket because we have no other option. The only reason SSDs are coming down in the larger sizes is that these companies are trying to compete with large-capacity spinning drives on price.
  • kpb321 - Tuesday, November 27, 2018 - link

    I'm not sure SSDs will ever pass HDDs in $ per GB for pure bulk storage. Even the "cheap" 4TB SSD is around the cost of 2x 10TB HDDs, so somewhere around 5x more expensive per GB. Not an impossible margin, but still a lot of ground to make up against a moving target. What has happened is that SSDs have gotten "big enough" and "cheap enough" that for many people they are viable as the only drive in their machine. Looking at Newegg, it's ~$45 for a basic 1TB HDD, and you can pick up a cheap ~250GB SSD for a little less or a ~512GB SSD for a little more. I'd certainly prefer a 250 or 512GB SSD as the drive in my system over a 1TB HDD, but if you do need bulk storage (1TB+), HDDs are still hard to beat. 3TB HDDs start at ~$85 and continue to be more cost-effective at higher sizes, so I don't think HDDs will completely vanish. They may become increasingly specialized toward bulk storage and cloud providers, with things like SMR trading off some performance for increased density, but I doubt they will go away. Cheap PC manufacturers still seem to like the cheap 1TB HDDs: about the same cost as a small but usable SSD, but they give big numbers for the ads.
  • dontlistentome - Tuesday, November 27, 2018 - link

    It's about 10 years ago that I bought an 80GB Intel drive for $250. We're getting 2TB of flash for that now - about a 25x reduction in price per GB.

    Another 5 times cheaper? Easy.
  • DanNeely - Tuesday, November 27, 2018 - link

    For comparison, in 2008 the biggest HDDs were apparently 1.5TB; that drive launched at ~$215 (although it rapidly dropped afterward). Today a 14TB IronWolf is $530 at B&H. That's 9.3x more capacity at 2.4x the price, or roughly a 3.8x improvement in price per TB at the top end. Or only 2.7x if you use the ~$150 estimated price for November '08.

    Flash might end up needing to drop 10x in price per TB to beat spinning rust; but it has momentum behind it, and the more market share it wins on size/power/performance, the more its economies of scale and larger R&D budgets will tilt the floor in its favor.

    https://www.tomshardware.com/reviews/hdd-terabyte-...

    https://camelcamelcamel.com/Seagate-Barracuda-7200...
  • Lolimaster - Thursday, November 29, 2018 - link

    Ever heard of the law of diminishing returns?
    At first a tech is hard to produce and sell, then it scales until you hit a wall. Same with flash cards in the N64 era: 32-64MB for $100.

    Even HDDs, a well-known tech, are having a hard time executing the next step: HAMR, which should boost capacities to 20-50TB, is running four years late.
  • The_Assimilator - Thursday, November 29, 2018 - link

    Just like HDDs never passed tapes in cost/GB for bulk storage.
