Conclusion

As the first SSD with QLC NAND to hit our testbed, the Intel SSD 660p provides much-awaited hard facts to settle the rumors and worries surrounding QLC NAND. With only a short time to review the drive, we haven't been able to do much to measure write endurance directly, but our 1TB sample has been subjected to 8TB of writes and counting (out of a rated 200TB endurance) without reporting any errors, and the SMART status indicates about 1% of the endurance has been used, so things are looking fine thus far.

On the performance side of things, we have confirmed that QLC NAND is slower than TLC, but the difference is not as drastic as many early predictions suggested. If we didn't already know what NAND the 660p uses under the hood, Intel could pass it off as an unusually slow TLC SSD. Even the worst-case performance is no worse than what we've seen from some older, smaller TLC SSDs with NAND that is much slower than the current 64-layer parts.

The performance of the SLC cache on the Intel SSD 660p is excellent, rivaling the high-end 8-channel controllers from Silicon Motion. When the 660p isn't very full and the SLC cache is still quite large, it provides significant boosts to write performance. Read performance is usually very competitive with other low-end NVMe SSDs and well out of reach of SATA SSDs. The one exception is that the 660p is not very good at suspending write operations in favor of completing a quicker read operation, so read latency can be significantly elevated during mixed workloads or while the drive is still flushing the SLC cache in the background.

Even though our synthetic tests are designed to give drives a reasonable amount of idle time to flush their SLC write caches, the 660p keeps most of the data as SLC until the capacity of QLC becomes necessary. This means that when the SLC cache does eventually fill up, there's a large backlog of work to be done migrating data into QLC blocks. We haven't yet quantified how quickly the 660p can fold data from the SLC cache into QLC during idle time, but it clearly isn't fast enough to keep pace with our current test configurations. It also appears that most or all of the tests run after filling the drive to 100% did not give the 660p enough idle time to complete its background cleanup work, so even some of the read performance measurements for the full-drive test runs suffer the consequences of a full SLC write cache.
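The interaction between write bursts, idle-time folding, and cache capacity can be illustrated with a toy model. The cache size and fold rate below are arbitrary assumptions for illustration, not measured 660p figures:

```python
def simulate(events, cache_gib=140.0, fold_gib_per_s=0.5):
    """Toy SLC-cache model: 'write' events add data to the SLC cache until it
    fills; 'idle' events fold cached data into QLC in the background.
    All parameters are illustrative guesses, not measured 660p values."""
    cached = 0.0    # GiB currently held in SLC
    overflow = 0.0  # GiB that spilled past the cache (written at slow QLC speed)
    for kind, amount in events:
        if kind == "write":                # amount = GiB written by the host
            fits = min(amount, cache_gib - cached)
            cached += fits
            overflow += amount - fits
        else:                              # amount = seconds of idle time
            cached = max(0.0, cached - fold_gib_per_s * amount)
    return cached, overflow

# A burst that fits, a short idle gap, then a burst that spills:
cached, overflow = simulate([("write", 100), ("idle", 60), ("write", 100)])
```

The point of the sketch is that a short idle gap reclaims only a small fraction of a large cached backlog, so back-to-back bursts can still overflow into slow QLC-speed writes.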

In the real world, it is very rare for a consumer drive to need to accept tens or hundreds of GB of writes without interruption. Even the installation of a very large video game can mostly fit within the SLC cache of the 1TB 660p when the drive is not too full, and the steady-state write performance is pretty close to the highest rate at which data can be streamed into a computer over gigabit Ethernet. When copying huge amounts of data off another SSD or sufficiently fast hard drive(s) it is possible to approach the worst-case performance our benchmarks have revealed, but those kinds of jobs already last long enough that the user will take a coffee break while waiting.
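The gigabit Ethernet comparison works out as follows; the ~100MB/s steady-state floor is the rough worst-case figure from our results, and the protocol-overhead percentage is an assumption:

```python
# Gigabit Ethernet line rate vs. an assumed ~100 MB/s QLC steady-state floor.
line_rate_mb_s = 1e9 / 8 / 1e6        # 125 MB/s raw line rate
payload_mb_s = line_rate_mb_s * 0.94  # roughly 117 MB/s after TCP/IP overhead
qlc_floor_mb_s = 100                  # assumed worst-case sustained write rate

# The drive's steady-state floor sits only ~15-20 MB/s below what a
# saturated gigabit link can actually deliver as payload.
headroom_mb_s = payload_mb_s - qlc_floor_mb_s
```

In other words, even a fully saturated gigabit link only modestly outruns the drive's worst case.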

Given the above caveats and the rarity with which they matter, the 660p's performance seems great for the majority of consumers who have light storage workloads. The 660p usually offers substantially better performance than SATA drives for very little extra cost and with only a small sacrifice in power efficiency. The 660p proves that QLC NAND is a viable option for general-purpose storage, and most users don't need to know or care that the drive is using QLC NAND instead of TLC NAND. The 660p still carries a bit of a price premium over what we would expect a SATA QLC SSD to cost, so it isn't the cheapest consumer SSD on the market, but it has effectively closed the price gap between mainstream SATA and entry-level NVMe drives.

Power users may not be satisfied with the limitations of the Intel SSD 660p, but for more typical users it offers a nice step up from the performance of SATA SSDs with a minimal price premium, making it an easy recommendation.

92 Comments

  • danwat1234 - Wednesday, August 8, 2018 - link

    The drive is only rated to write to each cell 200 times before it begins to wear out? Ewwww.
  • azazel1024 - Wednesday, August 8, 2018 - link

    For some consumer uses, yes 100MiB/sec constant write speed isn't terrible once the SLC cache is exhausted, but it'll probably be a no for me. Granted, SSD prices aren't where I want them to be yet to replace my HDDs for bulk storage. Getting close, but prices still need to come down by about a factor of 3 first.

    My use case is 2x1GbE between my desktop and my server, and at some point sooner rather than later I'd like to go to 2.5GbE or better yet 5GbE. No, I don't run a 4K video editing studio or anything like that, but yes, I do occasionally throw 50GiB files across my network. Right now my network link is the bottleneck, though as my RAID0 arrays fill up it is getting to be disk bound (2x3TB Seagate 7200rpm drive arrays in both machines). And with small files it definitely runs into disk issues.

    I'd like the network link to continue to be the limiting factor and not the drives. If I moved to a 2.5GbE link which can push around 270MiB/sec and I start lobbing large files, the drive steady state write limits are going to quickly be reached. And I really don't want to be running an SSD storage array in RAID. That is partly why I want to move to SSDs so I can run a storage pool and be confident that each individual SSD is sufficiently fast to at least saturate 2.5GbE (if I run 5GbE and the drives can't keep up, at least in an SLC cache saturated state, I am okay with that, but I'd like them to at least be able to run 250+ MiB/sec).

    Also although rare, I've had to transfer a full back-up of my server or desktop to the other machine when I've managed to do something to kill the file copy (only happened twice over the last 3 years, but it HAS happened. Also why I keep a cold back-up that is updated every month or two on an external HDD). When you are transferring 3TiB or so of data, being limited to 100MiB/sec would really suck. At least right now when that happens I can push an average of 200MiB/sec (accounting for some of it being smaller files which are getting pushed at more like 80-140MiB/sec rather than the 235MiB/sec of large files).

    That is a difference of close to 8:30 versus about 4:15. Ideally I'd be looking at more like 3:30 for 3TiB.
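    (A quick sanity check of those figures, assuming exactly 3TiB and perfectly sustained rates:)

```python
# Transfer time for 3 TiB at a sustained 100 vs. 200 MiB/s.
TIB_IN_MIB = 1024 * 1024  # MiB per TiB

def hhmm(mib, rate_mib_s):
    """Format transfer time as H:MM."""
    secs = mib / rate_mib_s
    return f"{int(secs // 3600)}:{int(secs % 3600 // 60):02d}"

slow = hhmm(3 * TIB_IN_MIB, 100)  # ~8:44
fast = hhmm(3 * TIB_IN_MIB, 200)  # ~4:22
```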

    But, then again, looking at price movement, unless I win the lottery, SSD prices are probably going to take at least 4 or more likely 5-6 years before I can drop my HDD array and just replace it with SSDs. Heck, odds are excellent I'll end up replacing my HDD array with a set of even faster 4 or 6TB HDDs before SSDs are close enough in price (close enough to me is paying $1000 or less for 12TB of SSD storage).

    That is keeping in mind that with HDDs I'd likely want utilized capacity under 75% and ideally under 67% to keep from utilizing those inner tracks and slowing way down. With SSDs (ignoring the SLC write cache size reductions), write penalties seem to be much less. Or at least the performance (for TLC and MLC) is so much higher than HDDs to start with, that it still remains high enough not to be a serious issue for me.

    So an SSD storage pool could probably be up around 80-90% utilized and be okay, whereas a HDD array is going to want to be no more than 67-75% utilized. Also, in my use case it should be easy enough to simply slap in another SSD to increase the pool size, whereas with HDDs I'd need to chuck the entire array and get new sets of matched drives.
  • iwod - Wednesday, August 8, 2018 - link

    On Mac, two weeks of normal usage has generated 1TB of written data. It averages 10-15GB per day.

    100TB endurance is nothing.......
  • abufrejoval - Wednesday, August 8, 2018 - link

    I wonder if underneath the algorithm has already changed to do what I’d call the ‘smart’ thing: essentially, QLC encoding is a way of compressing data 4:1 (brings back old memories of “Stacker”) at the cost of write bandwidth.

    So unless you run out of free space, you first let all data be written in fast SLC mode and then start compressing things into QLC as a background activity. As long as the input isn’t constantly saturated, the background compression should on average reclaim SLC-mode blocks faster than they are filled with new data. The bigger the overall capacity and the remaining cache, the longer the burst it can sustain. Of course, once the SSD is completely filled, the cache will be whatever they put into the spare area and updates will dwindle down to the ‘native’ QLC write rate of 100MB/s.

    In a way this is the perfect storage for stuff like Steam games: Those tend to be hundreds of gigabytes these days, they are very sensitive to random reads (perhaps because the developers don’t know how to tune their data) but their maximum change rate is actually the capacity of your download bandwidth (wish mine was 100MB/s).

    But it’s also great for data warehouse databases or quite simply data that is read-mostly, but likes high bandwidth and better latency than spinning disks.

    The problem that I see, though, is that the compression pass needs power. So this doesn’t play well with mobile devices that you shut off immediately after slurping massive amounts of data. Worst case would be a backup SSD where you write and unplug.

    The specific problem I see for AnandTech and technical writers is that you’re no longer comparing hardware but complex software. And Emil Post proved in 1946 that that’s generally impossible.

    And with an MRAM buffer (those other articles) you could even avoid writing things at SLC first, as long as the write bursts do not overflow the buffer and QLC encoding empties it faster on average than it is filled. Should a burst overflow it, it could switch to SLC temporarily.

    I think I like it…

    And I think I would like it even better, if you could switch the caching and writing strategy at the OS or even application level. I don’t want to have to decide between buying a 2TB QLC, 1TB TLC, a 500GB MLC or 250GB SLC and then find out I need a little more here and a little less there. I have knowledge at the application (usage level), how long-lived my data will be and how it should best be treated: Let’s just use it, because the hardware internally is flexible enough to support at least SLC, TLC and QLC.

    That would also make it easier to control the QLC rewrite or compression activity in mobile or portable form factors.
  • ikjadoon - Thursday, August 9, 2018 - link

    Billy, thank you!

    I posted a reddit comment a long time ago about separating SSD performance by storage size! I might be behind, but this is the first I’ve seen of it. It’s, to me, a much more reliable graph for purchases.

    A big shout out. 💪👌
  • dromoxen - Friday, August 10, 2018 - link

    You would hope these things would have even larger DRAM buffers than TLC. I will pass on this 1st gen and stick with HDDs.
    Has Intel stopped making SSD controllers?
    To do some tests on write endurance, why not cool the M.2 NAND down to LN2 temps? I'm sure der8auer has some pots and equipment. I expect these will be even cheaper by Jan '19.
  • tomatotree - Tuesday, August 14, 2018 - link

    Intel makes their own controllers for all their enterprise drives, and all 3DXP drives, but for consumer NAND drives they use 3rd party controllers with customized firmware.

    As for LN2 cooling, what would that show? That the drive might fail if you use it in a temperature range way out of spec?
  • 351Cleveland - Monday, August 20, 2018 - link

    I’m confused. Why would I buy this over, say, an MX500 (my default go-to)? This thing is a dog in every way. How can AnandTech recommend something they admit is flawed?
  • icebox - Thursday, December 6, 2018 - link

    I don't understand why everybody fusses about retention and endurance so much. Do you really buy SSDs to leave them on a shelf for months or years? Retention? If it dies during warranty you exchange it. If it dies after that, then it's probably slow and small in comparison with what's available then.
    You do have backups, right? Because no review or test or battery of tests will guarantee that *your drive* won't die.

    BTW that's the only way I saw ssd's die - it works perfectly and after a reboot it's gone, not detected by the system.
  • icebox - Thursday, December 6, 2018 - link

    The day has come when choosing storage is 4 tiered.

    You have fast NVMe, slow NVMe, SATA SSDs and traditional HDDs. At least I kicked HDDs off my desktop. I have a Samsung NVMe for boot and applications and SATA SSDs for media and photos. Now I'm looking at replacing those with the 2TB 660p and moving them to the NAS for bulk storage.
