As the first SSD with QLC NAND to hit our testbed, the Intel SSD 660p provides much-awaited hard facts to settle the rumors and worries surrounding QLC NAND. With only a short time to review the drive, we haven't been able to measure write endurance in depth, but our 1TB sample has been subjected to 8TB of writes and counting (out of a rated 200TB endurance) without reporting any errors. The SMART data indicates that about 1% of the rated endurance has been used, so things are looking fine thus far.
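For context, the rated endurance works out as follows. This is a back-of-the-envelope sketch: the 5-year warranty term is from Intel's published spec, and SMART's percentage-used field is the controller's own wear estimate, so it need not match raw host-write arithmetic.

```python
# Back-of-the-envelope endurance arithmetic for the 1TB 660p.
# Rated endurance and warranty term are from Intel's spec sheet;
# the writes-so-far figure is from our testing.

RATED_TBW = 200        # rated terabytes written, 1TB model
CAPACITY_TB = 1        # drive capacity in TB
WARRANTY_YEARS = 5     # warranty period

writes_so_far_tb = 8   # host writes during our review

# Drive writes per day the rating works out to
dwpd = RATED_TBW / (CAPACITY_TB * 365 * WARRANTY_YEARS)

# Fraction of the rating consumed by raw host writes alone.
# SMART's "percentage used" tracks the controller's internal wear
# estimate, so it can report a lower figure than this.
pct_used = 100 * writes_so_far_tb / RATED_TBW

print(f"{dwpd:.2f} DWPD")                      # ≈ 0.11 drive writes per day
print(f"{pct_used:.0f}% of rated host writes") # 4% of the raw TBW budget
```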

On the performance side of things, we have confirmed that QLC NAND is slower than TLC, but the difference is not as drastic as many early predictions about QLC NAND suggested. If we didn't already know what NAND the 660p uses under the hood, Intel could pass it off as being an unusually slow TLC SSD. Even the worst-case performance isn't any worse than what we've seen with some older, smaller TLC SSDs with NAND that is much slower than the current 64-layer stuff.

The performance of the SLC cache on the Intel SSD 660p is excellent, rivaling the high-end 8-channel controllers from Silicon Motion. When the 660p isn't very full and the SLC cache is still quite large, it provides significant boosts to write performance. Read performance is usually very competitive with other low-end NVMe SSDs and well out of reach of SATA SSDs. The main exception is that the 660p is not very good at suspending write operations in favor of completing a quicker read operation, so read latency can be significantly elevated during mixed workloads or while the drive is still performing background work to flush the SLC cache.

Even though our synthetic tests are designed to give drives a reasonable amount of idle time to flush their SLC write caches, the 660p keeps most of the data as SLC until the capacity of QLC becomes necessary. This means that when the SLC cache does eventually fill up, there's a large backlog of work to be done migrating data into QLC blocks. We haven't yet quantified how quickly the 660p can fold data from the SLC cache into QLC during idle time, but it clearly isn't fast enough to keep pace with our current test configurations. It also appears that most or all of the tests run after filling the drive to 100% did not give the 660p enough idle time after the fill operation to complete its background cleanup work, so even some of the read performance measurements for the full-drive test runs suffer the consequences of filling up the SLC write cache.
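To illustrate why a lazily-flushed cache builds up a backlog, here is a toy model. The sizes, rates, and policy below are our assumptions for illustration only, not Intel's actual firmware behavior.

```python
# Toy model of a lazily-flushed SLC write cache (illustrative only;
# the cache size, burst sizes, and folding rate are assumptions,
# not measured 660p firmware parameters).

def cache_backlog(bursts_gb, idle_s_between_bursts, fold_gb_per_s, cache_gb):
    """Return how much data is still parked in SLC after a series of
    write bursts, given a fixed rate of folding data into QLC
    during the idle gaps between bursts."""
    backlog = 0.0
    for burst in bursts_gb:
        # Burst lands in SLC; once the cache is full, further writes
        # would go straight to QLC, so the backlog caps at cache size.
        backlog = min(backlog + burst, cache_gb)
        # Idle-time folding drains the backlog into QLC.
        backlog = max(0.0, backlog - idle_s_between_bursts * fold_gb_per_s)
    return backlog

# Ten 20GB bursts, 60s of idle time after each, folding at 0.1 GB/s,
# with a 140GB dynamic SLC cache: the drain never keeps up.
leftover = cache_backlog([20] * 10, 60, 0.1, 140)
print(f"{leftover:.0f} GB still in SLC")  # 134 GB: the cache is nearly full
```

The point of the sketch is only that when folding is slower than the write rate, the backlog grows monotonically until the cache fills, after which new writes hit QLC directly.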

In the real world, it is very rare for a consumer drive to need to accept tens or hundreds of GB of writes without interruption. Even the installation of a very large video game can mostly fit within the SLC cache of the 1TB 660p when the drive is not too full, and the steady-state write performance is pretty close to the highest rate at which data can be streamed into a computer over gigabit Ethernet. When copying huge amounts of data off another SSD or sufficiently fast hard drive(s), it is possible to approach the worst-case performance our benchmarks have revealed, but those kinds of jobs already last long enough that the user will take a coffee break while waiting.
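The gigabit Ethernet comparison works out as follows. This is a rough sketch; the ~5% protocol overhead figure is a common rule-of-thumb assumption, not a measurement.

```python
# Rough ceiling on data streamed in over gigabit Ethernet, as a
# reference point for the 660p's steady-state write speed.

LINK_BITS_PER_S = 1_000_000_000   # gigabit Ethernet line rate

raw_mb_s = LINK_BITS_PER_S / 8 / 1e6   # 125 MB/s before any overhead

# Ethernet framing plus TCP/IP headers cost a few percent of the
# line rate; ~5% is a rule-of-thumb assumption, not a measured figure.
practical_mb_s = raw_mb_s * 0.95

print(f"{raw_mb_s:.0f} MB/s raw, ~{practical_mb_s:.0f} MB/s practical")
```

So a drive that sustains write speeds north of roughly 120 MB/s can already absorb a gigabit network transfer at full speed.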

Given the above caveats and the rarity with which they matter, the 660p's performance seems great for the majority of consumers who have light storage workloads. The 660p usually offers substantially better performance than SATA drives for very little extra cost and with only a small sacrifice in power efficiency. The 660p proves that QLC NAND is a viable option for general-purpose storage, and most users don't need to know or care that the drive is using QLC NAND instead of TLC NAND. The 660p still carries a bit of a price premium over what we would expect a SATA QLC SSD to cost, so it isn't the cheapest consumer SSD on the market, but it has effectively closed the price gap between mainstream SATA and entry-level NVMe drives.

Power users may not be satisfied with the limitations of the Intel SSD 660p, but for more typical users it offers a nice step up from the performance of SATA SSDs with a minimal price premium, making it an easy recommendation.

Comments

  • Ryan Smith - Tuesday, August 7, 2018 - link

    3D NAND is not a requirement for TLC. However most of the 32/48 layer processes weren't very good, resulting in poorly performing TLC NAND. The 64 layer stuff has turned out much better, finally making TLC viable from all manufacturers.
  • woggs - Tuesday, August 7, 2018 - link

    2D nand was abandoned because it squeezed the storage element down to a size where it became infeasible to scale further and still store data reliably. The move to 3D nand took back the needed size of the memory element to store more charge. Cost reduction from scaling is no longer reliant directly on the reduction of the storage element. This is a key enabler for TLC and QLC.
  • woggs - Tuesday, August 7, 2018 - link

    Stated another way... Scaling 2D flash cells proportionally reduced the stored charge available to divide up into multiple levels, making any number of bits per cell proportionally more difficult. The question for cost reduction was which is faster and cheaper: scale the cell to a smaller size or deliver more bits per cell? 2 bits per cell was achievable fast enough to justify its use for cost reduction in parallel with process scaling, which was taking 18 to 24 months a pop. TLC was achievable on 2D nodes (not the final ones) but not before the next process node would be available. 3D has completely changed the scaling game and makes more bits per cell feasible, with less degradation in the ability to deliver as the process scales. The early 3D nodes "weren't very good" because they were the first 3D nodes going through the new learning curve.
  • PeachNCream - Tuesday, August 7, 2018 - link

    Interesting performance measurements. Variable-size pseudo-SLC really helps to cover up the QLC performance penalties, which look pretty scary when the drive is mostly full. The 0.1 DWPD rating is bad, but typical consumers aren't likely to thrash a drive with that many writes on a daily basis, though AnandTech's weighty benchmarks ate up 1% of the total rated endurance in what is a comparative blink of an eye in the overall life of a storage device.

    In the end, I don't think there's a value proposition in owning the 660p specifically if you're compelled to leave a substantial chunk of the drive empty so the performance doesn't rapidly decline. In effect, the buyer is purchasing more capacity than required just to retain performance, so why not purchase a TLC or MLC drive instead, suffer less performance loss, and therefore gain more usable space?
  • Oxford Guy - Tuesday, August 7, 2018 - link

    The 840's TLC degraded performance because of falling voltages, not because of anyone "thrashing" the drive.

    However, it is also true that the performance of the 120 GB drive was appalling in steady state.
  • mapesdhs - Wednesday, August 8, 2018 - link

    Again, that was the 840 EVO; few sites covered the standard 840, so there's not much data. I think it does suffer from the same issue, but most media coverage was about the EVO version.
  • Spunjji - Wednesday, August 8, 2018 - link

    It does suffer from the same problem. It wasn't fixed. Not sure why Oxford *keeps* bringing it up in response to unrelated comments, though.
  • Oxford Guy - Friday, August 10, 2018 - link

    The point is that there is more to SSD reliability than endurance ratings.
  • Oxford Guy - Friday, August 10, 2018 - link

    "few sites covered the standard 840"

    The 840 got a lot of hype and sales.
  • FunBunny2 - Tuesday, August 7, 2018 - link

    With regard to power-off retention: is a statistical estimate from existing USB sticks (on whatever node) and such meaningful? Either way, what might the prediction be?
