AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops to steady state. This test is far more representative of a power user's day-to-day usage, and is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice: once on a freshly erased drive, and once after filling the drive with sequential writes.
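
As a rough sketch of that two-run procedure (this is not AnandTech's actual test harness; the device path, fio parameters, and trace-replay command are illustrative assumptions), the sequence looks something like this in Python:

    # Empty-drive vs. full-drive test procedure, sketched with nvme-cli and fio.
    # DEVICE and replay_heavy_trace are hypothetical placeholders.
    import subprocess

    DEVICE = "/dev/nvme0n1"

    def secure_erase():
        # Return the drive to a freshly-erased state (destroys all data!)
        subprocess.run(["nvme", "format", DEVICE, "--ses=1"], check=True)

    def fill_sequential():
        # Fill the whole drive with 128K sequential writes, shrinking the SLC cache
        subprocess.run(["fio", "--name=fill", f"--filename={DEVICE}",
                        "--rw=write", "--bs=128k", "--ioengine=libaio",
                        "--direct=1", "--size=100%"], check=True)

    def run_heavy_trace():
        # Stand-in for replaying the recorded Heavy I/O trace against the drive
        subprocess.run(["replay_heavy_trace", DEVICE], check=True)

    secure_erase()
    run_heavy_trace()   # empty-drive run: writes land mostly in the SLC cache
    secure_erase()
    fill_sequential()   # precondition so the SLC cache is at its minimum size
    run_heavy_trace()   # full-drive run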

ATSB - Heavy (Data Rate)

When the Heavy test is run on an empty Intel SSD 660p, the test is able to operate almost entirely within the large SLC cache and the average data rate is competitive with many high-end NVMe SSDs. When the drive is full and the SLC cache is small, the low performance of the QLC NAND shows through with an average data rate that is slower than the 600p or Crucial MX500, but still far faster than a mechanical hard drive.
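
That full-versus-empty gap is what a simple two-speed model of SLC caching predicts: writes that land in the cache go fast, and the remainder goes at QLC speed. A minimal sketch, where every rate and size is an illustrative assumption rather than a measured value:

    # Toy model: average write data rate as a function of SLC cache size.
    def avg_write_rate_mbps(total_write_gb, slc_cache_gb,
                            slc_rate_mbps=1800.0, qlc_rate_mbps=100.0):
        cached = min(total_write_gb, slc_cache_gb)      # absorbed by the cache
        direct = total_write_gb - cached                # written at QLC speed
        seconds = (cached * 1024 / slc_rate_mbps) + (direct * 1024 / qlc_rate_mbps)
        return total_write_gb * 1024 / seconds

    # Assume the test writes ~60 GB; a large cache on an empty drive vs. a
    # small one on a full drive (cache sizes are guesses, not Intel's specs).
    print(avg_write_rate_mbps(60, 140))  # ~1800 MB/s: everything hits the cache
    print(avg_write_rate_mbps(60, 12))   # ~123 MB/s: most writes hit QLC directly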

ATSB - Heavy (Average Latency)
ATSB - Heavy (99th Percentile Latency)

The average and 99th percentile latency scores of the 660p on the empty-drive test run are clearly high-end; the use of a four-channel controller doesn't seem to be holding back the performance of the SLC cache. The full-drive latency scores are an order of magnitude higher, and worse than those of other SSDs of comparable capacity, but not worse than some of the slowest low-capacity TLC drives we've tested.
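
For reference, these scores are derived from the per-I/O completion latencies recorded during the test; a minimal sketch of the calculation, with invented sample values:

    # Average and nearest-rank 99th-percentile latency from per-I/O samples.
    def latency_stats(latencies_us):
        ordered = sorted(latencies_us)
        avg = sum(ordered) / len(ordered)
        # the latency that 99% of I/Os complete within
        p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
        return avg, p99

    sample = [80, 90, 100, 95, 110] * 1000 + [20000] * 100  # a slow tail
    avg, p99 = latency_stats(sample)
    print(f"avg = {avg:.0f} us, 99th percentile = {p99} us")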

ATSB - Heavy (Average Read Latency)
ATSB - Heavy (Average Write Latency)

The average read latency of the Intel 660p on the Heavy test is about 2.5x higher for the full-drive test run than when the test is run on a freshly-erased drive. Neither score is unprecedented for an NVMe drive, and it's not quite the largest disparity we've seen between full and empty performance. The average write latency is where the 660p suffers most from being full, with latency that's about 60% higher than the already-slow 600p.

ATSB - Heavy (99th Percentile Read Latency)
ATSB - Heavy (99th Percentile Write Latency)

The 99th percentile read latency scores from the 660p are fine for a low-end NVMe drive, and close to high-end for the empty-drive test run that is mostly using the SLC cache. The 99th percentile write latency is similarly great when using the SLC cache, but almost 20 times worse when the drive is full. This is pretty bad in comparison to other current-generation NVMe drives or mainstream SATA drives, but is actually slightly better than the Intel 600p's best case for 99th percentile write latency.

ATSB - Heavy (Power)

The Intel SSD 660p shows above-average power efficiency on the Heavy test, by NVMe standards. Even on the full-drive test run, its energy usage is lower than that of several high-end drives.
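
The energy figure behind that comparison is just power integrated over the duration of the test. A minimal sketch using the trapezoidal rule, with an invented power trace:

    # Energy in joules from power samples taken at a fixed interval.
    def energy_joules(samples_w, interval_s):
        total = 0.0
        for a, b in zip(samples_w, samples_w[1:]):
            total += (a + b) / 2 * interval_s  # trapezoid between samples
        return total

    power_trace = [1.2, 2.8, 3.1, 2.9, 1.0]  # watts, sampled once per second
    print(f"{energy_joules(power_trace, 1.0):.1f} J")  # 9.9 J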


Comments

  • woggs - Tuesday, August 07, 2018

    Stated another way: scaling 2D flash cells proportionally reduced the stored charge available to divide up into multiple levels, making any number of bits per cell proportionally more difficult. The question for cost reduction was which is faster and cheaper: scale the cell to a smaller size, or deliver more bits per cell? Two bits per cell was achievable fast enough to justify its use for cost reduction in parallel with process scaling, which was taking 18 to 24 months a pop. TLC was achievable on 2D nodes (not the final ones), but not before the next process node would be available. 3D has completely changed the scaling game and makes more bits per cell feasible, with less degradation in the ability to deliver as the process scales. The early 3D nodes "weren't very good" because they were the first 3D nodes going through the new learning curve.
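
woggs' charge-division argument is easy to make concrete: each extra bit per cell doubles the number of voltage levels that must fit within the same window, shrinking the margin between adjacent levels. A quick sketch (the 1.0 window is an arbitrary normalization, not a real device figure):

    # Voltage levels and relative margin per level for each cell type.
    for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")]:
        levels = 2 ** bits
        margin = 1.0 / (levels - 1)  # spacing between adjacent levels
        print(f"{name}: {levels:2d} levels, relative margin {margin:.3f}")
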
  • PeachNCream - Tuesday, August 07, 2018

    Interesting performance measurements. Variable-size pseudo-SLC really helps to cover up the QLC performance penalties, which look pretty scary when the drive is mostly full. The 0.1 DWPD rating is bad, but typical consumers aren't likely to thrash a drive with that many writes on a daily basis, though AnandTech's weighty benchmarks ate up 1% of the total rated endurance in what is, comparatively, the blink of an eye in the overall life of a storage device.

    In the end, I don't think there's a value proposition in owning the 660p specifically if you're compelled to leave a substantial chunk of the drive empty so that performance doesn't rapidly decline. In effect, the buyer is purchasing more capacity than required in order to retain performance, so why not just purchase a TLC or MLC drive, suffer less performance loss, and thereby gain more usable space?
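
To put PeachNCream's 0.1 DWPD figure in absolute terms, the rated endurance works out as below; the capacity and warranty length are assumptions for illustration, not values quoted from the review:

    # Drive-writes-per-day converted to total rated terabytes written (TBW).
    def rated_tbw(dwpd, capacity_gb, warranty_years):
        return dwpd * capacity_gb * warranty_years * 365 / 1000

    tbw = rated_tbw(dwpd=0.1, capacity_gb=1024, warranty_years=5)
    print(f"~{tbw:.0f} TB of rated writes")               # ~187 TB
    print(f"1% of that is ~{tbw * 10:.0f} GB of writes")  # the benchmarks' share
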
  • Oxford Guy - Tuesday, August 07, 2018

    The 840's TLC degraded performance because of falling voltages, not because of anyone "thrashing" the drive.

    However, it is also true that the performance of the 120 GB drive was appalling in steady state.
  • mapesdhs - Wednesday, August 08, 2018

    Again, 840 EVO; few sites covered the standard 840, so there's not much data. I think it does suffer from the same issue, but most media coverage was about the EVO version.
  • Spunjji - Wednesday, August 08, 2018

    It does suffer from the same problem. It wasn't fixed. Not sure why Oxford *keeps* bringing it up in response to unrelated comments, though.
  • Oxford Guy - Friday, August 10, 2018

    The point is that there is more to SSD reliability than endurance ratings.
  • Oxford Guy - Friday, August 10, 2018

    "few sites covered the standard 840"

    The 840 got a lot of hype and sales.
  • FunBunny2 - Tuesday, August 07, 2018

    With regard to power-off retention: is a statistical estimate from existing USB sticks (on whatever node) and the like meaningful? Whether or not, what might the prediction be?
  • npz - Tuesday, August 07, 2018

    IMO no, not from USB sticks. There's this whitepaper from Dell concerning retention for their enterprise SSDs:
    http://www.dell.com/downloads/global/products/pvau...

    It depends on how much the flash has been used (P/E cycles consumed), the type of flash, and the storage temperature. In MLC and SLC, this can be as low as 3 months, and in the best case more than 10 years. The retention is highly dependent on temperature and workload.

    The pattern that I've noticed is that the more a drive is oriented toward write durability and write performance, the lower the retention (so it's actually inversely related).

    Now, JEDEC specifies 1-year retention, but who knows how many SSDs really meet it. As we saw with the Samsung 840, it wouldn't have lasted a year without the firmware update, and even that fix requires the drive to be powered on periodically. I still don't think it would last a full year after being tortured and then unplugged.
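
The temperature dependence npz describes is commonly modeled with an Arrhenius acceleration factor, as in JEDEC-style retention qualification. A minimal sketch, assuming an activation energy rather than using a measured one:

    # Arrhenius acceleration: time at a hot storage temperature corresponds to
    # a much longer equivalent time at a cooler one. Ea = 1.1 eV is an assumption.
    import math

    K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

    def acceleration_factor(t_stress_c, t_use_c, ea_ev=1.1):
        t_stress = t_stress_c + 273.15  # convert Celsius to kelvin
        t_use = t_use_c + 273.15
        return math.exp(ea_ev / K_BOLTZMANN_EV * (1 / t_use - 1 / t_stress))

    # Retention demonstrated at 55 C maps to ~25x longer at 30 C under this model
    print(f"{acceleration_factor(55, 30):.1f}x")
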
  • Oxford Guy - Tuesday, August 07, 2018

    And yet every article about it said Samsung "fixed" the drives.
