AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day-to-day usage, and is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice, once on a freshly erased drive and once after filling the drive with sequential writes.

ATSB - Heavy (Data Rate)

When the Heavy test is run on an empty Intel SSD 660p, the test is able to operate almost entirely within the large SLC cache and the average data rate is competitive with many high-end NVMe SSDs. When the drive is full and the SLC cache is small, the low performance of the QLC NAND shows through with an average data rate that is slower than the 600p or Crucial MX500, but still far faster than a mechanical hard drive.

ATSB - Heavy (Average Latency)
ATSB - Heavy (99th Percentile Latency)

The average and 99th percentile latency scores of the 660p on the empty-drive test run are clearly high-end; the use of a four-channel controller doesn't seem to be holding back the performance of the SLC cache. The full-drive latency scores are an order of magnitude higher and worse than other SSDs of comparable capacity, but not worse than some of the slowest low-capacity TLC drives we've tested.

ATSB - Heavy (Average Read Latency)
ATSB - Heavy (Average Write Latency)

The average read latency of the Intel 660p on the Heavy test is about 2.5x higher for the full-drive test run than when the test is run on a freshly-erased drive. Neither score is unprecedented for an NVMe drive, and it's not quite the largest disparity we've seen between full and empty performance. The average write latency is where the 660p suffers most from being full, with latency that's about 60% higher than the already-slow 600p.

ATSB - Heavy (99th Percentile Read Latency)
ATSB - Heavy (99th Percentile Write Latency)

The 99th percentile read latency scores from the 660p are fine for a low-end NVMe drive, and close to high-end for the empty-drive test run that is mostly using the SLC cache. The 99th percentile write latency is similarly great when using the SLC cache, but almost 20 times worse when the drive is full. This is pretty bad in comparison to other current-generation NVMe drives or mainstream SATA drives, but is actually slightly better than the Intel 600p's best case for 99th percentile write latency.

ATSB - Heavy (Power)

The Intel SSD 660p shows above-average power efficiency on the Heavy test, by NVMe standards. Even the full-drive test run's energy usage is lower than that of several high-end drives.

88 Comments

  • limitedaccess - Tuesday, August 07, 2018 - link

    SSD reviewers need to look into testing data retention and related performance loss. Write endurance is misleading.
  • Ryan Smith - Tuesday, August 07, 2018 - link

    It's definitely a trust-but-verify situation, and is something we're going to be looking into for the 660p and other early QLC drives.

    Besides the fact that we only had limited hands-on time with this drive ahead of the embargo and FMS, it's going to take a long time to test the drive's longevity. Even with 24/7 writing, with a sustained 100MB/sec write rate you're looking at only around 8TB written/day. Which means you're looking at weeks or months to exhaust the smallest drive.
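Ryan's back-of-the-envelope math can be sketched in a few lines. The TBW ratings used below are the published endurance figures for the 660p's three capacities; treat them (and the 100 MB/s sustained rate) as illustrative assumptions rather than measured values:

```python
# Rough duration of a continuous-write endurance test, per the 100 MB/s figure above.
# TBW values are the rated endurance of the Intel SSD 660p capacities (assumed here).

WRITE_RATE_MB_S = 100  # sustained write rate assumed in the comment

def days_to_exhaust(tbw: float, rate_mb_s: float = WRITE_RATE_MB_S) -> float:
    """Days of continuous writing needed to reach a drive's rated TBW."""
    tb_per_day = rate_mb_s * 86_400 / 1_000_000  # MB/s -> TB/day (decimal units)
    return tbw / tb_per_day

for capacity_gb, tbw in [(512, 100), (1024, 200), (2048, 400)]:
    print(f"{capacity_gb} GB ({tbw} TBW): {days_to_exhaust(tbw):.0f} days of writing")
```

At roughly 8.6 TB/day, even the smallest capacity needs on the order of two weeks of non-stop writing to reach its rating, and the 2TB model more than a month, consistent with the "weeks or months" estimate above.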
  • npz - Tuesday, August 07, 2018 - link

    In addition to durability from DWPD, I'd also like to see retention tests: both cold-storage verification and any performance impact, both when fresh and after the drive has been filled several times.
    It's definitely a long term endeavor like you said though.

    Then again, it took a while after the drive was released for Samsung to discover the retention / charge leakage on old cells (requiring more ECC). But their solution, and basically everyone's solution, of periodically rewriting old cells at the expense of some endurance only works with the drive constantly powered on.
  • npz - Tuesday, August 07, 2018 - link

    ^ referring to the 840
  • smilingcrow - Tuesday, August 07, 2018 - link

    and the 840 Evo; but not the 840 Pro, which was MLC.
  • mapesdhs - Wednesday, August 08, 2018 - link

    840 Pro is still a really good SSD, I try to bag them used when I can. Remember this?

    https://techreport.com/review/27062/the-ssd-endura...

    The whole move to cheaper flash with less endurance is a shame in a way.
  • Valantar - Wednesday, August 08, 2018 - link

    Why, if the endurance was never utilized to begin with? All TR showed with that was that consumer MLC SSDs had something resembling enterprise-grade endurance. If cutting that to something more in line with actual use also reduces costs noticeably, what does it matter? As long as endurance doesn't actually go to or below normal usage patterns, it won't make an iota of difference. The reduced write speeds are more of an issue, but also alleviated by the ever-larger SLC caches on these larger drives.
  • Oxford Guy - Tuesday, August 07, 2018 - link

    Their kludge.

    Solution implies that the problem was truly fixed.
  • eastcoast_pete - Tuesday, August 07, 2018 - link

    Hi Ryan and Billie,

    I second the questions by limitedaccess and npz, also on data retention in cold storage. Now, about Ryan's answer: I don't expect you guys to be able to torture every drive for months on end until it dies, but, is there any way to first test the drive, then run continuous writes/rewrites for seven days non-stop, and then re-do some core tests to see if there are any signs or even hints of deterioration? The issue I have with most tests is that they are all done on virgin drives with zero hours on them, which is a best-case scenario. Any decent drive should be good as new after only 7 days (168 hours) of intensive read/write stress. If it's still as good as when you first tested it, I believe that would bode well for possible longevity. Conversely, if any drive shows even mild deterioration after only a week of intense use, I'd really like to know, so I can stay away.
    Any chance for that or something similar?
  • JoeyJoJo123 - Tuesday, August 07, 2018 - link

    >and then re-do some core tests to see if there are any signs or even hints of deterioration?
    That's not how solid state devices work. They're either working or they're not. And even if one dies, that doesn't necessarily mean it was the NAND flash that deteriorated beyond repair; it could've been the controller, or even the port the SSD was connected to, that got hosed.

    Literally testing a single drive says absolutely nothing about the expected lifespan of your single drive. This is why mass-aggregate reliability ratings from outfits like Backblaze are important. They buy enough drives in bulk that they can actually average out the failure rates and get reasonable real-world reliability numbers for the drives used in hot, vibration-prone server rack environments.

    Anandtech could test one drive and say "Well it worked when we first plugged it in, and when we rebooted, the review sample we got no longer worked. I guess it was a bad sample" or "Well, we stress tested it for 4 weeks under a constant mixed read/write load, and the SMART readings show that everything is absolutely perfect, so we can extrapolate that no drive of this particular series will _ever_ fail for any reason whatsoever until the heat death of the universe". Either way, both are completely anecdotal; neither provides any real conclusive evidence, due to the sample size of ONE drive, and testing does nothing but possibly kill the drive off prematurely for the sake of idiots salivating over elusive real-world endurance numbers when in reality IT REALLY DOESN'T MATTER TO YOU.

    Are you a standard home consumer? Yes.
    And you're considering purchasing this drive that's designed and marketed towards home consumers (ie: this is not a data center priced or marketed product)?: Yes.
    Are you using it under normal home consumer workloads (ie: you're not reading/writing hundreds of MB/s 24/7 for years on end)? Yes.

    Then you have nothing to worry about. If the drive dies, you call up/email the manufacturer and get a warranty replacement for your drive. And chances are, your drives will become obsolete, thanks to ever faster and more spacious storage options in the future, before they fail. I've got a basically worthless 80GB SATA 2 (near first gen) SSD that's neither fast enough to really use as a boot drive nor spacious enough to be used anywhere else. If anything the NAND on that early model should be dead, but it's not, and chances are the endurance ratings are highly pessimistic about the actual point of failure, as seen in the Ars Technica report where Lee Hutchinson stressed SSDs 24/7 for ~18 months before they died.
