Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
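
To make the pacing concrete, here is a minimal sketch of the burst structure. It is an illustration of the timing math described above, not our actual test harness, and the helper name is hypothetical.

```python
# Sketch of the burst test's pacing, using the parameters described above.
BURST_BYTES = 128 * 1024**2   # 128MB per burst
IO_BYTES    = 128 * 1024      # 128kB operations, issued one at a time (QD1)
NUM_BURSTS  = 8               # 8 bursts = 1GB transferred in total
DUTY_CYCLE  = 0.20            # the drive is busy 20% of the time overall

def idle_after(burst_seconds: float) -> float:
    # A 20% duty cycle means 80% idle: idle = busy * (1 - d) / d = 4x busy.
    return burst_seconds * (1 - DUTY_CYCLE) / DUTY_CYCLE

# e.g. a burst that finishes in 0.25s is followed by 1.0s of idle; the
# reported score is the average throughput across the eight bursts.
```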

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Intel SSD 660p falls short of the fastest high-end drives, but is still quite quick given the 660p's 4-channel controller. Read speed is only moderately impaired after the drive has been filled all the way.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.
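
The scoring works out as below. The per-QD throughput and power figures are invented purely to show the arithmetic, and reading the efficiency metric as average throughput divided by average power is our assumption about how the MB/s/W charts are derived.

```python
# How the sustained-test scores are computed, using made-up per-QD results.
throughput_mbps = {1: 1100.0, 2: 1450.0, 4: 1650.0,
                   8: 1790.0, 16: 1800.0, 32: 1800.0}   # hypothetical MB/s
power_w = {1: 2.1, 2: 2.4, 4: 2.7, 8: 3.0, 16: 3.1, 32: 3.1}  # hypothetical W

SCORED_QDS = (1, 2, 4)   # only the low queue depths count toward the score
perf_score  = sum(throughput_mbps[q] for q in SCORED_QDS) / len(SCORED_QDS)
power_score = sum(power_w[q] for q in SCORED_QDS) / len(SCORED_QDS)
efficiency  = perf_score / power_score   # MB/s per W, as charted below

print(f"perf {perf_score:.0f} MB/s, efficiency {efficiency:.0f} MB/s/W")
```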

Sustained 128kB Sequential Read

On the longer sequential read test that goes beyond QD1, the true high-end NVMe drives pull away from the 660p, but it is still faster than most other low-end NVMe SSDs. Internal fragmentation is more of a problem for the 660p than for the TLC drives, but this is not too surprising: the QLC NAND likely uses larger page and block sizes, which add to the overhead of gathering data that wear leveling has dispersed during random writes.

Sustained 128kB Sequential Read (Power Efficiency)
[Chart: power efficiency in MB/s per W, with average power in W]

The power efficiency of sequential reads from the 660p is competitive with many of the best TLC SSDs, and isn't too far behind even after filling the drive all the way.

The 660p doesn't reach its maximum sequential read speed until around QD8, but it was already pretty quick at QD1 so the overall growth is relatively small.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write test only hits the SLC write cache even when the Intel SSD 660p is completely full, so it performs comparably to many high-end NVMe drives.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
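
A rough equivalent of a single pass of this test can be put together with fio, the common open-source I/O tester. Our suite is custom, so the parameters below are only an approximation; the device path is hypothetical, and writing to it directly is destructive.

```python
# Approximate reproduction of the sustained write test's structure with fio.
import subprocess
import time

def sequential_write_step(queue_depth: int) -> None:
    subprocess.run([
        "fio", "--name=seqwrite",
        "--filename=/dev/nvme0n1",   # hypothetical target device; destructive
        "--rw=write", "--bs=128k",   # 128kB sequential writes
        f"--iodepth={queue_depth}",
        "--ioengine=libaio", "--direct=1",
        "--size=64g",                # confine the test to a 64GB span
        "--io_size=32g",             # stop after 32GB written...
        "--runtime=60",              # ...or one minute, whichever comes first
    ], check=True)

for qd in (1, 2, 4, 8, 16, 32):      # assuming power-of-two QD steps
    sequential_write_step(qd)
    time.sleep(60)                   # idle: cool-off and background GC
```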

Sustained 128kB Sequential Write

Our usual test conditions of a mostly-empty drive mean that the 660p's score on the sustained sequential write test reflects only writes to the SLC cache at its largest configuration. When the drive is full and the SLC cache has shrunk to just 12GB, the test quickly fills that cache and performance drops to last place.
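
A back-of-the-envelope model illustrates why the full-drive run collapses. Only the 12GB cache size comes from the behavior described above; the SLC and QLC write speeds are assumptions picked to show the mechanism.

```python
# Toy model: a 32GB write workload against a 12GB SLC cache. Speeds are
# invented for illustration; only the 12GB cache figure comes from the review.
CACHE_GB, WORKLOAD_GB = 12, 32
SLC_MBPS, QLC_MBPS    = 1800.0, 100.0   # assumed cache vs. native QLC speed

cached   = min(WORKLOAD_GB, CACHE_GB)   # first 12GB land in the SLC cache
direct   = WORKLOAD_GB - cached         # remaining 20GB hit QLC directly
seconds  = cached * 1024 / SLC_MBPS + direct * 1024 / QLC_MBPS
avg_mbps = WORKLOAD_GB * 1024 / seconds
# ~7s in cache plus ~205s at QLC speed: the 32GB averages out near 155MB/s,
# far below the cached-run numbers, which is why the full-drive score craters.
```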

Sustained 128kB Sequential Write (Power Efficiency)
[Chart: power efficiency in MB/s per W, with average power in W]

The power efficiency of the 660p when writing sequentially to the SLC cache is excellent, but it ends up slightly worse off than the 600p when the drive is full and the SLC cache is too small.

The 660p reaches its maximum sequential write speed at QD2 and maintains it for the rest of the test, showing that the drive is largely keeping up with flushing the SLC write cache during the idle phases of the test.

Comments

  • limitedaccess - Tuesday, August 7, 2018 - link

    SSD reviewers need to look into testing data retention and related performance loss. Write endurance is misleading.
  • Ryan Smith - Tuesday, August 7, 2018 - link

    It's definitely a trust-but-verify situation, and is something we're going to be looking into for the 660p and other early QLC drives.

    Besides the fact that we only had limited hands-on time with this drive ahead of the embargo and FMS, it's going to take a long time to test the drive's longevity. Even writing 24/7 at a sustained 100MB/sec, you're looking at only around 8TB written per day, which means weeks or months to exhaust the smallest drive.
  • eastcoast_pete - Tuesday, August 7, 2018 - link

    Hi Ryan and Billie,

    I second the questions by limitedaccess and npz, also on data retention in cold storage. Now, about Ryan's answer: I don't expect you guys to be able to torture every drive for months on end until it dies, but is there any way to first test the drive, then run continuous writes/rewrites for seven days non-stop, and then redo some core tests to see if there are any signs or even hints of deterioration? The issue I have with most tests is that they are all done on virgin drives with zero hours on them, which is a best-case scenario. Any decent drive should be good as new after only 7 days (168 hours) of intensive read/write stress. If it's still as good as when you first tested it, I believe that would bode well for possible longevity. Conversely, if any drive shows even mild deterioration after only a week of intense use, I'd really like to know, so I can stay away.
    Any chance for that or something similar?
  • JoeyJoJo123 - Tuesday, August 7, 2018 - link

    >and then re-do some core tests to see if there are any signs or even hints of deterioration?
    That's not how solid state devices work. They're either working or they're not. And even if a drive is dead, that doesn't tell you it was the NAND flash that deteriorated beyond repair; it could've been the controller, or even the port the SSD was connected to, that got hosed.

    Literally testing a single drive says absolutely nothing at all about the expected lifespan of your single drive. This is why mass aggregate reliability ratings from people like Backblaze are important. They buy enough bulk drives that they can actually average out the failure rates and get reasonable real-world reliability numbers for the drives used in hot and vibration-prone server rack environments.

    Anandtech could test one drive and say "Well it worked when we first plugged it in, and when we rebooted, the review sample we got no longer worked. I guess it was a bad sample" or "Well, we stress tested it for 4 weeks under a constant mixed read/write load, and the SMART readings show that everything is absolutely perfect, so we can extrapolate that no drive of this particular series will _ever_ fail for any reason whatsoever until the heat death of the universe". Either way, both are completely anecdotal; neither provides any conclusive evidence given a sample size of ONE drive, and the exercise does nothing but possibly kill the storage drive off prematurely for the sake of idiots salivating over elusive real world endurance rating numbers when in reality IT REALLY DOESN'T MATTER TO YOU.

    Are you a standard home consumer? Yes.
    And you're considering purchasing this drive that's designed and marketed towards home consumers (ie: this is not a data center priced or marketed product)?: Yes.
    Are you using it under normal home consumer workloads (ie: you're not reading/writing hundreds of MB/s 24/7 for years on end)? Yes.

    Then you have nothing to worry about. If the drive dies, then you call up/email the manufacturer and get a warranty replacement for your drive. And chances are, your drive will become useless thanks to ever faster and more spacious storage options long before it fails. I've got a basically worthless 80GB SATA 2 (near first gen) SSD that's neither fast enough to really use as a boot drive nor spacious enough to be used anywhere else. If anything the NAND on that early model should be dead, but it's not, and chances are the endurance ratings are highly pessimistic compared to when drives actually die, as seen in the Ars Technica report where Lee Hutchinson stressed SSDs 24/7 for ~18 months before they died.
  • eastcoast_pete - Tuesday, August 7, 2018 - link

    Firstly, thanks for calling me one of the "idiots salivating over elusive real world endurance rating numbers". I guess it takes one to know one, or to think you found one. Second, I am quite aware of the need for a sufficient sample size to make any inference about the real world. And third, I asked the question because this is new NAND tech (QLC), and I believe it doesn't hurt to put the test sample the manufacturer sends through its paces for a while: if it shows any sign of performance deterioration after a week or so of intense use, that doesn't bode well for the maturity of the tech and/or the in-house QC.
    And your last comment about your 80 GB near-first-gen drive shows your own ignorance. Most, maybe all, of those early SSDs used SLC NAND and came with large overprovisioning, and yes, they are very hard to kill. This QLC technology is, well, new, so yes, I would like to see some stress testing done, just to see if the assumption that it's all just fine holds, at least for the drive the manufacturer provided.
  • Oxford Guy - Tuesday, August 7, 2018 - link

    If a product ships with a defect that is shared by all of its kind then only one unit is needed to expose it.
  • mapesdhs - Wednesday, August 8, 2018 - link

    Proof by negation, good point. :)
  • Spunjji - Wednesday, August 8, 2018 - link

    That's a big if, though. If, say, 80% of them do and AnandTech gets one of the ones that doesn't, then...

    2nd gen OCZ Sandforce drives were well reviewed when they first came out.
  • Oxford Guy - Friday, August 10, 2018 - link

    "2nd gen OCZ Sandforce drives were well reviewed when they first came out."

    That's because OCZ pulled a bait and switch, switching from 32-bit NAND, which the controller was designed for, to 64-bit NAND. The 240 GB model with 64-bit NAND, in particular, had terrible bricking problems.

    Beyond that, there should have been pressure over SandForce's decision to brick SSDs "to protect their firmware IP" rather than putting users' data first. Even before the severe reliability problems were exposed, that deserved scrutiny. But there is generally so much passivity and deference in the tech press.
  • Oxford Guy - Friday, August 10, 2018 - link

    This example shows why it's important for the tech press to not merely evaluate the stuff they're given but go out and get products later, after the initial review cycle. It's very interesting to see the stealth downgrades that happen.

    The Lenovo S-10 netbook was praised by reviewers for having a matte screen. That matte screen, though, was later replaced by a cheaper-to-make glossy one. Did Lenovo call the machine with a glossy screen the S-11? Nope!

    Sapphire, I just discovered, got lots of reviewer hype for its vapor chamber Vega cooler, only to quietly replace those models with ones that lack it. The difference? The ones with the vapor chamber are, so conveniently, "limited edition". Yet people have found that the messaging about the difference has been far from clear, not just on Sapphire's website but also on some review sites. It's very convenient to pull this kind of bait and switch: send reviewers a better product, then sell customers something that seems exactly the same but is clearly inferior.
