Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
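As a rough illustration of this workload's shape, here is a minimal Python sketch of the burst loop. It is not the actual test harness: the device path, the Linux-only direct-I/O calls, and the exact idle-time handling are all assumptions.

    import mmap, os, time

    DEV = "/dev/nvme0n1"          # assumed test device; needs root
    OP_SIZE = 128 * 1024          # 128kB per read operation
    BURST_SIZE = 128 * 1024**2    # 128MB per burst
    BURSTS = 8                    # eight bursts = 1GB total
    DUTY_CYCLE = 0.20             # drive busy 20% of the time

    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, OP_SIZE)  # page-aligned buffer, as O_DIRECT requires
    speeds, offset = [], 0
    for _ in range(BURSTS):
        start = time.perf_counter()
        for _ in range(BURST_SIZE // OP_SIZE):
            os.preadv(fd, [buf], offset)   # one 128kB read at queue depth 1
            offset += OP_SIZE
        busy = time.perf_counter() - start
        speeds.append(BURST_SIZE / busy / 1e6)            # MB/s for this burst
        time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)  # idle to hold 20% duty
    os.close(fd)
    print(f"burst average: {sum(speeds) / len(speeds):.0f} MB/s")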

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Intel SSD 660p falls short of the fastest high-end drives, but is still quite good for a drive built around a 4-channel controller. Read speed is only moderately impaired after filling the drive all the way.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.
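One crude way to approximate such a queue-depth sweep without a real asynchronous I/O engine is to run N synchronous readers in parallel, so roughly N operations are in flight at once. The sketch below follows the parameters described above, but the device path and the thread-based approximation are assumptions, not the actual tool:

    import mmap, os, threading, time

    DEV = "/dev/nvme0n1"         # assumed test device
    OP_SIZE = 128 * 1024         # 128kB operations
    SPAN = 64 * 1024**3          # 64GB of test data on the drive
    BYTE_LIMIT = 32 * 1024**3    # stop after 32GB transferred...
    TIME_LIMIT = 60              # ...or one minute, whichever comes first

    def read_at_qd(qd):
        """Approximate queue depth qd with qd threads issuing sync reads."""
        fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
        done = [0] * qd
        deadline = time.perf_counter() + TIME_LIMIT

        def worker(i):
            buf = mmap.mmap(-1, OP_SIZE)   # aligned buffer for O_DIRECT
            offset = i * OP_SIZE           # workers interleave sequentially
            while time.perf_counter() < deadline and sum(done) < BYTE_LIMIT:
                os.preadv(fd, [buf], offset % SPAN)
                offset += qd * OP_SIZE
                done[i] += OP_SIZE

        start = time.perf_counter()
        threads = [threading.Thread(target=worker, args=(i,)) for i in range(qd)]
        for t in threads: t.start()
        for t in threads: t.join()
        os.close(fd)
        return sum(done) / (time.perf_counter() - start) / 1e6   # MB/s

    results = {qd: read_at_qd(qd) for qd in (1, 2, 4, 8, 16, 32)}
    score = sum(results[qd] for qd in (1, 2, 4)) / 3   # average of QD1/2/4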

Sustained 128kB Sequential Read

On the longer sequential read test that goes beyond QD1, the true high-end NVMe drives pull away from the 660p, but it is still faster than most other low-end NVMe SSDs. Internal fragmentation is more of a problem for the 660p than for the TLC drives, but this is not too surprising: the QLC NAND is likely using larger page and block sizes that add to the overhead of gathering data that has been dispersed by wear leveling during random writes.

Sustained 128kB Sequential Read (Power Efficiency)
[Charts: power efficiency in MB/s/W and average power in W]

The power efficiency of sequential reads from the 660p is competitive with many of the best TLC SSDs, and isn't too far behind even after filling the drive all the way.

The 660p doesn't reach its maximum sequential read speed until around QD8, but it was already pretty quick at QD1 so the overall growth is relatively small.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write test only hits the SLC write cache even when the Intel SSD 660p is completely full, so it performs comparably to many high-end NVMe drives.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
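The outer loop of that write sweep might look like the sketch below, with write_at_qd standing in for a hypothetical QD-N write loop (the read sketch earlier, opened O_WRONLY and using os.pwritev). Treating "up to one minute" of idle as a flat 60-second sleep is a simplifying assumption:

    import time

    def run_write_sweep(write_at_qd):
        results = {}
        for qd in (1, 2, 4, 8, 16, 32):
            results[qd] = write_at_qd(qd)   # MB/s at this queue depth
            time.sleep(60)   # idle phase: cool-off and garbage collection
        # As with reads, the headline score averages the low queue depths.
        return sum(results[qd] for qd in (1, 2, 4)) / 3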

Sustained 128kB Sequential Write

Our usual test conditions of a mostly-empty drive mean that the 660p's score on the sustained sequential write test reflects only writes to the SLC cache at its largest configuration. When the drive is full and the SLC cache has shrunk to just 12GB, the test quickly fills that cache and performance drops to last place.
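To put that 12GB figure in perspective, a quick back-of-the-envelope calculation; the SLC-cache write speed used here is an assumed round number for illustration, not a figure measured in this review:

    # How long a 12GB SLC cache survives a sustained sequential write.
    cache_bytes = 12e9
    assumed_slc_write_speed = 1.5e9          # bytes/s, an assumption
    fill_time = cache_bytes / assumed_slc_write_speed
    print(f"cache fills in ~{fill_time:.0f} s")   # roughly 8 seconds
    # Everything after that lands on raw QLC, hence the last-place result.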

Sustained 128kB Sequential Write (Power Efficiency)
[Charts: power efficiency in MB/s/W and average power in W]

The power efficiency of the 660p when writing sequentially to the SLC cache is excellent, but it ends up slightly worse off than the 600p when the drive is full and the SLC cache is too small.

The 660p reaches its maximum sequential write speed at QD2 and maintains it for the rest of the test, showing that the drive is largely keeping up with flushing the SLC write cache during the idle phases of the test.

Comments

  • DanNeely - Tuesday, August 7, 2018

    Over 18 months between 2013 and 2015, Tech Report tortured a set of early-generation SSDs to death via continuous writing until they failed. I'm not aware of anyone else doing the same more recently. Power-off retention testing is probably beyond anyone without major OEM sponsorship, because each time you power a drive on to see if it's still good, you've given its firmware a chance to start running a refresh cycle if needed. As a result, to look beyond really short time spans you'd need an entire stack of each model of drive tested.

    https://techreport.com/review/27909/the-ssd-endura...
  • Oxford Guy - Tuesday, August 7, 2018

    Torture tests don't test voltage fading from disuse, though.
  • StrangerGuy - Tuesday, August 7, 2018

    And audiophiles always claim no tests are ever enough to disprove their supernatural hearing claims, so...
  • Oxford Guy - Tuesday, August 7, 2018

    SSD defects have been found in a variety of models, such as the 840 and the OCZ Vertex 2.
  • mapesdhs - Wednesday, August 8, 2018

    Please explain the Vertex2, because I have a lot of them and so far none have failed. Or do you mean the original Vertex2 rather than the Vertex2E which very quickly replaced it? Most of mine are V2Es, it was actually quite rare to come across a normal V2, they were replaced in the channel very quickly. The V2E is an excellent SSD, especially for any OS that doesn't support TRIM, such as WinXP or IRIX. Also, most of the talk about the 840 line was of the 840 EVO, not the standard 840; it's hard to find equivalent coverage of the 840, most sites focused on the EVO instead.
  • Valantar - Wednesday, August 8, 2018

    If the Vertex2 was the one that caused BSODs and was recalled, then at least I had one. Didn't find out that the drive was the defective part or that it had been recalled until quite a lot later, but at least I got my money back (which then paid for a very nice 840 Pro, so it turned out well in the end XD).
  • Oxford Guy - Friday, August 10, 2018

    Not recalled. There was a program where people could ask OCZ for replacements. But, OCZ also "ran out" of stock for that replacement program and never even covered the drive that was most severely affected: the 240 GB 64-bit NAND unit.
  • BurntMyBacon - Wednesday, August 8, 2018

    I believe the problems that plagued the 840 EVO also applied to the 840, based on two facts: both SSDs used the same flash, and Samsung eventually released a (partial) fix for the 840 similar to the 840 EVO's. The fix was apparently incompatible with Linux/BSD, though.
  • Spunjji - Wednesday, August 8, 2018

    You'd also be providing useless data by doing so. The drives will have been superseded at least twice before you even have anything to show from the (very expensive) testing.
  • JoeyJoJo123 - Tuesday, August 7, 2018

    >muh ssd endurance boogeyman
    Like clockwork.
