Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is impossible. Each burst consists of a total of 32MB of 4kB random reads, from a 16GB span of the disk. The total data read is 1GB.
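A rough sketch of this access pattern might look like the following (this is illustrative, not the actual test harness; the function names, the sleep-based duty-cycle handling, and the decision to skip O_DIRECT are all assumptions):

```python
import os
import random
import time

BLOCK = 4 * 1024          # 4kB per read operation
SPAN = 16 * 1024 ** 3     # reads fall within a 16GB span
BURST = 32 * 1024 ** 2    # 32MB of reads per burst
TOTAL = 1024 ** 3         # 1GB read over the whole test

def aligned_offset(span, block):
    """Pick a random block-aligned offset within the test span."""
    return random.randrange(span // block) * block

def burst_random_read(path):
    """Issue QD1 bursts with idle time for a ~20% duty cycle."""
    fd = os.open(path, os.O_RDONLY)  # a real harness would use O_DIRECT
    try:
        for _ in range(TOTAL // BURST):
            start = time.monotonic()
            for _ in range(BURST // BLOCK):
                # One synchronous read at a time: queue depth 1.
                os.pread(fd, BLOCK, aligned_offset(SPAN, BLOCK))
            busy = time.monotonic() - start
            time.sleep(busy * 4)  # idle 4x the busy time: 20% duty cycle
    finally:
        os.close(fd)
```

Idling four times as long as each burst takes is what yields the 20% duty cycle, and the 1GB total works out to 32 bursts.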

Burst 4kB Random Read (Queue Depth 1)

The Intel SSD 660p delivers excellent random read performance from its SLC cache, coming in behind only the drives using Silicon Motion's higher-end controllers with Intel/Micron TLC. When reading data from a full drive where background processing is probably still occurring, performance is halved but remains slightly ahead of the Intel 600p.

Our sustained random read performance is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.
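The headline numbers described above reduce to a simple mean over the low queue depths; a minimal sketch (the per-queue-depth result format here is an assumption for illustration):

```python
def primary_scores(results):
    """results maps queue depth -> (throughput in MB/s, power in W).

    Returns the average throughput and the power efficiency
    (MB/s per W) across QD1, QD2, and QD4.
    """
    low_qd = [results[qd] for qd in (1, 2, 4)]
    mbps = sum(r[0] for r in low_qd) / len(low_qd)
    watts = sum(r[1] for r in low_qd) / len(low_qd)
    return mbps, mbps / watts

# Illustrative numbers only, not measured results:
example = {1: (55.0, 2.0), 2: (100.0, 2.2), 4: (180.0, 2.5), 8: (300.0, 3.1)}
```

Weighting the primary score toward QD1-QD4 keeps the ranking representative of client workloads, which rarely sustain deeper queues.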

Sustained 4kB Random Read

On the longer random read test, the 660p maintains its outstanding SLC cache performance that beats anything else currently on the market, but after filling the drive it is slower than almost any other NVMe SSD - the exception being the Toshiba RC100, which doesn't use a large enough host memory buffer to cover the data range this test spans.

Sustained 4kB Random Read: Power Efficiency (MB/s/W) and Average Power (W)

With the combination of lower power consumption afforded by its small NVMe controller and excellent random read performance, the Intel 660p earns the top efficiency score for this test. When it's slowed down by being full and still grinding away at background cleanup, its efficiency is much worse but still an improvement over the 600p.

At high queue depths the 660p's random read speed begins to fall behind high-end NVMe SSDs, but the deficit isn't significant until well beyond the queue depths that are relevant to real-world client/consumer usage patterns.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.

Burst 4kB Random Write (Queue Depth 1)

The burst random write speed of the Intel SSD 660p is not record-setting, but it is comparable to high-end NVMe SSDs.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive and giving the drive up to 1 minute of idle time between queue depths to allow for write caches to be flushed and for the drive to cool down.

Sustained 4kB Random Write

On the longer random write test, the 660p is slower than most high-end NVMe SSDs but still performs much better than the other entry-level NVMe drives or the SATA drive. After filling the drive (and consequently the SLC write cache), the performance drops below the SATA drive but is still more than twice as fast as the Toshiba RC100.

Sustained 4kB Random Write: Power Efficiency (MB/s/W) and Average Power (W)

Power efficiency when performing random writes to a clean SLC cache is not quite the best we've measured, but it is far ahead of what the other low-end NVMe drives or the Crucial MX500 SATA drive can manage.

After QD4 the 660p starts to show signs of filling the SLC write cache, which is a little bit sooner than expected given how large the SLC cache should be for the mostly-empty drive condition. The performance doesn't drop very far, showing that the idle time is enough for the drive to mostly keep up with flushing the SLC cache when the test is writing to the drive with a 50% duty cycle.
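The arithmetic behind that observation can be sketched with made-up rates (both figures below are illustrative assumptions, not measurements): at a 50% duty cycle the idle window equals the busy window, so the cache backlog only grows when the host writes into SLC faster than the drive can fold data out of it.

```python
def slc_backlog_per_cycle(write_mbps, fold_mbps, busy_s):
    """Net SLC cache growth (MB) over one busy+idle cycle at 50% duty."""
    filled = write_mbps * busy_s    # written into SLC while busy
    drained = fold_mbps * busy_s    # folded out during the equal idle window
    return filled - drained

# Hypothetical rates: 1000 MB/s of incoming writes, 900 MB/s of folding.
backlog = slc_backlog_per_cycle(1000, 900, 1.0)  # grows by 100 MB per cycle
```

With folding nearly keeping pace, the backlog accumulates slowly enough that performance degrades gradually rather than falling off a cliff, which matches the behavior described above.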

Comments (86)

  • DanNeely - Tuesday, August 7, 2018 - link

Over 18 months between 2013 and 2015, Tech Report tortured a set of early generation SSDs to death via continuous writing until they failed. I'm not aware of anyone else doing the same more recently. Power-off retention testing is probably beyond anyone without major OEM sponsorship, because each time you power a drive on to see if it's still good, you've given its firmware a chance to start running a refresh cycle if needed. As a result, to look beyond really short time spans, you'd need an entire stack of each model of drive tested.

    https://techreport.com/review/27909/the-ssd-endura...
  • Oxford Guy - Tuesday, August 7, 2018 - link

    Torture tests don't test voltage fading from disuse, though.
  • StrangerGuy - Tuesday, August 7, 2018 - link

    And audiophiles always claim no tests are ever enough to disprove their supernatural hearing claims, so...
  • Oxford Guy - Tuesday, August 7, 2018 - link

    SSD defects have been found in a variety of models, such as the 840 and the OCZ Vertex 2.
  • mapesdhs - Wednesday, August 8, 2018 - link

    Please explain the Vertex2, because I have a lot of them and so far none have failed. Or do you mean the original Vertex2 rather than the Vertex2E which very quickly replaced it? Most of mine are V2Es, it was actually quite rare to come across a normal V2, they were replaced in the channel very quickly. The V2E is an excellent SSD, especially for any OS that doesn't support TRIM, such as WinXP or IRIX. Also, most of the talk about the 840 line was of the 840 EVO, not the standard 840; it's hard to find equivalent coverage of the 840, most sites focused on the EVO instead.
  • Valantar - Wednesday, August 8, 2018 - link

    If the Vertex2 was the one that caused BSODs and was recalled, then at least I had one. Didn't find out that the drive was the defective part or that it had been recalled until quite a lot later, but at least I got my money back (which then paid for a very nice 840 Pro, so it turned out well in the end XD).
  • Oxford Guy - Friday, August 10, 2018 - link

    Not recalled. There was a program where people could ask OCZ for replacements. But, OCZ also "ran out" of stock for that replacement program and never even covered the drive that was most severely affected: the 240 GB 64-bit NAND unit.
  • BurntMyBacon - Wednesday, August 8, 2018 - link

I believe the problems that plagued the 840 EVO also affected the 840, based on two facts: both SSDs used the same flash, and Samsung eventually released a (partial) fix for the 840 similar to the 840 EVO's. The fix was apparently incompatible with Linux/BSD, though.
  • Spunjji - Wednesday, August 8, 2018 - link

    You'd also be providing useless data by doing so. The drives will have been superseded at least twice before you even have anything to show from the (very expensive) testing.
  • JoeyJoJo123 - Tuesday, August 7, 2018 - link

    >muh ssd endurance boogeyman
    Like clockwork.
