Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is unlikely. Each burst consists of a total of 32MB of 4kB random reads from a 16GB span of the drive. The total data read is 1GB.
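To make the burst workload concrete, here is a minimal Python sketch of the access pattern described above. The device path and pacing are illustrative assumptions, reads here go through the page cache (a real harness would use O_DIRECT with aligned buffers and record per-operation latency), and this is not the tool used for our testing.

```python
# Minimal sketch of the burst 4kB random read pattern (QD1).
# Assumptions: the device path is hypothetical, reads go through the
# page cache, and timing is simplified to illustrate the 20% duty cycle.
import os
import random
import time

PATH = "/dev/nvme0n1"       # device under test (hypothetical)
BLOCK = 4 * 1024            # 4kB random reads
SPAN = 16 * 1024**3         # reads fall within a 16GB span
BURST = 32 * 1024**2        # 32MB of reads per burst
TOTAL = 1 * 1024**3         # 1GB of data read in total

fd = os.open(PATH, os.O_RDONLY)
done = 0
while done < TOTAL:
    start = time.monotonic()
    for _ in range(BURST // BLOCK):   # one read at a time: queue depth 1
        offset = random.randrange(SPAN // BLOCK) * BLOCK  # 4kB-aligned
        os.pread(fd, BLOCK, offset)
    busy = time.monotonic() - start
    done += BURST
    time.sleep(busy * 4)    # idle 4x the busy time -> 20% duty cycle
os.close(fd)
```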

Burst 4kB Random Read (Queue Depth 1)

The Intel SSD 660p delivers excellent random read performance from its SLC cache, coming in behind only the drives using Silicon Motion's higher-end controllers with Intel/Micron TLC. When reading data from a full drive where background processing is probably still occurring, performance is halved but remains slightly ahead of the Intel 600p.

Our sustained random read performance test is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.
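A sketch of how such a queue-depth sweep can be structured follows. Thread-based concurrency stands in here for true asynchronous I/O as an approximation of queue depth, and the device path and scoring printout are illustrative assumptions rather than our actual tooling.

```python
# Sketch of the sustained random read sweep: queue depths 1 through 32,
# each capped at one minute or 32GB, with a cool-off pause in between.
# Threads approximate queue depth; the path and scoring are illustrative.
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "/dev/nvme0n1"        # device under test (hypothetical)
BLOCK = 4 * 1024             # 4kB random reads
SPAN = 64 * 1024**3          # 64GB span of the drive
BYTE_CAP = 32 * 1024**3      # at most 32GB per queue depth
TIME_CAP = 60.0              # at most one minute per queue depth

def reader(fd: int, deadline: float, quota: int) -> int:
    """Issue 4kB random reads until the time or byte quota runs out."""
    done = 0
    while time.monotonic() < deadline and done < quota:
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)
        done += BLOCK
    return done

fd = os.open(PATH, os.O_RDONLY)
speeds = {}
for qd in (1, 2, 4, 8, 16, 32):
    deadline = time.monotonic() + TIME_CAP
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=qd) as pool:
        futures = [pool.submit(reader, fd, deadline, BYTE_CAP // qd)
                   for _ in range(qd)]
        total = sum(f.result() for f in futures)
    speeds[qd] = total / (time.monotonic() - start) / 1e6   # MB/s
    time.sleep(60)           # up to one minute of cool-off between depths
os.close(fd)

# Primary score: the average of the QD1, QD2 and QD4 results
print(sum(speeds[q] for q in (1, 2, 4)) / 3, "MB/s")
```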

Sustained 4kB Random Read

On the longer random read test, the 660p maintains its outstanding SLC cache performance that beats anything else currently on the market, but once the drive is full it is slower than almost any other NVMe SSD, the exception being the Toshiba RC100, which doesn't use a large enough host memory buffer to cover the data range of this test.

Sustained 4kB Random Read (Power Efficiency in MB/s/W; Average Power in W)

With the combination of lower power consumption afforded by its small NVMe controller and excellent random read performance, the Intel 660p earns the top efficiency score for this test. When it's slowed down by being full and still grinding away at background cleanup, its efficiency is much worse but still an improvement over the 600p.
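The efficiency metric in these charts is simply throughput divided by average power draw. A trivial illustration, using placeholder numbers rather than the review's measured values:

```python
# Power efficiency as reported in these charts: throughput per watt.
# The numbers below are placeholders, not measured results.
def efficiency_mb_s_per_w(throughput_mb_s: float, avg_power_w: float) -> float:
    return throughput_mb_s / avg_power_w

print(efficiency_mb_s_per_w(250.0, 1.2))   # ~208.3 MB/s per W
```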

At high queue depths the 660p's random read speed begins to fall behind high-end NVMe SSDs, but the gap doesn't become significant until well beyond the queue depths that are relevant to real-world client/consumer usage patterns.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.
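The write-burst pattern can be sketched the same way as the read version above; again the device path and inter-burst pacing are assumptions, and a real harness would bypass the page cache rather than rely on buffered writes.

```python
# Sketch of the burst 4kB random write pattern (QD1): 4MB bursts,
# 128MB total, writes spread over a 16GB span. Path and pacing are
# illustrative assumptions; this is not our actual test tool.
import os
import random
import time

PATH = "/dev/nvme0n1"        # device under test (hypothetical)
BLOCK = 4 * 1024             # 4kB random writes
SPAN = 16 * 1024**3          # writes fall within a 16GB span
BURST = 4 * 1024**2          # 4MB of writes per burst
TOTAL = 128 * 1024**2        # 128MB written in total

payload = os.urandom(BLOCK)  # incompressible data
fd = os.open(PATH, os.O_WRONLY)
for _ in range(TOTAL // BURST):
    for _ in range(BURST // BLOCK):   # one write at a time, no queuing
        offset = random.randrange(SPAN // BLOCK) * BLOCK  # 4kB-aligned
        os.pwrite(fd, payload, offset)
    time.sleep(0.05)                  # short idle gap between bursts (assumed)
os.close(fd)
```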

Burst 4kB Random Write (Queue Depth 1)

The burst random write speed of the Intel SSD 660p is not record-setting, but it is comparable to high-end NVMe SSDs.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive and giving the drive up to 1 minute of idle time between queue depths to allow for write caches to be flushed and for the drive to cool down.

Sustained 4kB Random Write

On the longer random write test, the 660p is slower than most high-end NVMe SSDs but still performs much better than the other entry-level NVMe drives or the SATA drive. After filling the drive (and consequently the SLC write cache), the performance drops below the SATA drive but is still more than twice as fast as the Toshiba RC100.

Sustained 4kB Random Write (Power Efficiency in MB/s/W; Average Power in W)

Power efficiency when performing random writes to a clean SLC cache is not quite the best we've measured, but it is far ahead of what the other low-end NVMe SSDs or the Crucial MX500 SATA drive can manage.

After QD4 the 660p starts to show signs of filling the SLC write cache, which is a little bit sooner than expected given how large the SLC cache should be for the mostly-empty drive condition. The performance doesn't drop very far, showing that the idle time is enough for the drive to mostly keep up with flushing the SLC cache when the test is writing to the drive with a 50% duty cycle.

Comments

  • StrangerGuy - Tuesday, August 7, 2018 - link

    "I am a TRUE PROFESSIONAL who can't pay more endurance for my EXTREME SSD WORKLOADS by either from my employer or by myself, I'm the poor 0.01% who is being oppressed by QLC!"
  • Oxford Guy - Tuesday, August 7, 2018 - link

    Memes didn't make the IBM Deathstar drives fun and games.
  • StrangerGuy - Tuesday, August 7, 2018 - link

    I'm sure you were the true prophetic one warning us about those crappy 75GXPs before they were released, oh wait.

    I'm sorry why are you here and why should anyone listen to you again?
  • Oxford Guy - Tuesday, August 7, 2018 - link

    Memes and trolling may be entertaining but this isn't really the place for it.
  • jjj - Tuesday, August 7, 2018 - link

    Not bad, at least for now when there are no QLC competitors.
    The pressure QLC will put on HDDs is gonna be interesting too.
  • damianrobertjones - Tuesday, August 7, 2018 - link

    These drives will fill the bottom end... allowing the mid and high tiers to increase in price. Usual.
  • Valantar - Wednesday, August 8, 2018 - link

    Only if the performance difference is large enough to make them worth it - which it isn't, at least in this case. While the advent of TLC did push MLC prices up (mainly due to reduced production and sales volume), it seems unlikely for the same to happen here, as these drives aim for a market segment that has so far been largely unoccupied. (It's also worth mentioning that silicon prices have been rising for quite a while, which also affects this.) There are a few TLC drives in the same segment, but those are also quite bad. This, on the other hand, competes with faster drives unless you fill it or its SLC cache. In other words, higher-end drives will have to either aim for customers with heavier workloads (which might imply higher prices, but would also mean optimizations for non-consumer usage scenarios) or push prices lower to compete.
  • romrunning - Wednesday, August 8, 2018 - link

    Well, QLC will slowly push out TLC, which was already pushing out MLC. It's not just pushing the prices of MLC/TLC up; manufacturers are slowly phasing those lines out entirely. So even if I want a specific type, I may not be able to purchase it in the consumer space (maybe in enterprise, with the resultant price hit).

    I hate that we're getting lower-performing items for the cheaper price - I'd rather get higher-performing at cheaper prices! :)
  • rpg1966 - Tuesday, August 7, 2018 - link

    "In the past year, the deployment of 64-layer 3D NAND flash has allowed almost all of the SSD industry to adopt three bit per cell TLC flash"

    What does this mean? n-layer NAND isn't a requirement for TLC, is it?
  • Ryan Smith - Tuesday, August 7, 2018 - link

    3D NAND is not a requirement for TLC. However most of the 32/48 layer processes weren't very good, resulting in poorly performing TLC NAND. The 64 layer stuff has turned out much better, finally making TLC viable from all manufacturers.
