Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is impossible. Each burst consists of a total of 32MB of 4kB random reads issued from a 16GB span of the disk. The total data read is 1GB.
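
The arithmetic behind this burst test can be sketched in a few lines (the constant and function names here are illustrative, not taken from our actual test harness):

```python
# Parameters of the burst random read test described above.
BURST_BYTES = 32 * 1024**2   # each burst reads 32MB
IO_SIZE = 4 * 1024           # in individual 4kB random reads
TOTAL_BYTES = 1024**3        # 1GB of data read in total
DUTY_CYCLE = 0.20            # the drive is busy 20% of the time

ios_per_burst = BURST_BYTES // IO_SIZE   # 8192 reads per burst
num_bursts = TOTAL_BYTES // BURST_BYTES  # 32 bursts in total

def idle_after(busy_seconds: float) -> float:
    """Idle time after a burst so the overall duty cycle stays at 20%."""
    return busy_seconds * (1 - DUTY_CYCLE) / DUTY_CYCLE  # 4x the busy time
```

At a 20% duty cycle, every second spent reading is followed by four seconds of idle time, which is what keeps heat from accumulating across bursts.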

Burst 4kB Random Read (Queue Depth 1)

When the Crucial P1 has plenty of unused capacity and its SLC cache is large enough to contain the entire 16GB of test data, the burst random read performance is excellent. When the drive is full and the test data can no longer fit in the SLC cache, the performance falls behind the Crucial MX500 and most low-end NVMe SSDs.

Our sustained random read performance test is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.
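
The structure and scoring of the sustained test can be sketched as follows, assuming the tested queue depths are the powers of two from 1 to 32 (the function name is our own; only the scoring rule comes from the text above):

```python
import statistics

QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32]  # assumed: powers of two from 1 to 32
MAX_SECONDS = 60                     # per-queue-depth time cap
MAX_BYTES = 32 * 1024**3             # per-queue-depth data cap: 32GB
SPAN_BYTES = 64 * 1024**3            # reads cover a 64GB span of the drive

def primary_score(mbps_by_qd: dict) -> float:
    """Primary score: average throughput across QD1, QD2 and QD4 only."""
    return statistics.mean(mbps_by_qd[qd] for qd in (1, 2, 4))
```

Because only QD1, QD2 and QD4 feed the primary score, a drive that scales well at high queue depths gets no credit for it here; the high-QD results show up separately in the scaling charts.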

Sustained 4kB Random Read

The sustained random read performance of the Crucial P1 at low queue depths is mediocre at best, falling behind most TLC-based NVMe SSDs and the Crucial MX500. By contrast, the Intel 660p manages to retain its high performance even on the sustained test, indicating that the Intel drive keeps more of the test data in its SLC cache than the Crucial P1 does. When the test is run on a full drive, the P1 and the 660p deliver equivalent performance, about 12% slower than the P1 manages when it contains only the 64GB test data file.

Sustained 4kB Random Read (Power Efficiency)
(Power efficiency in MB/s/W; average power in W)

The power efficiency of the Crucial P1 during the sustained random read test is less than half of what the Intel 660p offers, due almost entirely to the large performance difference. At just over 2W, the power consumption of the P1 is reasonable, but it doesn't provide the performance to match when the test data isn't in the SLC cache.
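
The efficiency metric in these charts is simply throughput divided by power. A quick sketch with made-up numbers (not measured values from this review) shows how a small power gap becomes a large efficiency gap when performance differs by more than 2x:

```python
def efficiency_mbps_per_watt(throughput_mbps: float, power_w: float) -> float:
    """Power efficiency as charted: throughput in MB/s divided by power in W."""
    return throughput_mbps / power_w

# Hypothetical figures: a drive doing 300 MB/s at 2.1W versus 650 MB/s at 2.0W.
eff_a = efficiency_mbps_per_watt(300.0, 2.1)  # ~142.9 MB/s/W
eff_b = efficiency_mbps_per_watt(650.0, 2.0)  # 325.0 MB/s/W
```

The slower drive draws only 5% more power, but because it moves less than half the data per second, its efficiency score is less than half as good.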

The random read performance of the Crucial P1 increases modestly with higher queue depths, but it pales in comparison to what the Intel 660p delivers by serving most of the reads for this test out of its SLC cache. Even the Crucial MX500 develops a large lead over the P1 at the highest queue depths, while using less power.

Plotting the sustained random read performance and power consumption of the Crucial P1 against the rest of the drives that have run through our 2018 SSD test suite, it is clear that the drive doesn't measure up well against even most SATA SSDs, let alone NVMe drives that go beyond the SATA speed limit when given a sufficiently high queue depth. Thanks to its SLC cache being more suited to these test conditions, the Intel 660p is among those NVMe drives that beat the limits of SATA.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.
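
A minimal sketch of how QD1 random writes like these can be generated, with each 4kB write landing at a random 4kB-aligned offset within the 16GB span (the helper name is our own, not part of any benchmark tool):

```python
import random

IO_SIZE = 4 * 1024           # 4kB per random write
SPAN_BYTES = 16 * 1024**3    # writes are spread over a 16GB span
BURST_BYTES = 4 * 1024**2    # each burst writes 4MB
TOTAL_BYTES = 128 * 1024**2  # 128MB written in total

def burst_offsets(rng: random.Random) -> list:
    """4kB-aligned random offsets for one 4MB burst (1024 writes)."""
    slots = SPAN_BYTES // IO_SIZE
    return [rng.randrange(slots) * IO_SIZE for _ in range(BURST_BYTES // IO_SIZE)]

num_bursts = TOTAL_BYTES // BURST_BYTES  # 32 bursts in total
```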

Burst 4kB Random Write (Queue Depth 1)

The burst random write performance of the Crucial P1 is good, but not quite on par with the top tier of NVMe SSDs. The Intel 660p is about 10% slower. Both drives clearly have enough free SLC cache to handle this test even when the drives are completely full.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive and giving the drive up to 1 minute of idle time between queue depths to allow for write caches to be flushed and for the drive to cool down.

Sustained 4kB Random Write

The longer sustained random write test involves enough data to show the effects of the variable SLC cache size on the Crucial P1: performance on a full drive is less than half of what the drive provides when it only contains the 64GB test data. As with the burst random write test, the P1 has a small but clear performance advantage over the Intel 660p.

Sustained 4kB Random Write (Power Efficiency)
(Power efficiency in MB/s/W; average power in W)

When the sustained random write test is run on the Crucial P1 containing only the test data, it delivers excellent power efficiency. When the drive is full and the SLC cache is inadequate, power consumption increases slightly and efficiency is reduced by almost a factor of three.

Even when the random write test is conducted on an otherwise empty Crucial P1, the SLC cache starts to fill up by the time the queue depth reaches 32. When the drive is full and the cache is at its minimum size, random write performance decreases with each phase of the test despite the increasing queue depth. By contrast, the Intel 660p shows signs of its SLC cache filling up after QD4 even when the drive is otherwise empty, but its full-drive performance is steadier.
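
The cache-overflow behavior described above can be illustrated with a toy model (all rates and cache sizes below are hypothetical, chosen only to show the mechanism, not measured from either drive):

```python
def phase_bytes(rate_mbs: float, max_seconds: int = 60,
                max_bytes: int = 32 * 1024**3) -> int:
    """Data written in one queue-depth phase: capped at 60s or 32GB."""
    return min(int(rate_mbs * 1e6 * max_seconds), max_bytes)

def overflow_phase(rates_mbs: list, cache_bytes: int):
    """Index of the first phase where cumulative writes exceed the SLC cache,
    or None if the cache never overflows during the test."""
    written = 0
    for i, rate in enumerate(rates_mbs):
        written += phase_bytes(rate)
        if written > cache_bytes:
            return i
    return None
```

With a large cache (empty drive), the overflow arrives only in the later, faster phases; with the cache at its minimum size (full drive), the very first phases already spill into QLC, which is why performance sags from the start instead of climbing with queue depth.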

Plotting the Crucial P1's sustained random write performance and power consumption against the rest of the drives that have completed our 2018 SSD test suite emphasizes the excellent combination of performance and power efficiency enabled by the very effective SLC write cache. The P1 requires more power than many SATA drives, but almost all NVMe drives require more power to deliver the same performance, and the very fastest drives aren't much faster than the peak write speed of the Crucial P1.

66 Comments

  • DigitalFreak - Thursday, November 8, 2018 - link

    At this rate, by the time they get to H(ex)LC you'll only be able to write 1GB per day to your drive or risk having it fail.
  • PeachNCream - Thursday, November 8, 2018 - link

    Please don't give them any ideas! The last thing we need is NAND that generously handles a few dozen P/E cycles before dying. We've already gone from millions of P/E cycles to a few hundred in the last 15 years and data retention has dropped from over a decade to under six months. Sure you can get a lot more capacity for the price, but NAND needs to be replaced with something more durable sooner rather than later. (And no, I'm not advocating for Optane either, just something that lasts longer and has room for density improvements - don't care what that something is.)
  • MrCommunistGen - Thursday, November 8, 2018 - link

    I was expecting the extra DRAM to provide a more meaningful advantage over the Intel 660p... I guess it makes sense that Intel left it off to save on BOM.
  • Ratman6161 - Thursday, November 8, 2018 - link

    This could be a very good standard desktop drive if 1) the price is right and 2) you can accept that the 1 TB drive is really only good for up to 900 GB. You would just partition the drive such that there is 100 GB free (or make sure you always just keep that much space free) so you always have the maximum SLC cache available. For the price to be right, it has to be lower. Taking the prices from the article, the 1 TB P1 is only $8 cheaper than a 970 EVO. Now if they could get the price down to the same territory as the current MX500 they might have something.
  • Billy Tallis - Thursday, November 8, 2018 - link

    Leaving 10% of the drive unpartitioned won't be enough to get the maximum size SLC cache, because 1GB of SLC cache requires 4GB of QLC to be used as SLC. However, 10% manual overprovisioning would definitely reduce the already small chances of overflowing the SLC cache.
  • mczak - Thursday, November 8, 2018 - link

    On that note, wouldn't it actually make sense to use a MLC cache instead of a SLC cache for these SSDs using QLC flash (and by MLC of course I mean using 2 bits per cell)? I'd assume you should still be able to get very decent write speeds with that, and it would effectively only need half as much flash for the same cache size.
  • Billy Tallis - Thursday, November 8, 2018 - link

    Cache size isn't really a big enough problem for a 2bpc MLC write cache to be worthwhile. Using SLC for the write cache has several advantages: highest performance/lowest latency, single-pass reads and writes (important for Crucial's power loss immunity features), and your SLC cache can use flash blocks that are too worn out to still reliably store multiple bits per cell. A slower write cache with twice the capacity would only make sense if consumer workloads regularly overflowed the existing write cache. Almost all of the instances where our benchmarks overflow SLC caches are a consequence of our tests giving the drive less idle time than real-world usage, rather than being tests representing use cases where the cache would be expected to overflow even in the real world.
  • idri - Thursday, November 8, 2018 - link

    Why don't you guys include the Samsung 970 PRO 1TB in your charts for comparison? It's one of the most sought after SSDs on the market for HEDT systems and for sure it would be useful to have your tests results for this one too. Thanks.
  • Billy Tallis - Thursday, November 8, 2018 - link

    A.) Samsung didn't send me a 970 PRO. B.) The 970 PRO is pretty far outside the range of what could be considered competition for an entry-level NVMe SSD. It's a drive you buy for bragging rights, not for real-world performance benefits. The Optane SSD is in that same category, and I don't think the graphs for this kind of review need to be cluttered up with too many of those.
  • PeachNCream - Thursday, November 8, 2018 - link

    Not to be obtuse, but by price the 970 PRO is well within the range of competition for the P1, given that the 1TB 970 retails for $228 on Amazon right now and the MSRP for the 1TB P1 is $220. Buyers looking for a product will most certainly consider the $8 difference and factor that into their decision to move up from an entry-level product to a "bragging rights" option given the insignificant difference in cost. Your first point is valid. I would have stopped there since it's reasonable to say, "Physically impossible, don't have one there pal."
