AnandTech Storage Bench - Light

Our Light storage test has more sequential accesses and lower queue depths than The Destroyer or the Heavy test, and it's by far the shortest test overall. It's based largely on applications that aren't highly dependent on storage performance, so this test is more a measure of application launch times and file load times. It can be seen as the sum of all the little delays in daily usage, but with the idle times trimmed to 25ms it takes less than half an hour to run. Details of the Light test can be found here. As with the ATSB Heavy test, this test is run with the drive both freshly erased and empty, and again after filling the drive with sequential writes.
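For illustration, the effect of the idle-time trim can be sketched in a few lines: the recorded I/O trace is replayed with every gap between consecutive operations clamped to 25ms, which is what compresses hours of light desktop usage into a sub-half-hour test. This is a minimal sketch under that assumption, not the actual trace-replay tooling, and the trace format here is invented for the example.

```python
# Minimal sketch of how capping inter-I/O idle time compresses a trace
# replay. Illustrative only, not the actual benchmark tool; the trace is
# assumed to be a list of per-I/O timestamps in seconds.
IDLE_CAP_S = 0.025  # idle times trimmed to 25 ms

def replay_duration(timestamps, idle_cap=IDLE_CAP_S):
    """Wall-clock duration of a replay where every gap between
    consecutive I/Os is clamped to at most `idle_cap` seconds."""
    total = 0.0
    for prev, cur in zip(timestamps, timestamps[1:]):
        total += min(cur - prev, idle_cap)
    return total

# Example: three I/Os recorded 2 s apart replay with at most 50 ms of idle.
print(replay_duration([0.0, 2.0, 4.0]))  # 0.05
```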

ATSB - Light (Data Rate)

As with the Heavy test, the Crucial P1 handles the Light test as well as most high-end drives when the test is run on an empty drive with plenty of free space in the SLC cache. When the test is run on a full drive, the P1's average data rate drops to just below that of the Crucial MX500 SATA SSD.

ATSB - Light (Average Latency)
ATSB - Light (99th Percentile Latency)

When the Light test is run on an empty Crucial P1, the average and 99th percentile latency scores are comparable to high-end NVMe SSDs because the test is operating entirely within the SLC cache. When that cache is shrunk by completely filling the drive, both latency scores are an order of magnitude worse. However, the 99th percentile latency is much better than what we saw from the Intel 660p when full.
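As a refresher on what these two charts measure, here is a minimal sketch (illustrative only; the latency samples are made up) of how an average and a 99th percentile are derived from per-I/O service times, and why a handful of slow outliers can blow up the 99th percentile while barely moving the average:

```python
import statistics

def summarize(latencies_us):
    """Return (average, 99th percentile) of per-I/O service times."""
    ordered = sorted(latencies_us)
    avg = statistics.fmean(ordered)
    # 99th percentile: the latency that 99% of I/Os complete within.
    p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    return avg, p99

# 990 fast I/Os plus 10 slow outliers: the average barely moves,
# but the 99th percentile lands squarely on the outliers.
sample = [100] * 990 + [10_000] * 10
print(summarize(sample))  # (199.0, 10000)
```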

ATSB - Light (Average Read Latency)
ATSB - Light (Average Write Latency)

The average read latency of the Crucial P1 stays comfortably below that of SATA SSDs even when the test is run on a full drive, but the average write latency ends up several times higher than that of the MX500 SATA drive. The Intel 660p and the DRAMless Toshiba RC100 show similarly high average write latency when full.

ATSB - Light (99th Percentile Read Latency)
ATSB - Light (99th Percentile Write Latency)

The 99th percentile read and write latency scores tell a similar story to the average latencies, but the weaknesses of the Crucial P1 stand out more clearly. Even with a full drive, read latency on the Light test isn't a problem, but write latency can climb to tens of milliseconds.

ATSB - Light (Power)

Energy usage by the Crucial P1 is reasonably low (by NVMe standards) when the Light test is run on an empty drive. When the test is run on a full drive, the P1 uses substantially more energy than the Intel 660p, with efficiency comparable to that of most high-performance NVMe SSDs.
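For context on how a per-test energy figure like this is obtained, the sketch below integrates a sampled power trace over time; the sample interval and wattages are invented for illustration. It also shows why a faster drive can draw more power yet use less total energy:

```python
# Rough sketch of deriving a per-test energy figure from a sampled power
# trace. Sample interval and values are made up for illustration.
SAMPLE_INTERVAL_S = 0.1

def energy_joules(power_watts, dt=SAMPLE_INTERVAL_S):
    # Energy is power integrated over time; with uniform sampling this
    # reduces to the sum of the samples times the sample interval.
    return sum(power_watts) * dt

fast = [3.0] * 100   # 3 W for 10 s  -> 30 J
slow = [1.5] * 300   # 1.5 W for 30 s -> 45 J
print(energy_joules(fast), energy_joules(slow))  # 30.0 45.0
```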

Comments

  • Mikewind Dale - Thursday, November 8, 2018 - link

    Sic:

    "A reduction in quantity and an increase in price will increase net revenue only if demand is elastic."

    That should be "inelastic."
  • limitedaccess - Thursday, November 8, 2018 - link

    The transition to TLC drives was shortly followed by the transition to 3D NAND, which moved from the smallest planar lithography back to a larger process. While smaller litho allowed more density, it also came with the trade-off of worse endurance and faster decay. So the transition to 3D NAND effectively offset the issues of the MLC->TLC move, which is where we are today. What's the equivalent for TLC->QLC?

    Low-litho planar TLC drives were the ones that were poorly received and performed worse in reality than they did in reviews, due to decay. And decay is the real issue here with QLC, since no reviewer tests for it (it isn't the same as poor write endurance). Is that file I don't regularly access going to maintain the same read speeds, or will it have massively higher latency to access due to the need for ECC to kick in? (See the sketch after the comments.)
  • 0ldman79 - Monday, November 12, 2018 - link

    I may not be correct on the exact numbers, but I think NAND lithography stopped at 22nm because they were having issues retaining data at 14nm; there's just no real benefit to going to a smaller lithography.

    They may tune that in a couple of years, but the only way I can see that working, with my rudimentary understanding of the system, is to keep everything the same size as at 22nm (gates, gaps, fences, chains, roads, whatever, it's too late/early for me to remember the correct terms), the same gaps, only on a smaller process. They'd have no reduction in cost, as they'd be using the same amount of each wafer, though they might have a reduction in power consumption.

    I'm eager to see how they address the problem, but it really looks like QLC may be a dead end. Eventually we're going to hit walls where lithography can't improve, and we're going to have to come at the problem (CPU speed, memory speeds, NAND speeds, etc.) from an entirely different angle than what we've been doing. For what, 40 years, we've been making major design changes every 5 years or so and just relying on lithography to improve clock speeds.

    I think that is about to cease entirely. They can probably go farther than what we're seeing but not economically.
  • Lolimaster - Friday, November 9, 2018 - link

    You're not expecting a drive limited to 500 MB/s to be as fast as a PCIe x4 SSD with full support for it...

    TLC vs. MLC all comes down to endurance and degraded performance when the drive is full or the cache is exhausted.
  • Lolimaster - Friday, November 9, 2018 - link

    Random performance seems to be the land of Optane and similar. Even the 16GB Optane M10 absolutely murders even the top-of-the-line NVMe Samsung MLC SSDs.
  • PaoDeTech - Thursday, November 8, 2018 - link

    Yes, price is still too high. But it will come down. I think the conclusions fail to highlight the main strength of this SSD: its top performance/power ratio. For portable devices, this is the key metric to consider. In this regard it is far ahead of any SATA SSD and almost all PCIe drives out there.
  • Lolimaster - Friday, November 9, 2018 - link

    Exactly. QLC should stick to big multi-terabyte drives for the average user or HEDT.

    Like 4TB+.
  • 0ldman79 - Monday, November 12, 2018 - link

    I think that's where they need to place QLC.

    Massive "read mostly" storage. xx layer TLC for a performance drive, QLC for massive data storage, ie; all of my Steam games installed on a 10 cent per gig "read mostly" drive while the OS and my general use is on a 22 cent per gig TLC.

    That's what they're trying to do with the SLC cache, but I think they need to push it a lot farther: throw a 500GB TLC cache on a 4-terabyte QLC drive. That might let it fit into the mainstream NVMe lineup.
  • Flunk - Thursday, November 8, 2018 - link

    MSRP seems a little high; I recently picked up an HP EX920 1TB for $255, and that's a much faster drive. Perhaps the street price will be lower.
  • B3an - Thursday, November 8, 2018 - link

    That latency is APPALLING and the performance is below par. If this was dirt cheap it might be worth it to some people, but at that price it's a joke.
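
Returning to limitedaccess's question about decay: a retention test along those lines could be sketched as below. This is a hypothetical outline, not a methodology used in the review; the file paths are placeholders, and on a real run the OS page cache would need to be dropped (or O_DIRECT used) so the reads actually hit the NAND:

```python
import os
import time

# Hypothetical decay test: time cold reads of a long-untouched file and
# compare against a freshly written copy. Paths and sizes are placeholders.
CHUNK = 1 << 20  # read in 1 MiB chunks

def timed_read(path):
    """Return per-chunk read times (seconds) for the whole file."""
    latencies = []
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            start = time.perf_counter()
            data = os.read(fd, CHUNK)
            if not data:
                break
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return latencies

# A large gap between the aged and fresh files' worst-case chunk times
# would suggest error correction / read retries on decayed cells.
old = timed_read("/mnt/qlc/aged_file.bin")
new = timed_read("/mnt/qlc/fresh_file.bin")
print(max(old), max(new))
```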
