Mixed Random Performance

Our test of mixed random reads and writes covers mixes varying from pure reads to pure writes at 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred. The test is conducted with a queue depth of 4, and is limited to a 64GB span of the drive. In between each mix, the drive is given idle time of up to one minute so that the overall duty cycle is 50%.
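For readers who want to reproduce something similar, the sweep described above maps fairly directly onto a short script driving fio. This is a rough sketch under stated assumptions, not our actual test harness: the target device path is hypothetical, and the idle-time rule is approximated by resting for as long as each mix ran.

```python
#!/usr/bin/env python3
"""Rough sketch of the mixed random I/O sweep described above, driven by fio.

Assumptions not taken from the article: fio on a Linux box, the libaio
engine, and /dev/nvme0n1 as the drive under test. Writing to a raw
device destroys its contents.
"""
import subprocess
import time

DEVICE = "/dev/nvme0n1"  # hypothetical target; all data on it will be lost

for read_pct in range(100, -1, -10):   # pure reads -> pure writes, 10% steps
    start = time.time()
    subprocess.run([
        "fio", "--name=mixed-random",
        f"--filename={DEVICE}",
        "--direct=1",                  # bypass the page cache
        "--ioengine=libaio",
        "--rw=randrw",                 # mixed random reads and writes
        f"--rwmixread={read_pct}",     # percentage of the mix that is reads
        "--bs=4k",                     # 4kB accesses
        "--iodepth=4",                 # queue depth of 4
        "--size=64g",                  # limit accesses to a 64GB span
        "--io_size=32g",               # stop after 32GB transferred...
        "--runtime=60",                # ...or after 1 minute, whichever is first
    ], check=True)
    time.sleep(time.time() - start)    # idle as long as the mix ran: 50% duty cycle
```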

Mixed 4kB Random Read/Write

The Crucial P1 delivers reasonable entry-level NVMe performance on the mixed random I/O test. It's clearly faster than the MX500 SATA SSD and comes close to some high-end NVMe SSDs. But when the drive is full and the SLC cache is at its minimum size, the P1 slows to about 40% of the speed it achieves when the drive holds only the test data. In that full state, the P1 is about 22% slower than the Intel 660p, though the two perform similarly when mostly empty.

Sustained 4kB Mixed Random Read/Write (Power Efficiency)

The Crucial P1 has worse power efficiency than the Intel 660p on this test, whether it is run on a full drive or not. The efficiency is still reasonable for the mostly-empty drive test run, but when full the P1's power consumption increases slightly and the efficiency is significantly worse than other low-end NVMe SSDs.

When the mixed random I/O test is run on a full Crucial P1, the benefits of the SLC cache almost completely disappear, leaving the drive with a mostly flat (if somewhat inconsistent) performance curve rather than the significant upswing it otherwise shows once the proportion of writes grows beyond 70%. The Intel 660p behaves very similarly, save for slightly lower write performance to the SLC cache and slightly better full-drive performance.

Mixed Sequential Performance

Our test of mixed sequential reads and writes differs from the mixed random I/O test by performing 128kB sequential accesses rather than 4kB accesses at random locations, and the sequential test is conducted at queue depth 1. The range of mixes tested is the same, and the timing and limits on data transfers are also the same as above.
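In terms of the hypothetical fio sketch shown earlier, only three arguments change for the sequential version of the test:

```python
# Sequential variant of the sweep sketched earlier; only these fio
# arguments change, everything else (mix loop, limits, idle) stays the same:
sequential_overrides = [
    "--rw=rw",       # mixed sequential reads/writes instead of "randrw"
    "--bs=128k",     # 128kB sequential accesses instead of 4kB random
    "--iodepth=1",   # queue depth 1 instead of 4
]
```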

Mixed 128kB Sequential Read/Write

The performance of the Crucial P1 on the mixed sequential I/O test is better than that of most entry-level NVMe SSDs and comes close to some of the slower high-end drives. Even when the test is run on a full drive, the P1 remains faster than SATA SSDs, and its full-drive performance is slightly better than the Intel 660p's.

Sustained 128kB Mixed Sequential Read/Write (Power Efficiency)

The power efficiency of the Crucial P1 on this test is about average for an entry-level NVMe drive. When the test is run on a full drive, the reduced performance causes efficiency to take a big hit, but it ends up being only slightly less efficient than the Crucial MX500 SATA SSD.

The Crucial P1 has decent performance at either end of the test, when the workload is either very read-heavy or very write-heavy. Compared to other entry-level NVMe drives, the P1 starts out with better read performance and recovers more of its performance toward the end of the test than many of its competitors. The minimum reached at around a 60/40 read/write split is faster than a SATA drive can manage but is unremarkable among NVMe drives. When the test is run on a full drive, performance during the more read-heavy half of the test is only slightly reduced, but things get worse throughout the write-heavy half of the test instead of improving as write caching comes more into play.

Comments

  • DanNeely - Thursday, November 8, 2018

    When DDR2 went mainstream they stopped making DDR1 DIMMs. The DIMMs you could still find for sale a few years later were old ones where you were paying not just the original cost of making them, but the cost of keeping them in a warehouse for several years before you bought them. Individual RAM chips continued to be made for a while longer on legacy processes for embedded use, but because the same old mature processes were still being used, there was no scope for newer tech to cut costs; lower volumes also meant a loss of economies of scale, so the embedded world had to pay more as well until it upgraded to newer standards.
  • Oxford Guy - Thursday, November 8, 2018

    The point was:

    "QLC may lead to higher TLC prices, if TLC volume goes down and/or gets positioned as a more premium product as manufacturers try to sell us QLC."

    Stopping production leads to a volume drop, eh?
  • romrunning - Thursday, November 8, 2018

    "There is a low-end NVMe market segment with numerous options, but they are all struggling under the pressure from more competitively priced high-end NVMe SSDs."

    I really wish all NVMe drives kept a higher base performance level. QLC should have died on the vine. I get the technical advances, but I prefer advances that increase performance, not ones that perform worse than their predecessors. The price savings, when they actually materialize, aren't worth the trade-offs.
  • Flunk - Thursday, November 8, 2018

    In a year or two there are going to be QLC drives faster than today's TLC drives. It just takes time to develop a new technology.
  • Oxford Guy - Thursday, November 8, 2018

    Faster to decay, certainly.

    As I understand it, it's impossible, due to physics, to make QLC faster than TLC, just as it's impossible to make TLC faster than MLC. Just as it's impossible to make MLC faster than SLC.

    Workarounds to mask the deficiencies aren't the same thing. The only benefit to going beyond SLC is density, as I understand it.
  • Billy Tallis - Thursday, November 8, 2018

    Other things being equal, MLC is faster than TLC and so on. But NAND flash memory has been evolving in ways other than changing the number of bits stored per cell. Micron's 64L TLC is faster than their 32L MLC, not just denser and cheaper. I don't think their 96L or 128L QLC will end up being faster than 64L TLC, but I do think it will be faster than their 32L or 16nm planar TLC. (There are some ways in which increased layer count can hurt performance, but in general those effects have been offset by other performance increases.)
  • Oxford Guy - Thursday, November 8, 2018

    "Other things being equal, MLC is faster than TLC and so on"

    So, other than density, there is no benefit to going beyond SLC, correct?
  • Billy Tallis - Thursday, November 8, 2018

    Pretty much. If you can afford to pay for SLC and a controller with enough channels and chip enable lines, then you could have a very nice SSD for a very unreasonable price. When you're constrained to a SATA interface there's no reason not to store at least three bits per cell, and even for enterprise NVMe SSDs there are only a few workloads where the higher performance of SLC is cost-effective.
  • Great_Scott - Monday, November 12, 2018

    They should drop the SLC emulation and just sell the drive as an SLC drive. Sure, there may be some performance left on the table due to the limits of the NVMe interface, but the longevity would be hugely attractive to some users.

    They'd make more money too, since they could better justify higher costs that way. In fact, with modern Flash they might be able to get much the same benefit from MLC organization and have roughly half the drive space instead of 25%.
  • Lolimaster - Friday, November 9, 2018

    Don't confuse better algorithms for the simulated SLC cache and DRAM with actual "performance"; start crushing the simulated cache and the TLC goes to trash.
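As a quick illustration of the bits-per-cell trade-off debated in the thread above (simple arithmetic, not benchmark data): each extra bit per cell doubles the number of voltage levels a cell must resolve, while the capacity gained over the previous step keeps shrinking.

```python
# Voltage levels vs. capacity for each NAND cell type (illustrative arithmetic).
for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")]:
    levels = 2 ** bits                     # voltage states a cell must resolve
    gain = f"+{100 / (bits - 1):.0f}%" if bits > 1 else "baseline"
    print(f"{name}: {levels:2d} levels, {bits}x SLC capacity, {gain} vs. previous step")
```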
