Mixed IO Performance

Our tests of mixed read/write IO vary the workload from pure reads to pure writes at 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred. The mixed random IO test uses a queue depth of 4 while the mixed sequential IO test uses a queue depth of 1. The tests are confined to a 64GB span of the drive, and the drive is given up to one minute of idle time in between each mix tested.
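The sweep described above can be sketched in a few lines of Python. This is purely illustrative: the device path, the use of fio, and the exact flags are assumptions, not our actual test harness.

```python
# Sketch of the mixed IO sweep described above. /dev/nvme0n1 and the
# fio flag choices are illustrative assumptions, not the real harness.
def mixed_io_sweep(dev="/dev/nvme0n1", seq=False):
    cmds = []
    for read_pct in range(100, -1, -10):  # pure reads down to pure writes, 10% steps
        cmds.append(
            "fio --name=mix --filename={} --rw={} --rwmixread={} "
            "--iodepth={} --runtime=60 --size=64g --io_size=32g".format(
                dev,
                "rw" if seq else "randrw",   # sequential vs random mix
                read_pct,
                1 if seq else 4,             # QD1 sequential, QD4 random per the text
            )
        )
    return cmds
```

Each invocation is capped at 60 seconds or 32GB transferred, confined to a 64GB span, matching the methodology above.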

[Charts: Mixed Random IO and Mixed Sequential IO performance]

The QLC NVMe drives like the Corsair MP400 have a clear advantage over the DRAMless TLC drives for mixed read/write workloads. This is particularly pronounced in the mixed random IO test, where the DRAMless TLC drives are even slower than the QLC SATA drive. On the mixed sequential IO test, those DRAMless TLC drives can compete with some of the slower QLC NVMe drives, but the 8-channel Phison E12 controller used in the Corsair MP400 and Sabrent Rocket Q keeps those drives ahead, making them the faster budget NVMe choice.

Mixed IO Efficiency
[Charts: Mixed Random IO and Mixed Sequential IO power efficiency]

The budget NVMe drives all have worse power efficiency during the mixed IO tests than any of the high-end options, but the Corsair MP400 has some of the best efficiency scores within the budget NVMe segment. The 8TB Sabrent Rocket Q trails behind the 1TB MP400 because of the extra power draw of so many NAND dies.

[Charts: Mixed Random IO and Mixed Sequential IO performance curves]

The Corsair MP400's performance curves through the mixed random and sequential IO tests generally resemble what we saw for the Sabrent Rocket Q. On the mixed sequential IO test, the 8TB Rocket Q generally maintained a clear performance lead over the 1TB MP400 (at the cost of much higher power draw), but their performance scaling across the random IO test is very similar. The overall shape of the performance curves for these QLC drives has a lot more in common with mainstream TLC drives than it does with the entry-level DRAMless TLC drives; the QLC drives are slower than mainstream TLC drives, but not as obviously limited as the DRAMless drives.

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice about which power states to use may differ between desktops and notebooks, and depending on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but rarely achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.

[Charts: Idle Power Consumption - No PM / Desktop / Laptop]

Idle Wake-Up Latency

The Corsair MP400 identifies itself as having the same idle power management capabilities as the Sabrent Rocket Q, and significantly lower maximum power draw in its active power states than the very pessimistic figures our 8TB Rocket Q sample provides to the OS.

The active and desktop idle power values we measured for the MP400 are a bit lower than for the Rocket Q 8TB, which is to be expected given the lower part count on the 1TB MP400. Waking up from the intermediate desktop idle state is extremely quick, but waking from the deepest sleep state is just a bit on the slow side.

75 Comments

  • Spunjji - Monday, December 14, 2020 - link

    Yes, the drive is absolutely fine for its intended use case and completely inappropriate when used in ways which were not intended (and where its performance would be miserable anyway). 🥱

    In terms of my usage, for example, it's less "almost disposable" and more "extremely unlikely to wear out before it becomes useless in what will be my current system 10-15 years from now".
  • zdz - Friday, December 11, 2020 - link

    Don't need to look at Samsung. Corsair MP500 1 TB (previous generation model) had 1700 TBW. Eight times more. Today's drives are becoming crap.
  • zdz - Friday, December 11, 2020 - link

    I meant mp510, sorry.
  • Makaveli - Saturday, December 12, 2020 - link

    Yup the Corsair MP600 1TB which is based on the E16 controller is rated for 1800TBW.
  • Maverick009 - Friday, December 11, 2020 - link

    You are talking about two similar but differently marketed drives; the Samsung 860 EVO is a step up or a step down from QLC drives depending on how you look at it. The actual 200 TBW or 600 TBW rating is based partly on warranty terms, data written to the drive, and a few other metrics, but in the majority of cases an SSD's life will last several years to decades before it gives out. Well past the purchase date and in some cases past your lifetime.
  • GeoffreyA - Saturday, December 12, 2020 - link

    Using Kristian's 850 Evo calculations as a guide, I got the following values, assuming 1,000 P/E cycles for QLC on a 1 TB drive:

    20 GB/day (with 1.5x write amp) == 93.52 years
    50 GB/day (1.5x w.a.) == 37.4 y
    100 GB/day (3x w.a.) == 9.35 y

    Hopefully, I didn't muck it all up :)
  • Spunjji - Monday, December 14, 2020 - link

    Nice! Looks like it would be perfectly suitable for the lower-end users that QLC drives are aimed at.

    But hey, number go down, so all the NAND ranters have a sad.
  • GeoffreyA - Tuesday, December 15, 2020 - link

    I'm sure the Fact Rewriting Squad can work wonders with those values ;)
  • joesiv - Monday, December 14, 2020 - link

    Good idea using P/E cycles.

    Just remember, OS writes != NAND writes.

    There are other factors that will increase the WAF (Write Amplification Factor).

    It's totally not uncommon to have a WAF of 2x, but if there are bugs, or firmwares too focused on speed or particular usage scenarios, the WAF can be much much higher.

    Basically, it needs to be tested.

    SMART data may provide this for you; look at your average NAND writes (P/E cycles used).
    If you're at 10, then you're 1/100th through your 1,000 P/E cycles. You can calculate how many GB/TBs you've used by taking the life expectancy the manufacturers advertise (200TB?), divided by that ratio, to find actual NAND writes (if smartctl isn't giving it to you)

    Good luck!
  • GeoffreyA - Tuesday, December 15, 2020 - link

    Had a crack at this earlier. Attributes were a bit obscure, but after some research, took value 241, LBAs written, and multiplied by 512 to work out bytes. From there, got about 2.28 TB or so, which squares with the ~2.4 TB value in Hwinfo. It's an 860 Evo 500 GB, just over a year old. I hardly play games or anything any more, so that's why the writes are so low. Also, round about May, put in a mechanical drive that was just collecting dust. It works well for storing films. Before that, space was a battle on the SSD.
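GeoffreyA's endurance arithmetic above can be reproduced with a short Python sketch. The 1024 GB (binary-style) capacity and a 365-day year are assumptions inferred from the posted figures, not values stated in the comment:

```python
def endurance_years(host_gb_per_day, waf, capacity_gb=1024,
                    pe_cycles=1000, days_per_year=365):
    """Estimate drive lifetime from rated P/E cycles.

    Total NAND writes the flash can absorb = capacity * P/E cycles;
    daily NAND writes = host writes * write amplification factor (WAF).
    """
    total_nand_gb = capacity_gb * pe_cycles
    nand_gb_per_day = host_gb_per_day * waf
    return total_nand_gb / (nand_gb_per_day * days_per_year)

for gb_day, waf in [(20, 1.5), (50, 1.5), (100, 3)]:
    print(f"{gb_day} GB/day ({waf}x WA): {endurance_years(gb_day, waf):.2f} years")
```

With these assumptions the three scenarios land on roughly 93.5, 37.4, and 9.35 years, in line with the figures in the comment.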
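The SMART-based bookkeeping joesiv and GeoffreyA describe can be sketched the same way. The 512-byte sector convention for attribute 241 and a drive that actually reports average P/E cycles consumed are assumptions taken from the comments, not guaranteed for every model:

```python
def lbas_to_tb(lbas_written, sector_bytes=512):
    """SMART attribute 241 (Total LBAs Written) -> decimal terabytes."""
    return lbas_written * sector_bytes / 1e12

def wear_report(avg_pe_used, rated_pe_cycles, capacity_gb, host_gb_written):
    """joesiv's method: fraction of rated life used, total NAND writes,
    and the resulting write amplification factor (NAND / host writes)."""
    life_used = avg_pe_used / rated_pe_cycles
    nand_gb = capacity_gb * avg_pe_used  # each full P/E cycle rewrites the drive once
    return life_used, nand_gb, nand_gb / host_gb_written
```

For example, `wear_report(10, 1000, 1024, 5120)` says the drive is 1% through its rated cycles with a WAF of 2.0, the "totally not uncommon" figure joesiv mentions.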
