Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
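The idle time needed to maintain that duty cycle follows directly from the burst duration. A back-of-the-envelope sketch, using hypothetical throughput numbers rather than anything from our test harness:

```python
def idle_time_for_duty_cycle(burst_seconds: float, duty_cycle: float) -> float:
    """Idle time after each burst so that busy / (busy + idle) == duty_cycle."""
    return burst_seconds * (1.0 / duty_cycle - 1.0)

# Hypothetical numbers: a 128MB burst at ~1600 MB/s keeps the drive busy
# for about 0.08s; a 20% duty cycle then calls for four times that in idle.
busy = 128 / 1600
print(idle_time_for_duty_cycle(busy, 0.20))  # ~0.32 seconds of idle per burst
```

The point of the low duty cycle is to give the drive's thermal and background-work state time to recover, so each burst measures the drive fresh rather than throttled.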

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Crucial P1 is decent for an entry-level NVMe SSD. Even when the test is run on a full drive, the P1 is about twice as fast as a SATA SSD. The Intel 660p is slightly faster on this test, with more of an advantage in the full-drive/small SLC cache conditions.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage. Wear leveling and modifications to some existing data will create internal fragmentation that degrades performance, but usually not to the extent shown here.
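A minimal sketch of how the headline score is derived from the sweep (the throughput values here are hypothetical, not measured results):

```python
# Hypothetical throughput results from a QD1-QD32 sweep, in MB/s.
results_mbps = {1: 900, 2: 1400, 4: 1750, 8: 1950, 16: 1980, 32: 1990}

# The reported score averages only the low queue depths (QD1, QD2, QD4),
# since client workloads rarely sustain deep queues; the full sweep is
# still shown in the queue-depth scaling charts.
score = sum(results_mbps[qd] for qd in (1, 2, 4)) / 3
print(score)  # 1350.0
```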

Sustained 128kB Sequential Read

On the longer sequential read test, the Crucial P1 sustains sequential reads at over 1GB/s even when full, so long as the data is contiguous on the flash itself due to having been written sequentially. When reading data that has been modified heavily by random writes, the drive has to do random reads behind the scenes, and its performance is no longer competitive with other current NVMe SSDs or even mainstream SATA SSDs. It does at least maintain roughly the same level of performance that can be expected from a hard drive (which doesn't need wear leveling and thus is not subject to internal fragmentation). As with the burst sequential read test, the Intel 660p is slightly faster.

Sustained 128kB Sequential Read (Power Efficiency)
[Chart toggles: Power Efficiency in MB/s/W | Average Power in W]

The power efficiency of the Crucial P1 when reading contiguous data is reasonable for an NVMe drive but nothing special. When reading data with internal fragmentation, the power consumption is the same, but the reduced performance drags the efficiency way down, to only 50% better than a 3.5" hard drive.
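The efficiency metric itself is just throughput divided by average power, which is why equal power draw at lower speed translates directly into a worse score. A sketch with illustrative (hypothetical) figures:

```python
def efficiency_mbps_per_watt(throughput_mbps: float, power_w: float) -> float:
    """Efficiency as charted: average throughput divided by average power."""
    return throughput_mbps / power_w

# Hypothetical figures: same average power, contiguous vs. fragmented data.
contiguous = efficiency_mbps_per_watt(1100.0, 3.0)  # ~367 MB/s per W
fragmented = efficiency_mbps_per_watt(220.0, 3.0)   # ~73 MB/s per W
print(round(contiguous / fragmented, 1))  # 5.0: equal power, 5x lower efficiency
```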

The Crucial P1 reaches full sequential read speed of just under 2GB/s at QD8 or higher. While the Intel 660p has the advantage at low queue depths, the P1 ends up slightly faster at higher queue depths.

Looking at which part of the performance and power landscape the Crucial P1 occupies compared to all the other drives that have been subjected to our 2018 SSD tests, the P1 is very middle-of-the-road among NVMe drives for both performance and power consumption. There are drives with similar read performance profiles that require nearly 1W less, and high-performance drives that can deliver speeds more than 1GB/s faster than the P1 without using much more power than the P1 at its peak.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

On the burst sequential write test, the Crucial P1 is on par with many high-end NVMe SSDs, thanks to the high write performance of its SLC cache. This test is short enough that the P1 doesn't overflow the SLC cache even when it is at its minimum size due to the drive being full.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

Sustained 128kB Sequential Write

On the longer sequential write test, the Crucial P1 continues to perform well when the drive is mostly empty and the SLC cache is at its largest. When the drive is full, this test writes enough to overflow the cache and performance drops below that of mainstream SATA SSDs.

Sustained 128kB Sequential Write (Power Efficiency)
[Chart toggles: Power Efficiency in MB/s/W | Average Power in W]

Power consumption by the Crucial P1 during the sustained sequential write test is lower than average for NVMe SSDs, so the efficiency is good when the test hits only the SLC cache. When the drive is full and the SLC cache overflows, the P1's efficiency is significantly worse than almost all of the competition, though the performance per Watt is still several times what a hard drive can manage.

The Crucial P1 hits its full write speed at a queue depth of 2 or higher, when writing to the SLC cache. When the drive is full, the SLC cache overflows before the QD1 test phase is over, and performance bounces around a bit but stays generally quite low through each phase of the test. The Intel 660p is slightly faster when writing to its SLC cache, and performs more consistently when the cache is constantly overflowing.

Compared against all the other drives that have completed our 2018 SSD test suite, the sequential write performance and power consumption of the Crucial P1 are better than most other low-end NVMe drives (or drives that would be considered low-end if they were still on the market). But there are numerous high-end drives that vastly outperform the P1, and some of them use a bit less power in doing so.

Whole-Drive Fill

This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. From this, we can estimate the size of any SLC write cache, and get a sense for how much performance remains on the rare occasions where real-world usage keeps writing data after filling the cache.
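The cache-size estimate falls out of the per-segment throughput log: the first sustained drop in write speed marks the point where the SLC cache overflowed. A simple sketch of that detection, using a hypothetical throughput trace:

```python
def estimate_slc_cache_gb(segment_mbps, threshold_mbps):
    """GB written before per-segment write speed first drops below threshold.

    segment_mbps lists the average write speed of each 1GB segment in fill
    order; the first drop below the threshold marks the SLC cache overflowing.
    """
    for gb, speed in enumerate(segment_mbps):
        if speed < threshold_mbps:
            return gb
    return len(segment_mbps)

# Hypothetical trace: fast SLC-cached writes for 5 segments, then QLC speeds.
trace = [1700, 1690, 1705, 1695, 1680, 110, 105, 120, 95, 100]
print(estimate_slc_cache_gb(trace, 500))  # 5
```

A real trace is noisier than this (the P1's speed jumps back up whenever the drive frees cache mid-fill), so in practice the transition is read off the plotted curve rather than from a single threshold.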

The 1TB Crucial P1 manages about 155 GB of sequential writes before the SLC cache overflows and performance tanks. The drive does manage to free up some SLC cache on several occasions before the drive is completely full, so the write speed occasionally jumps back up. The Intel 660p only lasts for about 128 GB before its cache runs out, and while it does show some variability in write speed during the rest of the drive fill, it never gets all the way back up to the full SLC write speed.

Whole-Drive Fill (Average Throughput)
[Chart toggles: Average Throughput for Last 16 GB | Overall Average Throughput]

For extremely large sequential write operations that overflow the SLC cache, the Crucial P1 and Intel 660p average out to about the same speed as a 7200RPM hard drive. The fast writes to the SLC cache don't last long enough to bring the average up very far above the steady-state write speeds of about 100MB/s. High-end NVMe SSDs with modern 3D TLC NAND and 8-channel controllers can sustain write speeds that are about an order of magnitude higher after the SLC caches run out.
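The arithmetic behind that conclusion is a time-weighted average: the cached portion is over so quickly that it contributes almost nothing to the overall rate. A sketch with round hypothetical numbers in the same ballpark as the P1's results:

```python
# Hypothetical 1TB fill: ~155GB absorbed by the SLC cache at ~1700 MB/s,
# the remaining ~845GB written at a ~100 MB/s QLC steady state.
cache_gb, cache_mbps = 155, 1700
rest_gb, rest_mbps = 845, 100

total_s = cache_gb * 1024 / cache_mbps + rest_gb * 1024 / rest_mbps
overall_mbps = (cache_gb + rest_gb) * 1024 / total_s
print(round(overall_mbps))  # 117: the fast cache barely lifts the average
```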

Comments

  • Lolimaster - Friday, November 9, 2018 - link

    With worse specs in every respect, how is it going to be "faster"? Do any TLC SSDs beat the Samsung MLC ones? No.
  • Valantar - Thursday, November 8, 2018 - link

    What's the point of increasing performance when current top-level performance is already so high as to be nigh unnoticeable? The real-world difference between a good mid-range NVMe drive and a high end one is barely measurable in actual real-world workloads, let alone noticeable. Sure, improving random perf would be worthwhile, but that's not happening with flash any time soon. Increasing capacity per dollar while maintaining satisfactory performance is clearly a worthy goal. The only issue is that this, as with most drives at launch, is overpriced. It'll come down, though.
  • JoeyJoJo123 - Thursday, November 8, 2018 - link

    ^ This.

    For typical end users, even NVMe over SATA3 SSDs don't provide a noticeable difference in overall system performance. Moving to an SSD over an HDD for your OS install was a different story and a noticeable upgrade, but that kind of noticeable upgrade just isn't going to happen anymore.

    Typical end users aren't writing/reading so much off the drive that QLC presents a noticeable downgrade over TLC, or even MLC storage. Yes, right now QLC isn't cheap enough compared to existing TLC products, but we've already done this dance when TLC first arrived on the scene and people were stalwart about sticking to MLC drives only. Today? We got high-end NVMe TLC drives with better read/write and random IOPS performance compared to the best MLC SATA3 drives back when MLC was the superior technology.

    Yeah, it's going to take time for QLC to come down in price, the tech is newer and yields are lower, and companies are trying to fine tune the characteristics of their product stacks to make them both appealing in price and performance. Give it some time.
  • romrunning - Thursday, November 8, 2018 - link

    Sure, we lost endurance and speed with the switch from MLC to TLC. But the change from TLC to QLC is much worse in terms of latency, endurance, and just overall performance. Frankly, the sad part is that the drive needs the pseudo-SLC area to just barely meet the lowered expectations for QLC. Some of those QLC drives barely beat good SATA drives.

    We now have a new tech (3D Xpoint/Optane) that is demonstrably better for latency, consistency, endurance, and performance. I'd rather Micron continue to put the $ into it to get higher yields for both increased density/capacity & lower costs. That's what I want on the NVMe side, not another race to the bottom.
  • JoeyJoJo123 - Thursday, November 8, 2018 - link

    Sorry, you're not the end consumer that dictates how products get taped out, and honestly, if you were in charge of product management, you'd run the company into the ground focusing on making only premium priced storage drives in a market that's saturated with performance drives.

    The bulk of all SSD sales are for lower cost lower storage options. There is no "race to the bottom", it's just some jank you made up in your head to justify why companies are focusing on making products for the common man. Being able to move from an affordable 500GB SSD on TLC to a similarly priced 1TB SSD in a few years is a GOOD THING.

    If you want preemium(tm) quality products, SSDs with only the HIGHEST of endurance ratings for the massive Read/Write workloads you perform on your personal desktop on a day-to-day basis, SSDs with only the LOWEST of latencies so that you can load into Forknight(tm) faster than the other childerm, then how about you go buy enterprise storage products instead of whining in the comments section of a free news article. The products you want with the technology you need are out there. They're expensive because it's a niche market catered towards enterprise workloads where they can justify the buckets of money.

    You keep whining, I'll keep enjoying the larger storage capacities at cheaper prices so that I can eventually migrate my Home NAS to a completely solid state solution. Right now, getting even a cheap 1TB SSD for caching is super-slick.
  • romrunning - Friday, November 9, 2018 - link

    "...how about you go buy enterprise storage products instead of whining in the comments section of a free news article."

    You are taking this way too personally.

    I'm actually thinking more about the business side. I want 3D-Xpoint/Optane to get cheaper & get more capacity so that I can justify it for more than just some specific servers/use-cases. So I'd like Micron to focus more on developing that side than chasing the price train with QLC, which is inferior to what preceded it. With Micron buying out Intel's stake in IMFT for 3D-Xpoint, I just hope the product line diversification doesn't lessen the work to make 3D-Xpoint cheaper & even greater capacities.
  • JoeyJoJo123 - Friday, November 9, 2018 - link

    >You are taking this way too personally.

    Talk about projecting. Micron is taping out dozens of products across different product segments for all kinds of users. They're working on 3D-Xpoint and QLC stuff simultaneously and independently from each other. What's happening here is that Micron is producing QLC NAND for this Crucial M.2 SSD, and you're here taking it personally (and therefore whining in a free news article comments section) that Micron isn't focusing enough on 3D-Xpoint and that supposedly their QLC is bad for some reason. Thing is, this news article isn't for you. This technology isn't for you. You decided your tech needs are above what this product is aimed for: affordable, large volume SSDs for lower prices.

    Seriously, calm down. This wasn't an assault orchestrated by Micron against people that need/want higher performance storage options. More 3D-Xpoint stuff will come your way if that's the technology you're looking forward to. Again back to my main point, it's going to take some time for these newer technologies to roll out. Until then, don't whine in comments sections that X isn't the Y you were waiting for. If the article is about technology X, make a half-decent effort to keep to the topic of technology X.
  • mathew7 - Tuesday, November 13, 2018 - link

    "I'll keep enjoying the larger storage capacities at cheaper prices so that I can eventually migrate my Home NAS to a completely solid state solution."
    Wwwwwhhhhhhhaaaaaaaaaaattttt?? NEVER. You don't understand the SSD limits. I would not do that with SLC (assuming current quality at QLC price).
    Enterprises with SSD NASes only use them for short-term performance storage with hourly/daily backup. Anyone who uses them differently is asking for a disaster.
    Look for the linuxconf Intel SSD talk. There is a presentation where they explain how reading a cell damages nearby cells, and manufacturers need to monitor this and relocate data that is only ever read.
    I have 2 servers with only 1 SSD each for OS and 8-10TB HDDs for my actual long-term data.
    All my desktops/laptops have SSDs (Intel 320, Samsung 830-860 evo+pro, Crucial BX100/MX300 etc). But anything important on SSDs will be backed-up to HDDs.
  • Oxford Guy - Thursday, November 8, 2018 - link

    "That's what I want ... not another race to the bottom."

    That's what consumers want: value.

    That's not what companies want. They want the opposite. Their wish is to sell the least for the most.
  • Mikewind Dale - Thursday, November 8, 2018 - link

    "[Companies] want the opposite. Their wish is to sell the least for the most."

    Not true. Companies want to maximize net revenue, i.e. total revenue minus cost.

    Depending on the elasticity of demand (i.e. price sensitivity), that might mean increasing quantity and decreasing price.

    A reduction in quantity and an increase in price will increase net revenue only if demand is inelastic.

    But given the existence of HDDs, it makes sense that demand for SSDs is elastic, i.e. price-sensitive. These aren't captive consumers with zero choice.

    Of course, nothing stops a company from catering to BOTH markets, i.e. high performance AND low cost markets.
