Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is impossible. Each burst consists of a total of 32MB of 4kB random reads, from a 16GB span of the disk. The total data read is 1GB.
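The burst pattern is simple enough to sketch in code. The following hypothetical Python sketch (the real test suite uses dedicated benchmarking tools; the function name, file path handling, and duty-cycle arithmetic here are illustrative assumptions) issues 4kB reads one at a time at queue depth 1, then idles long enough to hold the 20% duty cycle:

```python
import os
import random
import time

def random_read_burst(path, span_bytes, burst_bytes, block=4096):
    """Issue `burst_bytes` of 4kB random reads one at a time (QD1)
    from the first `span_bytes` of the target, then idle so the burst
    accounts for 20% of the total cycle. Returns the busy time."""
    fd = os.open(path, os.O_RDONLY)
    try:
        t0 = time.perf_counter()
        for _ in range(burst_bytes // block):
            # Pick a random 4kB-aligned offset inside the test span.
            offset = random.randrange(span_bytes // block) * block
            os.pread(fd, block, offset)
        busy = time.perf_counter() - t0
    finally:
        os.close(fd)
    # Idle long enough that busy time is 20% of (busy + idle).
    time.sleep(busy * (1 - 0.20) / 0.20)
    return busy
```

In the article's configuration this would be called with a 16GB span and 32MB bursts, repeated until 1GB has been read in total.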

The Patriot Hellfire, in blue, is highlighted as an example of a last-generation Phison E7 drive. Although we didn't test it at the time, the MP500 was based on the same controller and memory.

Burst 4kB Random Read (Queue Depth 1)

The Corsair Force MP510 can't match the burst random read performance of a Silicon Motion controller paired with IMFT 64L 3D TLC, but the MP510 has the fastest random reads of any drive using Toshiba/SanDisk BiCS TLC, and it also beats the Samsung 970 EVO.

Our sustained random read performance is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.
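The sweep described above can be approximated with one thread per outstanding request. This is a rough sketch under stated assumptions, not the suite's actual harness: the function name and parameters are hypothetical, and real benchmarks bypass the page cache with direct I/O.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

def sweep_queue_depths(path, span_bytes, depths=(1, 2, 4),
                       seconds=60.0, cap_bytes=32 << 30,
                       idle_s=0.0, block=4096):
    """Measure 4kB random-read throughput at each queue depth, using
    one thread per outstanding request. Each depth runs for `seconds`
    or `cap_bytes` of data, whichever ends first; returns {qd: MB/s}."""
    results = {}
    fd = os.open(path, os.O_RDONLY)
    try:
        for qd in depths:
            deadline = time.perf_counter() + seconds
            per_worker_cap = cap_bytes // qd

            def worker(_):
                done = 0
                while time.perf_counter() < deadline and done < per_worker_cap:
                    off = random.randrange(span_bytes // block) * block
                    os.pread(fd, block, off)
                    done += block
                return done

            t0 = time.perf_counter()
            with ThreadPoolExecutor(max_workers=qd) as pool:
                total = sum(pool.map(worker, range(qd)))
            results[qd] = total / (time.perf_counter() - t0) / 1e6
            # Cool-off between depths (up to one minute in the real test).
            time.sleep(idle_s)
    finally:
        os.close(fd)
    return results
```

The primary score reported in the charts would then be the mean of the QD1, QD2, and QD4 entries of the returned dictionary.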

Sustained 4kB Random Read

On the longer random read test that adds some higher queue depths, the MP510's performance standing falls somewhat as the Samsung 970 EVO and a few other drives with BiCS TLC overtake it, while Silicon Motion drives retain a commanding lead.

Sustained 4kB Random Read (Power Efficiency)
[Chart: power efficiency in MB/s/W and average power in W]

The power efficiency of the MP510 during random reads is reasonable but is about 15% worse than what WD and Toshiba can do using their own controllers with this NAND.

The MP510 may not provide the best random read performance at low queue depths, but its performance scales up nicely as the queue depth grows. By QD32 it is delivering over 800MB/s while drawing a little more than 3W, with no sign of approaching a performance ceiling; Phison's plans for enterprise drives based on this controller seem to have merit.
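For reference, the efficiency metric in these charts is simply throughput divided by power, so the MP510's QD32 figures quoted above work out to roughly:

```python
def efficiency_mb_s_per_w(throughput_mb_s: float, power_w: float) -> float:
    """Power efficiency as charted: megabytes per second per watt."""
    return throughput_mb_s / power_w

# The MP510's approximate QD32 figures: ~800 MB/s at a little over 3 W.
print(round(efficiency_mb_s_per_w(800, 3.0), 1))  # ~266.7 MB/s per watt
```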

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.
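A single write burst mirrors the read-burst sketch earlier, with writes substituted for reads. Again this is an illustrative sketch (hypothetical function and parameters; the real suite writes directly to the device rather than through a filesystem and page cache):

```python
import os
import random

def random_write_burst(path, span_bytes, burst_bytes=4 << 20, block=4096):
    """One QD1 burst of 4kB random writes across the first `span_bytes`
    of the target; the full test repeats this, with idle time between
    bursts, until 128MB have been written in total."""
    fd = os.open(path, os.O_WRONLY)
    buf = os.urandom(block)
    try:
        for _ in range(burst_bytes // block):
            off = random.randrange(span_bytes // block) * block
            os.pwrite(fd, buf, off)
    finally:
        os.close(fd)
```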

Burst 4kB Random Write (Queue Depth 1)

The Corsair Force MP510 is essentially tied for first place in burst random write performance, thanks to the very low latency of its SLC write cache.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive and giving the drive up to 1 minute of idle time between queue depths to allow for write caches to be flushed and for the drive to cool down.

Sustained 4kB Random Write

On the longer random write test, the Corsair MP510 loses the lead but stays in the top tier of high-performing drives.

Sustained 4kB Random Write (Power Efficiency)
[Chart: power efficiency in MB/s/W and average power in W]

The power efficiency of the Corsair Force MP510 during random writes is a bit better than average for NVMe drives, but significantly worse than what Toshiba and WD can do by pairing the same NAND with their own controllers. The WD Black manages this substantial efficiency advantage while also slightly outperforming the MP510.

Like many of its closest competitors, the Corsair Force MP510's random write speed is saturated by QD4, but it plateaus well below the limit of the WD Black, which doesn't require any more power than the MP510.

Comments
  • leexgx - Thursday, October 18, 2018 - link

    Would be nice if they did a review of it, as an unreliable source that did review it (Tom's Hardware) seems to find the P1 is only a little faster than an MX500 (yes, the P1 is an NVMe SSD, but it seems that's only good for sequential tests)
  • yoyomah20 - Thursday, October 18, 2018 - link

    I've been waiting for this review to come out. I'm excited about what Corsair has put out; it seems like a pretty good competitor to the 970 EVO and WD Black at a cheaper price point. I've been waiting for a power-efficient NVMe drive to replace my laptop's stock 128GB SATA M.2 drive and I think that this is the one! Too bad it's not available anywhere yet...
  • G3TG0T - Thursday, October 18, 2018 - link

    Somehow the price SHOT up by double...
  • G3TG0T - Thursday, October 18, 2018 - link

    Who would buy that for double the price when you could get an EVO 970??!
  • lilmoe - Thursday, October 18, 2018 - link

    Damn Amazon and their sketchy crap. Go to Newegg; the price is only up about 10% there.
  • Lolimaster - Thursday, October 18, 2018 - link

    The other thing is using Office 365 Home, 6TB for $99 a year.
  • shabby - Thursday, October 18, 2018 - link

    Would be nice if all sizes were tested and not just the fastest; you guys should tell OEMs to send you all the sizes to test.
  • leexgx - Thursday, October 18, 2018 - link

    I could imagine that would take some time to test them, as I would guess Billy/the reviewer runs the tests at least 2-3 times to make sure the results are consistent (haven't looked at the article yet, but I guess it was the 1TB one they reviewed)
  • WatcherCK - Thursday, October 18, 2018 - link

    Do OSS NAS solutions (OMV/FreeNAS/Ubuntu+ZOL...) support fast/slow storage tiers transparently? I guess this would look like monolithic storage with the OS caching higher-use files behind the scenes... hmmm, how hard would it be to have a hybrid drive that makes use of TLC/QLC (not in a fast caching scenario, but say 512GB of TLC and 4/6/8TB of QLC in one enclosure, with a controller that can present both storage arrays transparently to the OS — an SSD-only version of a Fusion Drive, for example)?

    And agree with other posters about capacity, once 96 layer becomes ubiquitous then SSDs should be able to reach parity with mechanical HDD in terms of density and price as far as non enterprise users are concerned...
  • Wolfclaw - Friday, October 19, 2018 - link

    Not fussed about top-end speed; cheap mass storage in RAID or Microsoft Storage that wipes the floor with HDDs and can saturate a SATA3 interface is more than enough for me.
