Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is impossible. Each burst consists of a total of 32MB of 4kB random reads issued from a 16GB span of the drive, and the total data read is 1GB.
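
To make the access pattern concrete, here is a minimal sketch of what such a QD1 burst workload might look like, written in Python for Linux. This is not our actual test harness: the device path is a placeholder, and the duty-cycle timing is simplified.

    import mmap
    import os
    import random
    import time

    DEV = "/dev/sdX"            # placeholder: the drive under test
    BLOCK = 4096                # 4kB reads
    SPAN = 16 * 1024**3         # reads fall within a 16GB span
    BURST = 32 * 1024**2        # 32MB of reads per burst
    TOTAL = 1024**3             # 1GB of data read overall (32 bursts)

    # O_DIRECT bypasses the page cache so the drive itself is measured;
    # it requires an aligned buffer, which an anonymous mmap provides.
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)

    done = 0
    while done < TOTAL:
        start = time.monotonic()
        for _ in range(BURST // BLOCK):                  # 8192 reads per burst
            offset = random.randrange(SPAN // BLOCK) * BLOCK
            os.preadv(fd, [buf], offset)                 # QD1: one I/O at a time
        busy = time.monotonic() - start
        time.sleep(busy * 4)    # idle 4x the busy time for a 20% duty cycle
        done += BURST

    os.close(fd)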

Burst 4kB Random Read (Queue Depth 1)

The Crucial MX500 delivers remarkable QD1 burst random read performance: it is much faster than any other SATA drive we've tested, and competitive with many NVMe SSDs, even those using MLC NAND. This is a 60% improvement over the MX300.

Our sustained random read performance test is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2, and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.
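
One way to approximate the queue-depth sweep is to keep N reads in flight with N threads; a real harness would use asynchronous I/O, but the load presented to the drive is similar. A rough sketch under that assumption, again with a placeholder device path:

    import mmap
    import os
    import random
    import statistics
    import threading
    import time

    DEV = "/dev/sdX"             # placeholder: the drive under test
    BLOCK = 4096
    SPAN = 64 * 1024**3          # sustained test covers a 64GB span
    LIMIT_BYTES = 32 * 1024**3   # per queue depth: 32GB or one minute
    LIMIT_SECS = 60

    def worker(fd, stop, counter, lock):
        # Each thread keeps one read outstanding, so N threads ~ queue depth N.
        buf = mmap.mmap(-1, BLOCK)
        while not stop.is_set():
            off = random.randrange(SPAN // BLOCK) * BLOCK
            os.preadv(fd, [buf], off)
            with lock:
                counter[0] += BLOCK
                if counter[0] >= LIMIT_BYTES:
                    stop.set()

    results = {}
    for qd in (1, 2, 4, 8, 16, 32):
        fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
        stop, lock, counter = threading.Event(), threading.Lock(), [0]
        threads = [threading.Thread(target=worker, args=(fd, stop, counter, lock))
                   for _ in range(qd)]
        start = time.monotonic()
        for t in threads:
            t.start()
        stop.wait(LIMIT_SECS)    # stop after one minute if 32GB wasn't reached first
        stop.set()
        for t in threads:
            t.join()
        results[qd] = counter[0] / (time.monotonic() - start) / 1024**2   # MB/s
        os.close(fd)
        time.sleep(60)           # cool-off so heat doesn't skew higher queue depths

    # The primary score averages the low queue depths.
    print("score: %.1f MB/s" % statistics.mean(results[q] for q in (1, 2, 4)))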

Sustained 4kB Random Read

When higher queue depths come into play, the sustained random read speed of the Crucial MX500 is merely average, but it is still a big improvement over the MX300.

Sustained 4kB Random Read (Power Efficiency)

The power efficiency of the Crucial MX500 during random reads is a bit of a disappointment relative to the Intel 545s, which uses Intel's similar 64L 3D TLC NAND and a slightly updated Silicon Motion controller. The 545s may have a slight advantage because our sample is just a 512GB model, but the Crucial MX500's efficiency score is also slightly worse than that of the drives using Toshiba/SanDisk 64L 3D TLC. Compared to older drives, the efficiency of the MX500 looks great.

The low queue depth performance of the Crucial MX500 is great, and at higher queue depths only the Samsung drives and the SanDisk Ultra 3D/WD Blue 3D hold a substantial lead over the MX500. At QD32, the MX500 catches up and ties for first place with the Samsung drives. Through QD8, the Intel 545s offers very similar performance at lower power.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.
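
The write bursts could be modeled with the same skeleton as the read-burst sketch above; only the sizes and the I/O direction change. Again a rough Python sketch with a placeholder device path; note that it would destructively overwrite whatever is on that device:

    import mmap
    import os
    import random
    import time

    DEV = "/dev/sdX"            # placeholder -- WARNING: destroys data on DEV
    BLOCK = 4096
    SPAN = 16 * 1024**3         # writes land within a 16GB span
    BURST = 4 * 1024**2         # 4MB per burst (1024 writes)
    TOTAL = 128 * 1024**2       # 128MB written overall

    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)
    buf.write(os.urandom(BLOCK))            # incompressible 4kB payload

    done = 0
    while done < TOTAL:
        start = time.monotonic()
        for _ in range(BURST // BLOCK):
            off = random.randrange(SPAN // BLOCK) * BLOCK
            os.pwritev(fd, [buf], off)      # QD1: writes issued one at a time
        busy = time.monotonic() - start
        time.sleep(busy * 4)                # same idle ratio as the read bursts
        done += BURST

    os.close(fd)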

Burst 4kB Random Write (Queue Depth 1)

The Crucial MX500 sets another record with its burst QD1 random write performance, but the margin isn't quite as wide as for the random read performance. The other mainstream 3D TLC drives form a clear second tier of performance that is about 15% slower.

As with the sustained random read test, our sustained 4kB random write test runs each queue depth for up to one minute or 32GB, covers a 64GB span of the drive, and gives the drive up to one minute of idle time between queue depths to allow write caches to be flushed and the drive to cool down.

Sustained 4kB Random Write

The sustained random write performance of the Crucial MX500 is a bit of a regression compared to the Crucial MX300, but still above average and better than the other 64L 3D TLC drives.

Sustained 4kB Random Write (Power Efficiency)

The Crucial MX500's power efficiency during random writes is second only to the MX300, and substantially better than any other mainstream SATA SSD.

Compared to its predecessor, the MX500's random write performance doesn't increase as quickly with higher queue depths, but the MX300 saturates at QD4, allowing the MX500 to catch up and surpass it beyond QD8.

The Samsung 850 PRO and EVO both saturate at about the same level of performance, but they reach that level with much lower queue depths. They also require far more power across the entire range of queue depths.

Comments

  • mode_13h - Tuesday, December 19, 2017

    Actually, you don't need it to upgrade the firmware. But it also does things like over-provisioning and secure-erase. And it can tell you roughly how much write endurance is remaining.
  • ddrіver - Tuesday, December 19, 2017

    Overprovisioning, that very advanced function that translates to shrinking a partition and leaving some free space somewhere at the end of the drive using nothing but Windows tools... There is no killer function in these SSD tools. They make FW updates a lot easier for regular people. I don't want to make boot disks and type in my updates. Just click-click from the GUI and I'm done. Maybe something like Samsung's RAPID needs the software, but other than that you can do it yourself.
  • jabber - Tuesday, December 19, 2017

    Yeah I just use Disk Management or Partition Wizard to leave 2-5GB (depending on size) free on all my SSDs.
  • mode_13h - Wednesday, December 20, 2017

    Good luck with that. First, it's not much (I usually OP by anywhere from 6.25% to 25% - if you search SSD reviews on this site, you'll see performance benefits even up to 25%).

    Second, it definitely won't work for all vendors. The SSD firmware needs to know that the unpartitioned space doesn't hold user data, and they don't all derive this information by simply looking for a partition table and parsing that, as you seem to believe.
  • ddrіver - Thursday, December 21, 2017

    Any link for that? The point of overprovisioning is to have space to swap data and do internal reorganization even when the drive is full, for wear leveling. Since most drives support TRIM and you can trigger it manually, it's impossible to assume there will be data there. It's as if the SSD should stop wear leveling because I might have some data in that free space it's using anyway.

    The ONLY difference between normal free space and OP is that OP will be there even when your regular partition is full. Wear leveling and data reorganization work even with 0 OP, when there actually is user data everywhere. It just takes longer.

    The second you create an empty chunk of disk space it will be TRIMmed anyway and it becomes really free space. It can even be a partition that you never write to. Windows will TRIM it regardless and the SSD will know the LBAs are empty. No flag needed. But I'd love to see some docs from recent times to say otherwise.
  • mode_13h - Friday, December 22, 2017

    I don't know why you think anyone is TRIMming unpartitioned space, but it's a bad assumption.
  • mode_13h - Wednesday, December 20, 2017

    Their documentation seems to suggest it writes something to flag the unpartitioned space as usable for over-provisioning. I don't know how you can easily prove that simply leaving unpartitioned space is equivalent.

    With certain other vendors, I've seen official statements that simply leaving unpartitioned space is not sufficient for overprovisioning.
  • Wolfpup - Tuesday, December 19, 2017

    Dumb question, but what do you need to use it for? I've never used it, that I can remember, and I've got a couple of Crucial drives. I don't have Java on any of my personal systems either.
  • mikato - Friday, January 19, 2018

    Agree. I don’t see how Java is a problem.
  • DanNeely - Tuesday, December 19, 2017

    *shrug*

    While Java apps tend to be clunky to use, usability isn't a top priority for a firmware updater, and Java is an easy way to create an app with the needed low-level system access that will run on almost any OS.
