Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
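To make the workload concrete, here is a minimal sketch of one way to approximate it: eight bursts of 128MB issued as sequential 128kB reads at QD1, with enough idle time after each burst to hold the duty cycle at 20%. This is an illustration rather than the actual test harness; the device path is a placeholder, and plain buffered Python reads stand in for the direct, unbuffered I/O a real storage benchmark would normally use.

```python
import os
import time

BURST_BYTES = 128 * 1024 * 1024   # 128MB per burst
BLOCK_BYTES = 128 * 1024          # 128kB operations, queue depth 1
NUM_BURSTS = 8                    # 8 bursts = 1GB transferred in total
DUTY_CYCLE = 0.20                 # idle between bursts keeps the duty cycle at 20%

def read_burst(path, offset):
    """Read one 128MB burst as sequential 128kB operations; return (MB/s, seconds)."""
    fd = os.open(path, os.O_RDONLY)   # the real test uses unbuffered, direct I/O
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        done = 0
        start = time.perf_counter()
        while done < BURST_BYTES:
            data = os.read(fd, BLOCK_BYTES)
            if not data:
                break
            done += len(data)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return (done / 1e6) / elapsed, elapsed

def run(path="/dev/sdX"):             # hypothetical device under test (needs root)
    speeds = []
    for i in range(NUM_BURSTS):
        mbps, busy = read_burst(path, i * BURST_BYTES)
        speeds.append(mbps)
        # busy time is 20% of each period, so idle for the remaining 80%
        time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)
    print(f"average burst speed: {sum(speeds) / len(speeds):.1f} MB/s")
```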

Burst 128kB Sequential Read (Queue Depth 1)

The Crucial MX500 turns in another record-setting burst performance score with its sequential read results, but by the smallest margin yet. It's about 6% faster on this test than most mainstream SATA SSDs.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
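The distinction between what is tested (QD1 through QD32) and what is reported matters when comparing numbers: the headline figure averages only the three lowest queue depths. A trivial sketch of that calculation, using placeholder values rather than measured results:

```python
# Placeholder per-queue-depth results in MB/s (not measured data); the reported
# score averages only QD1, QD2 and QD4 even though QD8-QD32 are also run.
results_mbps = {1: 430.0, 2: 520.0, 4: 555.0, 8: 560.0, 16: 560.0, 32: 560.0}

score = sum(results_mbps[qd] for qd in (1, 2, 4)) / 3
print(f"sustained sequential read score: {score:.1f} MB/s")
```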

Sustained 128kB Sequential Read

The sustained sequential read performance of the Crucial MX500 puts it in second place, behind the Samsung 850 PRO and 850 EVO. The MX500 does have a slight advantage over the other 64L 3D TLC drives, and improves on the MX300's performance by an impressive 100 MB/s.

Sustained 128kB Sequential Read (Power Efficiency)

The power efficiency of the Crucial MX500 during sequential reads is mediocre and clearly worse than the other 64L 3D TLC drives, including the Intel 545s. The MX500 is slightly more efficient than the Samsung 850 PRO and EVO, and scores 21% higher than the MX300.

The Samsung 850 PRO and EVO are the only drives that continuously saturate the SATA bus from QD2 onwards with no drops in performance. The MX500's performance is reasonably steady but does drop a bit as the test wears on.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.
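The burst-read sketch shown earlier carries over here with the inner loop writing instead of reading; a minimal variant of that loop (again using plain buffered Python I/O as a stand-in for a real benchmark engine) might look like this:

```python
import os

BLOCK_BYTES = 128 * 1024          # 128kB operations at QD1
BURST_BYTES = 128 * 1024 * 1024   # 128MB per burst

def write_burst(fd, offset):
    """Write one 128MB burst as sequential 128kB operations."""
    os.lseek(fd, offset, os.SEEK_SET)
    block = b"\0" * BLOCK_BYTES
    for _ in range(BURST_BYTES // BLOCK_BYTES):
        os.write(fd, block)
    os.fsync(fd)                  # make sure the data actually reaches the drive
```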

Burst 128kB Sequential Write (Queue Depth 1)

For once, the burst performance of the Crucial MX500 doesn't set a record. Its QD1 sequential write speed is only second-fastest, about 3% slower than the Samsung 850 PRO.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
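A rough outline of that queue-depth sweep, with the per-step limits and the idle window between steps, is sketched below. The io_at_queue_depth callback is a hypothetical stand-in for an asynchronous I/O engine (issuing queued writes is beyond a short example), and the scoring mirrors the read test's average of QD1, QD2 and QD4.

```python
import time

QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32]
MAX_SECONDS = 60                 # up to one minute per queue depth...
MAX_BYTES = 32 * 1024**3         # ...or 32GB transferred, whichever comes first
IDLE_SECONDS = 60                # idle window for cooling and garbage collection

def run_sweep(io_at_queue_depth):
    """io_at_queue_depth(qd, max_bytes, max_seconds) stands in for a real
    asynchronous I/O engine and should return (bytes_done, elapsed_seconds)."""
    results = {}
    for qd in QUEUE_DEPTHS:
        done, elapsed = io_at_queue_depth(qd, MAX_BYTES, MAX_SECONDS)
        results[qd] = (done / 1e6) / elapsed    # MB/s at this queue depth
        time.sleep(IDLE_SECONDS)                # cool off between queue depths
    # headline score: average of the low queue depths, as with the read test
    score = sum(results[qd] for qd in (1, 2, 4)) / 3
    return results, score
```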

Sustained 128kB Sequential Write

The sustained sequential write speed of the MX500 is the same as the MX300, putting them in the second tier of performance behind the Samsung 850 PRO and EVO and the MLC-based Patriot Ignite.

Sustained 128kB Sequential Write (Power Efficiency)

The Crucial MX500 scores well on power efficiency during sequential writes, though not quite as well as the MX300. The OCZ VX500 with its reduced DRAM cache holds on to first place, and the Toshiba TR200 (entirely DRAMless) comes in right behind the MX500. The mainstream 3D TLC drives are all much less efficient.

Only a few drives offer a higher sustained sequential write speed than the MX500 after reaching saturation. The Samsung drives and the Patriot Ignite are faster at all queue depths but much less power efficient than the MX500, while the OCZ VX500 stumbles at QD2 before saturating with slightly higher throughput and substantially lower power consumption. The MX300 is a bit faster than the MX500 at QD2 but slightly slower at higher queue depths.

Comments
  • mode_13h - Tuesday, December 19, 2017 - link

    Actually, you don't need it to upgrade the firmware. But it also does things like over-provisioning and secure-erase. And it can tell you roughly how much write endurance is remaining.
  • ddrіver - Tuesday, December 19, 2017 - link

    Overprovisioning, that very advanced function that translates to shrinking a partition and leaving some free space somewhere at the end of the drive using nothing but Windows tools... There is no killer function in these SSD tools. They make FW updates a lot easier for regular people. I don't want to make boot disks and type in my updates. Just click click from the GUI and I'm done. Maybe something like Samsung's RAPID needs the software but other than that you can do it yourself.
  • jabber - Tuesday, December 19, 2017 - link

    Yeah I just use Disk Management or Partition Wizard to leave 2-5GB (depending on size) free on all my SSDs.
  • mode_13h - Wednesday, December 20, 2017 - link

    Good luck with that. First, it's not much (I usually OP by anywhere from 6.25% to 25% - if you search SSD reviews on this site, you'll see performance benefits even up to 25%).

    Second, it definitely won't work for all vendors. The SSD firmware needs to know that the unpartitioned space doesn't hold user data, and they don't all derive this information by simply looking for a partition table and parsing that, as you seem to believe.
  • ddrіver - Thursday, December 21, 2017 - link

    Any link for that? The point of overprovisioning is to have space to swap data and do internal reorganization even when the drive is full, for wear leveling. Since most drives support TRIM and you can trigger it manually, it's impossible to assume there will be data there. It's like the SSD should stop with the wear leveling because I might have some data in that free space it's using anyway.

    The ONLY difference between normal free space and OP is that OP will be there even when your regular partition is full. Wear leveling and data reorganization work even with 0 OP when there actually is user data everywhere. It just takes longer.

    The second you create an empty chunk of disk space it will be TRIMmed anyway and it becomes really free space. It can even be a partition that you never write to. Windows will TRIM it regardless and the SSD will know the LBAs are empty. No flag needed. But I'd love to see some docs from recent times to say otherwise.
  • mode_13h - Friday, December 22, 2017 - link

    I don't know why you think anyone is TRIMming unpartitioned space, but it's a bad assumption.
  • mode_13h - Wednesday, December 20, 2017 - link

    Their documentation seems to suggest it writes something to flag the unpartitioned space as usable for over-provisioning. I don't know how you can easily prove that simply leaving unpartitioned space is equivalent.

    With certain other vendors, I've seen official statements that simply leaving unpartitioned space is not sufficient for overprovisioning.
  • Wolfpup - Tuesday, December 19, 2017 - link

    Dumb question, but what do you need to use it for? I've never used it, that I can remember, and I've got a couple of Crucial drives. I don't have Java on any of my personal systems either.
  • mikato - Friday, January 19, 2018 - link

    Agree. I don’t see how Java is a problem.
  • DanNeely - Tuesday, December 19, 2017 - link

    *shrug*

    While Java apps tend to be clunky to use, for a firmware updater usability isn't a top priority; and Java is an easy way to create an app with the needed low level system access that will run on almost any OS.
