AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day-to-day usage, and it is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice: once on a freshly erased drive and once after filling the drive with sequential writes.

ATSB - Heavy (Data Rate)

The Crucial MX500 turns in the best average data rate score on the Heavy test that we've seen from a 1TB-class SATA drive. When the test is run on a full drive, the MX500's performance falls below that of the Samsung 850 PRO and 850 EVO.

ATSB - Heavy (Average Latency)
ATSB - Heavy (99th Percentile Latency)

The average and 99th percentile scores of the Crucial MX500 are typical for a good SATA SSD. The MX500 doesn't set any records here, but at least the latency doesn't climb out of control when the test is run on a full drive. This is a notable improvement over the MX300.

ATSB - Heavy (Average Read Latency)
ATSB - Heavy (Average Write Latency)

The average read latency of the Crucial MX500 is fast by SATA standards, but it doesn't quite match the Samsung 850 PRO. When the test is run on a full drive, the average read latency suffers and the MX500's score is merely average for a mainstream 1TB drive. The average write latency is slightly below average in both cases, but not to a degree worthy of concern.

ATSB - Heavy (99th Percentile Read Latency)
ATSB - Heavy (99th Percentile Write Latency)

The 99th percentile read latency of the MX500 falls in the middle of the pack, though running the test on a full drive has a bit more of an impact than for most drives. The 99th percentile write latency is reasonably low whether or not the test is run on a full drive.

ATSB - Heavy (Power)

The power consumption of the Crucial MX500 on the Heavy test is significantly higher than the record-setting MX300, but the MX500 certainly doesn't qualify as power-hungry compared to the broader field of competitors. The Samsung 850 PRO and EVO drives require much more power on this test than the MX500.

Comments

  • mode_13h - Tuesday, December 19, 2017 - link

    Actually, you don't need it to upgrade the firmware. But it also does things like over-provisioning and secure-erase. And it can tell you roughly how much write endurance is remaining.
  • ddrіver - Tuesday, December 19, 2017 - link

    Overprovisioning, that very advanced function that translates to shrinking a partition and leaving some free space somewhere at the end of the drive using nothing but Windows tools... There is no killer function in these SSD tools. They make FW updates a lot easier for regular people. I don't want to make boot disks and type in my updates. Just click click from the GUI and I'm done. Maybe something like Samsung's RAPID needs the software but other than that you can do it yourself.
  • jabber - Tuesday, December 19, 2017 - link

    Yeah I just use Disk Management or Partition Wizard to leave 2-5GB (depending on size) free on all my SSDs.
  • mode_13h - Wednesday, December 20, 2017 - link

    Good luck with that. First, it's not much (I usually OP by anywhere from 6.25% to 25% - if you search SSD reviews on this site, you'll see performance benefits even up to 25%; a rough sizing sketch follows the comment thread).

    Second, it definitely won't work for all vendors. The SSD firmware needs to know that the unpartitioned space doesn't hold user data, and they don't all derive this information by simply looking for a partition table and parsing that, as you seem to believe.
  • ddrіver - Thursday, December 21, 2017 - link

    Any link for that? The point of overprovisioning is to have space to swap data and do internal reorganization even when the drive is full, for wear leveling. Since most drives support TRIM and you can trigger it manually (a sketch of this follows the comment thread), it's impossible to assume there will be data there. It's like the SSD should stop with the wear leveling because I might have some data in that free space it's using anyway.

    The ONLY difference between normal free space and OP is that OP will be there even when your regular partition is full. Wear leveling and data reorganization work even with 0 OP when there actually is user data everywhere. It just takes longer.

    The second you create an empty chunk of disk space it will be TRIMmed anyway and it becomes really free space. It can even be a partition that you never write to. Windows will TRIM it regardless and the SSD will know the LBAs are empty. No flag needed. But I'd love to see some docs from recent times to say otherwise.
  • mode_13h - Friday, December 22, 2017 - link

    I don't know why you think anyone is TRIMming unpartitioned space, but it's a bad assumption.
  • mode_13h - Wednesday, December 20, 2017 - link

    Their documentation seems to suggest it writes something to flag the unpartitioned space as usable for over-provisioning. I don't know how you can easily prove that simply leaving unpartitioned space is equivalent.

    With certain other vendors, I've seen official statements that simply leaving unpartitioned space is not sufficient for overprovisioning.
  • Wolfpup - Tuesday, December 19, 2017 - link

    Dumb question, but what do you need to use it for? I've never used it, that I can remember, and I've got a couple of Crucial drives. I don't have Java on any of my personal systems either.
  • mikato - Friday, January 19, 2018 - link

    Agree. I don’t see how Java is a problem.
  • DanNeely - Tuesday, December 19, 2017 - link

    *shrug*

    While Java apps tend to be clunky to use, usability isn't a top priority for a firmware updater, and Java is an easy way to create an app with the needed low-level system access that will run on almost any OS.
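
Coming back to the over-provisioning sizing discussed earlier in the thread: the numbers are easy to sanity-check. Below is a minimal sketch in Python, assuming a made-up advertised 1TB capacity and a made-up diskpart volume number, that works out how much space the 6.25%-25% range corresponds to and prints the diskpart shrink commands that would leave that space unpartitioned. It illustrates the partition-shrinking approach described in the comments, not Crucial's Storage Executive.

    # Rough over-provisioning sizing sketch (illustrative only).
    # DRIVE_BYTES and VOLUME are example values, not tied to any particular system.
    DRIVE_BYTES = 1_000_204_886_016   # an advertised "1TB" drive, in bytes
    VOLUME = 2                        # diskpart volume number of the last partition

    def shrink_mb(drive_bytes: int, op_fraction: float) -> int:
        """Megabytes to leave unpartitioned for a given over-provisioning fraction."""
        return int(drive_bytes * op_fraction / 1_000_000)

    for frac in (0.0625, 0.25):       # the 6.25%-25% range from the thread
        mb = shrink_mb(DRIVE_BYTES, frac)
        print(f"{frac:.2%} OP on a 1TB drive is roughly {mb:,} MB left unpartitioned")
        print(f"  diskpart> select volume {VOLUME}")
        print(f"  diskpart> shrink desired={mb}")

Whether the drive's firmware actually treats that unpartitioned space as extra over-provisioning is exactly the point disputed above.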

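On the TRIM side of the argument, the relevant commands are standard Windows tools. Here is a minimal sketch, assuming Windows, administrator rights, and an example drive letter D:, that checks whether delete notifications (TRIM) are enabled and then asks the OS to retrim a volume's free space. Note that both commands operate on a mounted volume, which is part of why unpartitioned space is a separate question.

    # TRIM check / manual retrim sketch (Windows only; run from an elevated prompt).
    # The drive letter D: is an example value.
    import subprocess

    # "DisableDeleteNotify = 0" in the output means TRIM commands are being issued.
    query = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=False)
    print(query.stdout.strip())

    # Ask Windows to retrim free space on the D: volume, i.e. the
    # "trigger it manually" step mentioned in the thread.
    retrim = subprocess.run(
        ["defrag", "D:", "/L"],
        capture_output=True, text=True, check=False)
    print(retrim.stdout.strip())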