Performance Consistency

We've been looking at performance consistency since the Intel SSD DC S3700 review in late 2012, and it has become one of the cornerstones of our SSD reviews. Back then, many SSD vendors focused only on high peak performance, which unfortunately came at the cost of sustained performance. In other words, the drives would push high IOPS in certain synthetic scenarios to produce nice marketing numbers, but as soon as you pushed the drive for more than a few minutes you could easily run into hiccups caused by poor performance consistency.

Once we started exploring IO consistency, nearly all SSD manufacturers made a move to improve it, and for the 2015 suite I haven't made any significant changes to the methodology we use to test IO consistency. The biggest change is the move from VDBench to Iometer 1.1.0 as the benchmarking software, and I've also extended the test from 2,000 seconds to a full hour to ensure that every drive reaches steady-state during the test.
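Iometer is configured through its GUI rather than a script, so the exact setup can't be shown as a command line, but readers who want to approximate the workload on Linux can sketch it with fio. The job file below is an illustration, not the review's actual configuration: the queue depth of 32 and the device path are assumptions, and fio is a stand-in for Iometer.

```ini
[global]
ioengine=libaio   ; asynchronous IO on Linux
direct=1          ; bypass the page cache so the drive itself is measured
rw=randwrite      ; 4KB random writes, the consistency-test workload
bs=4k
iodepth=32        ; queue depth is an assumption, not stated in the article
time_based
runtime=3600      ; one hour, matching the extended test length

[steady-state-test]
filename=/dev/sdX ; placeholder for the target drive
```

With `log_avg_msec=1000` and `write_iops_log` added, fio will also emit the per-second IOPS log needed to reproduce the graphs.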

For better readability, I now provide bar graphs: the first shows the average IOPS over the last 400 seconds of the test, and the second shows that average divided by the standard deviation over the same period. Average IOPS provides a quick look at overall performance, but it can easily hide bad consistency, so looking at the standard deviation is necessary for a complete picture.
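Given a per-second IOPS log, the two reported numbers are straightforward to compute. The sketch below (function name and sample data are my own, for illustration) produces the average of the final 400 seconds and the average-to-standard-deviation ratio, where a higher ratio means more consistent performance:

```python
import statistics

def consistency_metrics(iops_log, window=400):
    """Summarize a per-second IOPS log the way the bar graphs do:
    mean IOPS over the final `window` seconds, and that mean divided
    by the standard deviation over the same window."""
    tail = iops_log[-window:]
    mean = statistics.mean(tail)
    stdev = statistics.pstdev(tail)  # population stdev of the tail window
    return mean, (mean / stdev if stdev else float("inf"))

# Hypothetical drive that settles around 25,000 IOPS with mild jitter
log = [25000 + (-500 if i % 2 else 500) for i in range(3600)]
avg, ratio = consistency_metrics(log)
print(round(avg), round(ratio, 1))  # prints: 25000 50.0
```

Two drives can share the same average while one swings wildly second to second; the ratio is what separates them.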

I'm still providing the same scatter graphs too, of course. However, I decided to drop the logarithmic graphs and go linear-only, since logarithmic scales visually compress differences between drives and can be hard to interpret for those who aren't familiar with them. I provide two graphs: one covering the whole duration of the test, and another focusing on the last 400 seconds for a better look at steady-state performance.

Steady-State 4KB Random Write Performance

It looks like Crucial has finally taken steps to improve steady-state performance, although the additional over-provisioning deserves part of the credit for the increase. One criticism I have always had of Crucial's SSDs is their relatively poor steady-state performance, but the MX200 finally brings performance closer to that of other high-end drives.

Steady-State 4KB Random Write Consistency

The consistency is very good as well, and far better than what the BX100 offers.

Crucial MX200 250GB

The IO consistency behaves differently from the MX100's, and the graph resembles the 850 EVO and Pro quite a bit: performance drops quickly, then slowly climbs before evening out. The 1TB model is an exception, though, as the firmware seems unable to handle such a large capacity properly, resulting in worse performance and considerably higher variation. Unfortunately, the MX200 wouldn't respond to the hdparm command that I use for over-provisioning testing, so I don't have any results with added over-provisioning at this point.

62 Comments

  • KAlmquist - Saturday, May 23, 2015 - link

    I'm hoping that once other companies get 3D NAND into production we will see some interesting competition for Samsung.
  • austinsguitar - Friday, May 22, 2015 - link

    I love how they post this but not the mx100 tests.... whats the FKING POINT in testing than?
  • Ryan Smith - Friday, May 22, 2015 - link

    I'm not sure I follow. The MX100 is in our graphs.
  • earl colby pottinger - Friday, May 22, 2015 - link

    Question about the hardware encryption.

    Where does the key come from? Can I set my own key?

    The reason I ask is, if all the drives have the same key from the manufacturer then it is like there is no key at all. As if you know one key you know them all.

    If it is made by a random number generator, how do we not know there is a pattern from the generator so a hacker only needs to do a few thousand (million?) tests to break the encryption?

    If on the other-hand we can set the key, is it easy to do? Is the key such that we can write it to the drive but it is hard to read out?
  • Vinchent - Friday, May 22, 2015 - link

    Wow I just purchased the MX200 250GB a couple of days ago.
    If I had read this article before, I wouldn't have bought it.
    btw, thanks for this great article, AT :)
  • RandUser - Saturday, May 23, 2015 - link

    Lol, same for me here. Should have gotten a BX100. The MX200 performs without problems though, so no point returning it, just it's not the best value for money.
  • MrSpadge - Saturday, May 23, 2015 - link

    If you don't fill it in a sudden rush, it's still a fine drive. Not the best choice, but not terrible either.
  • PaulBags - Friday, May 22, 2015 - link

    The Samsung 850 pro 1tb is missing from most charts, disappointing.
  • Sejong - Friday, May 22, 2015 - link

    No comments on the NAND being 16nm? Is this not an issue? I am reluctant to buy MX100, 200 and BX100 when there is M500 still in stock (the price seems to be rising).

    Another review request : Intel's new SSD 535 (this seems to use hynix 16nm NAND memory).
  • MrSpadge - Saturday, May 23, 2015 - link

    No, it's not an issue. Even with "just" the guaranteed endurance it's going to last a long time. And very probably a lot longer, as in any SSD which is not under continuous sustained use (which would cause very high write amplification).
