Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not see consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, because inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.
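
The article does not spell out its exact test tooling on this page, but the workload is easy to approximate with fio. The sketch below is an illustrative stand-in, not the original methodology; the device path, runtime, and log name are assumptions, and it will destroy all data on the target device.

```python
# Rough approximation of the consistency test with fio (assumed stand-in for
# the article's tooling). WARNING: this overwrites the whole target device.
import subprocess

DEV = "/dev/sdX"  # hypothetical device node of the drive under test

# Step 1: sequential fill so every user-accessible LBA holds data.
subprocess.run([
    "fio", "--name=fill", f"--filename={DEV}", "--ioengine=libaio",
    "--direct=1", "--rw=write", "--bs=128k", "--iodepth=32",
], check=True)

# Step 2: 4KB random writes at QD32 with incompressible data for ~2000s,
# logging one averaged IOPS sample per second.
subprocess.run([
    "fio", "--name=consistency", f"--filename={DEV}", "--ioengine=libaio",
    "--direct=1", "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--time_based", "--runtime=2000", "--refill_buffers", "--norandommap",
    "--log_avg_msec=1000", "--write_iops_log=m600_iops",
], check=True)
```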

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
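
As a minimal sketch of that idea (with made-up capacity figures, not the M600's actual NAND configuration), restricting the random-write workload to a fraction of the LBA space leaves the untouched remainder as extra spare area:

```python
# Illustrative over-provisioning math; the figures are assumptions, not
# measurements from the review.
user_capacity_gib = 238.4   # a "256GB" drive as exposed to the OS
raw_nand_gib      = 256.0   # physical flash behind it
lba_fraction      = 0.80    # hypothetical: only the first 80% of LBAs are written

untouched   = user_capacity_gib * (1 - lba_fraction)
total_spare = raw_nand_gib - user_capacity_gib + untouched
print(f"extra spare from the LBA limit: {untouched:.1f} GiB")
print(f"total effective spare area:     {total_spare:.1f} GiB "
      f"({total_spare / raw_nand_gib:.0%} of raw NAND)")
```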

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.
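
For readers who want to rebuild similar graphs from their own runs, a per-second IOPS log (such as the one produced by the fio sketch above) maps onto the three views like this; the log file name is an assumption about fio's output naming, and the plot is only a sketch.

```python
# Sketch: reproduce the three consistency views from a per-second IOPS log.
import matplotlib.pyplot as plt

times, iops = [], []
with open("m600_iops_iops.1.log") as f:     # assumed fio log file name
    for line in f:                          # format: time(ms), IOPS, ddir, bs, ...
        t_ms, value = line.split(",")[:2]
        times.append(int(t_ms) / 1000)
        iops.append(int(value))

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
axes[0].plot(times, iops)                   # 1: full run
axes[0].set_yscale("log")
axes[1].plot(times, iops)                   # 2: steady state, log scale
axes[1].set_yscale("log")
axes[1].set_xlim(1400, max(times))
axes[2].plot(times, iops)                   # 3: steady state, linear scale
axes[2].set_xlim(1400, max(times))
for ax in axes:
    ax.set_xlabel("time (s)")
    ax.set_ylabel("IOPS")
plt.tight_layout()
plt.show()
```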

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph 1: 4KB random write IOPS over the full test run, log scale — Micron M600 256GB (Default / 25% Over-Provisioning)]

The 1TB M600 actually performs noticeably worse than the 256GB model, most likely due to the tracking overhead that the larger capacity brings (more pages to track). Overall IO consistency has not really changed from the MX100, as Dynamic Write Acceleration only affects burst performance. I suspect the firmware architectures for sustained performance are similar between the MX100 and M600, although with added over-provisioning the M600 is a bit more consistent.

[Graph 2: steady-state IOPS from t=1400s onward, log scale — Micron M600 256GB (Default / 25% Over-Provisioning)]

[Graph 3: steady-state IOPS from t=1400s onward, linear scale — Micron M600 256GB (Default / 25% Over-Provisioning)]

TRIM Validation

To test TRIM, I filled the 128GB M600 with sequential 128KB data and then ran a 30-minute random 4KB write (QD32) workload to put the drive into steady state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

It appears that TRIM does not fully recover the SLC cache, as the acceleration capacity seems to be only ~7GB. I suspect that giving the drive some idle time would do the trick, because it might take a couple of minutes (or more) for the internal garbage collection to finish after a TRIM command is issued.
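
A crude way to check that hypothesis is to time fixed-size write bursts after the TRIM plus some idle time and watch for the point where throughput falls off the SLC cliff. The sketch below is only an illustration: the target path, burst size, and probe count are arbitrary assumptions, and writing through the filesystem is a rougher measure than HD Tach's raw pass.

```python
# Sketch: probe how much SLC-cache capacity is available by timing 1 GiB
# write bursts until throughput drops. All constants here are assumptions.
import os
import time

TARGET = r"D:\slc_probe.bin"     # hypothetical file on the drive under test
BURST  = 1024 * 1024 * 1024      # 1 GiB per burst
CHUNK  = 4 * 1024 * 1024         # written in 4 MiB chunks

def burst_speed(fh):
    buf = os.urandom(CHUNK)                  # incompressible data
    start = time.perf_counter()
    for _ in range(BURST // CHUNK):
        fh.write(buf)
    fh.flush()
    os.fsync(fh.fileno())                    # make sure it actually hit the drive
    return BURST / (time.perf_counter() - start) / 1e6   # MB/s

with open(TARGET, "wb") as fh:
    for i in range(16):                      # probe up to 16 GiB
        print(f"burst {i}: {burst_speed(fh):.0f} MB/s")
        # a sharp drop marks the point where the SLC cache runs out
```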

Comments (56)

  • makerofthegames - Monday, September 29, 2014 - link

    If the cost is low enough, they might be able to compete with hard drives. A two-disk RAID0 of these 1TB drives could replace my 2TB WD Black, which I store my game library on. And even a slow drive like this is a million times faster than any hard drive.

    That said, it's still a $900 set of SSDs fighting with a $200 hard drive. What we really need is a $200 1TB SSD, even a horribly slow one (is it possible to pack four bits into one cell? Like a QLC or something? That might be the way to do it). That would be able to compete not just in the performance sector, but in the bulk storage arena.

    For people like me, capacity also affects performance, because it means I can install more apps/games to that drive instead of the slow spinning rust. I actually bought a very low-performing Mushkin 180GB SSD for my desktop, because it was the same price as the 120GB drives everyone else was slinging. That meant I could fit more games onto it, even the big ones like Skyrim.
  • sirius3100 - Monday, September 29, 2014 - link

    Afaik QLC has been used in some USB sticks in the past. But for SSDs the number of write cycles QLC NAND would be able to endure might be too low.
  • bernstein - Monday, September 29, 2014 - link

    You are just wrong, it's an order of magnitude BETTER than the M500 and still 5x better than the MX100: http://techreport.com/r.x/micron-m600/db2-100-writ...
  • milli - Monday, September 29, 2014 - link

    That review wasn't up yet when I posted my comment.
    But you can add that it's still 340x worse than the ARC 100 (also a budget drive) in that same test. It's worse than the MX100 in the read test and 5x worse than the ARC.
    So yeah, service times are just terrible on Crucial's 256GB drives (all models).
  • nirwander - Monday, September 29, 2014 - link

    Obviously, Dynamic Write Acceleration is not meant to be benchmarked. And "client workload" is not about constant high pressure on the SSD, so the drive is basically ok.
  • kmmatney - Monday, September 29, 2014 - link

    Agreed. It seems like the whole premise of Dynamic Write Acceleration requires idle time to move data off the SLC NAND, but benchmarking doesn't allow that to happen (and isn't like real-life client usage). Also, if you just compare the MX100 256GB vs the M600 256GB, the newer SSD does have better write speeds, and does better at everything except The Destroyer test.
  • hojnikb - Monday, September 29, 2014 - link

    I wonder if Crucial is gonna bring DWA to their consumer line as well...
  • Samus - Monday, September 29, 2014 - link

    The M500 sure could have used it back in the day. The 120GB model had appalling write performance.
  • PrivacyIsNotCriminal - Monday, September 29, 2014 - link

    Appreciate the brief write-up on encryption and that this may be a technically challenging area to detail. But in a post-Snowden world with increasingly complex malware and an emphasis on data mining, we should all be pressing for stronger protective technologies.

    Additional article depth on encryption technologies, certification authorities and related technical metrics would be appreciated by many of us who are not IT professionals, but are concerned about protecting our personal LANs and links to our wireless/cellular devices.

    Contrary to the government's and the RIAA's most recent assertions, a desire for privacy and freedom from warrantless searches should be a fundamental American value.

    Thanks for the in depth technical reviews and hope Anand is doing well.
  • kaelynthedove78 - Monday, September 29, 2014 - link

    This explains the data loss issues we've had with the MX100 series, both under Windows 7 and FreeNAS.

    With all C-states enabled (the default and recommended configuration, which AnandTech doesn't use since some highly advertised drives are badly designed and suffer up to a 40% IOPS drop), the drives don't properly handle suspending and resuming the system.

    Under FreeNAS, the zpool would slowly accumulate corruption and during the next scrubbing the whole zpool would get trashed and the only option was to restore all data from backup.

    Under Windows, strange errors, like being unable to properly recognise USB devices or install Windows updates, would appear little by little after every suspend/resume cycle until the machine would refuse to boot up at all.

    A workaround is to either disable all power-saving C-states or to disable HIPM and DIPM on *all* disk controllers, even those which don't have Micron drives connected. Or to never suspend/resume.

    We decided to return all our Micron drives, about 350 total, and get Intel SSDs instead. They're not cheap and not the fastest, but at least I don't have to keep re-imaging systems every week.

    For information on how to enable/disable HIPM and DIPM under Windows 7 please see:
    www.sevenforums.com/tutorials/177819-ahci-link-power-management-enable-hipm-dipm.html
