Random Read/Write Speed

The four corners of SSD performance are random read, random write, sequential read, and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB blocks in a completely random pattern over an 8GB span of the drive to simulate the sort of random access you'd see on an OS drive (even this is more stressful than a typical desktop user would generate). We perform three concurrent IOs and run the test for three minutes. The results reported are the average MB/s over the entire run.
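To make the workload concrete, here is a minimal Python sketch of a 4KB random-read test in the same spirit. This is illustrative only, not the Iometer configuration used in the review; TEST_FILE is a hypothetical, pre-created 8GB file on the drive under test, three threads stand in for the three concurrent IOs, and a real benchmark would also bypass the page cache (O_DIRECT/async IO):

```python
# A minimal sketch of the 4KB random-read workload described above.
# Illustrative only -- not the review's actual Iometer setup.
import os
import random
import threading
import time

TEST_FILE = "/mnt/ssd/testfile.bin"  # hypothetical path
SPAN = 8 * 1024**3                   # 8GB test span
BLOCK = 4096                         # 4KB transfers
DURATION = 180                       # 3 minutes
WORKERS = 3                          # three concurrent IOs

completed = 0
lock = threading.Lock()

def reader(stop_at: float) -> None:
    global completed
    fd = os.open(TEST_FILE, os.O_RDONLY)
    n = 0
    try:
        while time.time() < stop_at:
            # Pick a random 4KB-aligned offset inside the 8GB span.
            off = random.randrange(SPAN // BLOCK) * BLOCK
            os.pread(fd, BLOCK, off)
            n += 1
    finally:
        os.close(fd)
    with lock:
        completed += n

stop_at = time.time() + DURATION
threads = [threading.Thread(target=reader, args=(stop_at,))
           for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"Average: {completed * BLOCK / DURATION / 1e6:.1f} MB/s")
```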

Desktop Iometer - 4KB Random Read

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

Random performance remains more or less unchanged from the MX100 and M550. Micron has always done well in random performance as long as the IOs are bursty in nature, but its performance consistency under sustained workloads has never been top notch.

Sequential Read/Write Speed

To measure sequential performance we run a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are the average MB/s over the entire test.
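A queue depth of 1 simply means one synchronous transfer in flight at a time. As a rough sketch (illustrative only, reusing the same hypothetical TEST_FILE as above), a 128KB sequential read pass might look like this in Python:

```python
# A minimal sketch of a one-minute 128KB sequential read test at QD1.
import os
import time

TEST_FILE = "/mnt/ssd/testfile.bin"  # hypothetical path
BLOCK = 128 * 1024                   # 128KB transfers
DURATION = 60                        # 1 minute

fd = os.open(TEST_FILE, os.O_RDONLY)
size = os.fstat(fd).st_size
off, total = 0, 0
stop_at = time.time() + DURATION
while time.time() < stop_at:
    total += len(os.pread(fd, BLOCK, off))
    off += BLOCK
    if off >= size:
        off = 0  # wrap around so the access pattern stays sequential
os.close(fd)
print(f"Average: {total / DURATION / 1e6:.1f} MB/s")
```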

Desktop Iometer - 128KB Sequential Read

Sequential write performance sees a minor increase at the smaller capacities thanks to Dynamic Write Acceleration, but otherwise there is nothing surprising in sequential performance.

Desktop Iometer - 128KB Sequential Write

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a substantial reduction in sequential write speed on SandForce-based controllers, while most other controllers are unaffected.
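The reason is that SandForce controllers compress data on the fly before writing it to NAND; random data defeats that compression, so every byte must actually hit the flash. A small Python sketch, with zlib standing in for the controller's compression engine, shows the difference:

```python
# Why incompressible data matters to a compressing controller.
import os
import zlib

compressible = b"A" * (1024 * 1024)        # highly repetitive 1MB buffer
incompressible = os.urandom(1024 * 1024)   # random 1MB, like AS-SSD's data

for name, buf in [("compressible", compressible),
                  ("incompressible", incompressible)]:
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compresses to {ratio:.1%} of original size")

# Repetitive data shrinks to a tiny fraction; random data stays at ~100%,
# which is why a compressing controller loses its sequential-write edge.
```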

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance


Comments
  • makerofthegames - Monday, September 29, 2014

    If the cost is low enough, they might be able to compete with hard drives. A two-disk RAID0 of these 1TB drives could replace my 2TB WD Black, which I store my game library on. And even a slow drive like this is a million times faster than any hard drive.

    That said, it's still a $900 set of SSDs fighting with a $200 hard drive. What we really need is a $200 1TB SSD, even a horribly slow one (is it possible to pack four bits into one cell? Like a QLC or something? That might be the way to do it). That would be able to compete not just in the performance sector, but in the bulk storage arena.

    For people like me, capacity also affects performance, because it means I can install more apps/games to that drive instead of the slow spinning rust. I actually bought a very low-performing Mushkin 180GB SSD for my desktop, because it was the same price as the 120GB drives everyone else was slinging. That meant I could fit more games onto it, even the big ones like Skyrim.
  • sirius3100 - Monday, September 29, 2014

    AFAIK QLC has been used in some USB sticks in the past. But for SSDs, the number of write cycles QLC NAND would be able to endure might be too low.
  • bernstein - Monday, September 29, 2014

    You are just wrong. It's an order of magnitude BETTER than the M500 and still 5x better than the MX100: http://techreport.com/r.x/micron-m600/db2-100-writ...
  • milli - Monday, September 29, 2014

    That review wasn't up yet when I posted my comment.
    But add to that that it's still 340x worse than the ARC 100 (also a budget drive) in that same test. It's worse than the MX100 in the read test and 5x worse than the ARC.
    So yeah, service times are just terrible on Crucial's 256GB drives (all models).
  • nirwander - Monday, September 29, 2014

    Obviously, Dynamic Write Acceleration is not meant to be benchmarked. And "client workload" is not about constant high pressure on the SSD, so the drive is basically ok.
  • kmmatney - Monday, September 29, 2014

    Agreed. The whole premise of Dynamic Write Acceleration seems to require idle time to move data off the SLC NAND, but benchmarking doesn't allow that to happen (and isn't like real-life client usage). Also, if you just compare the 256GB MX100 against the 256GB M600, the newer SSD does have better write speeds, and does better at everything except the Destroyer test.
  • hojnikb - Monday, September 29, 2014

    I wonder if Crucial is gonna bring DWA to their consumer line as well.
  • Samus - Monday, September 29, 2014

    The M500 sure could have used it back in the day. The 120GB model had appalling write performance.
  • PrivacyIsNotCriminal - Monday, September 29, 2014

    Appreciate the brief write-up on encryption, and that this may be a technically challenging area to detail. But in a post-Snowden world with increasingly complex malware and an emphasis on data mining, we should all be pressing for the strengthening of protective technologies.

    Additional article depth on encryption technologies, certification authorities and related technical metrics would be appreciated by many of us who are not IT professionals, but are concerned about protecting our personal LANs and links to our wireless/cellular devices.

    Contrary to the government's and the RIAA's most recent assertions, a desire for privacy and freedom from warrantless searches should be a fundamental American value.

    Thanks for the in depth technical reviews and hope Anand is doing well.
  • kaelynthedove78 - Monday, September 29, 2014

    This explains the data loss issues we've had with the MX100 series, both under Windows 7 and FreeNAS.

    With all C-states enabled (the default and recommended configuration, which AnandTech doesn't use since some highly advertised drives are badly designed and suffer up to a 40% IOPS drop), the drives don't properly handle suspending and resuming the system.

    Under FreeNAS, the zpool would slowly accumulate corruption and during the next scrubbing the whole zpool would get trashed and the only option was to restore all data from backup.

    Under Windows, strange errors, like being unable to properly recognise USB devices or install Windows updates, would appear little by little after every suspend/resume cycle until the machine would refuse to boot at all.

    A workaround is to either disable all power-saving C-states, to disable HIPM and DIPM on *all* disk controllers (even those which don't have Micron drives connected), or to never suspend/resume.

    We decided to return all our Micron drives, about 350 total, and get Intel SSDs instead. They're not cheap and not the fastest, but at least I don't have to keep re-imaging systems every week.

    For information on how to enable/disable HIPM and DIPM under Windows 7 please see:
    www.sevenforums.com/tutorials/177819-ahci-link-power-management-enable-hipm-dipm.html
