AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Like our earlier Storage Benches, the test is still application trace based: we record all IO requests made to a test system, play them back on the drive under test, and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total, with 1583.0GB of reads and 875.6GB of writes. I'm not including the full description of the test here for readability, so be sure to read our Storage Bench 2013 introduction for the full details.
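Those totals imply an average transfer size for the trace, which is easy to work out (a back-of-the-envelope check, treating GB as decimal gigabytes, which is an assumption):

```python
# Scale of the 2013 trace, using the figures quoted above.
total_ios = 49.8e6                       # 49.8 million IO operations
total_bytes = (1583.0 + 875.6) * 1e9     # reads + writes, decimal GB assumed

avg_io_size_kb = total_bytes / total_ios / 1e3
print(f"average transfer size: {avg_io_size_kb:.1f} KB")  # ~49.4 KB per IO
```

An average near 50KB is consistent with a client workload dominated by small-to-medium transfers rather than large sequential copies.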

AnandTech Storage Bench 2013 - The Destroyer
| Workload | Description | Applications Used |
| --- | --- | --- |
| Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox |
| Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, Bioshock Infinite |
| Virtualization | Run/manage VM, use general apps inside VM | VirtualBox |
| General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, Ad-Aware |
| Video Playback | Copy and watch movies | Windows 8 |
| Application Development | Compile projects, check out code, download code samples | Visual Studio 2012 |

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the test workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we have been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
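The relationship between the two metrics can be sketched as follows (a hypothetical trace format and simplified math, not the actual benchmark harness; the real data rate is computed over the drive's busy time during playback):

```python
# Illustration of the two Destroyer metrics: average data rate (MB/s)
# and average service time (microseconds), over a recorded IO trace.

def summarize_trace(ios):
    """ios: list of (size_bytes, service_time_us) tuples, one per IO.
    Assumes IOs are serviced back-to-back, so total service time
    stands in for the drive's busy time (a simplification)."""
    total_bytes = sum(size for size, _ in ios)
    total_time_us = sum(t for _, t in ios)
    avg_data_rate_mbs = (total_bytes / 1e6) / (total_time_us / 1e6)
    avg_service_time_us = total_time_us / len(ios)
    return avg_data_rate_mbs, avg_service_time_us

# Hypothetical trace: four 128KB reads, one stalled behind a deep queue.
trace = [(131072, 300), (131072, 310), (131072, 295), (131072, 4000)]
rate, svc = summarize_trace(trace)
print(f"{rate:.1f} MB/s, {svc:.2f} us")
```

Note how the single 4ms outlier more than triples the average service time, which is exactly why the metric punishes drives that fall apart under queued IO even when their headline throughput looks respectable.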

Storage Bench 2013 - The Destroyer (Data Rate)

Wow, this actually looks pretty bad. The 256GB M600 is slower than the 256GB MX100, likely because under sustained workloads the M600 has to migrate data from SLC to MLC at the same time it is taking in host IOs, so performance drops due to the internal IO overhead. The 1TB drive does better thanks to higher parallelism, but even then the M550 and 840 EVO are faster.

Storage Bench 2013 - The Destroyer (Service Time)

Comments

  • Kristian Vättö - Wednesday, October 1, 2014 - link

    Oh, that one. It's from the M600's reviewer's guide and the numbers are based on Micron's own research.
  • maofthnun - Wednesday, October 1, 2014 - link

    Thanks for the clarification on the powerloss protection feature. I am very disappointed by how it actually works because that was a major deciding factor in my purchase of the MX100. At the time, the choice was between the MX100 and the Seagate 600 Pro which was $30 more and which also offers powerloss protection. I would have gladly paid the extra $30 if I had known the actual workings of the MX100.

    Since we're on the topic, I wonder if other relatively recent SSDs within the consumer budget that offer powerloss protection (e.g. Intel 730, Seagate 600 Pro) work the way everyone assumes (flush volatile data)? Would love to hear your comment on this.
  • Kristian Vättö - Wednesday, October 1, 2014 - link

    Seagate 600 Pro is basically an enterprise drive (28% over-provisioning etc), so it does have full power-loss protection. It uses tantalum capacitors like other enterprise SSDs.

    http://www.anandtech.com/show/6935/seagate-600-ssd...

    As for the SSD 730, it too has full power-loss protection, which is because of its enterprise background (it's essentially an S3500 with an overclocked controller/NAND and a more client-optimized firmware). The power-loss protection implementation is the same as in the S3500 and S3700.
  • maofthnun - Wednesday, October 1, 2014 - link

    Thank you. I'll be targeting those two as my future purchase.
  • RAMdiskSeeker - Wednesday, October 1, 2014 - link

    If the 256GB drive were formatted with a 110GB partition, would it operate in Dynamic Write Acceleration 100% of the time? If so, this would be an interesting way to get an SLC drive.
  • Romberry - Friday, January 9, 2015 - link

    I'm really not sure that the AnandTech Storage Bench 2013 does an adequate job of characterizing the performance of this drive or really, any drive in a consumer class environment. And I'm not sure that filling all the LBAs and looking at the pseudo-SLC step down as the drive is filled really tells us anything useful (other than where the break points are...and how much use is that?) either.

    Performance consistency? Same deal. Almost no one uses consumer class drives this way (large, steady, long-term massive writes), and those who do use drives this way likely aren't using consumer class drives.

    I can really take nothing useful away from this review. And BTW, this whole "Crucial doesn't really have power protection, we didn't actually bother checking but just assumed and repeated the marketing speak before" stuff is not the kind of thing I expect from AnandTech. With that kind of care being taken in these articles, I'll be careful to read things here with the same sort of skepticism I had previously reserved for other sites. I'd sort of suspended that skepticism with AnandTech over the years. My mistake.
