AnandTech Storage Bench 2013

When Anand built the AnandTech Heavy and Light Storage Bench suites in 2011, he did so because we had no good tools at the time for stressing a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this: by hitting the test SSD with a large enough and write-intensive enough workload, we could ensure that some amount of GC would happen.
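To make that mechanism concrete, here is a toy sketch (our own illustration, not any controller vendor's actual algorithm) of a flash translation layer: NAND pages can't be rewritten in place, so overwrites leave stale pages behind, and once every block contains used pages a new write has to reclaim a block first by copying its still-valid pages and erasing it:

# Toy model of why sustained writes eventually force garbage collection.
class ToyFTL:
    def __init__(self, blocks=4, pages_per_block=4):
        self.pages_per_block = pages_per_block
        self.valid = [set() for _ in range(blocks)]   # live logical pages per block
        self.free_pages = [pages_per_block] * blocks
        self.mapping = {}                             # logical page -> physical block
        self.gc_copies = 0                            # pages relocated by GC (write amplification)

    def write(self, lpage):
        # An overwrite invalidates the old physical copy but can't erase it in place.
        if lpage in self.mapping:
            self.valid[self.mapping[lpage]].discard(lpage)
        block = self._block_with_free_page()
        if block is None:                 # every block has used pages...
            self._garbage_collect()       # ...so reclaim one before writing
            block = self._block_with_free_page()
        self.free_pages[block] -= 1
        self.valid[block].add(lpage)
        self.mapping[lpage] = block

    def _block_with_free_page(self):
        for b, free in enumerate(self.free_pages):
            if free:
                return b
        return None

    def _garbage_collect(self):
        # Greedy victim selection: the block with the fewest live pages is the
        # cheapest to reclaim. Erase it, then write its survivors back.
        victim = min(range(len(self.valid)), key=lambda b: len(self.valid[b]))
        survivors = list(self.valid[victim])
        self.valid[victim] = set()
        self.free_pages[victim] = self.pages_per_block
        self.gc_copies += len(survivors)
        for lpage in survivors:
            self.free_pages[victim] -= 1
            self.valid[victim].add(lpage)
            self.mapping[lpage] = victim

# 16 physical pages, 12 logical pages overwritten repeatedly: once the drive is
# "dirty", every burst of writes drags GC relocations along with it.
ftl = ToyFTL()
for i in range(200):
    ftl.write(i % 12)
print("pages relocated by GC:", ftl.gc_copies)

The gc_copies counter climbing alongside host writes is exactly the write amplification that a sufficiently large, write-heavy workload is meant to provoke.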

There were a couple of issues with our 2011 tests that we've been wanting to rectify, however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't exist had we built the trace on a system with SP1. This didn't really impact most SSDs, but it proved to be a problem with some hard drives. Secondly, and more recently, we've shifted focus from simply triggering GC routines to really looking at worst-case performance after prolonged random IO.

For years we'd felt the negative impact of inconsistent IO performance with all SSDs, but until the S3700 showed up we didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans, which is not exactly a real-world client usage model. The aspects of SSD architecture that those tests stress are very important, however, and none of our existing tests were doing a good job of quantifying them.
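For reference, below is a minimal sketch of that style of consistency test. It is not our actual tooling: it approximates a high queue depth with blocking writer threads, issues 4KB random writes across the drive's full LBA span, and prints per-second write IOPS so that garbage collection stalls show up as dips. The device path and thread count are placeholder assumptions, the code is Linux-only (O_DIRECT), and it destroys all data on the target device.

import mmap
import os
import random
import threading
import time

DEVICE = "/dev/sdX"      # hypothetical target device; all data on it will be lost
WORKERS = 32             # queue depth approximated with 32 blocking writer threads
BLOCK = 4096             # 4KB random writes

completed = 0
lock = threading.Lock()
stop = threading.Event()

def writer(total_blocks):
    global completed
    fd = os.open(DEVICE, os.O_WRONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)                    # page-aligned buffer, required by O_DIRECT
    buf.write(os.urandom(BLOCK))
    try:
        while not stop.is_set():
            offset = random.randrange(total_blocks) * BLOCK   # full-LBA-span randomness
            os.pwrite(fd, buf, offset)
            with lock:
                completed += 1
    finally:
        os.close(fd)

def run(seconds=1800):
    fd = os.open(DEVICE, os.O_RDONLY)
    total_blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK
    os.close(fd)
    for _ in range(WORKERS):
        threading.Thread(target=writer, args=(total_blocks,), daemon=True).start()
    last = 0
    for second in range(1, seconds + 1):
        time.sleep(1)
        with lock:
            now = completed
        print(f"{second}s: {now - last} write IOPS")   # consistency = how flat this stays
        last = now
    stop.set()

if __name__ == "__main__":
    run()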

We needed an updated heavy test, one that dealt with an even larger set of data and somehow incorporated IO consistency into its metrics. We think we have that test. The new benchmark doesn't even have a proper name; we've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).

Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64, and the workload is far more realistic. Just as before, this is an application trace based test: we record all IO requests made to a test system, then play them back on the drive we're measuring and run statistical analysis on the drive's responses.
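Conceptually, the replay half of such a harness looks something like the sketch below. It is a simplified stand-in for our actual tools, and the line-based trace format (operation, offset, size) is our own assumption:

import os
import time

def replay_trace(trace_path, target_path):
    # Reissue each recorded request against the drive under test and log
    # per-request service times for later statistical analysis.
    fd = os.open(target_path, os.O_RDWR)
    payload = b"\0" * (1 << 20)                # reusable buffer for writes up to 1MB
    results = []                               # (operation, size, service_time_seconds)
    with open(trace_path) as trace:
        for line in trace:
            op, offset, size = line.split()
            offset, size = int(offset), int(size)
            start = time.perf_counter()
            if op == "read":
                os.pread(fd, size, offset)
            else:
                os.pwrite(fd, payload[:size], offset)
            results.append((op, size, time.perf_counter() - start))
    os.close(fd)
    return results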

Like most modern benchmarks, the Destroyer is built out of a series of scenarios. For this benchmark we focused heavily on photo editing, gaming, virtualization, general productivity, video playback and application development. Rough descriptions of the various scenarios are in the table below:

AnandTech Storage Bench 2013 Preview - The Destroyer
Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012

While some tasks remained independent, many were stitched together (e.g. system backups would run while other scenarios were playing out). The overall stats help justify the name we've been using for this test internally:

AnandTech Storage Bench 2013 Preview - The Destroyer, Specs
Metric | The Destroyer (2013) | Heavy 2011
Reads | 38.83 million | 2.17 million
Writes | 10.98 million | 1.78 million
Total IO Operations | 49.8 million | 3.99 million
Total GB Read | 1583.02 GB | 48.63 GB
Total GB Written | 875.62 GB | 106.32 GB
Average Queue Depth | ~5.5 | ~4.6
Focus | Worst-case multitasking, IO consistency | Peak IO, basic GC routines

SSD performance has grown considerably over the years, so we wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When we first introduced the Heavy 2011 test, some drives took multiple hours to complete it; today most high performance SSDs can finish it in under 90 minutes. The Destroyer? So far the fastest run we've seen took 10 hours. Most high performance SSDs we've tested seem to need around 12-13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 we just needed something with a ton of writes so we could start separating the good drives from the bad. Now that drives have matured, we felt a more balanced test was a better idea.
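For those curious how a figure like the ~5.5 average queue depth above can be derived, one reasonable definition (our assumption, not necessarily the exact method behind the table) is the time-weighted number of outstanding IOs over the duration of the trace:

def average_queue_depth(ios):
    """ios: iterable of (issue_time, completion_time) pairs, in seconds."""
    events = []
    for issued, completed in ios:
        events.append((issued, +1))       # one more IO outstanding
        events.append((completed, -1))    # one fewer IO outstanding
    events.sort()
    depth, integral, prev_t = 0, 0.0, events[0][0]
    for t, delta in events:
        integral += depth * (t - prev_t)  # IO-seconds of outstanding work
        depth += delta
        prev_t = t
    return integral / (events[-1][0] - events[0][0])

# Three IOs over a 4-second window, totalling 4 IO-seconds -> average depth of 1.0
print(average_queue_depth([(0.0, 2.0), (1.0, 2.0), (3.0, 4.0)]))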

Despite the balance recalibration, there's still a ton of data moving around in this test. The sheer volume of data and the good amount of random IO courtesy of all the multitasking (e.g. background VM work, background photo exports/syncs, etc.) make the Destroyer a far better judge of performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress and showcase different things. Now that the days of begging for better random IO performance and basic GC intelligence are over, we wanted a test that would give us more of what we're interested in these days. As Anand mentioned in the S3700 review, good worst-case IO performance and consistency matter just as much to client users as they do to enterprise users.

We're reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the drive's throughput for the duration of the Destroyer workload and can be a very good indication of overall performance. What average data rate doesn't capture well is the response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weight latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
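As a rough illustration of how the two reported metrics relate to raw replay data, here is a small sketch under our own assumptions about the result format:

def destroyer_metrics(results):
    """results: list of (bytes_transferred, issue_time_s, completion_time_s)."""
    total_bytes = sum(nbytes for nbytes, _, _ in results)
    run_time = max(done for _, _, done in results) - min(issued for _, issued, _ in results)
    avg_data_rate_mb_s = total_bytes / run_time / 1e6            # throughput over the whole run
    avg_service_time_us = sum(done - issued for _, issued, done in results) / len(results) * 1e6
    return avg_data_rate_mb_s, avg_service_time_us

# Example: two 1MB reads, one taking 2 ms and one 4 ms, issued over a 0.01 s window
print(destroyer_metrics([(1_000_000, 0.000, 0.002), (1_000_000, 0.006, 0.010)]))
# -> (200.0, 3000.0): 200 MB/s average data rate, 3000 us average service time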

AT Storage Bench 2013 - The Destroyer (Data Rate)

The SSD 530 does okay in our new Storage Bench 2013. The improvement over the SSD 335 is again quite significant, mostly thanks to the better performance consistency. However, the SF-2281 simply can't challenge more modern designs, and for ultimate performance the SanDisk Extreme II remains the best pick.

AT Storage Bench 2013 - The Destroyer (Service Time)

Comments

  • AnnonymousCoward - Monday, November 18, 2013 - link

    I enjoyed the review, but why can't you have a single real-world benchmark? You compare CPUs based on the time it takes to encode/decode, and fps in games. That tells readers the quantified difference. Your SSD data tells the reader nothing about Windows startup time, file copy time, and program load time. This has been an oversight on AnandTech from Day 1. I've brought this up multiple times in these comments, but you guys somehow don't get it.
  • dhisumdhisum - Tuesday, November 19, 2013 - link

    Debroah, will you marry me? I don't work, I am a bum.
  • dac7nco - Tuesday, November 19, 2013 - link

    Greatest reply ever.
  • Bullwinkle J Moose - Saturday, November 23, 2013 - link

    Technically, that was a proposal...
    The reply has not yet been given
  • Tjalve - Wednesday, November 20, 2013 - link

    I have actually done that kind of testing, but I use 20 min of idle time.
    http://www.nordichardware.se/SSD-Recensioner/svens...

    The text is in Swedish, so use Google Translate to translate it to English. Scroll down and click on the links.
    Check the difference between tests 6 and 7 in the graphs.
  • nicolaim - Wednesday, November 27, 2013 - link

    MyDigitalSSD sells M.2 SSDs at retail, so saying M.2 SSDs are OEM-only is incorrect.
  • mi1stormilst - Friday, December 6, 2013 - link

    The Intel 530 is $169.99 on Newegg today ... tack on the 10% discount code floating around (NAFSAVETENDEC6W) for Newegg and you have a bargain at $155.98 shipped!
  • PKR - Sunday, December 8, 2013 - link

    With my MacBook Pro Mid 2010 and an Intel 530 240GB with DC12 firmware, I think this SSD is slow. I am only getting about 200 MB/s write and 260 MB/s read speeds. Very disappointed, as the reviews online pointed to speeds in the range of 500 MB/s.

    I tried the installation two ways: one by cloning the system partition using Carbon Copy Cloner, and another with a fresh install from the SuperDrive and then updating. In both cases, the speed didn't change.

    If it matters, I have 4 partitions on the drive. The system partition is 100GB, with about 40GB of free space after migrating my content.
  • Wolfpup - Monday, December 16, 2013 - link

    I switched from Intel to Micron/Crucial after Intel switched to SandForce controllers... I'd still pick this over OTHER SandForce drives, but I'm still picking an M500 over this...
