AnandTech Storage Bench 2013

When I built the AnandTech Heavy and Light Storage Bench suites in 2011 I did so because we didn't have any good tools at the time that would begin to stress a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this. By hitting the test SSD with a large enough and write intensive enough workload, we could ensure that some amount of GC would happen.

There were a couple of issues with our 2011 tests that I've been wanting to rectify, however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't exist had we built the trace on a system with SP1. This didn't really impact most SSDs, but it proved to be a problem with some hard drives. Secondly, and more recently, I've shifted focus from simply triggering GC routines to really looking at worst case scenario performance after prolonged random IO. For years I'd felt the negative impacts of inconsistent IO performance with all SSDs, but until the S3700 showed up I didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans - not exactly a real world client usage model. The aspects of SSD architecture that those tests stress, however, are very important, and none of our existing tests were doing a good job of quantifying that.

I needed an updated heavy test, one that dealt with an even larger set of data and one that somehow incorporated IO consistency into its metrics. I think I have that test. I've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).

Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64. The workload is far more realistic. Just as before, this is an application trace based test - I record all IO requests made to a test system, then play them back on the drive I'm measuring and run statistical analysis on the drive's responses.
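
To make the record-and-replay idea concrete, here's a minimal sketch of what a trace playback harness could look like. This is purely illustrative and not our actual tooling: the CSV trace format (operation, byte offset, transfer size), the file paths and the function name are all assumptions, and a real harness also has to reproduce the original queue depths and inter-IO timing rather than issuing requests one at a time.

import csv
import os
import time

def replay_trace(trace_path, target_path):
    # Replay (op, offset, size) records against a target file or block device,
    # recording the service time of every IO. Unix-only (os.pread/os.pwrite);
    # a real harness would also open the target with O_DIRECT to bypass the OS cache.
    results = []
    fd = os.open(target_path, os.O_RDWR)
    try:
        with open(trace_path, newline="") as trace:
            for op, offset, size in csv.reader(trace):
                offset, size = int(offset), int(size)
                start = time.perf_counter()
                if op == "R":
                    os.pread(fd, size, offset)               # read `size` bytes at `offset`
                else:
                    os.pwrite(fd, os.urandom(size), offset)  # write `size` random bytes at `offset`
                results.append((op, size, time.perf_counter() - start))
    finally:
        os.close(fd)
    return results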

In keeping with most modern benchmarks, I crafted the Destroyer out of a series of scenarios. For this benchmark I focused heavily on photo editing, gaming, virtualization, general productivity, video playback and application development. Rough descriptions of the various scenarios are in the table below:

AnandTech Storage Bench 2013 Preview - The Destroyer
Workload                | Description                                                                                                                 | Applications Used
Photo Sync/Editing      | Import images, edit, export                                                                                                 | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming                  | Download/install games, play games                                                                                          | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization          | Run/manage VM, use general apps inside VM                                                                                   | VirtualBox
General Productivity    | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback          | Copy and watch movies                                                                                                       | Windows 8
Application Development | Compile projects, check out code, download code samples                                                                    | Visual Studio 2012
While some tasks remained independent, many were stitched together (e.g. system backups would run while other scenarios were in progress). The overall stats give some justification to what I've been calling this test internally:

AnandTech Storage Bench 2013 Preview - The Destroyer, Specs
                    | The Destroyer (2013)                    | Heavy 2011
Reads               | 38.83 million                           | 2.17 million
Writes              | 10.98 million                           | 1.78 million
Total IO Operations | 49.8 million                            | 3.99 million
Total GB Read       | 1583.02 GB                              | 48.63 GB
Total GB Written    | 875.62 GB                               | 106.32 GB
Average Queue Depth | ~5.5                                    | ~4.6
Focus               | Worst case multitasking, IO consistency | Peak IO, basic GC routines
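
A couple of back-of-the-envelope figures fall out of that table (assuming decimal gigabytes, which is my assumption rather than something the table states): the average read is roughly 41 KB, the average write roughly 80 KB, and reads outweigh writes by about 1.8:1 by volume (roughly 3.5:1 by IO count).

# Quick arithmetic on the table above (decimal GB assumed)
reads, writes = 38.83e6, 10.98e6        # IO counts
gb_read, gb_written = 1583.02, 875.62   # data volumes

avg_read_kb  = gb_read * 1e9 / reads / 1e3      # ~40.8 KB per read
avg_write_kb = gb_written * 1e9 / writes / 1e3  # ~79.7 KB per write
rw_by_volume = gb_read / gb_written             # ~1.8 : 1 reads to writes
rw_by_count  = reads / writes                   # ~3.5 : 1 reads to writes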

SSD performance has grown considerably over the years, so I wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When I first introduced the Heavy 2011 test, some drives would take multiple hours to complete it - today most high performance SSDs can finish the test in under 90 minutes. The Destroyer? So far the fastest I've seen it go is 10 hours. Most high performance drives I've tested seem to need around 12 - 13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 I just needed something that had a ton of writes so I could start separating the good from the bad. Now that the drives have matured, I felt a test that was a bit more balanced would be a better idea.
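
For a sense of what those run times imply (again assuming decimal gigabytes), the test moves roughly 1583 GB + 876 GB ≈ 2.46 TB in total, so even the fastest 10-hour run averages only about 2.46 TB / 36,000 s ≈ 68 MB/s, a 13-hour run works out to roughly 52 MB/s, and a 24-hour run to under 30 MB/s.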

Despite the balance recalibration, there's just a ton of data moving around in this test. Ultimately the sheer volume of data here, and the fact that there's a good amount of random IO courtesy of all of the multitasking (e.g. background VM work, background photo exports/syncs, etc.), makes the Destroyer do a far better job of giving credit for performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress/showcase different things. Now that the days of begging for better random IO performance and basic GC intelligence are over, I wanted a test that would give me a bit more of what I'm interested in these days. As I mentioned in the S3700 review, having good worst case IO performance and consistency matters just as much to client users as it does to enterprise users.

I'm reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric I've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
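
As a rough illustration of how those two numbers could be computed from a per-IO replay log like the sketch earlier in this section, here's one way to do it. The exact weighting we apply isn't spelled out beyond the description above, so treat the formulas (total bytes over wall-clock time for data rate, a simple mean of per-IO completion times for service time) as assumptions.

def summarize(results, wall_time_s):
    # results: (op, size_bytes, service_time_s) tuples from a trace replay
    # wall_time_s: how long the whole replay took, in seconds
    total_bytes = sum(size for _, size, _ in results)
    avg_data_rate_mb_s = total_bytes / wall_time_s / 1e6                      # throughput in MB/s over the run
    avg_service_time_us = sum(t for _, _, t in results) / len(results) * 1e6  # mean per-IO latency in microseconds
    return avg_data_rate_mb_s, avg_service_time_us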

[Chart: AT Storage Bench 2013 - The Destroyer (Average Data Rate)]

There's simply no comparison between the EVO and Crucial's M500. Even at half the capacity, the EVO does a better job in our consistency test. SanDisk's Extreme II remains the king here, but that's more of a performance-tuned part vs. something that offers better cost per GB. Note just how impactful the added spare area is in giving the EVO an advantage over even the 840 Pro. It's very important that 840 Pro owners keep as much free space on the drive as possible to keep performance high and consistent.

[Chart: AT Storage Bench 2013 - The Destroyer (Average Service Time)]

 

Comments

  • eamon - Thursday, August 1, 2013 - link

    Unless you want to run some kind of continual I/O server, I suspect performance will be fast enough not to matter; I'd only look at pricing if I were you...
  • Busverpasser - Thursday, August 8, 2013 - link

    Hi there, great review, thanks a lot. Actually I do have a question... The article says "The performance story is really good (particularly with the larger capacities), performance consistency out of the box is ok (and gets better if you can leave more free space on the drive)..."

    Does leaving more free space mean that this space is supposed to be unpartitioned or just not filled with data? When I bought my Intel Postville SSD some time ago, I left some space unpartitioned but never really knew whether that was the right thing to do :D. Can someone give me a hint here?
  • xchaotic - Wednesday, August 14, 2013 - link

    @Busverpasser just leave more space free, it doesn't have to be unpartitioned.
    Worst case if you need that extra space for a while, you'll get lower performance, but more storage whenever you need it.
  • speculatrix - Saturday, August 17, 2013 - link

    The table titled "Samsung SSD 840 EVO TurboWrite Buffer Size vs. Capacity" should be titled "Capacity vs Usage vs Endurance".
  • rdugar - Friday, August 23, 2013 - link

    Am in the market for an SSD finally to replace an HDD on a Windows 7 laptop. Was almost set on the 128GB Samsung 840 Pro, but saw the comment on poor performance at almost full capacities.

    Price, reliability and endurance being the most important to me, which one should I go for?

    128GB Samsung 840 Pro? Approx. $119 after coupons, etc.
    120 GB Samsung 840 EVO? probably $99 or so
    256 GB Samsung 840 EVO? probably $165 or so
    Other brand and model?

    If I have to spend $120 odd, may as well spend another $50 and get double the capacity....
  • tfop - Saturday, August 24, 2013 - link

    I have a question regarding the NAND comparison table.
    How do these page and block sizes affect the right cluster size and alignment for the partition?
    If I am getting this right, the SSD 840 EVO would need an 8 KiB cluster size and a 2 MiB alignment.
  • Gnomer87 - Wednesday, August 28, 2013 - link

    I have a couple of questions:

    First, how much data is typically written to the average consumer HDD on a daily basis these days? I am thinking it's nowhere close to 50GiB. I guess what I am really interested in knowing is how much data the operating system (Windows 7) writes to the drive for various maintenance uses (if there are any besides defragmenting). In my mind, simply booting up the computer shouldn't mean any writes to the drive at all. Ergo, given my typical use, a 120GB SSD of that caliber should last a lifetime. Am I right in thinking this? I mean, reading doesn't affect the durability, right?

    Secondly: I've been considering getting an SSD for use as an OS drive for a long time; the reason of course was to speed up boot time. However, I've long wondered WHY Windows boots so slowly from HDDs in the first place. After all, the amount of data loaded during boot up isn't large. In my case the processes post-boot take up around 200 MB. Assuming the actual amount of data loaded from the drive is about the same, it really shouldn't take that long. My HDD is capable of reading up to 120 MB/s in optimal situations, so it's obvious the boot up process isn't optimal by a long shot.

    But why this slow? It can take over a minute before she (my computer) is done loading and starting all processes. Last semester I took a course in operating systems at the local university. I must confess I was a horrible student; I didn't show up much. But I do remember a few key elements, namely the scheduler and how this scheduler continually does context switches, letting each process use the CPU, and thus creating parallelism. Now what was really interesting was resource management. It's the scheduler that decides which process is currently running on the CPU, and the scheduler process is run in between each context switch, effectively letting each user process run and have access to resources, such as the hard drive. Now, what happens if all the processes want data from the drive at the same time? Would each process continually interrupt the other processes' loading of data, thus causing the HDD to seek constantly?

    Could that explain why booting takes such idiotic amounts of time? An extremely inefficient resource management that basically ignores the inherent seek-time related weaknesses of an HDD? SSDs, as we know, barely have seek-time, and thus the performance loss from context switching should be negligible.

    I know my cousin's SSD-powered computer boots near instantly: once it's done with the usual BIOS stuff, the OS is booted and ready for use in mere seconds. And yes, we are talking a completely cold boot here, no sleep or anything like that.