AnandTech Storage Bench 2011

Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the state of most SSD benchmarks available today.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size    % of Total
4KB        28%
16KB       10%
32KB       10%
64KB       4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
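
If you're curious how statistics like these can be pulled out of a raw I/O trace, here's a minimal sketch of one way to do it. It assumes the trace has been exported to a CSV with op, offset, size and queue-depth columns; the format, column names and file name are illustrative assumptions, not a description of our actual trace tooling.

    # Sketch: derive the kind of statistics quoted above (IO size breakdown,
    # % sequential, average queue depth) from a raw I/O trace.
    # Assumes a hypothetical CSV with columns: op (R/W), offset_bytes,
    # size_bytes, queue_depth -- the real trace format isn't described here.
    import csv
    from collections import Counter

    def summarize_trace(path):
        sizes = Counter()
        sequential = 0
        total = 0
        qd_sum = 0
        next_expected = {}  # last end offset per op type, to flag sequential IOs

        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                op = row["op"]                     # "R" or "W" (assumed column)
                offset = int(row["offset_bytes"])  # assumed column
                size = int(row["size_bytes"])      # assumed column
                qd = int(row["queue_depth"])       # assumed column

                total += 1
                qd_sum += qd
                sizes[size] += 1

                # Count an IO as sequential if it starts exactly where the
                # previous IO of the same type ended.
                if next_expected.get(op) == offset:
                    sequential += 1
                next_expected[op] = offset + size

        print(f"{total} IOs, average queue depth {qd_sum / total:.3f}")
        print(f"sequential: {100 * sequential / total:.0f}% of all operations")
        for size, count in sizes.most_common(4):
            print(f"{size // 1024}KB: {100 * count / total:.0f}% of total")

    summarize_trace("heavy_workload_trace.csv")  # hypothetical file name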

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
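
As a rough illustration of the math behind these numbers, the sketch below turns per-IO results into average MB/s overall and split into reads and writes. The tuple format and the definition used (bytes transferred divided by the time spent servicing those IOs) are assumptions for the example, not a description of our actual playback tool.

    # Sketch: average MB/s, combined and split by reads vs. writes.
    # Assumes a hypothetical per-IO result log of (op, bytes, service_time_s).

    def average_rates(ios):
        """ios: iterable of (op, bytes_transferred, service_time_s) tuples."""
        totals = {"R": [0, 0.0], "W": [0, 0.0]}  # op -> [bytes, seconds]
        for op, nbytes, secs in ios:
            totals[op][0] += nbytes
            totals[op][1] += secs

        read_b, read_s = totals["R"]
        write_b, write_s = totals["W"]
        mb = 1e6
        return {
            "combined MB/s": (read_b + write_b) / mb / (read_s + write_s),
            "read MB/s": read_b / mb / read_s,
            "write MB/s": write_b / mb / write_s,
        }

    # Example with made-up numbers: two reads and one write.
    print(average_rates([("R", 4096, 0.0001), ("R", 65536, 0.0004), ("W", 4096, 0.0002)]))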

There's also a new light workload for 2011. This is a far more reasonable, typical everyday-use benchmark: lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still several times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

The breakdown of reads vs. writes tells us more of what's going on:

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

The next three charts represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is simply how long the SSD was busy doing something (a sketch of one way to compute this follows the charts):

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)
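
As a rough illustration of how busy time can be computed so that idle periods don't count, the sketch below merges overlapping per-IO intervals and sums only the covered time. The (issue, completion) timestamp pairs are an assumed input format, not our actual logs.

    # Sketch: "disk busy time" as the union of all intervals during which at
    # least one IO was outstanding, so idle gaps between IOs don't count.

    def disk_busy_time(intervals):
        """Sum the merged (non-overlapping) coverage of (start_s, end_s) IO intervals."""
        busy = 0.0
        cur_start = cur_end = None
        for start, end in sorted(intervals):
            if cur_end is None or start > cur_end:
                # A gap: close out the previous merged interval (idle time skipped).
                if cur_end is not None:
                    busy += cur_end - cur_start
                cur_start, cur_end = start, end
            else:
                # Overlapping or adjacent IO extends the current busy interval.
                cur_end = max(cur_end, end)
        if cur_end is not None:
            busy += cur_end - cur_start
        return busy

    # Example: two overlapping IOs plus one after an idle gap -> 0.3s busy, not 0.4s.
    print(disk_busy_time([(0.0, 0.2), (0.1, 0.25), (1.0, 1.05)]))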

Comments

  • Paazel - Thursday, June 23, 2011 - link

    Do you allow your computer to sleep? I had a Vertex 2 die on me, and forum speculation led me to believe that allowing my computer to sleep may have been the culprit.
  • Anand Lal Shimpi - Thursday, June 23, 2011 - link

    My personal machine that it's deployed in is a notebook that is allowed to sleep (and does so) regularly.

    I also don't do any of the odd stability optimizations on my testbeds either. Sleep is always enabled and definitely allowed to happen (I don't always catch my testbeds after they've finished a long test so they'll go off to sleep).

    While I do believe that earlier issues may have been sleep related, I'm not sure about this one in particular.

    Take care,
    Anand
  • Ryan Smith - Thursday, June 23, 2011 - link

    Just to throw in my own $0.02, although I put my Vertex 2 in a desktop, my results are the same as what Anand has seen. My desktop hybrid sleeps regularly, and I have not encountered any issues.
  • JasonInofuentes - Friday, June 24, 2011 - link

    +1. On an Agility 2 90GB, a MicroCenter SandForce 64GB drive and an Agility 2 40GB in a desktop, netbook and HTPC setting, all allowed to sleep. Indeed, I blame many of my PC-related issues on my own inability to sleep.
  • sam. - Saturday, June 25, 2011 - link

    I have a 120GB Vertex with the Indilinx controller and had mine die on me after about a year and a half of average use in my laptop. (Mind you, the RMA process was good, and they replaced it with a new identical SSD.) I had nearly 2,700 power-on cycles (putting my laptop to sleep multiple times a day) and 3.7 terabytes written to the SSD before it started corrupting registry files and BSODing.

    To be honest, a year and a half as a lifespan seems really bad for what was a high-end product, though from what I hear the SandForce controller is better in terms of reliability. I am still willing to let my laptop sleep, though I'm doing my best to write less to the SSD.
  • kahwaji_n - Thursday, June 23, 2011 - link

    I don't think so. Maybe if your computer hibernates a lot then that could be the reason, because when a computer sleeps the RAM still holds its data and very little has to be written to the disk drive, contrary to hibernation, where the contents of RAM are written back to the disk drive. If you have Windows 7 and an SSD in a RAID setup (where no TRIM command can be passed to the controller) and your computer hibernates periodically, run the performance index in Windows 7 and see how severely performance has degraded.
  • iwod - Thursday, June 23, 2011 - link

    I think the first few graphs/charts pretty much sum up what I have been saying. With double the sequential read and random read numbers, you only get less than a 10% performance difference. The bottleneck for the majority of our workloads has shifted back from SSD storage to CPU processing speed.

    Which means the best time to get an SSD is now, if you can afford it and the storage space is enough for a main OS drive.
  • L. - Thursday, June 23, 2011 - link

    Err... it's going to be dirt cheap pretty soon. I wouldn't spend "GFX bucks" on a storage device, tbh. (Seriously, for that price I prefer my 2TB WD Green RAID 10... makes so much more sense even though it does not serve the same purpose.)
  • khan321 - Thursday, June 23, 2011 - link

    Why no mention of the increased lifespan of 32nm NAND? This is a massive benefit to me over 25nm.
  • B3an - Thursday, June 23, 2011 - link

    Because Anand has pointed this out before. There's absolutely nothing to worry about regarding the lifespan of 25nm with a good controller, as it would last many, many decades. The NAND flash will lose its charge before this happens anyway.
