AnandTech Storage Bench 2011: Much Heavier

I didn't expect to have to debut this so soon, but I've been working on updated benchmarks for 2011. Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the current state of SSD benchmarking.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
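To put that 4GB figure in perspective, here's a rough spare-area calculation. The capacities are illustrative assumptions, not from the article: a drive built from 128GiB of raw NAND but sold as 120GB keeps the difference as spare area, several times more than these tests ever wrote.

```python
# Rough spare-area arithmetic with illustrative capacities (assumed, not
# from the article): raw NAND is counted in binary GiB, while the
# advertised capacity is in decimal GB; the gap is the drive's spare area.
raw_nand_gb = 128 * 2**30 / 10**9   # 128 GiB of flash expressed in decimal GB
advertised_gb = 120                 # user-visible capacity
spare_gb = raw_nand_gb - advertised_gb
print(round(spare_gb, 1))           # ~17.4 GB of spare area vs. a 4 GB test
```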

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
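For a sense of scale, the implied daily write load works out as follows (13 days is my assumption for "nearly two weeks"; the article doesn't give an exact figure):

```python
# Back-of-the-envelope daily write load; the 13-day figure is an assumption.
total_writes_gb = 106.32   # total data written by the benchmark
days = 13                  # "nearly two weeks" of constant usage
print(round(total_writes_gb / days, 1))   # roughly 8.2 GB of writes per day
```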

I'll be sharing the full details of the benchmark in upcoming SSD articles, but here are the basics:

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
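As a sketch of how these three metrics relate, here's a minimal summary over a hypothetical I/O trace. The record format and the numbers are invented for illustration; the actual trace playback tool isn't public.

```python
# Hypothetical trace summary: average MB/s over busy time, total busy
# seconds, broken out into reads, writes, and combined.
from dataclasses import dataclass

@dataclass
class IoRecord:
    op: str        # "read" or "write"
    nbytes: int    # transfer size in bytes
    busy_s: float  # time the disk spent servicing this request

def summarize(trace):
    """Return {group: (avg MB/s over busy time, total busy seconds)}."""
    stats = {}
    for group in ("read", "write", "combined"):
        recs = [r for r in trace if group == "combined" or r.op == group]
        busy = sum(r.busy_s for r in recs)
        mb = sum(r.nbytes for r in recs) / 1e6
        stats[group] = (mb / busy if busy else 0.0, busy)
    return stats

# Two invented requests: a fast 4 MB read and a slower 8 MB write.
trace = [IoRecord("read", 4 * 10**6, 0.02), IoRecord("write", 8 * 10**6, 0.08)]
print(summarize(trace)["combined"])   # 12 MB over 0.1 busy seconds -> 120 MB/s
```

Note how the combined figure (120 MB/s) sits between the read rate (200 MB/s) and the write rate (100 MB/s), weighted by busy time; that's why breaking out reads separately can reveal a drive this write-heavy test would otherwise undersell.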

There's also a new light workload for 2011. This is a far more reasonable benchmark of typical everyday use: lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still several times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

[Chart: AnandTech Storage Bench 2011 - Heavy Workload - Average Data Rate]

Overall performance is decidedly last generation. The 320 is within striking distance of the 510 but is slower overall in our heavy workload test.

The breakdown of reads vs. writes tells us more of what's going on:

[Chart: AnandTech Storage Bench 2011 - Heavy Workload - Average Read Speed]

[Chart: AnandTech Storage Bench 2011 - Heavy Workload - Average Write Speed]

The next three charts represent the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:

[Chart: AnandTech Storage Bench 2011 - Heavy Workload - Disk Busy Time]

[Chart: AnandTech Storage Bench 2011 - Heavy Workload - Disk Busy Time (Reads)]

[Chart: AnandTech Storage Bench 2011 - Heavy Workload - Disk Busy Time (Writes)]

Comments

  • jwilliams4200 - Wednesday, March 30, 2011 - link

    More OCZ spin doctoring. I was talking about using the drive heavily in an average consumer manner. For example, running CDM a couple of times. The notorious SandForce flaw causes the write speed of the SSD to slow down. Nowhere is that flaw documented, so it is really worse than just a design flaw; it is a bug. If it were a known design flaw, there would be something in the spec sheet stating under what conditions the write speed of the drive will slow down. But there is nothing in the official specifications about this SandForce bug.
  • sean.crees - Wednesday, March 30, 2011 - link

    And no other companies have undocumented issues?

    How about my first-gen Intel SSDs that now run at 1/10th their rated speeds? Where was it documented that this would happen to me?

    So by your reasoning we should toss all Intel SSDs out the window now, right?
  • jwilliams4200 - Wednesday, March 30, 2011 - link

    Intel G1 SSDs do not run at one-tenth their rated speed. There was a slowdown bug, perhaps a factor of two, but Intel long since fixed that with a firmware update.
  • kmmatney - Wednesday, March 30, 2011 - link

    Having several each of Intel and OCZ Agility SSDs, I can tell you that Intel's garbage collection is better, especially in first-generation SSDs. I also have an Intel 80GB G1, and it is a little slower than new, but it hasn't slowed down nearly as much as some of the OCZ drives I have. They are all still much faster than spinning hard drives. I really do wish that Intel would at least enable manual TRIM on the G1 drives with the SSD Toolbox - but at least the garbage collection is very good.
  • Frallan - Tuesday, March 29, 2011 - link


    I'm a happy Intel G2 user today, but my 160GB is running out of space because of bl**dy Steam and their weekend/holiday offers. I had waited for the next generation of Intel drives because of my experience with the G2, but with this they are 30% too expensive or 50% too slow.

    Intel has failed either in pricing, since this drive is a value drive, or in execution, since this is a slow drive. A year ago I would have stood in line already, but now it seems as if I'll have to go with SandForce.

    Just my 0.02 USD
    /F
  • fackamato - Tuesday, March 29, 2011 - link

    Same here. I also have the G2 160GB (good price on eBay almost a year ago). But the next one will not be this G3 drive, most likely something Sandforce.
  • marraco - Tuesday, March 29, 2011 - link

    Both the Intel 320 300GB and Intel 510 250GB are easily destroyed on price and/or capacity and/or performance by RAID 0 of Intel 320 40GB, 80GB, or Intel 510 120GB (in RAID 0 of 2, 3, or 4 units).

    I would only recommend Intel 320 80GB or Intel 510 120GB in different RAID 0 setups, but they were not tested.

    Given the scalability of SSD units, the price/performance ratio is one of the more important aspects of SSD reviews. I wish Anandtech were giving more attention to the best price/performance in RAID 0.
  • NCM - Tuesday, March 29, 2011 - link

    Remember what the zero in Raid 0 denotes...
  • NCM - Tuesday, March 29, 2011 - link

    ...It's the amount of data you'll have left if any one member of the array fails.
  • marraco - Wednesday, March 30, 2011 - link

    I don't care, since I don't store data on the SSD, and I have cheap terabytes of hard disk storage for automatic periodic images.
