AnandTech Storage Bench 2011

Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of SSD benchmarks available today.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable either.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size    % of Total
4KB        28%
16KB       10%
32KB       10%
64KB       4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
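To make those numbers a little more concrete, here's a minimal Python sketch that draws synthetic IO operations with roughly the same characteristics as the published breakdown. It is purely illustrative and is not the actual recorded trace; the size weights, read/write split and sequential fraction are simply the figures quoted above turned into code.

```python
import random

# Illustration only: a synthetic IO generator that roughly matches the
# published Heavy Workload statistics. This is NOT the actual recorded
# trace -- just the numbers from the table and text above, turned into code.
# The four listed sizes cover 52% of operations; the remaining 48% use
# other sizes and are lumped into an "other" bucket here.
IO_SIZE_WEIGHTS = {"4KB": 28, "16KB": 10, "32KB": 10, "64KB": 4, "other": 48}

READ_OPS = 2_168_893          # from the test's operation counts
WRITE_OPS = 1_783_447
READ_FRACTION = READ_OPS / (READ_OPS + WRITE_OPS)   # ~55% reads
SEQUENTIAL_FRACTION = 0.42    # 42% sequential, the rest pseudo/fully random

def synthetic_op(rng=random):
    """Draw one hypothetical IO operation from the published distribution."""
    size = rng.choices(list(IO_SIZE_WEIGHTS),
                       weights=list(IO_SIZE_WEIGHTS.values()))[0]
    return {
        "size": size,
        "read": rng.random() < READ_FRACTION,
        "sequential": rng.random() < SEQUENTIAL_FRACTION,
    }

if __name__ == "__main__":
    for op in (synthetic_op() for _ in range(5)):
        print(op)
```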

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
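For the curious, the sketch below shows how those two metrics could be derived from a per-IO trace. The trace format and field names are hypothetical and this isn't the tooling actually used here; it's only meant to illustrate how average data rate and disk busy time relate.

```python
# A minimal sketch of how the two reported metrics relate, assuming a
# per-IO trace with hypothetical field names. This is not the actual
# playback/measurement tooling -- only an illustration of the math.

def summarize_trace(ops):
    """ops: list of dicts with 'bytes', 'is_read' and 'service_time_s'."""
    read_bytes = sum(op["bytes"] for op in ops if op["is_read"])
    write_bytes = sum(op["bytes"] for op in ops if not op["is_read"])

    # Simplification: treat IOs as non-overlapping. With queue depths above 1
    # a real tool would merge concurrent service intervals before summing.
    busy_time_s = sum(op["service_time_s"] for op in ops)

    total_mb = (read_bytes + write_bytes) / 1_000_000
    return {
        "avg_data_rate_mbps": total_mb / busy_time_s,   # higher is better
        "disk_busy_time_s": busy_time_s,                # lower is better
        "read_mb": read_bytes / 1_000_000,
        "write_mb": write_bytes / 1_000_000,
    }

# Three made-up operations, just to show the output shape:
example = [
    {"bytes": 4096,  "is_read": True,  "service_time_s": 0.0001},
    {"bytes": 65536, "is_read": False, "service_time_s": 0.0004},
    {"bytes": 16384, "is_read": True,  "service_time_s": 0.0002},
]
print(summarize_trace(example))
```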

There's also a new light workload for 2011. This is a far more reasonable, typical everyday-use benchmark: lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still several times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

I don't expect many desktop/workstation users to buy Z-Drive R4s, but this is the fastest SSD we've tested in our AnandTech Storage Bench 2011 suite. Note the imperfect scaling from 4 to 8 controllers, however: the increase over the RevoDrive 3 X2 is only 56%. Even with a strenuous desktop workload it's tough to fully load all 8 controllers in the Z-Drive R4.
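As a quick sanity check on that figure, going from 4 to 8 controllers for only a 56% gain works out to roughly 78% scaling efficiency. A tiny sketch of the arithmetic (the baseline throughput value is made up purely for illustration):

```python
# Back-of-the-envelope check on the 4-to-8 controller scaling noted above.
# The baseline number is made up purely for illustration; only the 56%
# increase and the controller counts come from the text.
revodrive_3_x2_mbps = 100.0                      # hypothetical 4-controller baseline
z_drive_r4_mbps = revodrive_3_x2_mbps * 1.56     # 56% faster, per the result above

actual_speedup = z_drive_r4_mbps / revodrive_3_x2_mbps    # 1.56x
ideal_speedup = 8 / 4                                     # 2.0x for perfect scaling
print(f"Scaling efficiency: {actual_speedup / ideal_speedup:.0%}")   # ~78%
```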

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

The next three charts represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is simply how long the SSD was busy doing something:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)

Comments

  • josephjpeters - Tuesday, September 27, 2011 - link

    And why is that? Because of the supposed high failure rates? Can you supply any real information about this?

    OCZ has less than a 1% failure rate. There may be more than 1% of customers who have "issues" but they aren't related to the drive. User error plays a pretty big role, but of course it MUST be OCZ's fault, right?

    Enterprise customers are professionals who know how to install serious hardware like this. And if they don't? OCZ will help install it for them on site. That's what enterprise companies do!

  • Troff - Tuesday, September 27, 2011 - link

    I don't believe that 1% number for a second. First of all, I read some return stats from a store that listed the RETURN rate at just below 3%. Secondly, I know of 5 very different systems with Vertex 3 drives in them. All 5 have recurring lockups/BSODs. The people who built and run these systems write their own filesystems; they are extremely knowledgeable. If they can't make these drives run properly, the drives are not fit to run outside of a lab environment.

    That said, I suspect it's as much Sandforce that's the problem as it is OCZ.
  • josephjpeters - Wednesday, September 28, 2011 - link

    I think it's an Intel problem. But NooOoOo... it can't be an Intel problem...
  • geddarkstorm - Wednesday, September 28, 2011 - link

    From all the data I've been seeing, it seems to be a SATA III issue, and an issue with motherboards not being ready for such high volumes of data flow. Mechanical drives can get nowhere near SSD speeds, and I don't think manufacturers were really expecting how fast they'd go on SATA III (almost pegging it out at times, and it's brand new!).
  • josephjpeters - Wednesday, September 28, 2011 - link

    Exactly. It's not an OCZ issue, it's the motherboard. When will someone step in and take the blame?
  • Beenthere - Tuesday, September 27, 2011 - link

    SSDs appear to be an on-the-job learning program for SSD manufacturers with all the issues that currently exist.

    I do not, however, believe they are selling SSDs at low margins.

    Enterprise won't use SSDs yet for the same reason informed consumers won't use them - they have serious reliability and compatibility issues. Unless you can afford lost data and a hosed PC, SSDs are not even an option at this point in time. Maybe in a couple more years they will sort out the problems that should have been resolved long ago?
  • dave1231 - Tuesday, September 27, 2011 - link

    I wonder really how much a consumer SSD costs to produce. Saying that slim margins will force companies out of business isn't true if there's a big markup on a 128GB drive. These same drives were hundreds of dollars last year and probably still aren't good value today. Unless you're saying consumers are waiting for the $0.50/GB drive.
  • josephjpeters - Tuesday, September 27, 2011 - link

    It's roughly 20% margins and the price of an SSD is directly related to the cost of Flash. Owning the controller IP is key in maintaining solid margins.

    Enterprise drives will drive flash demand, which will lead to economies of scale that result in cheaper Flash prices and consequently cheaper consumer SSDs.
  • ChristophWeber - Tuesday, September 27, 2011 - link

    Anand wrote: "I've often heard that in the enterprise world SSDs just aren't used unless the data is on a live mechanical disk backup somewhere. Players in the enterprise space just don't seem to have the confidence in SSDs yet."

    I use an SSD in an enterprise environment, a first gen Sandforce model from OWC. I do trust it with my main workload - database and web server in this case, but of course it is still backed up to mirrored hard drives nightly, just in case.

    I'd have no qualms deploying a Z-Drive R4 in one of our HPC clusters, but it'd be an RM88 model with capacitors, and I'd still run the nightly rsync to a large RAID unit. Now if someone would finally signal that they want to spend another $100k on a cluster, I'll spec a nice SSD solution for primary storage.
  • nytopcat98367 - Tuesday, September 27, 2011 - link

    Is it bootable? Can it be used for a desktop too?
