AnandTech Storage Bench 2011

Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.
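
In principle a trace records each I/O's type, offset and size, and playback simply reissues those operations against the drive under test. As a minimal sketch of the playback half (the CSV trace format and device path here are hypothetical illustrations, not our actual tooling):

    import csv, os, time

    def replay_trace(trace_path, device_path):
        # Hypothetical trace format, one CSV row per IO:
        #   timestamp_s, op ('R' or 'W'), offset_bytes, length_bytes
        # WARNING: the write path destroys data on device_path.
        fd = os.open(device_path, os.O_RDWR)
        start = time.monotonic()
        total_bytes = 0
        with open(trace_path) as f:
            for _ts, op, offset, length in csv.reader(f):
                offset, length = int(offset), int(length)
                if op == 'R':
                    os.pread(fd, length, offset)
                else:
                    os.pwrite(fd, b'\x00' * length, offset)
                total_bytes += length
        elapsed = time.monotonic() - start
        os.close(fd)
        print(f"{total_bytes / elapsed / 1e6:.1f} MB/s average")

    replay_trace("heavy_workload.trace", "/dev/sdX")  # hypothetical paths

This naive version issues IOs serially and back to back; a faithful player also reproduces the recorded queue depths and idle gaps, which is what makes the playback representative of real usage.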

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't come close to writing a single gigabyte of data, but that doesn't make writing only 4GB acceptable either.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown

    IO Size     % of Total
    4KB         28%
    16KB        10%
    32KB        10%
    64KB         4%

Only 42% of all operations are sequential; the rest range from pseudo-random to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place at a queue depth of 1.
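
Statistics like the size histogram and the sequential share fall straight out of the trace; queue-depth figures additionally require per-IO submit and completion timestamps. A sketch, reusing the hypothetical record format from earlier:

    from collections import Counter

    def trace_stats(records):
        # records: iterable of (op, offset_bytes, length_bytes) tuples.
        sizes = Counter()
        sequential = total = 0
        next_offset = None
        for op, offset, length in records:
            sizes[length] += 1
            # Count an IO as sequential if it starts exactly where the
            # previous one ended; anything else is some degree of random.
            if next_offset is not None and offset == next_offset:
                sequential += 1
            next_offset = offset + length
            total += 1
        histogram = {size: 100.0 * n / total for size, n in sizes.most_common()}
        return histogram, 100.0 * sequential / total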

Many of you have asked for a better way to characterize performance. Simply looking at IOPS doesn't say much on its own. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one over the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
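
Our exact reduction from raw trace results to these numbers isn't spelled out above, but a reasonable sketch (an assumption, not the published method) is to sum the bytes moved by each class of operation and divide by the run's wall-clock time:

    def average_rates(ios):
        # ios: iterable of (op, nbytes, start_s, end_s) completion records,
        # where op is 'R' or 'W' (hypothetical record format).
        totals = {'R': 0, 'W': 0}
        t_first, t_last = float('inf'), 0.0
        for op, nbytes, start, end in ios:
            totals[op] += nbytes
            t_first, t_last = min(t_first, start), max(t_last, end)
        wall = t_last - t_first

        def mbps(nbytes):
            return nbytes / wall / 1e6

        return mbps(totals['R']), mbps(totals['W']), mbps(totals['R'] + totals['W'])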

There's also a new light workload for 2011. This is a far more reasonable, typical everyday-use benchmark: lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still several times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

Average performance in our Heavy workload is improved over a single Vertex 3, but not tremendously. Where we see a truly significant performance increase is in the breakdown of reads/writes:

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

Write speed is much improved over a single 240GB Vertex 3. It's not quite the scaling you'd expect from a four-controller configuration, but it's the sort of performance improvement most workstation users would actually see.

The next three charts present the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during the entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD spent actually doing something (a sketch of how this can be computed follows the charts below):

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)
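
For the curious, busy time can be computed by taking each IO's [submit, complete] interval and merging overlaps so that concurrent IOs are only counted once; everything outside the merged spans is idle. A minimal sketch (an assumption about the method, not our published tooling):

    def disk_busy_time(intervals):
        # intervals: (start_s, end_s) spans during which an IO was outstanding.
        busy = 0.0
        cur_start = cur_end = None
        for start, end in sorted(intervals):
            if cur_end is None or start > cur_end:
                if cur_end is not None:      # idle gap: close the current run
                    busy += cur_end - cur_start
                cur_start, cur_end = start, end
            else:                            # overlaps: extend the current run
                cur_end = max(cur_end, end)
        if cur_end is not None:
            busy += cur_end - cur_start
        return busy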

38 Comments

  • SonicIce - Tuesday, June 28, 2011 - link

    lol last sentence >evironment
  • mmaestro - Tuesday, June 28, 2011 - link

    You can do a secure erase, but is there any sort of garbage cleanup tool they also supply for maintenance? I know Intel supplies these with their SSDs for when TRIM is unavailable, and that seems like something you ought to have to make up for this shortcoming.

    It's a shame OCZ started you all off with such an expensive offering. I'd be far more interested in the performance of the Revodrive 3 (no x2) 120gb or 240gb.
  • don_k - Tuesday, June 28, 2011 - link

    I started reading this review, saw the specs and that they removed the pci-x bridge and said to myself "This thing is gonna hit 2GB/s sequential." Close.

    Man is this thing fast. It is very interesting, especially in an enterprise environment, and that is due to the data workload, nothing else.

    I personally have been running the original 120GB Revodrive (2 Sandforce 1200 controllers) and I've had absolutely zero reliability issues running 24x7 in a Linux box that is my personal workstation. Bought September 2010, so almost a year old now. I'd call that 'good', so far. Tentatively.

    So what is the driver used for this under Linux? Is there one? If OCZ has made changes to the firmware then they need to release a driver for Linux, no?
  • skrewler2 - Tuesday, June 28, 2011 - link

    This thing probably won't be used by consumers. Why not at least include some more relevant DB benchmarks? TPC-C would be a good place to start.
  • FunBunny2 - Tuesday, June 28, 2011 - link

    me too.
  • hurricanepkt - Tuesday, June 28, 2011 - link

    OCZ uses a pretty confusing model number schema. Is either of these the RevoDrive 3?
    It seems very difficult to tell:
    OCZSSDPX-1RVD0240
    OCZSSDPX-1RVDX0240
  • Conficio - Wednesday, June 29, 2011 - link

    Dear Anand,
    first of all, thanks for doing what you do here. I have learned so much from your SSD (and other technology) coverage.

    As you really have this powerful voice in the industry and have always used it in favor of the users and consumers I'd like you to shed more light on supported platforms of SSDs as part of your reviews and tests.

    I have at work a Mac Pro with a couple of 120GB OCZ Vertex SSDs which got corrupted by a power outage Monday morning. Not a big deal as the data on them was of temporary nature.

    However, my attempt to use that opportunity to secure erase the drives and maybe update the firmware failed miserably:
    * OCZ does not make it easy to find the tools
    * OCZ makes you download a package for all platforms, from which you then have to select (w/o a guide) the "right" model and burn the tools to a CD
    * The instructions require setting BIOS modes for ATAPI, so they are PC only
    * The required BIOS mode (IDE) is again PC only
    * Finally I had to use a PC laptop to boot the CD, and it painfully failed to boot because a driver for the CD drive was missing.

    So to actually restore the drive to factory condition one needs some very specific hardware (if it works at all; I never got to that point).

    In my book this is clearly poor customer support and platform support. I'd like to see more details about the actually supported platforms for this new technology including the secure erase tooling (or other tooling that OCZ does provide, like some form of garbage collection tool).
  • neotiger - Wednesday, June 29, 2011 - link

    1) All the benchmarks are for desktop apps. I really doubt people would be buying a PCIe SSD for "Starcraft II & WoW" (!!) Far more likely people would be using it for MySQL, Solr, Hadoop, VMs. Can we get a benchmark based on those apps?

    2) You only tested random read at low queue depth and simply concluded that "Low queue depth random read performance is nothing to be impressed by." What about high queue depth? Random read IO at high concurrency and high queue depth is just about the *MOST* important metric for server SSD use -- yes, even more important than random write IO. Yet you completely skip over it. Any chance you can update the benchmark to include results at high concurrency and high queue depth?
  • alphadog4 - Thursday, June 30, 2011 - link

    This review would be more useful if you built a 4x80GB striped RAID on a SAS controller and compared against it. As it stands, I'm just not sure what I'm gaining here.
  • chadwilson - Thursday, June 30, 2011 - link

    I really really really would like to see some server testing done with server class devices. Specifically I'd like to see the testing done on a modern *nix kernel, as this type of device will likely see usage in this environment.
