AnandTech Storage Bench 2011

Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
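To put that 4GB figure in perspective, here's a rough back-of-the-envelope calculation. The drive geometry below (120GB of user capacity carved out of 128GiB of raw NAND) is an assumption for illustration, not any specific drive we've tested:

    # Rough spare-area estimate for a hypothetical consumer SSD that exposes
    # 120GB (decimal) of user capacity from 128GiB (binary) of raw NAND.
    raw_nand_gb = 128 * (1024**3) / (1000**3)   # ~137.4 GB in decimal units
    user_capacity_gb = 120
    spare_area_gb = raw_nand_gb - user_capacity_gb

    print(f"Spare area: ~{spare_area_gb:.1f} GB")
    print("Does a 4GB test fill the spare area?", "yes" if 4 >= spare_area_gb else "no")

In other words, a 4GB test can finish without the controller ever coming under real garbage collection pressure, and it's sustained pressure that separates good drives from bad ones.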

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
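As a rough sanity check on that figure (assuming 14 days as "nearly two weeks"; the point is just the order of magnitude):

    # Total writes in the heavy trace, spread over roughly two weeks of usage.
    total_writes_gb = 106.32
    days = 14                      # assumption: "nearly two weeks"
    print(f"~{total_writes_gb / days:.1f} GB of writes per day")   # ~7.6 GB/day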

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size     % of Total
4KB         28%
16KB        10%
32KB        10%
64KB        4%

Only 42% of all operations are sequential; the rest range from pseudo-random to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of all operations taking place at a queue depth of 1.
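Those two queue depth numbers can look contradictory at first glance. Here's a quick illustration of how they fit together, using a made-up queue depth histogram (we don't publish the full distribution, so the burst depths below are purely for illustration):

    # Hypothetical queue-depth histogram: 59% of IOs are issued at a queue
    # depth of 1, while the remaining 41% arrive in bursts at higher depths.
    # The burst depths and their shares are invented for this example.
    histogram = {
        1: 0.59,    # most operations see an empty queue ahead of them
        4: 0.11,
        8: 0.15,
        16: 0.15,   # short, deep bursts pull the average up quickly
    }

    average_qd = sum(depth * share for depth, share in histogram.items())
    print(f"Average queue depth: {average_qd:.2f} IOs")   # ~4.6, close to the measured 4.625

The takeaway is that a workload dominated by QD1 accesses can still produce a meaningful average queue depth once bursty multitasking periods are factored in.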

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
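For those curious about the mechanics, one reasonable way to arrive at an Average MB/s figure from a trace is sketched below. It assumes a simplified per-IO record (operation type, size, and the time the drive spent servicing it); our actual trace capture and playback tooling is more involved, but the idea is the same:

    from dataclasses import dataclass

    @dataclass
    class IORecord:
        op: str                  # "read" or "write"
        size_bytes: int
        service_seconds: float   # time the drive spent servicing this IO

    def average_rates(trace):
        """Return (combined, read, write) average MB/s over drive service time."""
        totals = {"read": [0, 0.0], "write": [0, 0.0]}   # [bytes, seconds] per op type
        for io in trace:
            totals[io.op][0] += io.size_bytes
            totals[io.op][1] += io.service_seconds

        def mbps(nbytes, secs):
            return (nbytes / 1e6) / secs if secs else 0.0

        read_mbps = mbps(*totals["read"])
        write_mbps = mbps(*totals["write"])
        combined_mbps = mbps(totals["read"][0] + totals["write"][0],
                             totals["read"][1] + totals["write"][1])
        return combined_mbps, read_mbps, write_mbps

    # Toy trace: two reads and a write (sizes and service times are invented)
    trace = [IORecord("read", 64 * 1024, 0.0004),
             IORecord("read", 4 * 1024, 0.0001),
             IORecord("write", 16 * 1024, 0.0002)]
    print(average_rates(trace))

Splitting reads out from writes matters precisely because the heavy workload is so write intensive: a drive with excellent read behavior can still look mediocre on the combined number alone.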

There's also a new light workload for 2011. This is a far more reasonable, typical everyday-use benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still several times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

I don't expect many desktop/workstation users to buy Z-Drive R4s, but this is the fastest SSD we've tested in our AnandTech Storage Bench 2011 suite. Note the imperfect scaling from 4 to 8 controllers, however: the increase over the RevoDrive 3 X2 is only 56%. Even with a strenuous desktop workload it's tough to fully load all 8 controllers in the Z-Drive R4.
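To put a rough number on that scaling observation (treating controller count as the only variable, which is obviously a simplification):

    # Scaling efficiency going from 4 controllers (RevoDrive 3 X2) to 8 (Z-Drive R4),
    # given the 56% gain observed in the heavy workload.
    observed_speedup = 1.56
    ideal_speedup = 8 / 4          # 2x if all eight controllers were kept busy
    print(f"Scaling efficiency: {observed_speedup / ideal_speedup:.0%}")   # 78%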

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

The next three charts represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idles; this is just how long the SSD was busy doing something:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)
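If you're wondering how busy time can be totalled up so that idle periods don't count, the usual approach is to treat each IO as a time interval and merge the overlaps before summing. The snippet below is a generic sketch of that idea (the intervals are invented), not our actual analysis script:

    def disk_busy_seconds(intervals):
        """Sum the time during which at least one IO was outstanding.

        `intervals` is a list of (start, end) tuples in seconds. Overlapping IOs
        (queue depth > 1) are merged so they aren't double counted, and idle gaps
        between IOs contribute nothing.
        """
        busy = 0.0
        current_start = current_end = None
        for start, end in sorted(intervals):
            if current_end is None or start > current_end:
                if current_end is not None:        # close out the previous busy stretch
                    busy += current_end - current_start
                current_start, current_end = start, end
            else:                                   # overlapping/back-to-back IO: extend it
                current_end = max(current_end, end)
        if current_end is not None:
            busy += current_end - current_start
        return busy

    # Two overlapping IOs, a long idle gap, then one more IO:
    print(disk_busy_seconds([(0.0, 0.004), (0.002, 0.005), (0.100, 0.101)]))  # ~0.006s busy

The shorter that merged total, the sooner the drive returns to idle, which is exactly what the disk busy time charts capture.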

Comments

  • jdietz - Tuesday, September 27, 2011 - link

    I looked up the prices for these on Google Shopping - $7 / GB.

    These offer extreme performance, but probably only an enterprise server can ever benefit from this much performance. Enthusiast users of single-user machines should probably stick with RevoDrive X2 for around $2 / GB.
  • NCM - Tuesday, September 27, 2011 - link

    Anand writes: "During periods of extremely high queuing the Z-Drive R4 is a few orders of magnitude faster than a single drive."

    Umm, a bit hyperbolic! With "a few" meaning three or more, the R4 would need to be at least 1000 times faster. That's nowhere near the case.
  • JarredWalton - Tuesday, September 27, 2011 - link

    Correct. I've edited the text slightly, though even a single order of magnitude is huge, and we're looking at over 30x faster with the R4 CM88 (and over two orders of magnitude faster on the service times for the weekly stats update).
  • Casper42 - Tuesday, September 27, 2011 - link

    Where do you plan on testing it? (EU vs US)

    Have you tried asking HP for an "IO Accelerator"? (It's a Fusion card)

    I worked with a customer a few weeks ago near me and they were testing 10 x 1.28TB Fusion IO cards in 2 different DB Server upgrade projects. 8 in a DL980 for one project and 2 in a DL580g7 for a separate project.
  • Movieman420 - Tuesday, September 27, 2011 - link

    I see all these posts taking all kinds of punishment. Please try and remember that ANY company that uses SandForce has the SAME issues, but since OCZ is the largest they catch all the flak. If anything, SF needs to beef up validation testing first and foremost.
  • josephjpeters - Wednesday, September 28, 2011 - link

    Like I said before, it's really more of a motherboard issue with the SATA ports than it is a SF/OCZ issue. They designed to spec...
  • Yabbadooo - Tuesday, September 27, 2011 - link

    I note that on the Windows Live Team blog they write that they are moving to flash-based blob storage for their file systems.

    Maybe they will use a few of these? That would definitely be a big vote of confidence, and the testimonials from that would be influential.
  • Guspaz - Tuesday, September 27, 2011 - link

    I have to wonder at the utility of these drives. They're not really PCIe drives; they're four or eight RAID-0 SAS drives and a SAS controller on a single PCB. They're still going to be bound by the limitations of RAID-0 and SAS. There are proper PCIe SSDs on the market (Fusion-io makes some), but considering the price per gig, these Z-Drives seem to offer little benefit other than saving space.

    Why should I spend $11,200 on a 1600GB Z-Drive when I can spend about the same on eight OCZ Talos SAS drives and a SAS RAID controller, and get 3840GB of capacity? Or spend half as much on eight OCZ Vertex 3 drives and a SATA RAID controller, and get 1920GB of capacity?

    I'm just trying to see the value proposition here. Even with enterprise-grade SSDs (like the Talos) and RAID controllers, the Z-Drive seems to cost twice as much per gig as OCZ's own products.
  • lorribot - Tuesday, September 27, 2011 - link

    I'm with you on this.

    What happens if a controller toasts itself? Where's your data then?
    I would rather have smaller hot swap units sitting behind a raid controller.
    It is a shame OCZ couldn't supply such a setup for you to compare performance, or perhaps they know it would be comparable.

    Yes, it is a great bit of kit, but if I can't RAID it then it is of no more use to me than as a cache, and RAM is better at that, and a lot cheaper; $11000 buys some big quantities of DDR3.

    In the enterprise space, security of data is king; speed is secondary. Losing data means a new job; slow data just gets you moaned at. That is why SANs are so well used. Having all your storage in one basket that could fail easily is a big no-no, and has been for many years.
  • Guspaz - Tuesday, September 27, 2011 - link

    To be fair, you can RAID it in software if required. You could RAID a bunch of USB sticks if you really wanted to. There are more than a few enterprise-grade SAN solutions out there that ultimately rely on Linux's software RAID, after all.
