AnandTech Storage Bench 2011

Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of SSD benchmarks available today.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs; a 120GB consumer drive, for example, typically carries several gigabytes of raw NAND beyond its user capacity as spare area. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

Here's a high level overview:

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is at its highest. There is a lot of downloading and application installing during the course of this test. My thinking was that it's during application installs, file copies, downloads and multitasking that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

Digging a little deeper, the test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown

IO Size    % of Total
4KB        28%
16KB       10%
32KB       10%
64KB       4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
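
For readers who want to compute this kind of breakdown from their own traces, here's a minimal Python sketch of how these statistics can be derived. The per-IO record format used here (offset, size, queue depth) is an assumption for illustration only; it isn't the format our actual trace capture produces.

```python
# Minimal sketch: computing the trace statistics quoted above. Each I/O is
# assumed to be a dict with 'offset' and 'size' in bytes plus the
# 'queue_depth' seen when it was issued; this record format is an
# illustration, not the actual Storage Bench trace format.
from collections import Counter

def characterize(trace):
    total = len(trace)

    # I/O size breakdown as a percentage of all operations
    sizes = Counter(io["size"] for io in trace)
    for size, count in sorted(sizes.items()):
        print(f"{size // 1024}KB: {100 * count / total:.0f}%")

    # Count an I/O as sequential if it begins where the previous one ended
    seq = sum(cur["offset"] == prev["offset"] + prev["size"]
              for prev, cur in zip(trace, trace[1:]))
    print(f"sequential: {100 * seq / total:.0f}%")

    # Average queue depth, and the share of I/Os issued at a depth of 1
    avg_qd = sum(io["queue_depth"] for io in trace) / total
    qd1 = sum(io["queue_depth"] == 1 for io in trace)
    print(f"average queue depth: {avg_qd:.3f}, at QD1: {100 * qd1 / total:.0f}%")
```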

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
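
To make the relationship between these metrics concrete, here's a minimal Python sketch of one plausible way to derive them from a playback log; the per-IO record format is an assumption for illustration, not the actual output of our playback tool. A faster drive moves the same data in less busy time, which is exactly what the disk busy graphs visualize.

```python
# Minimal sketch: deriving the Storage Bench result metrics from a playback
# log. Each record is assumed to be (direction, bytes, service_time_s);
# the real playback tool's log format is not reproduced here.

def summarize(log, direction=None):
    """Average MB/s and disk busy time, optionally for 'read' or 'write' only."""
    ios = [r for r in log if direction is None or r[0] == direction]
    total_bytes = sum(b for _, b, _ in ios)
    # Disk busy time: total time spent servicing requests, with all idle
    # gaps between requests excluded.
    busy_s = sum(t for _, _, t in ios)
    avg_mbps = (total_bytes / 1e6) / busy_s   # higher is better
    return avg_mbps, busy_s

# Combined, read-only and write-only views of the same (toy) run
log = [("read", 4096, 0.0001), ("write", 65536, 0.0004)]
for d in (None, "read", "write"):
    print(d, summarize(log, d))
```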

There's also a new light workload for 2011. This is a far more reasonable benchmark of typical, everyday use: lots of web browsing, photo editing (with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still several times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

[Chart: AnandTech Storage Bench 2011 - Heavy Workload, Average Data Rate]

The breakdown of reads vs. writes tells us more of what's going on:

[Chart: AnandTech Storage Bench 2011 - Heavy Workload, Average Read Speed]

[Chart: AnandTech Storage Bench 2011 - Heavy Workload, Average Write Speed]

The next three charts represent the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:

[Chart: AnandTech Storage Bench 2011 - Heavy Workload, Disk Busy Time]

[Chart: AnandTech Storage Bench 2011 - Heavy Workload, Disk Busy Time (Reads)]

[Chart: AnandTech Storage Bench 2011 - Heavy Workload, Disk Busy Time (Writes)]

Comments

  • tech6 - Friday, May 6, 2011

    Thanks for another thorough review. I noticed that the Samsung 470 (and its OEM equivalent) is getting very popular. Any chance of a review?
  • Anand Lal Shimpi - Friday, May 6, 2011

    I've been meaning to do a roundup focusing on 3Gbps drives and diving deeper into the 470; it's just a matter of finding the time. It's definitely on the list though.

    Take care,
    Anand
  • darwinosx - Friday, May 6, 2011

    With Samsung's drive division having been sold to Seagate, you have to wonder what happens to service and support of Samsung-branded drives.
  • TotalLamer - Friday, May 6, 2011

    ...any chance the Vertex 3s won't spontaneously brick themselves whenever they damned well choose like the Vertex 2s did? Such horridly unreliable drives.

    No Intel, no care.
  • jmunjr - Friday, May 6, 2011

    A lot of ultraportable laptops use 7mm height drives. From what I have read, the Vertex series drives are 9.5mm and cannot be easily modded to fit. The Crucial, however, has a spacer that can be easily removed (though it voids the warranty). It really would be nice to have more SSD choices in the 7mm height option.
  • jcompagner - Friday, May 6, 2011

    I think a warning must also be given, because these drives really don't work quite right out of the box with everything at defaults.

    I have a Dell XPS17 (L702x) and the Vertex 3 240GB, and installing Windows 7 (SP1) is quite hard. The default AHCI driver in Windows really doesn't work with the Vertex 3. Yes, you can install in SATA (IDE) mode and that kind of works, but you want AHCI, and setting that after install is not that easy (you really have to make sure the latest Intel drivers are there and then tweak a registry setting).
    The best way around this is to use the Intel F6 driver right from the Win7 installer. That will help, and then you can install it in one go.
    The thing is that OCZ sees this as a problem with the drivers or the system. I completely disagree; there are many people complaining about this constantly on their forum. The Intel drives that I also have never show these problems; they install just fine. So it is really OCZ who should look into why their drives are not compatible.

    Besides that, after you have cleared this hurdle you of course still have the LPM registry tweak to do, to kill LPM mode. But this is not only an OCZ/Vertex problem; the Crucial C300 has the same issue. Again, I haven't seen this problem with the Intel SSDs either.

    I just think that the Vertex doesn't behave completely correctly with all the SATA commands that are out there. I really hope they can fix that (they are getting a bit of a bad name; I know enough forums that really don't recommend OCZ because of all this).

    But after all these install troubles I must say it is fast and works quite well.
    I don't really like that a MAX IOPS version is now coming for the 240GB as well! I am curious how much faster that will be.

    One question: if the number of dies tells everything about the speeds, why is the 480GB then slower (at least on paper)?
  • Ammaross - Friday, May 6, 2011

    "One question: If the number of die's tells everything about the speeds, why is the 480GB then slower? (at least on paper) "

    You didn't read the interleaving example then. If 2 dies per chip and 2 chips per channel fill up 4 of the 5 theoretical "slots" in the 5-clock example, imagine what 4 dies per chip and 2 chips per channel does, trying to cram/schedule 8 dies into 5 slots. Then think about what happens if all requests are going to one or two dies on the same package. It's just a matter of clogging the pipes or burning slots because a package is already processing a request. You can think of it like the 8x/8x/4x SLI/CF situation on P67, where that third graphics card just doesn't help much at all because it is data-starved, or like the overhead of SLI/CF itself. (See the toy model after this thread for a sketch of the slot argument.)
  • jcompagner - Saturday, May 7, 2011

    But still, why is it even slower, and not just the same speed as the 240GB?
  • darwinosx - Friday, May 6, 2011

    It seems from public comments on Newegg and elsewhere that there are a lot of unhappy owners of this drive: a high failure rate, and many people continue to comment on how poor OCZ's tech support is, which is the opposite of what this review says.
  • Lingyis - Friday, May 6, 2011

    Is there something AnandTech can test regarding reliability? I had 3 OCZ Vertex drives from a few years ago and 2 of them developed bad sectors after about 6 months of use. Whatever time I saved thanks to the SSD was more than wiped out by the time spent reinstalling software, plus every run of chkdsk and related commands. I have been quite reluctant to use SSDs since; I went with a good ol' HDD in my new laptop and chkdsk has yet to reveal any errors.

    Some time ago I read on this site that, officially from Intel, failure rates are something like 1.2% for non-Intel drives and 0.5% for Intel drives. Obviously, massive amounts of data are required to get that kind of statistic, but if you can figure out some way of testing reliability on these SSDs, that would be much more important to people like me, since SSDs are fast enough for most practical purposes. Perhaps you could run these drives intensely over a period of 30 days (probably more) and see if any data corruption sets in. If there's a way to limit reads/writes to a certain region of the SSD, so much the better, though I suppose the controller might have a say in that.
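
On the interleaving question in the thread above: the following is a toy Python model of the "slots" argument, purely my own illustration rather than any controller's actual scheduler. It assumes a channel can issue one request per cycle and that a die then stays busy for five cycles (the 5-clock figure from the example).

```python
# Toy model of NAND interleaving on a single channel; purely illustrative,
# not any controller's actual scheduler. One request can be issued per
# cycle, and a die then stays busy for DIE_BUSY cycles, so about DIE_BUSY
# dies are enough to saturate the channel.
DIE_BUSY = 5

def throughput(num_dies, target, cycles=10_000):
    """target(i) -> which die the i-th request wants."""
    busy_until = [0] * num_dies   # cycle at which each die becomes free
    issued = 0
    for cycle in range(cycles):
        die = target(issued)
        if busy_until[die] <= cycle:      # die is free: issue the request
            busy_until[die] = cycle + DIE_BUSY
            issued += 1
        # otherwise the head-of-line request stalls on its busy die
    return issued / cycles                # requests per cycle

# Perfect round-robin: 5 dies already saturate the channel...
print(throughput(5, lambda i: i % 5))     # ~1.0 request/cycle
# ...so doubling to 8 dies adds no headline throughput
print(throughput(8, lambda i: i % 8))     # ~1.0 request/cycle
# If traffic piles onto 2 dies, throughput collapses regardless of die count
print(throughput(8, lambda i: i % 2))     # ~0.4 request/cycle
```

Note what the toy model does and doesn't show: past the saturation point, extra dies add no channel throughput, and traffic concentrated on a couple of dies hurts no matter how many dies exist. It doesn't by itself explain why the 480GB would be slower on paper rather than merely no faster.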
