AnandTech Storage Bench 2011: Much Heavier

I didn't expect to have to debut this so soon, but I've been working on updated benchmarks for 2011. Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of SSD benchmarks available today.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
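For a sense of scale, here's a back-of-the-envelope sketch. The figures are assumptions about a typical consumer drive (120GB advertised capacity built from 128GiB of raw NAND), not measurements of any specific model:

```python
# Rough spare-area estimate for a hypothetical 120GB consumer SSD built
# from 128GiB of raw NAND; actual spare area varies by drive and vendor.
advertised_gb = 120                      # decimal GB, as sold
raw_gb = 128 * 2**30 / 10**9             # 128GiB of NAND expressed in GB

spare_gb = raw_gb - advertised_gb
print(f"raw NAND: {raw_gb:.1f}GB, spare area: ~{spare_gb:.1f}GB")
# -> raw NAND: 137.4GB, spare area: ~17.4GB
# A test that writes only 4GB never comes close to cycling through the
# spare area, so it can't show steady-state (written-full) behavior.
```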

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

I'll be sharing the full details of the benchmark in some upcoming SSD articles (again, I wasn't expecting to have to introduce this today, so I'm a bit ill-prepared), but here are some details:

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloads, and multitasking with all of this going on that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
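To make those three views concrete, here is a minimal sketch of the bookkeeping involved. This is not our actual test harness; the trace format, a list of (operation, bytes transferred, seconds the disk was busy) tuples, is a hypothetical stand-in:

```python
# Minimal sketch, not our actual tooling: compute average MB/s over the
# time the disk was busy, broken out by operation type. Each trace entry
# is a hypothetical (op, bytes_transferred, seconds_busy) tuple.
trace = [
    ("read", 1_048_576, 0.004),
    ("write", 524_288, 0.003),
    ("read", 4_096, 0.0002),
]

def average_mbps(ops, kinds=("read", "write")):
    """Average data rate in MB/s while servicing the given operation kinds."""
    total_bytes = sum(b for op, b, t in ops if op in kinds)
    busy_seconds = sum(t for op, b, t in ops if op in kinds)
    return (total_bytes / 1e6) / busy_seconds if busy_seconds else 0.0

print(f"combined: {average_mbps(trace):6.1f} MB/s")
print(f"reads:    {average_mbps(trace, ('read',)):6.1f} MB/s")
print(f"writes:   {average_mbps(trace, ('write',)):6.1f} MB/s")
```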

There's also a new light workload for 2011. This is a far more reasonable benchmark, typical of everyday use: lots of web browsing, photo editing (though with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still multiple times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

[Chart: AnandTech Storage Bench 2011 - Heavy Workload: Average Data Rate]

The Vertex 3 Pro on a 6Gbps interface is around 24% faster than Crucial's RealSSD C300. Note that the old SF-1200 (Corsair Force F120) can only deliver 60% of the speed of the new SF-2500. Over a 3Gbps interface the Vertex 3 Pro is quick, but only 15% faster than the next fastest 3Gbps drive. In order to get the most out of the SF-2500 you need a 6Gbps interface.
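The interface math backs that up. SATA's 8b/10b encoding puts 10 bits on the wire for every byte of payload, so as a rough sketch (ignoring command and protocol overhead):

```python
# SATA's 8b/10b encoding transmits 10 line bits per payload byte, so the
# theoretical ceiling is roughly line_rate_bits / 10, before any
# command or protocol overhead shaves it further.
for gbps in (3, 6):
    ceiling_mb_s = gbps * 1e9 / 10 / 1e6
    print(f"SATA {gbps}Gbps -> ~{ceiling_mb_s:.0f} MB/s ceiling")
# -> ~300 MB/s and ~600 MB/s: any drive that can sustain more than
# ~300 MB/s is throttled by a 3Gbps port.
```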

If we break out our performance results into average read and write speed we get a better idea of the Vertex 3 Pro's strengths:

[Chart: AnandTech Storage Bench 2011 - Heavy Workload: Average Read Speed]

The SF-2500 is significantly faster than its predecessor and all other drives in terms of read performance. Good read speed is important as it influences application launch time as well as overall system responsiveness.

[Chart: AnandTech Storage Bench 2011 - Heavy Workload: Average Write Speed]

Average write speed is still class leading, but this benchmark uses a lot of incompressible data; you'll note that the Vertex 3 Pro only averages 225.9MB/s, barely over its worst case write speed. It's in this test that I'm expecting the new C400 to do better than SandForce.
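As a reminder, SandForce controllers get much of their speed by compressing and deduplicating data before writing it to NAND, so incompressible data is their worst case. A quick illustration of the gap, using zlib as a stand-in (the controller's actual scheme is proprietary):

```python
import os
import zlib

# Random bytes stand in for incompressible data (media, encrypted files);
# zero-filled bytes stand in for highly compressible data.
samples = {"random": os.urandom(1 << 20), "zeros": bytes(1 << 20)}

for name, data in samples.items():
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compresses to {ratio:.1%} of original size")
# random: ~100% (no savings -> the controller writes it all, worst case)
# zeros:  <1%   (the controller writes far less to NAND, best case)
```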

The next three charts represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:

[Charts: AnandTech Storage Bench 2011 - Heavy Workload: Disk Busy Time for combined, read, and write operations]
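As an aside, here's a minimal sketch of the busy-time bookkeeping (the intervals are made-up numbers, not data from this test): only the windows where the drive is actually servicing a request are summed, so idle gaps between requests don't count.

```python
# Hypothetical (start_seconds, end_seconds) service intervals for requests
# in a trace; the gaps between them are idle time and are ignored.
intervals = [(0.0, 0.004), (0.010, 0.013), (5.0, 5.0002)]

busy = sum(end - start for start, end in intervals)
wall_clock = intervals[-1][1] - intervals[0][0]
print(f"busy: {busy * 1000:.1f} ms out of {wall_clock:.2f} s wall clock")
# A faster drive completes each request sooner, shrinking the busy total
# even though the trace (and its idle gaps) is replayed identically.
```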

Comments

  • HangFire - Friday, February 18, 2011

    As a software engineer, I can tell you that temp files are used instead of in-memory storage because the software was originally written that way, and no bug report concerning them will ever reach high-priority status, because it is ranked as a system configuration issue that can be fixed by the user.

    In other words, inertia of the "good enough" file-writing code (written when RAM was scarce) will prevent software from being rewritten for more optimal in-memory usage. The long backlog of truly important bugs taking precedence ensures that.

    You have a good point about ramdisks competing with disk caching. What is optimal depends on your application load, and to some extent your storage subsystem.
  • cdillon - Thursday, February 17, 2011

    The idea of moving the page-file to a RAM disk makes my head hurt. That's just retarded. You'd do better to turn off paging entirely, but that's also of questionable benefit because paging isn't really that hard on your SSD.

    Putting the temp directory there along with browser caches and other non-critical, frequently-written data is not a bad idea as long as you don't overdo it. The only problem with putting the temp directory on a volatile, software-based RAM drive is that any software installation that requires a reboot, with intermediate installer files kept in the temp directory that are expected to be there after the reboot, is going to fail.
  • Qapa - Saturday, February 19, 2011

    Hi Anand,

    I second this request :)

    A few changes though:
    - DISABLE the page file
    --- no matter whether you have an SSD or HDD, Windows writes to the page file even if you're only using 10% of your RAM, so disabling it decreases writes to disk, which does two things: increases the life of the disk and increases the speed of the system. Possibly both only marginally, but that's what benchmarks would show;
    - browser caches
    --- this is surely one of the most wasteful sources of disk writes, and it should account for an ever greater share of writes since we are ever more on the web
    - temporary folders
    --- as someone else mentioned, you could run into problems if you need an install-reboot-finish-install kind of installation
    --- and I agree with the software engineer - if it works it won't get changed, so programs will keep putting stupid stuff in files just because that was the way they did it at some point in time

    I think a 1-2GB RAM disk is more than enough for browser and temp files, considering an initial starting RAM size of 4-8GB. And yes, I do believe this improves system performance.

    Can you do the benchmarks?

    Thanks for the site - all the reviews - and I hope you can add this request as another review.
  • shawkie - Thursday, February 17, 2011

    I notice that the Intel SSD 510 has just started to appear on some retailer websites. It looks like it is SATA 6Gbps and comes in 120GB and 250GB versions. Pricing looks pretty high at this point.
  • BansheeX - Thursday, February 17, 2011

    Color me unexcited. SSDs are fast and reliable enough for people to want them. The price per GB isn't coming down anywhere near as fast as it is for other technologies. I paid $200 1.5 years ago for an 80GB SSD that goes for $180 today.
  • chrysrobyn - Friday, February 18, 2011

    Maybe 80GB for $200 is good enough for you, but I need twice that capacity, and I'm unwilling to pay more than $200. The next generation of SSDs that are coming out between now and May are going to come far closer to that price point for me.
  • seapeople - Friday, February 18, 2011

    The point is that 1.5 years ago the OP purchased an SSD for $2.50/GB which had anywhere from a 2x-30x performance improvement over its predecessor (HDDs), and here we are in 2011 reading a review of the next generation of SSDs, which use smaller, cheaper flash with half the available write-cycle life and are going to sell for... $5/GB with a 1.2x-3x performance improvement over their predecessors (initial SSDs).

    What's next? A solid state drive that reads and writes at 2,000 GB/s and sells for $10,000 for the 1 TB model? Oh I can't wait for that.
  • ABR - Saturday, February 19, 2011

    I have to agree. Year after year we see more and more mind-boggling performance improvements over regular HDDs, but little or no price drop. Perhaps the materials costs are just insurmountable and the replacement of HDDs won't be happening after all. SSDs will be like digital SLR cameras -- premium and professional use only, pricing a previous generation of amateur users out of a market they used to be in.
  • FunBunny2 - Saturday, February 19, 2011

    From what I see: with each feature-size drop in the NAND, the controller has to get increasingly byzantine, needs more cache, and so on just to maintain performance. Word is that IMFT 25nm includes an ECC engine on die!
  • Aernout - Saturday, February 19, 2011

    Maybe we will hear more about hybrid disks like the Momentus XT from Seagate in the future; for 'standard' users they can offer a lot.
    Right now it pairs 4GB of flash with 500GB of disk, but it's 10 months old.
    I think lots of people are hoping they will multiply those specs.
    I'm thinking of getting one for my laptop, but on the other side I'm not sure I will use 500GB on my laptop; maybe I should buy a 64GB SSD instead.
