AnandTech Storage Bench 2011

Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. Anand assembled the traces out of frustration with the majority of SSD benchmarks available today.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally we kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system. Later, however, we created what we refer to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. This represents the load you'd put on a drive after nearly two weeks of constant usage. And it takes a long time to run.
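
To put that figure in per-day terms, here's a quick back-of-the-envelope sketch; the 14-day window is our assumption for "nearly two weeks" rather than a number taken from the trace itself:

    # Back-of-the-envelope only: the 14-day window is an assumption, not a trace figure.
    total_writes_gb = 106.32   # total host writes in the Heavy Workload trace
    assumed_days = 14          # "nearly two weeks" of constant usage

    print(f"Implied write load: {total_writes_gb / assumed_days:.1f} GB/day")
    # -> roughly 7.6 GB of host writes per day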

1) The MOASB, officially called AnandTech Storage Bench 2011—Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading, and multitasking with all of this that you can really notice performance differences between drives.

2) We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II and WoW are both a part of the test), as well as general use stuff (application installing, virus scanning). We included a large amount of email downloading, document creation, and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011—Heavy Workload IO Breakdown
IO Size    % of Total
4KB        28%
16KB       10%
32KB       10%
64KB       4%

Only 42% of all operations are sequential; the rest ranges from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
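
For those curious how stats like these fall out of a trace, the short Python sketch below derives the IO-size breakdown, sequential percentage, and queue-depth figures from a hypothetical list of per-IO records. The record format is an assumption for illustration; this is not the tooling we actually use:

    from collections import Counter

    def summarize_trace(ios):
        """ios: list of dicts with 'offset' and 'size' in bytes, plus 'queue_depth'."""
        total = len(ios)
        size_counts = Counter(io["size"] for io in ios)

        # Count an IO as sequential if it starts where the previous one ended.
        sequential = sum(
            1 for prev, cur in zip(ios, ios[1:])
            if cur["offset"] == prev["offset"] + prev["size"]
        )

        return {
            "size_breakdown": {size: count / total for size, count in size_counts.items()},
            "pct_sequential": sequential / total,
            "avg_queue_depth": sum(io["queue_depth"] for io in ios) / total,
            "pct_at_qd1": sum(1 for io in ios if io["queue_depth"] == 1) / total,
        }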

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result we're going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time we'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, we will also break out performance into reads, writes, and combined. The reason we do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
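
To make those metrics concrete, here's a minimal sketch of how all three views could be computed from per-IO records. The (kind, bytes, start, end) record format is again hypothetical, and this is not the tool behind our charts:

    def report(ios):
        """ios: list of dicts with 'kind' ('read'/'write'), 'bytes', and 'start'/'end' times in seconds."""
        busy = {"read": 0.0, "write": 0.0}   # seconds spent servicing IOs (idle time excluded)
        moved = {"read": 0, "write": 0}      # bytes transferred

        for io in ios:
            busy[io["kind"]] += io["end"] - io["start"]
            moved[io["kind"]] += io["bytes"]

        # Simplification: overlapping IOs are summed rather than merged into busy intervals.
        total_busy = busy["read"] + busy["write"]
        total_bytes = moved["read"] + moved["write"]

        return {
            "avg_mb_s": total_bytes / total_busy / 1e6,            # combined average data rate
            "avg_read_mb_s": moved["read"] / busy["read"] / 1e6,
            "avg_write_mb_s": moved["write"] / busy["write"] / 1e6,
            "disk_busy_s": total_busy,                             # lower is better
        }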

There's also a new light workload for 2011. This is a far more reasonable, typical everyday-use benchmark. It has lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still multiple times more write intensive than what we were running last year.

We don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea. The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011—Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011—Average Data Rate

The 840 is quite average in our Heavy suite and performs similarly to most SandForce drives. The 840 Pro is a lot faster under heavy workloads, so it should be obvious by now why Samsung is offering two SSDs instead of one like they used to.

Heavy Workload 2011—Average Read Speed

Heavy Workload 2011—Average Write Speed

The next three charts represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:

Heavy Workload 2011—Disk Busy Time

Heavy Workload 2011—Disk Busy Time (Reads)

Heavy Workload 2011—Disk Busy Time (Writes)

Comments

  • xdrol - Monday, October 8, 2012 - link

    You, sir, need to learn how SSDs work. Static data is not static on the flash chip; the controller shuffles it around precisely because of wear levelling.
  • name99 - Tuesday, October 9, 2012 - link

    "I think Kristian should have made this all more clear because too many people don't bother to actually read stuff and just look at charts."

    Kristian is not the problem.
    There is a bizarre fraction of the world of tech "enthusiasts" who are convinced that every change in the world is a conspiracy to screw them over.

    These people have been obsessing about the supposed fragility of flash memory from day one. We have YEARS of real world experience with these devices but it means nothing to them. We haven't been screwed yet, but with TLC it's coming, I tell you.
    The same people spent years insisting that non-replaceable batteries were a disaster waiting to happen.
    Fifteen years ago they were whining about the iMac not including a floppy drive; for the past few years they have been whining about recent computers not including an optical drive.
    A few weeks ago we saw the exact same thing regarding Apple's new Lightning connector.

    The thing you have to remember about these people is
    - evidence means NOTHING. You can tell them all the figures you want, about .1% failure rates, or minuscule return rates or whatever. None of that counts against their gut feeling that this won't work, or, even better, an anecdote that some guy somewhere had a problem.
    - they have NO sense of history. Even if they lived through these transitions before, they cannot see how changes in 2000 are relevant to changes in 2012.
    - they will NEVER admit that they were wrong. The best you can possibly get out of them is a grudging acceptance that, yeah, Apple was right to get rid of floppy disks, but they did it too soon.

    In other words these are fools that are best ignored. They have zero knowledge of history, zero knowledge of the market, zero knowledge of the technology --- and the grandiose opinions that come from not actually knowing any pesky details or facts.
  • piiman - Tuesday, February 19, 2013 - link

    Then stick with Intel, not because they last longer but because they have a great warranty (5 years). My drive went bad at about 3.5 years and Intel replaced it, no questions asked, and did it very quickly. I sent it in and had a new one 2 days after they received my old one. Great service!
  • GTRagnarok - Monday, October 8, 2012 - link

    This is assuming a very exaggerated amplification of 10x.
  • Kristian Vättö - Monday, October 8, 2012 - link

    Keep in mind that it's an estimation based on the example numbers. 10x write amplification is fairly high for consumer workloads; most usually have something between 1-3x (though it gets a bit bigger when taking wear leveling efficiency into account). Either way, we played it safe and used 10x.

    Furthermore, the reported P/E cycle counts are the minimums. You have to be conservative when doing endurance ratings because every single die you sell must be able to achieve that. Hence it's completely possible (and even likely) that TLC can do more than 1,000 P/E cycles. It may be 1,500 or 3,000, I don't know; but 1,000 is the minimum. There is a Samsung 830 at XtremeSystems (had to remove the link as our system thought it was spam, LOL) that has lasted for more than 3,000 TiB, which would translate to over 10,000 P/E cycles (supposedly, that NAND is rated at 3,000 cycles).

    Of course, as mentioned at the end of the review, the 840 is something you would recommend to a light user (think about your parents or grandparents for instance), whereas the 840 Pro is the drive for heavier users. Those users are not writing a lot (heck, they may not use their system for days!), hence the endurance is not an issue.
  • A5 - Monday, October 8, 2012 - link

    Ah. I didn't know the 10x WA number was exceedingly conservative. Nevermind, then.
  • TheinsanegamerN - Friday, July 5, 2013 - link

    The 3.5 years assumes you are writing 36.5 GB of data a day. If the computer it is sitting in is mostly used for online work or document editing, you'll get far more. The laptop would probably die long before the SSD did.
    Also, this only applies to the TLC SSDs. MLC SSDs last 3 times longer, so the 840 Pro would be better for a computer kept longer than 3 years.
  • Vepsa - Monday, October 8, 2012 - link

    Might just be able to convince the wife that this is the way to go for her computer and my computer.
  • CaedenV - Monday, October 8, 2012 - link

    That is how I did it. My wife's old 80GB system drive died a bit over a year ago, and it was one of those issues of $75 for a decent HDD, or $100 for an SSD that would be 'big enough' for her as a system drive (60GB at the time). So I spent the extra $25, and it made her ~5-year-old Core2Duo machine faster (for day-to-day workloads) than my brand new i7 monster that I had just built (but was still using a traditional HDD at the time).

    I eventually got so frustrated by the performance difference that I ended up finally getting one for myself, and then, after my birthday came, I spent my fun money on a 2nd one for RAID0. It did not make a huge performance difference (I mean, it was faster in benchmarks, but doubling the speed of instant is still instant lol), but it did allow me to have enough space to load all my programs on the SSD instead of being divided between the SSD and HDD.
  • AndersLund - Sunday, November 25, 2012 - link

    Note that setting up a RAID with your SSDs might prevent the OS from seeing them as SSDs, meaning it won't send TRIM commands to the disks. My first (and current) gamer system consists of two Intel 80 GB SSDs in a RAID0 setup, but the OS (and Intel's toolbox) does not recognize them as SSDs.
