AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
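To put that 4GB figure in context, here's a rough back-of-the-envelope sketch. The 256GiB/256GB numbers are an illustrative assumption about a typical consumer drive that keeps the GiB/GB gap as over-provisioning, not the specs of any drive in this review:

```python
# Spare area on a hypothetical consumer SSD that exposes 256 decimal GB
# of user capacity from 256GiB of raw NAND (an illustration only).
raw_nand_gb = 256 * 2**30 / 1e9   # 256GiB of raw NAND ~= 274.9 decimal GB
user_capacity_gb = 256.0          # advertised user capacity in GB

spare_gb = raw_nand_gb - user_capacity_gb
print(f"spare area ~= {spare_gb:.1f}GB ({spare_gb / raw_nand_gb:.1%} of raw NAND)")
# ~= 18.9GB (6.9%) -- so a benchmark that writes only 4GB never even
# fills the spare area, let alone forces steady-state garbage collection.
```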

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is at its highest. There is a lot of downloading and application installing during the course of this test. My thinking was that it's during application installs, file copies, downloads and heavy multitasking that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.
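Neither our capture tool nor the playback engine is public, but conceptually the playback half works something like the sketch below. The record format, sample values and device path are all illustrative assumptions, not our actual tooling:

```python
import os
import time

# One record per I/O: (seconds since the previous I/O, 'R' or 'W',
# byte offset, transfer size). This layout is an assumption.
trace = [
    (0.000, 'R', 0x0000000, 4096),
    (0.002, 'W', 0x8000000, 65536),
    # ...millions more entries in a real trace
]

def replay(trace, dev_path):
    """Replay a captured I/O trace against a raw block device,
    preserving the recorded idle time between operations."""
    fd = os.open(dev_path, os.O_RDWR)
    try:
        for delay, op, offset, size in trace:
            if delay > 0:
                time.sleep(delay)           # reproduce think time
            if op == 'R':
                os.pread(fd, size, offset)  # read data is discarded
            else:
                # payload contents don't matter for timing purposes
                os.pwrite(fd, b'\0' * size, offset)
    finally:
        os.close(fd)

# replay(trace, '/dev/sdX')  # destructive -- scratch drives only!
# Note: this simplistic loop issues everything at queue depth 1; a real
# player would also reproduce queued, overlapping I/O.
```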

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown

IO Size    % of Total
4KB        28%
16KB       10%
32KB       10%
64KB       4%

(These are the most common transfer sizes; the remaining operations are spread across other sizes.)

Only 42% of all operations are sequential; the rest range from pseudo-random to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place at a queue depth of 1.
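For the curious, classifying a trace this way is straightforward. Here's a small sketch using the same illustrative record format as above; the sequential-vs-random heuristic is my own simplification, not necessarily the exact classifier behind the numbers quoted here:

```python
from collections import Counter

def trace_stats(trace):
    """Histogram transfer sizes and flag an I/O as sequential when it
    starts exactly where the previous I/O of the same kind ended.
    Uses the illustrative (delay, op, offset, size) records from the
    playback sketch above."""
    sizes = Counter()
    sequential = 0
    last_end = {'R': None, 'W': None}
    for _delay, op, offset, size in trace:
        sizes[size] += 1
        if last_end[op] == offset:
            sequential += 1
        last_end[op] = offset + size
    total = len(trace)
    print(f"sequential: {sequential / total:.0%} of {total} I/Os")
    for size, count in sizes.most_common():
        print(f"{size // 1024:>3}KB: {count / total:.0%}")
```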

Many of you have asked for a better way to characterize performance; simply looking at IOPS doesn't say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one over the course of this test. Finally, I will also break out performance into reads, writes and combined. I do this to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
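Both metrics fall out of the per-I/O completion records. The sketch below assumes an (op, bytes, service_seconds) record layout and treats Average MB/s as total bytes over total busy time; neither assumption is spelled out in our tooling, so take this as a conceptual illustration:

```python
def summarize(completions):
    """Reduce per-I/O completion records -- (op, bytes, service_seconds),
    an assumed layout -- to the two headline metrics. Caveat: with queued
    I/O the busy intervals overlap, so a real implementation would merge
    intervals on a timeline instead of naively summing service times."""
    busy = {'R': 0.0, 'W': 0.0}
    moved = {'R': 0, 'W': 0}
    for op, nbytes, service in completions:
        busy[op] += service
        moved[op] += nbytes
    total_busy = busy['R'] + busy['W']
    total_bytes = moved['R'] + moved['W']
    print(f"disk busy time: {total_busy:.1f}s "
          f"(reads {busy['R']:.1f}s, writes {busy['W']:.1f}s)")
    print(f"average data rate: {total_bytes / total_busy / 1e6:.1f} MB/s")
```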

There's also a new light workload for 2011. This is a far more reasonable, typical everyday-use benchmark: lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still several times more write intensive than what we were running in 2010.

As always, I don't believe these two benchmarks alone are enough to characterize the performance of a drive, but hopefully, along with the rest of our tests, they help paint a more complete picture.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

The Extreme II's peak performance isn't as good as the 840 Pro or OCZ Vector, but it's definitely very quick.

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

The next three charts represent the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during the entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD spent actively servicing I/O:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)

Comments

  • dsumanik - Tuesday, June 4, 2013 - link

The benches on this drive are good.....not great, and I don't think the opening bias is necessary. Who runs any disk at capacity 24/7? Perhaps some people temporarily... But a drive that's full 24/7???

    Only a fool.

Kudos to SanDisk for making a competitive offering, but please, AnandTech, keep the bias out of the reviews....especially when it's not warranted.

    Storage bench is great, but it's not the only metric.

Haswell is good, not great. But if you're rocking a 2600K from 2 years ago? Meh.

Where are the legendary power savings? Why don't we have 4GHz+ SKUs? 8 cores? 64GB RAM support? Quick Sync degraded, lol!! Good job on Iris Pro. Why can't I buy it and slap it into an enthusiast board?

    Yet you read this review and the haswell review and come away feeling positive.

    Real life:

    Intel,
A mild upgrade in IPC, higher in-use TDP, 2-year-old CPUs are still competitive

    Sandisk,
    Mixed bag of results, on unproven firmware.
  • Death666Angel - Tuesday, June 4, 2013 - link

Why do you keep ignoring the Samsung 840 Pro with increased spare area when it comes to consistency? It seems to me to be the best drive around. And if you value and know about consistency, it seems pretty straightforward to increase the spare area, and you should have the ability to do so as well.
  • seapeople - Wednesday, June 5, 2013 - link

    Agreed, it looks like a Samsung 840 Pro that's not completely full would be the performance king in every aspect - most consistent (check the 25% spare area graphs!), fastest in every test, good reliability history, and the best all around power consumption numbers, especially in the idle state which is presumably the most important.

    Yet this drive is virtually ignored in the review, other than the ancillary mention in all the performance benchmarks it still wins, "The SanDisk did great here! Only a little behind all the Samsung drives... and as long as the Samsung drives are completely full, then the SanDisk gets better consistency, too! The SanDisk is my FAVORITE!"

    The prevailing theme of this review should probably be "The SanDisk gives you performance nearly as good as a Samsung at a lower price." Not, "OMG I HAVE A NEW FAV0RIT3 DRIVE! Look at the contrived benchmark I came up with to punish all the other drives being used in ways that nobody would actually use them in..."

    Seriously, anybody doing all that junk with their SSD would know to partition 25% of spare area into it, which then makes the Samsung Pro the clear winner, albeit at a higher cost per usable GB.
  • FunBunny2 - Tuesday, June 4, 2013 - link

To the extent that the "cloud" (re-)creates server-dense/client-thin computing, how well an SSD behaves in today's "client" doesn't matter much. Server workloads, with lots of random operations, will be where storage happens. Anand is correct to test SSDs under more server-like loads. As many have figured out, enterprise HDDs are little different from consumer parts. "Cloud" vendors, in order to make money, will segue to "consumer" SSDs. Thus, we do need to know how well they behave under "server" loads; they will see them in any case. Clients will come with some amount of flash (not necessarily even on current file system protocols).
  • joel4565 - Tuesday, June 4, 2013 - link

    Any word on whether this drive will be offered in a 960 GB capacity for a reasonable price in the near future?

This looks like the best performing drive yet reviewed, but I doubt I will see that big of a difference from my 120GB Crucial M4 in day-to-day usage. I really don't think most of us will see a large difference until we go to a faster interface.

So unless things drastically change in the next few months, I think my next drive will be the Crucial M500 960GB. Yes, it will not be as consistent or quite as fast as the SanDisk Extreme II, but I won't have to worry about splitting my files, or moving Steam games from my 7200RPM drive to the SSD if they have long load times.
  • clepsydrae - Wednesday, June 5, 2013 - link

Question for those more knowledgeable: I'm building a new DAW (4770K, Win 8) which will also be used for development (Eclipse in Linux). Based on earlier AnandTech reviews I ordered a 128GB 840 Pro for use as the OS drive, Eclipse workspace directory and the like. Reading this article, I'm not sure if I should return the 840 Pro for the SanDisk... the 840 Pro leads it in almost all the metrics except the one that is the most "real-world" and which seems to mimic what I'll be using it for (i.e., Eclipse).

    Opinions?
  • bmgoodman - Wednesday, June 5, 2013 - link

    I gave up on SanDisk after they totally botched TRIM on their previous generation drive. They did such a poor job admitting it and finally fixing it that it left a bad taste in my mouth. They'd have to *give* me a drive for me to try their products again.
  • samster712 - Friday, June 7, 2013 - link

So would anyone recommend this drive over the 840 Pro 256GB? I'm very indecisive about buying a new drive.
  • Rumboogy - Thursday, July 11, 2013 - link

Quick question. You mentioned a method to create an unused block of storage that could be used by the controller: creating a new partition (I assume fully formatting it) and then deleting it. This assumes TRIM marks the whole set of LBAs that covered the partition as available. What is the comparable procedure on a Mac, particularly if you don't get TRIM by default? And if you do turn it on, would it work in this case? Is there a way to guarantee you are allocating a block of LBAs to non-use on the Mac?
  • pcmax - Monday, August 12, 2013 - link

It would have been really nice to compare it to their previous generation drive, the Extreme I.
