AnandTech Storage Bench 2011

Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. Anand assembled the traces out of frustration with the state of most SSD benchmarks available today.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally we kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system. Later, however, we created what we refer to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. This represents the load you'd put on a drive after nearly two weeks of constant usage. And it takes a long time to run.
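As a rough sanity check on that figure (taking "nearly two weeks" as 14 days, which is our assumption, not a number from the test itself), the total write volume works out to roughly 7.6GB of writes per day:

```python
# Rough arithmetic sketch: the benchmark's total write volume spread over
# the stated "nearly two weeks" (taken here as 14 days, an assumption).
total_writes_gb = 106.32
days = 14
per_day = total_writes_gb / days
print(f"{per_day:.1f} GB of writes per day")  # about 7.6 GB/day
```

That is well beyond what a typical consumer workload writes daily, which is exactly the point of a stress benchmark.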

1) The MOASB, officially called AnandTech Storage Bench 2011—Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading, and multitasking with all of this that you can really notice performance differences between drives.

2) We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II and WoW are both a part of the test), as well as general use stuff (application installing, virus scanning). We included a large amount of email downloading, document creation, and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011—Heavy Workload IO Breakdown
IO Size    % of Total
4KB        28%
16KB       10%
32KB       10%
64KB       4%

(The remaining 48% of operations use other transfer sizes.)

Only 42% of all operations are sequential; the rest ranges from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
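For readers curious how statistics like these fall out of a trace, a minimal sketch is shown below. The record format and function name are our own invention for illustration, not AnandTech's actual trace tooling; each IO is assumed to be a `(size_kb, is_sequential, queue_depth)` tuple.

```python
from collections import Counter

# Hypothetical sketch: each IO record is (size_kb, is_sequential, queue_depth).
# Illustrates how the breakdown figures above could be computed from a raw trace.
def summarize_trace(ops):
    n = len(ops)
    # Share of operations at each transfer size
    size_pct = {size: 100 * count // n
                for size, count in Counter(op[0] for op in ops).items()}
    # Share of sequential operations
    seq_pct = 100 * sum(1 for op in ops if op[1]) // n
    # Average queue depth, and share of IOs issued at queue depth 1
    avg_qd = sum(op[2] for op in ops) / n
    qd1_pct = 100 * sum(1 for op in ops if op[2] == 1) // n
    return size_pct, seq_pct, avg_qd, qd1_pct
```

Run over the real trace, this kind of summary would produce numbers like the 42% sequential share and the 4.625 average queue depth quoted above.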

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result we're going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time we'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, we will also break out performance into reads, writes, and combined. The reason we do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
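To make the two reported metrics concrete, here is a small sketch under one plausible definition (our assumption, not a description of AnandTech's internal tooling): average MB/s is total data moved divided by the time the drive spent busy, and disk busy time sums only the intervals when the drive was actively servicing IOs.

```python
# Hypothetical sketch of the two reported metrics. Each record is
# (busy_seconds, bytes_transferred) for one IO; idle time between IOs
# is deliberately excluded, matching the "disk busy time" idea above.
def perf_metrics(ios):
    busy_s = sum(t for t, _ in ios)
    total_bytes = sum(b for _, b in ios)
    avg_mb_s = total_bytes / busy_s / 1e6  # average MB/s while busy
    return busy_s, avg_mb_s
```

A faster drive completes the same byte total in less busy time, so its busy time shrinks while its average data rate rises; that is why the two views below tell the same story from different angles.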

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. It has lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running last year.

We don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea. The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

The SX900 is slightly faster than the Vertex 3 and Force GT here, which is most likely due to the newer firmware as I noted in the incompressible sequential read/write speed section. I asked Anand to test a 120GB ADATA S511, which is based on the same SF-2281 controller and utilizes 25nm Intel MLC NAND. The SX900 and S511 should be as similar as possible; however, they don't share the same firmware version. The SX900 is reported to have firmware version 5.0.1 in Device Manager, whereas the S511 Anand tested was running 3.2.2. That's the most current firmware for the S511 and it dates back to November 2011, so this is likely the reason for the difference in performance.

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

The next three charts represent the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during the entire test. Note that disk busy time excludes any and all idle time; it is simply how long the SSD spent actively servicing IOs:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)

58 Comments

  • leexgx - Friday, June 8, 2012 - link

    Use a USA VPN; there are loads of free ones.
  • Tujan - Friday, June 8, 2012 - link

    This is, yes, "nand". What would be the complexity of simply creating a circuit board that fits regular DDR3 and places it into the PCI-e slot? Put a battery onto the board, with perhaps a simplified voltage regulator. Then this saves the state of the RAM when the board shuts off. Would be a serious situation for such circuit boards. Ho-hum, save the state of the memory, where no change equals saved state. Strobe etc. Even the idea of having RAM on the circuit board w/o the saving is a serious relationship to performance 'in session' on a computer, of course.

    Fail to understand the reasons vendors would ask so much for such a PCI-e board. When I see a memory circuit on a MB, for example, a fraction of a whole MB would just as well be able to fit onto a removable PCI-e board. For a PCI slot.
  • jabber - Friday, June 8, 2012 - link

    Gigabyte did this about 7 years ago.

    http://www.anandtech.com/show/1742
  • jabber - Friday, June 8, 2012 - link

    Just not through the PCI-e slot.
  • Einy0 - Friday, June 8, 2012 - link

    Thanks for the trip down memory lane!!! I used to day dream for hours about how I would use one of those.
  • Stahn Aileron - Friday, June 8, 2012 - link

    In one word: capacity. The next issue would be power efficiency, since you would always be feeding it power in some form (be it main power or battery power, and that battery will only last so long). The power issue is a relatively minor point, though.

    The other main issue would be volatility. RAM drives are only useful when you absolutely, positively need high-speed, low-latency read/write performance (like forcing cache to DRAM instead of the storage system). As soon as you power them off, you lose all the data stored on them. Battery back-up systems can only go so far to retain your data. This type of storage is too risky for permanent data storage. Any loss of power to the RAM equates to losing all your data. Unlike non-volatile systems (HDDs, SSDs), you're not recovering that data either.

    You're not gonna see RAM drive tech in the consumer space unless they can make non-volatile RAM. Right now, it's mainly an enterprise thing. Even then, with the uptake of 64-bit software during the past decade or so in that market, there's very little need for RAM drives, since a 64-bit OS will give you direct access to practically any and all RAM you have installed in a system these days. You don't need a RAM drive workaround to access more RAM (32-bit OSes have an inherent 4GiB memory space limitation without workarounds). I can only see RAM drives being used in the consumer/professional space if some software explicitly required cache/scratch space on a drive. Something like Photoshop scratch space would be better served on a RAM drive, perhaps.

    Past that, it's cost. Just the RAM itself is about $10/GB these days (give or take.) Fully assembled SSDs using a common interface (SATA) are averaging between $1 & $2 per GB.
  • Shadowmaster625 - Friday, June 8, 2012 - link

    I have a bricked Sandcrap drive that tells me all that RAISE crap is pretty useless so they may as well just use that NAND for more capacity.

    Hell, they should just advertise it as a 1TB drive. Once it is bricked, who can ever tell the difference?
  • DigitalFreak - Friday, June 8, 2012 - link

    U mad bro?
  • Belard - Monday, June 11, 2012 - link

    SandForce has become the dominant player in the market. If their controllers were pure crap, then Intel wouldn't have touched them. Notice how long Intel's G2 drives were THE #1 drives to get, not just in performance but in reliability.

    I admit, I'm a bit confused that Intel didn't continue to use their own controller in their top-end drive (their 320 series are like a "G3" and perform slightly better while costing less than the G2s).

    Also, which Sandcrap drive did you get? OCZ makes about 4 different versions of any particular size. Even 6 months ago, you could pick up a 120GB OCZ for $95, but also spend $250... the cheap drive had a much higher failure rate, including DOA. Its performance was crap, becoming slower than a HD after a while.

    Was talking with friends who are looking to upgrade soon. They are going over the various drive brands and pricing. I said, "I don't care... intel 320 or 520. Reliability counts. Saving $50 in exchange for BSOD / performance loss / lack of support isn't worth it".

    I don't know about other brands, but OCZ has no tools for their drives, other than a firmware upgrader. That's it. Intel has a tool-kit, it tells you everything about the SSD, optimization, config, diagnostics and more.

    I'm open to buy a drive from someone else, I'm NOT an intel fan. But I want quality over fandom first.
  • ImSpartacus - Friday, June 8, 2012 - link

    Knowing this was an SSD review, I assumed Anand wrote it. After I read the intro piece, I was CONVINCED that it was Anand. But lo and behold, when I glanced at the top of the next page, I did not see the name I was expecting.

    Bravo, Mr. Vättö, bravo.
