AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of SSD benchmarks available at the time.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean simply writing 4GB is acceptable either.
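To put that in perspective, here's a back-of-the-envelope sketch, assuming the common consumer arrangement of 256GiB of raw NAND behind a 256GB user capacity (roughly 7% spare area); the exact figures vary by drive:

```python
# Illustrative arithmetic (assumed figures): many consumer SSDs reserve
# the GiB-vs-GB difference (~7.4%) as minimum spare area.
raw_nand_bytes = 256 * 2**30   # 256 GiB of raw NAND
user_capacity  = 256 * 10**9   # 256 GB exposed to the user

spare_bytes = raw_nand_bytes - user_capacity
print(f"Spare area: {spare_bytes / 10**9:.1f} GB")            # ~18.9 GB
test_write = 4 * 10**9                                         # a 4GB benchmark write
print(f"4GB covers {test_write / spare_bytes:.0%} of spare")   # ~21%
```

In other words, a 4GB test can complete without ever forcing the drive past its spare area, which is exactly why it understates steady-state behavior.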

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
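For a sense of scale, spreading that write volume across the two weeks it's meant to represent works out to a heavy but plausible daily load:

```python
# Back-of-the-envelope check of the "nearly two weeks of constant usage" claim.
total_writes_gb = 106.32   # data written by the Heavy Workload run
days = 14                  # "nearly two weeks"
print(f"{total_writes_gb / days:.1f} GB of writes per day")  # ~7.6 GB/day
```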

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown

    IO Size    % of Total
    4KB        28%
    16KB       10%
    32KB       10%
    64KB        4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
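One figure worth deriving from the published numbers: dividing the total write volume by the write-operation count gives the average write transfer size (this assumes decimal gigabytes and that the 106.32GB figure covers every write in the trace):

```python
# Average write transfer size implied by the published totals
# (assumption: 106.32GB, decimal, covers every write operation).
write_ops     = 1_783_447
bytes_written = 106.32e9
avg_write = bytes_written / write_ops
print(f"Average write size: {avg_write / 1024:.1f} KiB")  # ~58 KiB
```

That average is far larger than the 4KB mode in the table above, which is consistent with the 42% of operations that are sequential (and thus tend to be large transfers).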

Many of you have asked for a better way to characterize performance. Simply looking at IOPS doesn't say much on its own. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes, and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
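As a rough sketch of how these metrics fall out of a trace playback log (the record format here is hypothetical; the actual capture tooling isn't public), average MB/s is just bytes moved divided by time, reported for reads, writes, and both combined:

```python
# Hypothetical sketch: each record in the playback log is
# (kind, bytes) with kind in {"read", "write"}.
def data_rates(records, elapsed_seconds):
    read_bytes  = sum(b for kind, b in records if kind == "read")
    write_bytes = sum(b for kind, b in records if kind == "write")
    return {
        "read MB/s":     read_bytes / elapsed_seconds / 1e6,
        "write MB/s":    write_bytes / elapsed_seconds / 1e6,
        "combined MB/s": (read_bytes + write_bytes) / elapsed_seconds / 1e6,
    }

# Tiny synthetic example (figures invented for illustration):
log = [("read", 4096), ("write", 16384), ("read", 32768)]
print(data_rates(log, elapsed_seconds=0.01))
```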

There's also a new light workload for 2011. This is a far more reasonable, typical everyday use case benchmark: lots of web browsing, photo editing (though with a greater focus on photo consumption), video playback, and some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still several times more write intensive than what we were running in 2010.

As always, I don't believe these two benchmarks alone are enough to characterize the performance of a drive, but hopefully, along with the rest of our tests, they will help paint a more complete picture.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

Seagate's 600 does reasonably well here, but if you don't take IO consistency into account, the 600/600 Pro still trail Samsung's SSD 840 Pro.

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

The next three charts represent the same data, just in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)
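For the curious, busy time of this sort can be computed as the union of per-IO service intervals, so stretches where no request is outstanding contribute nothing. A minimal sketch, assuming the trace yields a (start, end) pair per IO (this is an illustration, not the actual tooling):

```python
# Hypothetical sketch: disk busy time as the union of per-IO service
# intervals, so gaps with no outstanding request (idle time) don't count.
def busy_time(intervals):
    """intervals: list of (start, end) times in seconds, one per IO."""
    busy, cur_start, cur_end = 0.0, None, None
    for start, end in sorted(intervals):
        if cur_end is None or start > cur_end:   # gap -> close current run
            if cur_end is not None:
                busy += cur_end - cur_start
            cur_start, cur_end = start, end
        else:                                    # overlap -> extend current run
            cur_end = max(cur_end, end)
    if cur_end is not None:
        busy += cur_end - cur_start
    return busy

print(busy_time([(0.0, 1.0), (0.5, 2.0), (5.0, 6.0)]))  # 3.0; the 2.0-5.0 idle gap is excluded
```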

Comments

  • Kristian Vättö - Tuesday, May 7, 2013 - link

    The units we have are all based on the older 24nm NAND. A while back I asked Corsair for review samples of the 128/256GB Neutrons (the original ones are 120/240) but they said they are not sampling them (yet). I can ask if they have changed their mind, although there shouldn't be much difference since 19nm Toshiba NAND has the same page/block/die size as 24nm.
  • FunBunny2 - Tuesday, May 7, 2013 - link

    Does "Toshiba" mean toggle-mode NAND, by definition? Or do they sell all types?
  • Kristian Vättö - Wednesday, May 8, 2013 - link

    Yes, Toshiba uses Toggle-Mode interface for their NAND. Here's the breakdown of NAND interfaces and manufacturers:

    Toggle-Mode: Toshiba/SanDisk (joint-venture) & Samsung
    ONFI: Intel/Micron (aka IMFT, also a joint-venture) & Hynix
  • LtGoonRush - Tuesday, May 7, 2013 - link

    HardOCP showed pretty significant performance increases, though that could also be due to the new firmware (which is not being back-ported as I understand).
  • romrunning - Tuesday, May 7, 2013 - link

I really wish we had more tests of SSDs in RAID-5 arrays. This is really useful for SMBs who may not want, or can't afford, a SAN. I'm very curious to see if the 20% spare area affects SSDs just as much when they're RAIDed together as it does standalone. I also don't care if the SSDs are branded as "enterprise" drives. It would be nice to see how a 5x256GB Samsung 840 Pro RAID-5 array would perform, or even a 5x400GB Seagate 600 Pro RAID-5 array.
  • FunBunny2 - Tuesday, May 7, 2013 - link

    No legitimate RDBMS vendor would allow its database on a RAID-5 machine. Never. Never. Never.
  • romrunning - Wednesday, May 8, 2013 - link

    I can't tell if you're just trolling or you're actually serious. Obviously, SMBs use RAID-5 arrays ALL the time, and they use "legitimate" database products like MS-SQL, etc. It doesn't have to be an IBM AIX server running DB2, or anything high-end.
  • daniel_mayes - Wednesday, May 8, 2013 - link

What is FunBunny2 talking about? What RAID would you want to run them on: 1, 5, 6, 10, or no SSDs at all?
You aren't the only one who wants to see more tests with SSDs in RAID-5. I would also like to see the Destroyer run on SSDs with higher over-provisioning, and please add the Intel DC S3700 to the Destroyer benchmark next.
  • FunBunny2 - Wednesday, May 8, 2013 - link

    "I always have found that based on those requirements RAID 5 requires more spindles to satisfy those requirements than RAID 10 - and this has been found even with a Read/Write of 9:1. "

    here: http://sqlblog.com/blogs/linchi_shea/archive/2007/...
    (no, that's not me)

Fact is, SSD still writes slower than it reads, so what kind of RAID one uses matters. Having a 3NF (or higher) schema is a more productive avenue for performance on SSD anyway. Getting rid of all that bloated [un|de]normalized byte pile will, in most cases, allow you to have a much smaller database, and thus not worry about bunches and bunches of discs.
  • romrunning - Friday, May 10, 2013 - link

That blog post is from 2007, when SSDs weren't really in the picture at all. It has since been demonstrated that SSDs can trump spinning disks in virtually all I/O-bound operations. The author of the blog even showed a test of RAID-5 beating RAID-10 on the same hardware, so his own test directly contradicted the later comment about spindles.

That being said, I think you're trying to say that getting rid of unnecessary (denormalized) data in your database will result in a smaller database and thus lower performance requirements. That may be true up to a point, but once you've already normalized your data, additional data will simply make the database grow. After all, if you're writing something like electronic orders to your normalized database, it will grow based on real data addition. That's why you need to make sure your storage array can handle the increased load.

RAID-5 has been the best fit for SMBs because it provides the fault tolerance and the higher utilization of total storage capacity that they want. That's why I would like to see tests of SSDs in RAID-5 arrays - to get Anandtech to test these great SSD performers in something I could use in a database server. Something like their tests of their own website databases would be nice, or even smaller ones using a 10-20GB database.
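For context on the RAID-5 vs. RAID-10 exchange above: the classic write-penalty arithmetic charges each random write four back-end IOs on RAID-5 (read data, read parity, write data, write parity) versus two on RAID-10. A quick sketch using the 9:1 read/write mix quoted earlier (illustrative figures, not a drive test):

```python
# Classic RAID write-penalty arithmetic: each random write costs
# 4 back-end IOs on RAID-5 and 2 on RAID-10 (reads cost 1 either way).
def backend_iops(front_iops, read_fraction, write_penalty):
    reads  = front_iops * read_fraction
    writes = front_iops * (1 - read_fraction)
    return reads + writes * write_penalty

for name, penalty in [("RAID-5", 4), ("RAID-10", 2)]:
    print(name, backend_iops(10_000, 0.9, penalty))
# RAID-5: 13000 back-end IOPS vs RAID-10: 11000 -- the gap widens as the
# write share grows, which is why the read/write mix matters so much.
```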
