AnandTech Storage Bench 2011: Much Heavier

I didn't expect to have to debut this so soon, but I've been working on updated benchmarks for 2011. Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
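To put that 4GB figure in perspective, here's a rough back-of-the-envelope sketch in Python. It assumes a drive whose only spare area is the gap between its binary NAND capacity (GiB) and its advertised decimal capacity (GB); the capacities below are illustrative assumptions, not measurements from any drive tested here.

```python
# Rough estimate of the minimum spare area on a typical consumer SSD:
# NAND is built in binary gigabytes (GiB) while the advertised capacity
# is decimal gigabytes (GB), and the difference is kept as spare area.
# The capacities below are illustrative assumptions, not measured values.
GIB = 2**30  # bytes in a binary gigabyte

def min_spare_gb(raw_nand_gib: int, advertised_gb: int) -> float:
    """Spare area in decimal GB left over after the user-visible capacity."""
    return raw_nand_gib * GIB / 1e9 - advertised_gb

for capacity in (80, 120, 160):
    spare = min_spare_gb(capacity, capacity)
    print(f"{capacity}GB drive: ~{spare:.1f}GB spare, so 4GB of writes "
          f"{'stays within' if spare > 4 else 'exceeds'} the spare area")
```

Even on an 80GB drive that reserves nothing beyond the GiB/GB gap, that works out to roughly 6GB of spare area, which is why a 4GB write test never pushes the controller out of its comfort zone.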

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
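For context, the implied daily write rate behind that two-week figure works out as follows (both inputs come from the text; the arithmetic is just spelled out):

```python
# Implied daily write volume behind the "nearly two weeks" figure.
total_writes_gb = 106.32  # total data written by the benchmark
days_of_usage = 14        # "nearly two weeks" of constant usage

print(f"~{total_writes_gb / days_of_usage:.1f}GB of writes per day")
# -> ~7.6GB of writes per day
```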

I'll be sharing the full details of the benchmark in some upcoming SSD articles, but here are the basics:

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
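To make those three numbers concrete, here's a minimal sketch in Python of how they could be derived from a completed I/O trace. The record format, and the choice to define average MB/s as bytes moved per second of disk busy time, are assumptions for illustration; this is not the actual trace format or tooling behind the benchmark.

```python
# Minimal sketch of how the three Storage Bench metrics could be derived
# from a completed I/O trace. The record format below (operation type,
# bytes transferred, per-I/O service time with idle excluded) and the
# definition of average MB/s as bytes moved per second of busy time are
# assumptions for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class IORecord:
    op: str           # "read" or "write"
    nbytes: int       # bytes transferred by this I/O
    service_s: float  # time the drive spent servicing this I/O (idle excluded)

def summarize(trace: List[IORecord]) -> dict:
    """Average data rate (MB/s) and disk busy time (s), overall and per op type."""
    results = {}
    for name, keep in (("combined", lambda r: True),
                       ("reads",    lambda r: r.op == "read"),
                       ("writes",   lambda r: r.op == "write")):
        subset = [r for r in trace if keep(r)]
        busy_s = sum(r.service_s for r in subset)
        total_mb = sum(r.nbytes for r in subset) / 1e6
        results[name] = {"avg_mbps": total_mb / busy_s if busy_s else 0.0,
                         "busy_s": busy_s}
    return results

# Running the same trace against two drives, the difference in "busy_s"
# is the time shaved off by the faster drive.
```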

There's also a new light workload for 2011. This is a far more reasonable, typical everyday-use benchmark: lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still several times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

[Chart: AnandTech Storage Bench 2011 - Heavy Workload, average data rate]

Overall performance is decidedly last generation. The 320 is within striking distance of the 510 but is slower overall in our heavy workload test.

The breakdown of reads vs. writes tells us more of what's going on:

[Charts: AnandTech Storage Bench 2011 - Heavy Workload, average read speed and average write speed]

The next three charts represent the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:

[Charts: AnandTech Storage Bench 2011 - Heavy Workload, disk busy time for combined, read, and write operations]

Comments

  • piquadrat - Monday, March 28, 2011 - link

    But is it true that on most of today's motherboards and their BIOSes you can't set an ATA password longer than 8 characters? I've read about this in many places.

    Anand says that the passwords used to encrypt Intel's SSD are generated automatically during each secure erase. So the user can only define an ATA password while secure erasing the drive? Does every change of the ATA password require a secure erase?
    I thought the ATA password was defined and changed in the BIOS, i.e. outside the operating system and any software toolbox Intel provides.

    There are so many places on the net, and even commercial companies, that offer to unlock ATA-secured devices. Mostly they use unofficial unlocking codes (the manufacturers implemented them in the firmware!!!). Can we trust that no such codes for the 320 series will surface in the near future? To sum up: is this secure at the enterprise level?

    Is there any independent security certification Intel can give us at the moment?
  • overzealot - Tuesday, March 29, 2011 - link

    The user doesn't supply a key - the drive generates one itself.
    The data can be accessed by ANY computer if you're accessing it over SATA; the point is to secure the data on the NAND (pro hackers attach a controller to the memory chips and read them directly).
    With previous SSDs, you could format the drive and just read the data straight off the chips. On these drives, erased data is inaccessible this way.

    If you want secure data, use TrueCrypt or Bitlocker.
  • piquadrat - Tuesday, March 29, 2011 - link

    TrueCrypt and BitLocker are not suitable for SSDs, as they work against the internal wear-leveling mechanisms and kill the performance of compression-based controllers (like all SandForce drives).

    To sum up, this whole AES thing in Intel's 320 is no different than in SF-based drives (like the Vertex 2)?
    If the keys are internal and not linked to the ATA password in the BIOS, then when someone steals my drive (BIOS not supporting an ATA password, or the ATA password bypassed/hacked) he has full access to it. So tell me:

    Why would anybody BOTHER with "attaching a controller to the memory chips and reading them directly"? WHAT FOR? He has full access to the drive without all that hassle.
  • overzealot - Thursday, March 31, 2011 - link

    I'm sorry I wasn't clear.
    When you format or delete data from an SSD without internal encryption, data is still accessible directly from the chips. This encryption is to stop that.

    Also, the performance of SandForce drives does not drop when handling compressed data! It just can't compress it any further, so it's still moving the exact same amount of data around!
    You would see an improvement in throughput on drives that don't natively compress, but from the data I've seen they'd still be slower than SandForce.
  • piquadrat - Saturday, April 2, 2011 - link

    Just like with conventional HDDs, you can always do a secure erase even without internal AES. It only adds at most 1-2 cycles to the wear counter of each cell. On the other hand, how often does a typical user need that kind of maintenance?
    Yes, they do drop in performance when data doesn't compress at the typical ratio: writes drop from 250 to around 100MB/s.
    Encrypting an SSD with TrueCrypt means the wear-leveling algorithms effectively see the drive as fully loaded. The spare area is used much more intensively, and the drive starts to have problems with TRIM and garbage collection; an additional empty partition is required.
    To sum up, for a Vertex 2 with TrueCrypt and an AES-NI capable Intel processor: read 140MB/s, write 70MB/s, 4K performance reduced by 50-60%. These are facts.
  • MeanBruce - Monday, March 28, 2011 - link

    Intel's plan was to give us a little bump in performance every two years just to keep us happy and keep us buying, and maybe go to 6Gbps by 2015, after their SATA 2 drives had shown slow, incremental speed increases just enough to appease the masses. They never saw SandForce coming! I thought Intel would simply pull more performance out of their hat to meet the challenge; I never thought the hat would be empty!
  • Drag0nFire - Monday, March 28, 2011 - link

    Dear Anand,

    Given the switch to a new generation of NAND, any comments on the expected longevity of these new drives? How long will one last in a normal usage scenario?

    Thanks,
    --Jonathan
  • Drag0nFire - Monday, March 28, 2011 - link

    Also, would it be possible to report the physical dimensions of the drive? I believe I may need a 2.5in drive with a height of 7mm for my next laptop, but it is difficult to find information on the height of SSDs.

    Thanks!
  • y.a.k - Tuesday, March 29, 2011 - link

    Intel drives come with a spacer that makes them 9.5mm high. Removing this makes them 7mm high.
  • B3an - Monday, March 28, 2011 - link

    This has been discussed in the comments on other recent SSD articles.
