AnandTech Storage Bench 2011

Although the S3700 isn't a client-focused drive, I was curious to see how it would perform in our client Storage Bench suites.

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable either.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size | % of Total
4KB     | 28%
16KB    | 10%
32KB    | 10%
64KB    | 4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
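Statistics like the size breakdown, sequential percentage, and average queue depth above all fall out of a simple pass over the raw I/O trace. As a rough sketch, assuming a hypothetical trace format of (size_bytes, queue_depth, is_sequential) tuples rather than the actual format our tracing tools use:

```python
# Hypothetical sketch: deriving per-size, sequentiality, and queue-depth
# statistics from a raw I/O trace. The (size_bytes, queue_depth,
# is_sequential) tuple format is an assumption for illustration only.
from collections import Counter

trace = [
    (4096, 1, False), (4096, 2, False), (16384, 1, True),
    (32768, 8, True), (65536, 4, True), (4096, 1, False),
]

total = len(trace)
size_counts = Counter(size for size, _, _ in trace)

# Percentage of operations at each IO size
for size in sorted(size_counts):
    pct = 100 * size_counts[size] / total
    print(f"{size // 1024}KB: {pct:.0f}%")

# Fraction of sequential operations and mean outstanding queue depth
seq_pct = 100 * sum(1 for _, _, seq in trace if seq) / total
avg_qd = sum(qd for _, qd, _ in trace) / total
print(f"sequential: {seq_pct:.0f}%  avg queue depth: {avg_qd:.3f}")
```

The same pass over the full 3.9 million-operation trace yields the figures quoted in the text.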

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
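The two headline metrics described above (average MB/s and disk busy time) can be computed directly from per-operation service times. The sketch below uses a hypothetical (bytes_transferred, service_seconds, is_read) record format, not the actual format of our trace playback tool:

```python
# Minimal sketch of the Storage Bench 2011 metrics: average data rate
# (bytes moved divided by time the disk was busy) and total busy time,
# broken out into reads, writes, and combined. Trace format is assumed.

trace = [
    (1_048_576, 0.004,  True),   # 1MB read serviced in 4ms
    (4_096,     0.0001, False),  # 4KB write serviced in 0.1ms
    (524_288,   0.002,  True),   # 512KB read serviced in 2ms
]

def avg_rate_mbs(records):
    """Average data rate in MB/s; higher is better."""
    total_bytes = sum(b for b, _, _ in records)
    busy_secs = sum(t for _, t, _ in records)
    return (total_bytes / busy_secs) / 1e6

reads = [r for r in trace if r[2]]
writes = [r for r in trace if not r[2]]
busy = sum(t for _, t, _ in trace)

print(f"combined: {avg_rate_mbs(trace):.1f} MB/s, disk busy {busy:.4f}s")
print(f"reads: {avg_rate_mbs(reads):.1f} MB/s  writes: {avg_rate_mbs(writes):.1f} MB/s")
```

Since the trace is fixed, a faster drive finishes the same operations in less busy time, which is exactly what the disk busy graphs capture.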

There's also a new light workload for 2011. This is a far more reasonable, typical everyday use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still multiple times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

Nothing seems capable of reaching the 840 Pro's performance level, but the S3700 ends up doing very well. If it were released today as a consumer drive, it would be the fastest Intel had ever shipped.

AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size | % of Total
4KB     | 27%
16KB    | 8%
32KB    | 6%
64KB    | 5%

Despite the reduction in large IOs, over 60% of all operations are perfectly sequential. Average queue depth is a lighter 2.2029 IOs.

Light Workload 2011 - Average Data Rate

The S3700's performance in our light workload isn't as impressive, falling slightly behind the 520. Compared to Intel's last drive made fully in house, the S3700 does provide a healthy improvement. Samsung still owns the top of the client performance charts.

Comments

  • JonnyDough - Thursday, November 15, 2012 - link

    There are a ton of new technologies that could replace NAND. There might even be a "betamax" or "HD DVD" in there that miss the mark and lose out to some better or cheaper tech. We'll just have to wait and see what comes to market and catches on. It won't be mere enthusiasts or gamers who decide, it will be the IT industry. It usually is.
  • mckirkus - Tuesday, November 6, 2012 - link

    One interesting point to note is that if you run benchmarks on a RAMDisk, you get random 4KB write speeds in the neighborhood of 600MB/s. So in that regard, flash has a long way to go before the 6Gbit/s limitation of SATA 3.0 really hurts enterprise performance.
  • extide - Tuesday, November 6, 2012 - link

    I am not sure I understand this. First of all, random 4K against a ramdisk will be HIGHLY dependent on the hardware, and I am sure you could see way better numbers than 600MB/sec. Also, 600MB/sec is pretty close to 6Gbit/sec anyway.
  • jwilliams4200 - Friday, November 9, 2012 - link

    I think mckirkus is trying to say that there is a lot of headroom before sustained 4KiB random I/O SSD throughput will saturate a SATA 6Gbps link.

    For example, the sustained QD32 4KiB random write speed for the S3700 is apparently less than 150MB/s (35K IOPS). It will need to double and double again before it saturates a 6Gbps SATA link.
  • mayankleoboy1 - Saturday, November 10, 2012 - link

    How long do we have to wait before SATA Express drives and the interface become commercially available?
  • justaviking - Saturday, November 10, 2012 - link

    If I read the "Update" section correctly, Oracle recommends modifying their settings to change the way the log files are written.

    Would it be possible to re-run the Swingbench tests using the modified settings? I'd love to see how performance changes, especially on THIS drive, and then also on some others for comparison purposes.
  • blackbrrd - Saturday, November 10, 2012 - link

    I am guessing most people will run their Oracle database behind a RAID card with some NVRAM cache, which would remove the problem if the RAID controller combined the writes. It would be interesting to see the performance behind a typical RAID controller card with an NVRAM cache.
  • iwod - Sunday, November 11, 2012 - link

    I am a regular AnandTech reader (it's in my RSS feeds, so I read it every day), and I don't recall Anand ever doing a review of a Toshiba SSD. So when I saw the performance of the MK4001, I had to look it up on Google to learn that it is a SAS SLC enterprise SSD.

    The article did eventually have a brief mention of its specs, but I thought it came very late in the article. It would have helped if the specs were actually listed out beforehand.

    It seems to me the magic is actually in the software and not the hardware. A 1:1 mapping of the NAND address table, making random read and write behavior consistent, seems more like software magic and could easily be done on any other SSD controller with a large enough amount of RAM. The only hardware requirement for this tweak is ECC memory.

    And again, we are fundamentally limited by port speed.
  • mmrezaie - Monday, November 12, 2012 - link

    I agree!
  • alamundo - Monday, November 12, 2012 - link

    Given the enterprise focus, this drive seems to be competitive with the Intel 910 PCI card. It would be interesting to see the 3700 benchmarked against the 910.
