AnandTech Storage Bench

To avoid any potential optimizations for industry standard benchmarks, and to provide another example of real world performance, we've assembled our own storage benchmark suite that we've creatively named the AnandTech Storage Bench.

The first in our benchmark suite is a light/typical usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader, among other applications. With Firefox we browse web pages like Facebook, AnandTech, Digg and other sites. Outlook is also running, and we use it to check email as well as create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs, then save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007, received as an email attachment, before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing which may happen in between one of the other tasks.

The recording is played back on all of the drives we're testing today. Remember that we're isolating disk performance; all we're doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are strictly sequential in nature. Average queue depth is 6.09 IOs.
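As a rough illustration of how a recorded trace gets characterized this way, here is a short Python sketch. The trace format below (byte offset, size, operation type) is an assumption for illustration only; the actual AnandTech playback tool and its trace format are not public.

```python
# Hypothetical sketch: characterizing a recorded disk trace.
# Each entry is (offset_bytes, size_bytes, op) -- an assumed format,
# not the real tool's format.

from collections import Counter

def characterize(trace):
    sizes = Counter()
    sequential = 0
    next_offset = {}  # expected next offset, tracked per op type
    for offset, size, op in trace:
        sizes[size] += 1
        if next_offset.get(op) == offset:
            sequential += 1  # this IO starts where the previous one ended
        next_offset[op] = offset + size
    total = len(trace)
    return {
        "total_ios": total,
        "pct_by_size": {s: 100.0 * n / total for s, n in sizes.items()},
        "pct_sequential": 100.0 * sequential / total,
    }

# Tiny illustrative trace: two back-to-back 4KB reads, one random 64KB write
trace = [
    (0, 4096, "read"),
    (4096, 4096, "read"),
    (1048576, 65536, "write"),
]
stats = characterize(trace)
```

Run over a full multi-minute recording, the same kind of bookkeeping yields the size-distribution and sequential-access percentages quoted above.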

The performance results are reported in average I/O Operations per Second (IOPS):

AnandTech Storage Bench - Typical Workload

The higher capacity SandForce drives rule the roost here, but the C300, X25-M G2 and V+100 are not too far behind. Despite its age, Intel's X25-M G2 performs very well in our light usage test. The V+100 isn't far behind thanks to its 8.5% improvement over the original V+.

As far as small capacity drives go, the Corsair Force F40 and other similarly sized SandForce drives are the clear winners here. Crucial's 64GB RealSSD C300 is quicker than the X25-V, but no match for the 40GB SF drive.

If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same as they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, and Windows updates are installed as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.

AnandTech Storage Bench - Heavy Multitasking Workload

This is another one of those SYSMark-like situations. The old Toshiba controller did remarkably well in our heavy multitasking workload, and the new update does even better. At 1135 IOPS, the V+100 is 55% faster than the Indilinx-based Corsair Nova. Thanks to the incompressible nature of much of the data we're moving around in this benchmark, the SandForce drives don't do so well. Although not pictured here, the 256GB C300 would be #2 - still outperformed by the V+100.

The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.

AnandTech Storage Bench - Gaming Workload

The perplexing nature of the V+100 continues here. While it boasts great sequential read numbers, the smaller and somewhat random accesses drop the V+100 behind the SandForce and Crucial SSDs.

Comments

  • Gonemad - Thursday, November 11, 2010 - link

    Yes, a longevity test. Put it on grind mode until it pops. P51 Mustangs benefited from engines tested that way.

Now, this one, should it be fully synthetic or more life-like? Just place the drive in "write 0, write 1" until failure and record how many times it can be used, or create some random workload scripted in such a manner that it behaves pretty much like real usage, overusing the first few bytes of every 4K sector... if it affects any results. What I am asking is: will it wear only the used bytes, or will the entire rewritten 4K sector be worn evenly, if I am expressing myself correctly here?

On another comment, I always thought SSD drives were like overpriced undersized Raptors, since they came to be, but damn... I hope fierce competition drives the prices down. Way down. "Neck and neck with mag drives" down.

    And what about defragmenting utilities? Don't they lose their sense of purpose on a SSD? Are they blocked from usage, since the best situation you have on a SSD is actually randomly sprayed data, because there is no "needle" running over it at 7200rpm in a forcibly better sequential read? Should they be renamed to "optimization tools" when concerning SSD's? Should anybody ever consider manually giving permission to a system to run garbage collection, TRIM, whatever, while blocking it until strictly necessary, in order to increase life span?
  • Iketh - Thursday, November 11, 2010 - link

win7 "automatically disables" its built-in defrag for SSDs, though if you go in and manually tell it to defrag an SSD, it will do it without question

prefetch and superfetch are supposedly also disabled automatically when the OS is installed on an SSD, though I don't feel comfortable until I change the values myself in the registry
  • cwebersd - Thursday, November 11, 2010 - link

I have torture-tested a 50 GB SandForce-based drive (OWC Mercury Extreme Pro RE) with the goal of destroying it. I stopped writing semi-random data after 21 days because I grew tired.
    16.5 TB/d, 360 disk fills/day, 21 days more or less 24/7 duty cycle (we stopped a few times for an hour or two to make adjustments)
    ~7500 disk fills total, 350 TB written
The drive still performs as well as new, and SMART parameters look reasonably good - to the extent that current tools can interpret them anyway.

    If I normally write 20 GB/d this drive is going to outlast me. Actually, I expect it to die from "normal" (for electronics) age-related causes, not flash cells becoming unwritable.
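As a sanity check on the arithmetic in the comment above, the numbers hang together: 350 TB over a 50 GB drive is roughly 7,000 full-drive writes, and at a "normal" 20 GB/day that total would take decades to reach. The figures below are the commenter's, not measurements of our own.

```python
# Back-of-envelope check of the torture-test numbers quoted above:
# 50 GB drive, ~350 TB written over 21 days, hypothetical 20 GB/day
# of "normal" usage. 1 TB is treated as 1000 GB for simplicity.

drive_gb = 50
tb_written = 350
days_tested = 21

fills_total = tb_written * 1000 / drive_gb      # full-drive writes so far
tb_per_day = tb_written / days_tested           # sustained torture rate

normal_gb_per_day = 20
years_at_normal_rate = (tb_written * 1000 / normal_gb_per_day) / 365
```

The ~48-year figure that falls out of the last line is why the commenter expects age-related electronics failure, not flash wear-out, to kill the drive first.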
  • Anand Lal Shimpi - Thursday, November 11, 2010 - link

    This is something I've been working on for the past few months. Physically wearing down a drive as quickly as possible is one way to go about it (all of the manufacturers do this) but it's basically impossible to do for real world workloads (like the AT Storage Bench). It would take months on the worst drives, and years on the best ones.

    There is another way however. Remember NAND should fail predictably, we just need to fill in some of the variables of the equation...

    I'm still a month or two away from publishing but if you're buying for longevity, SandForce seems to last the longest, followed by Crucial and then Intel. There's a *sharp* fall off after Intel however. The Indilinx and JMicron stuff, depending on workload, could fail within 3 - 5 years. Again it's entirely dependent on workload, if all you're doing is browsing the web even a JMF618 drive can last a decade. If you're running a workload full of 4KB random writes across the entire drive constantly, the 618 will be dead in less than 2 years.

    Take care,
    Anand
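The "fill in the variables" idea above can be sketched as a simple endurance model: total writable data is roughly capacity times rated P/E cycles divided by write amplification, and lifespan is that total divided by the daily host-write volume. All numbers below are illustrative placeholders, not figures from the article or from any specific drive.

```python
# Hedged sketch of predictable NAND wear-out. Every input here is an
# illustrative assumption: real P/E ratings and write amplification
# vary by controller, workload, and NAND process node.

def estimated_lifespan_years(capacity_gb, pe_cycles, write_amplification,
                             host_writes_gb_per_day):
    # Total host data the NAND can absorb before hitting its cycle rating
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / host_writes_gb_per_day / 365

# e.g. an 80 GB drive, 5000-cycle MLC NAND, write amplification of 1.5,
# and 20 GB/day of host writes (all hypothetical values)
years = estimated_lifespan_years(80, 5000, 1.5, 20)
```

The model also shows why controller efficiency matters so much: halving write amplification doubles the estimated lifespan, which is consistent with the ordering (SandForce, then Crucial, then Intel) described above.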
  • Greg512 - Thursday, November 11, 2010 - link

    Wow, I would have expected Intel to last the longest. I am going to purchase an ssd and longevity is one of my main concerns. In fact, longevity is the main reason I have not yet bought a Sandforce drive. Well, I guess that is what happens when you make assumptions. Looking forward to the article!
  • JohnBooty - Saturday, November 13, 2010 - link

    Awesome news. Looking forward to that article.

    A torture test like that is going to sell a LOT of SSDs, Anand. Because right now that's the only thing keeping businesses and a lot of "power users" from adopting them - "but won't they wear out soon?"

    That was the exact question I got when trying to get my boss to buy me one. Though I was eventually able to convince him. :)
  • Out of Box Experience - Thursday, November 11, 2010 - link

    Here's the problem

    Synthetic Benchmarks won't show you how fast the various SSD controllers handle uncompressible data

    Only a copy and paste of several hundred megabytes to and from the same drive under XP will show you what SSD's will do under actual load

    First off, due to Windows 7's caching scheme, ALL drives (Slow or Fast) seem to finish a copy and paste in the same amount of time and cannot be used for this test

    In a worst case scenario, using an ATOM computer with Windows XP and Zero SSD Tweaks, a OCZ VERTEX 2 will copy and paste data at only 3.6 Megabytes per second

A 5400RPM laptop drive was faster than the Vertex 2 in this test because OCZ drives require massive amounts of tweaking and highly compressible data to get the numbers they are advertising

    A 7200RPM desktop drive was A LOT faster than the Vertex 2 in this type of test

    Anyone working with uncompressible data "already on the drive" such as video editors should avoid Sandforce SSD's and stick with the much faster desktop platter drives

    Using a slower ATOM computer for these tests will amplify the difference between slower and faster drives and give you a better idea of the "Relative" speed difference between drives

    You should use this test for ALL SSD's and compare the results to common hard drives so that end users can get a feel for the "Actual" throughput of these drives on uncompressible data

    Remember, Data on the Vertex drive's is already compressed and cannot be compressed again during a copy/paste to show you the actual throughput of the drive under XP

    Worst case scenario testing under XP is the way to go with SSD's to see what they will really do under actual workloads without endless tweaking and without getting bogus results due to Windows 7's caching scheme
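The compressibility point in the comment above can be made concrete with a quick sketch: data that is already compressed looks random and cannot be shrunk further, while repetitive data shrinks dramatically. SandForce's DuraWrite algorithm is proprietary, so zlib below is only a stand-in for "how compressible is this data".

```python
# Illustrative only: zlib as a proxy for data compressibility.
# SandForce's actual compression/dedup (DuraWrite) is proprietary.

import os
import zlib

compressible = b"A" * 1_000_000       # trivially compressible payload
random_like = os.urandom(1_000_000)   # models already-compressed data

ratio_compressible = len(zlib.compress(compressible)) / len(compressible)
ratio_random = len(zlib.compress(random_like)) / len(random_like)
```

The repetitive buffer compresses to a tiny fraction of its size while the random-looking buffer doesn't shrink at all, which is why a SandForce drive's effective write throughput depends so heavily on what you feed it.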
  • Anand Lal Shimpi - Friday, November 12, 2010 - link

    The issue with Windows XP vs. Windows 7 doesn't have anything to do with actual load, it has to do with alignment.

Controllers designed with modern OSes (Windows 7, OS X 10.5/10.6) in mind (C300, SandForce) are optimized for 4K aligned transfers. By default, Windows XP isn't 4K aligned and thus performance suffers. See here:

    http://www.anandtech.com/show/2944/10

    If you want the best out of box XP performance for incompressible data, Intel's X25-M G2 is likely the best candidate. The G1/G2 controllers are alignment agnostic and will always deliver the same performance regardless of partition alignment. Intel's controller was designed to go after large corporate sales and, at the time it was designed, many of those companies were still on XP.

    Take care,
    Anand
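The alignment issue Anand describes comes down to simple arithmetic: Windows XP starts the first partition at sector 63 (byte offset 63 × 512 = 32,256), which is not a multiple of 4,096, so every "4KB" filesystem write straddles two NAND pages. Windows 7 aligns the first partition at 1 MiB. A minimal check:

```python
# Why XP's default partition layout hurts 4K-optimized controllers:
# sector 63 * 512 bytes/sector is not a multiple of 4096.

def is_4k_aligned(byte_offset):
    return byte_offset % 4096 == 0

xp_offset = 63 * 512          # XP default: first partition at sector 63
win7_offset = 1024 * 1024     # Windows 7 default: 1 MiB alignment

xp_aligned = is_4k_aligned(xp_offset)      # misaligned
win7_aligned = is_4k_aligned(win7_offset)  # aligned
```

Alignment-agnostic controllers like Intel's G1/G2 sidestep this check entirely, which is why they deliver the same performance either way.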
  • Out of Box Experience - Saturday, November 13, 2010 - link

    Thanks Anand

That's good to know

    With so many XP machines out there for the foreseeable future, I would think more SSD manufacturers would target the XP market with alignment agnostic controllers instead of making the consumers jump through all these hoops to get reasonable XP performance from their SSD's

    Last question..

Would OS-agnostic garbage collection like that on the new Kingston SSD work with SandForce controllers if the manufacturers chose to include it in firmware, or is it irrelevant with DuraClass?

    I still think SSD's should be plug and play on ALL operating Systems

    Personally, I'd rather just use the drives instead of spending all this time tweaking them
  • sheh - Thursday, November 11, 2010 - link

    This seems like a worrying trend, though time will tell how reliable SSDs are long-term. What's the situation with 2Xnm? And where does SLC fit into all that regarding reliability, performance, pricing, market usage trends?
