AnandTech Storage Bench

To avoid any potential optimizations for industry-standard benchmarks, and to provide another example of real-world performance, we've assembled our own storage benchmarks, creatively named the AnandTech Storage Bench.

The first test in our benchmark suite is a light/typical usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader, among other applications. With Firefox we browse sites like Facebook, AnandTech and Digg. Outlook is also running, and we use it to check email and to create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs, then save the document; the same goes for Word 2007. We open and step through a PowerPoint 2007 presentation received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here, but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing, which may happen in between other tasks.

The recording is played back on all of our drives here today. Remember that we’re isolating disk performance; all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are purely sequential in nature. Average queue depth is 6.09 IOs.

The performance results are reported in average I/O Operations per Second (IOPS):

AnandTech Storage Bench - Typical Workload

The higher-capacity SandForce drives rule the roost here, but the C300, X25-M G2 and V+100 are not too far behind. Despite its age, Intel's X25-M G2 performs very well in our light usage test. The V+100 isn't far behind either, thanks to its 8.5% improvement over the original V+.

As far as small-capacity drives go, the Corsair Force F40 and other similarly sized SandForce drives are the clear winners here. Crucial's 64GB RealSSD C300 is quicker than the X25-V, but no match for the 40GB SF drive.

If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real-time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same way they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, and Windows updates are applied as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.

AnandTech Storage Bench - Heavy Multitasking Workload

This is another one of those SYSMark-like situations. The old Toshiba controller did very well in our heavy multitasking workload, and the new update does even better. At 1135 IOPS, the V+100 is 55% faster than the Indilinx-based Corsair Nova. Thanks to the incompressible nature of much of the data we're moving around in this benchmark, the SandForce drives don't do so well. Although not pictured here, the 256GB C300 would be #2 - still outperformed by the V+100.

The final test in our suite is a gaming workload, made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size; nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.

AnandTech Storage Bench - Gaming Workload

The perplexing nature of the V+100 continues here. While it boasts great sequential read numbers, the smaller and somewhat random accesses drop the V+100 behind the SandForce and Crucial SSDs.

Comments

  • Taft12 - Thursday, November 11, 2010 - link

    Can you comment on any penalty for 3Gbps SATA?

    I'm not convinced any SSD would show a performance impact from the older standard except in the most contrived of benchmarks.
  • Sufo - Thursday, November 11, 2010 - link

    Well, I've seen speeds spike above 375MB/s, though of course this could well be erroneous reporting on Windows' side. I haven't actually hooked the drive up to my 3Gbps ports, so in all honesty I can't compare the two - perhaps I should run a couple of benches...
  • Hacp - Thursday, November 11, 2010 - link

    It seems that you recommend drives despite the results of your own storage bench. It shows that the Kingston is the premier SSD to have if you want a drive that handles multitasking well.

    SandForce is nice if you do light tasks, but who the hell buys an SSD that only does well handling light tasks? No one!
  • JNo - Thursday, November 11, 2010 - link

    "Sandforce is nice if you do light tasks, but who the hell buys an ssd that only does well handling light tasks? No one!"

    Er... I do. Well, obviously I would want a drive that does well handling heavy task loads as well, but there are limits to how much I can pay, and the cost per gig of some of the better performers is significantly higher. Maybe money is no object for you, but if I'm *absolutely honest* with myself, I only *very rarely* perform the type of very heavy loads that Anand uses in his heavy load bench (it has an almost ridiculous level of multitasking). So the premium for something that benefits me only 2-3% of the time is unjustified.
  • Anand Lal Shimpi - Thursday, November 11, 2010 - link

    That's why I renamed our light benchmark a "typical" benchmark, because it's not really a light usage case but rather more of what you'd commonly do on a system. The Kingston drive does very well there and in a few other tests, which is why I'd recommend it - however, concerns about price and write amplification keep it from being a knock out of the park.

    Take care,
    Anand
  • OneArmedScissorB - Thursday, November 11, 2010 - link

    "Sandforce is nice if you do light tasks, but who the hell buys an ssd that only does well handling light tasks? No one!"

    Uh...pretty much every single person who buys one for a laptop?
  • cjcoats - Thursday, November 11, 2010 - link

    I have what may be an unusual access pattern -- seeks within a file -- that I haven't seen any "standard" benchmarks for, and I'm curious how drives do under it, particularly the SandForce drives that depend upon (inherently sequential?) compression. Quite possibly, heavy database use has the same problem, but I haven't seen benchmarks on that, either.

    I do meteorology and other environmental modeling, and frequently we want to "strip mine" the data in various selective ways. A typical data file might look like:

    * Header stuff -- file description, etc.

    * Sequence of time steps, each of which is an array of variables, each of which is a 2-D or 3-D grid of values

    For example, you might have a year's worth of hourly meteorology (about 9000 time steps), for ten variables (of which temperature is the 2nd), on a 250-row by 500-column grid.

    So for this file, that's 0.5 MB per variable (250 x 500 values at 4 bytes each), 5 MB per time step, and a total size of 45 GB, with one file per year.

    Now you might want to know, "What's the temperature for Christmas Eve?" The logical sequence of operations to be performed is:

    1. Read the header
    2. Compute timestep-record descriptions
    3. Seek to { headersize + 8592*5MB + 500KB }
    4. Read 0.5 MB
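
    A minimal Python sketch of that access pattern (the header size, file name and 4-byte value size here are just placeholders, not real values):

        VAR_SIZE  = 250 * 500 * 4        # one 2-D variable: 0.5 MB at 4 bytes/value
        STEP_SIZE = 10 * VAR_SIZE        # ten variables per time step: 5 MB
        HEADER_SIZE = 4096               # assumed; really it would be parsed from the header

        def read_variable(path, timestep, var_index):
            # Seek straight to one variable of one time step and read it whole.
            offset = HEADER_SIZE + timestep * STEP_SIZE + var_index * VAR_SIZE
            with open(path, "rb") as f:
                f.seek(offset)
                return f.read(VAR_SIZE)

        # Temperature (the 2nd variable, index 1) for hour 8592:
        temps = read_variable("met_2009.dat", 8592, 1)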

    Now with a "conventional" disk, that's two seeks and two reads (assuming the header is not already cached by either the OS or the application), returning a result almost instantaneously. But what does that mean for a SandForce-style drive that relies on compression, and implicitly on reading the whole thing in sequence? Does it mean I need to issue the data request and then go take a coffee break? I remember too well when this sort of data was stored in sequential ASCII files, and such a request would mean "Go take a 3-martini lunch." ;-(

  • FunBunny2 - Thursday, November 11, 2010 - link

    I've been asking for something similar for a while. What I want to know from a test is how an SSD behaves as a data drive for a real database - DB2/Oracle/PostgreSQL with tens of gigabytes of data doing realistic random transactions. The compression used by SandForce becomes germane, in that engine writers are incorporating compression/security into the storage engine. Whether one should use consumer/prosumer drives for real databases is not pertinent; people do.
  • Shadowmaster625 - Thursday, November 11, 2010 - link

    Yes, I have been wondering about exactly this sort of thing too. I propose a seek-and-log benchmark. It should go something like this:

    Create a set of 100 log files. Some only a few bytes. Some with a few MB of random data.

    Create one very large file for seek testing. Just make an uncompressed zip file filled one-third with videos, one-third with temporary internet files and one-third with documents.

    The actual test should be two steps:

    1 - Open one log file and write a few bytes onto the end of it. Then close the file.

    2 - Open the seek test file, seek to a random location and read a few bytes. Close the file.

    Then I guess you just count the number of loops this can run in a minute. Maybe run two threads, each working on 50 files.
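
    A rough single-threaded Python sketch of that loop (the file names and byte counts are just placeholders):

        import os, random, time

        LOG_COUNT = 100
        SEEK_FILE = "seektest.bin"           # the large pre-built test file
        SEEK_SIZE = os.path.getsize(SEEK_FILE)

        def one_pass(i):
            # Step 1: append a few bytes to the end of one log file.
            with open("log_%03d.txt" % (i % LOG_COUNT), "ab") as log:
                log.write(b"entry\n")
            # Step 2: read a few bytes from a random offset in the big file.
            with open(SEEK_FILE, "rb") as f:
                f.seek(random.randrange(SEEK_SIZE))
                f.read(16)

        deadline = time.monotonic() + 60.0   # run for one minute
        loops = 0
        while time.monotonic() < deadline:
            one_pass(loops)
            loops += 1
        print(loops, "loops per minute")

    The two-thread variant would just split the 100 log files between two workers.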
  • Shadowmaster625 - Thursday, November 11, 2010 - link

    Intel charging too much, surely you must be joking!

    Do you know what the Dow Jones Industrial Average would be trading at if every Dow component (such as Intel) were to cut its margins down to the level of companies like Kingston? My guess would be about 3000. Something to keep in mind as we witness Bernanke's helicopter-induced meltup...
