AnandTech Storage Bench

Note that our custom storage bench doesn't support the driver for our 6Gbps controller, so the C300 results are only offered in 3Gbps mode.

The first test in our benchmark suite is a light usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader among other applications. With Firefox we browse sites like Facebook, AnandTech and Digg. Outlook is also running and we use it to check emails and to create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs, and to save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing which may happen in between one of the other tasks.

The recording is played back on all of our drives here today. Remember that we’re isolating disk performance; all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are strictly sequential in nature. Average queue depth is 6.09 IOs.
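For the curious, the playback idea can be sketched in a few lines of Python. This is a simplified illustration, not our actual playback tool; the trace format and the read/write callbacks here are hypothetical:

```python
import time

def replay_trace(trace, read_fn, write_fn):
    """Replay recorded disk accesses in order and return average IOPS.

    trace: list of (is_write, offset_bytes, length_bytes) tuples,
    one per recorded disk access.
    """
    start = time.perf_counter()
    for is_write, offset, length in trace:
        if is_write:
            # The payload contents don't matter for timing purposes.
            write_fn(offset, b"\x00" * length)
        else:
            read_fn(offset, length)
    elapsed = time.perf_counter() - start
    # Average I/O operations per second over the whole replay.
    return len(trace) / elapsed
```

The real tool replays against the raw device with the original queue depths preserved; the point here is simply that the IOPS figure falls out of dividing the number of replayed operations by how long the drive takes to complete them.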

The performance results are reported in average I/O Operations per Second (IOPS):

AnandTech Storage Bench - Light Workload

Under a typical light power-user workload, the Crucial RealSSD C300 bests OCZ's Vertex LE by 4.5% - not a tangible difference, just a (barely) measurable one. Intel's SLC X25-E is actually still the fastest drive here, which must be frustrating for Intel since the only thing keeping the G2s from topping the charts is sequential write speed.

The Toshiba based Kingston drive performs similarly to the MLC based Indilinx drives, which is good since that's exactly where it's supposed to perform.

If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same as they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, and Windows updates are installed as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.

AnandTech Storage Bench - Heavy Workload

I ran and re-ran the tests - they're accurate. The Vertex LE does well, just not as well as the Kingston or Crucial drives here. The Crucial RealSSD C300 is simply a beast in our write-heavy test. I suspect the Vertex LE isn't as fast as usual here because many of our writes are already compressed. Remember that SandForce's architecture works by data reduction, whether through compression, deduplication or other algorithms of a similar nature. By definition, those algorithms don't work well on data that is already written in reduced form. If you're dealing with a lot of compressed archives, the Vertex LE will still perform well, just not as well as the RealSSD C300.

Our final test focuses on actual gameplay in four 3D games: World of Warcraft, Batman: Arkham Asylum, FarCry 2 and Risen, in that order. The games are launched and played for a total of just under 30 minutes. The benchmark measures game load time, level load time, disk accesses from save games and normal data streaming during gameplay.

The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.
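For reference, "sequential" here means an access that picks up exactly where the previous one left off. A hypothetical classifier along those lines:

```python
def sequential_fraction(accesses):
    """accesses: list of (offset_bytes, length_bytes) in issue order.

    Counts an access as sequential when it starts exactly where the
    previous access ended; everything else is treated as random.
    """
    if not accesses:
        return 0.0
    sequential = 0
    for prev, cur in zip(accesses, accesses[1:]):
        if cur[0] == prev[0] + prev[1]:
            sequential += 1
    return sequential / len(accesses)
```

This is a sketch of the bookkeeping, not our analysis tool; real trace analyzers also have to account for interleaved streams at higher queue depths, which can make back-to-back sequential requests look random.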

AnandTech Storage Bench - Gaming Workload

Just as we saw with our PCMark tests, all of the drives perform about the same here. If you're just going to be tossing games on your SSD, you can't really go wrong with any of these drives. It's possible that if we were able to use our 6Gbps controller here that Crucial would break the mold as the drives here appear to be limited by sequential read speed.

Comments

  • AnnonymousCoward - Sunday, February 21, 2010 - link

    > Zoomer: The point of SSDs is to improve the user
    > response time.

    Exactly! So why don't we compare response times?


    > erple2: Saying that one drive attains 600 IOPS on
    > "Anand's light StorageBench" where another
    > attains 500 IOPS _ON THE SAME BENCHMARK_ does, in
    > fact, give you a reasonably accurate comparison.

    Sorry, not true. Like I said, SandForce's compression makes IOPS not equal to bandwidth. See http://tinyurl.com/yden7kc. And allow me to restate my comments from the last article: in the article at http://tinyurl.com/yamfwmg, RAID0 was 20-38% faster in IOPS! Then the loading *time* comparison had RAID0 giving equal and slightly worse performance! Anand concluded, "Bottom line: RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance."

    So there you have it. Why measure IOPS?


    > erple2: what is important is the general ranking
    > of these devices in the same benchmark. The
    > benchmark is measuring the _relative_ performance
    > of each of the drives in the same sequence of
    > tests.

    What "general ranking" lacks is the issue of significance. I apologize, but I will again restate what I posted on the last article: is the performance difference between drives significant or insignificant? Does the SandForce cost twice as much as the others and launch applications just 0.2s faster? Let's say I currently don't own an SSD: I would sure like to know that an HDD takes 15s at some task, whereas the Vertex takes 7.1s, the Intel takes 7.0s, and the SF takes 6.9s! Then my purchase decision would be entirely based on price! The current benchmarks leave me in the dark regarding this.
  • jimhsu - Saturday, February 20, 2010 - link

    The performance/free-space dropoff is a significant issue, especially with otherwise-fast SSDs (e.g. Intel). For example, the 80GB X25-M should really be relabeled as a 60GB drive due to progressively worsening performance as the amount of free space decreases (beyond 70GB, it starts getting REALLY bad). Do these drives show any improvement in the performance-to-free-space degradation curve?
  • Demon-Xanth - Saturday, February 20, 2010 - link

    Given that so many SSDs out there use so many different controllers, with performance being so dependent on the controller...

    ...is it possible to get a "summary" chart of which drives use which controller configurations?
  • yottabit - Saturday, February 20, 2010 - link

    Thought I would point out that Page 8's title lists "Apricon" instead of "Apricorn"

    As always, thanks for great articles Anand!
  • aarste - Saturday, February 20, 2010 - link

    I'm building a new PC soon and was going to buy another Agility 60 and use it with my existing 60gb agility and raid0 them up. But since the Intel X25-M 80GB is almost the same price, and blows away the agility in random reads (which is more relevant in OS/App usage than sequential speed, correct?) would it be better just to buy and run the single Intel drive instead?

    I'm not too fussed about losing out on 120gb of capacity in raid0, and besides, I can install the games to the Agility instead, and use the Intel for the OS/Apps.
  • leexgx - Saturday, February 20, 2010 - link

    There's no TRIM support in RAID, so the drives would end up at the speed of one SSD or less; SSDs really tank in speed once the drive has to erase before it can write (the filled state).

    You need to use the standard AHCI driver (install Win7 but don't install the chipset drivers or Intel Matrix drivers, as those would disable TRIM).

    One Intel 80GB SSD plus the Agility 60GB you've got now with updated firmware is the best option.

    (Correct about random reads and writes - the Intel drive hardly drops in speed at random writes at all; most people get too focused on sequential speeds. As long as TRIM support is there.)
  • cjcoats - Saturday, February 20, 2010 - link

    The one thing missing is the one that's really relevant to me: workstation performance.
    It’s probably close to the "heavy load" scenario, but... For me, it's a mix of compiles, compute-intensive modeling, visualization, and GIS use. Of these, the compiles, the visualization, and the GIS are the really-interactive items, so are probably most important.
    There are lots of compile-benchmarks out there; it would be relatively easy to generate a GIS benchmark, using some of the GRASS GIS logs I have from what I've been doing lately.

    FWIW.
  • NeBlackCat - Tuesday, February 23, 2010 - link

    I completely agree that there should be a developers benchmark, and keep mentioning this when these articles appear.

    Compiling a large software project seems to me to be a good general purpose test. There'll be random and sequential reads and writes, of a few bytes to many megabytes, in some hard to predict ratio, as the build process reads sources/headers, uses temporary files and writes output. It isn't obvious to me whether the Intel or the Indilinx/Micron characteristics would be favored.

    But afaik no-one's studied this from an SSD angle, and I wish Anand would at least add a benchmark which could, say, build a Linux distro while grepping it repeatedly for some random text.

    What say you Anand?

  • OfficeITGeek - Saturday, February 20, 2010 - link

    Anand,
    As always, another great article. I just wanted to say that things are looking really bright for SSDs. The performance benefits of SSDs are just too great to ignore (unlike the switch from DDR2 to DDR3). But I am going to hold off until Q4; by then the market will have a lot more competition (hence lower prices) and the bugs will be sorted out. The thought of dead drives (such as the one you experienced) just gives me the creeps, even if they do replace it with a new one.
  • MadMan007 - Saturday, February 20, 2010 - link

    Anand, it is getting hard to keep track of different SSDs, which controller they use, how many flash chips, etc. It would be wonderful if you could start an 'SSD decoder ring' chart that lists the relevant information, maybe even linked to performance numbers like you've done with CPUs.
