AnandTech Storage Bench

Note that the driver for our 6Gbps controller isn't supported by our custom storage bench, so the C300 results are only offered in 3Gbps mode.

The first test in our benchmark suite is a light usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader, among other applications. In Firefox we browse sites like Facebook, AnandTech and Digg. Outlook is also running, and we use it to check email and to create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs and save the document; the same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here, but nothing unreasonable. Generally the application tasks proceed linearly, with the exception of things like web browsing, which may happen in between other tasks.

The recording is played back on all of our drives here today. Remember that we’re isolating disk performance; all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are purely sequential. Average queue depth is 6.09 IOs.
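
To give a sense of how numbers like these fall out of a trace, here's a minimal sketch in Python using a hypothetical record format of (offset, size, is_write) per IO; the heavy and gaming workload breakdowns later in this article are computed the same way. Queue depth is omitted since it needs issue/completion timestamps.

    from collections import Counter

    def trace_stats(trace):
        # trace: iterable of (offset_bytes, size_bytes, is_write) records.
        sizes = Counter()
        reads = writes = sequential = 0
        prev_end = None
        for offset, size, is_write in trace:
            if is_write:
                writes += 1
            else:
                reads += 1
            sizes[size] += 1
            # Count an IO as sequential if it starts where the previous one ended.
            if prev_end is not None and offset == prev_end:
                sequential += 1
            prev_end = offset + size
        total = reads + writes
        return {
            "reads": reads,
            "writes": writes,
            "pct_4KB": 100.0 * sizes[4096] / total,
            "pct_sequential": 100.0 * sequential / total,
        }

    # Two back-to-back 4KB reads (sequential) and one random 64KB write:
    print(trace_stats([(0, 4096, False), (4096, 4096, False), (1 << 30, 65536, True)]))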

The performance results are reported in average I/O Operations per Second (IOPS):

AnandTech Storage Bench - Light Workload

I've always loved the performance of SandForce's controllers; I've just been worried about their reliability. While we wait for the latter to prove itself over time, the performance is very good today. The Corsair Force is very competitive: as fast as any other SF drive and faster than nearly all other SSDs.

If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real-time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same way they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test: we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark; Windows updates are installed as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs are sequential. Approximately 30% of all accesses are 4KB in size, 12% are 16KB, 14% are 32KB and 20% are 64KB. Average queue depth is 3.59.

AnandTech Storage Bench - Heavy Workload

Once again, there's very little difference between Corsair's Force and OCZ's Vertex LE. SandForce's performance isn't as strong in our heavy downloading workload; the Corsair Nova (with the latest Indilinx firmware) actually does better.

The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.

AnandTech Storage Bench - Gaming Workload

Here we're completely bound by read performance. I'm afraid the only way you'll get faster is via RAID or, in the C300's case, a 6Gbps controller.

Comments

  • Anand Lal Shimpi - Wednesday, April 14, 2010

    That I'm not sure of; the 2008 Iometer build is supposed to use a fairly real-world-inspired data set (Intel helped develop the random algorithm, apparently) and the performance appears to be reflected in our real world tests (both PCMark Vantage and our Storage Bench).

    That being said, SandForce is apparently working on their own build of Iometer that lets you select from different types of source data to really stress the engine.
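
    The idea is straightforward to sketch: mix incompressible random bytes with compressible zero bytes to hit a target ratio (a hypothetical illustration in Python, not SandForce's or Iometer's actual code):

        import os, zlib

        def make_buffer(size, random_fraction):
            # random_fraction of the buffer is incompressible, the rest is zeros.
            random_part = os.urandom(int(size * random_fraction))
            return random_part + b"\x00" * (size - len(random_part))

        for frac in (0.0, 0.5, 1.0):
            buf = make_buffer(1 << 20, frac)
            ratio = len(zlib.compress(buf)) / float(len(buf))
            print("%.0f%% random -> compresses to %.0f%% of original" % (100 * frac, 100 * ratio))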

    Also keep in mind that the technology at work here is most likely more than just compression/data deduplication.

    Take care,
    Anand
  • keemik - Wednesday, April 14, 2010

    Call me anal, but I am still not happy with the response ;)
    Maybe the first 4k block is filled with random data, but then that block is used over and over again.

    That random read/write performance is too good to be true.
  • Per Hansson - Wednesday, April 14, 2010

    Just curious about the missing capacitor: won't there be a big risk of data loss in case of a power outage?

    Do you know what design changes were made to get rid of the capacitor? Were any additional components other than the capacitor removed?

    Because it can be bought in low quantities for a quite OK retail price of £16.50 here:
    http://www.tecategroup.com/ultracapacitors/product...
  • bluorb - Wednesday, April 14, 2010

    A question: if the controller is using lossless compression in order to write less data, isn't it possible to say that the drive's working capacity is determined by the type of information written to it?

    Example: if user X's data can be routinely compressed at a 2:1 ratio, then it can be said that for this user the drive's working capacity is 186GB and the cost per GB is $2.20.
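
    In numbers (a quick sketch; the ~$410 street price and 93GB usable capacity are assumptions taken from figures mentioned elsewhere in these comments):

        price_usd, usable_gb = 410.0, 93.0        # assumed street price / usable capacity
        for ratio in (1.0, 2.0):                  # 2:1 is the hypothetical above
            effective_gb = usable_gb * ratio
            print("%.0f:1 -> %.0fGB working capacity, $%.2f per GB"
                  % (ratio, effective_gb, price_usd / effective_gb))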

    Am I on to something or completely off track?
  • semo - Wednesday, April 14, 2010

    This compression isn't detectable by the OS. As the name suggests (DuraWrite), it is there to reduce wear on the drive, which can also give better performance but not extra capacity.
  • ptmixer - Wednesday, April 14, 2010

    I'm also wondering about the capacity on these SandForce drives. It seems the actual capacity varies depending on the type of data stored. If the drive has 128GB of flash, with 93.1GB usable after spare area, then that must be the amount of compressed/thinned data you can store, so the amount of 'real' data should be much more... thereby helping the price/GB of the drive.

    For example, if the drive is partly used and your OS says it has 80 GB available, and you then store 10 GB of compressible data on it, won't it report that it still has, say, 75 GB available (rather than 70 GB as on a normal drive)? Anand -- help us with our confusion!

    P.S. Thanks for all the great SSD articles! Could you also continue to speculate on how well a drive will work on a non-TRIM-enabled system, like OS X, or as an ESXi datastore?
  • JarredWalton - Wednesday, April 14, 2010

    I commented on this in the "This Just In" article, but to recap:

    In terms of pure area used, Corsair sets aside 27.3% of the available capacity. However, with DuraWrite (i.e. compression) they could actually have even more spare area than 35GiB. You're guaranteed 93GiB of storage capacity; if the data happens to compress better than average you'll have more spare area left (and more performance), while with data that doesn't compress well (e.g. movies and JPG images) you'll have less spare area remaining.

    So even at 0% compression you'd still have at least 35GiB of spare and 93GiB of storage, but with an easily achievable 25% compression average you would have as much as ~58GiB of spare area (45% of the total capacity would be "spare"). If you get an even better 33% compression you'd have 66GiB of spare area (51% of total capacity), etc.
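
    In code form, that arithmetic looks like this (a minimal sketch using the 128GiB raw / 93GiB exposed figures above, assuming a completely full drive):

        flash_gib, user_gib = 128, 93
        for compression in (0.00, 0.25, 0.33):
            stored = user_gib * (1 - compression)  # GiB physically occupied when full
            spare = flash_gib - stored
            print("%d%% compression -> ~%.0fGiB spare (%.1f%% of total)"
                  % (compression * 100, spare, 100 * spare / flash_gib))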
  • KaarlisK - Wednesday, April 14, 2010

    Just resize the browser window.
    Margins won't help if you have a 1920x1080 screen anyway.
  • RaistlinZ - Wednesday, April 14, 2010

    I don't see a reason to opt for this over the Crucial C300 drive, which performs better overall and is quite a bit cheaper per GB. Yes, these use less power, but I hardly see that as a determining factor for people running high-end CPUs and video cards anyway.

    If they can get the price down to $299 then I may give it a look. But $410 is just way too expensive considering the competition that's out there.
  • Chloiber - Wednesday, April 14, 2010

    I did test it. If you create the test file, it compresses to 0 percent of its original size.
    But if you write sequential or random data to the file, you can't compress it at all. So I think Iometer uses random data for the tests. Of course this is a critical point when testing such drives, and I am sure Anand tested it too before running the benchmarks. I hope so at least ;)
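
    For reference, that check is a few lines of Python with zlib (iobw.tst is the file Iometer creates on the target drive; the exact path depends on your setup):

        import zlib

        def compressibility(path, sample_bytes=1 << 20):
            # Compress a sample of the file and report the compressed/original ratio.
            with open(path, "rb") as f:
                data = f.read(sample_bytes)
            return len(zlib.compress(data)) / float(len(data))

        # e.g. print("%.1f%%" % (100 * compressibility("iobw.tst")))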
