Final Words

Update: Random write performance of the drive we reviewed may change with future firmware updates. Click here to find out more.

Well, so much for the SF-1200 being a slouch. From the looks of it, I'd say that what you're looking at is a binned SF-1500 that performs well enough for most client use. Corsair's Force delivers the exact same performance as OCZ's Vertex LE, which also makes it your best bet if you're looking for a single drive that offers better-than-Intel performance. Unfortunately you do pay for it. If you look at how much you're paying per usable GB of space, these SandForce drives carry a ~33% price premium over Intel's X25-M G2.

The Corsair Force drive should be resilient enough to maintain good performance regardless of desktop workload. SandForce's unique architecture means that read performance actually suffers more than write performance over time; thankfully, you'll never be in a situation where you need to read back LBAs that contain garbage data, so this shouldn't be a problem.

I really have no performance complaints about the SF-1200 or Corsair's Force drive. Both did very well in our tests. My concerns continue to be about long-term reliability, since we've only really seen these drives shipping in earnest for a couple of months now. If you're fine taking the risk, the performance is very nice. Otherwise I'd wait to see how these drives hold up over time before committing.

Comments

  • Anand Lal Shimpi - Wednesday, April 14, 2010 - link

    That I'm not sure of. The 2008 Iometer build is supposed to use a fairly real-world-inspired data set (Intel apparently helped develop the random algorithm), and the performance appears to be reflected in our real world tests (both PCMark Vantage and our Storage Bench).

    That being said, SandForce is apparently working on their own build of Iometer that lets you select from all different types of source data to really stress the engine.

    Also keep in mind that the technology at work here is most likely more than just compression/data deduplication.

    Take care,
    Anand
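
To illustrate why the source data matters so much for a compressing controller, here's a minimal sketch (using generic zlib compression, not SandForce's actual DuraWrite algorithm or Iometer's generator) comparing how well random and repetitive 4KB blocks compress:

```python
import os
import zlib

def ratio(buf: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(buf)) / len(buf)

SIZE = 4096  # one 4KB block, the transfer size used in the random read/write tests

blocks = {
    "random":   os.urandom(SIZE),              # incompressible: worst case for a compressing controller
    "zeros":    b"\x00" * SIZE,                # trivially compressible: best case
    "repeated": (os.urandom(64) * 64)[:SIZE],  # 64 random bytes repeated to fill the block
}

for name, buf in blocks.items():
    print(f"{name:>9}: compresses to {ratio(buf):.0%} of original")
```

A benchmark that keeps reusing the same easily compressed block behaves like the "zeros" or "repeated" case rather than the "random" case, which is essentially the concern keemik raises below.
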
  • keemik - Wednesday, April 14, 2010 - link

    Call me anal, but I am still not happy with the response ;)
    Maybe the first 4k block is filled with random data, but then that block is used over and over again.

    That random read/write performance is too good to be true.
  • Per Hansson - Wednesday, April 14, 2010 - link

    Just curious about the missing capacitor: won't there be a big risk of data loss in case of a power outage?

    Do you know what design changes were made to get rid of the capacitor? Were any additional components other than the capacitor removed?

    Because it can be bought in low quantities for a quite OK retail price of £16.50 here:
    http://www.tecategroup.com/ultracapacitors/product...
  • bluorb - Wednesday, April 14, 2010 - link

    A question: if the controller is using lossless compression in order to write less data, is it not possible to say that the drive's working capacity is determined by the type of information written to it?

    Example: if user X's data can routinely be compressed at a 2-to-1 ratio, then for this user the drive's working capacity is effectively 186GB and the cost per GB is $2.20.

    Am I on to something, or completely off track?
  • semo - Wednesday, April 14, 2010 - link

    This compression isn't detectable by the OS. As the name suggests (DuraWrite), it is there to reduce wear on the drive, which can also give better performance, but not extra capacity.
  • ptmixer - Wednesday, April 14, 2010 - link

    I'm also wondering about the capacity on these SandForce drives. It seems the actual capacity varies depending on the type of data stored. If the drive has 128 GB of flash, with 93.1 GB usable after spare area, then that must be the amount of compressed/thinned data you can store, so the amount of 'real' data should be much more... thereby helping the price/GB of the drive.

    For example, if the drive is partly used and your OS says it has 80 GB available, and you then store 10 GB of compressible data on it, won't it then report that it perhaps still has 75 GB available (rather than 70 GB as on a normal drive)? Anand -- help us with our confusion!

    PS - thanks for all the great SSD articles! Could you also continue to speculate on how well a drive like this will work on a non-TRIM-enabled system, like OS X, or as an ESXi datastore?
  • JarredWalton - Wednesday, April 14, 2010 - link

    I commented on this in the "This Just In" article, but to recap:

    In terms of pure area used, Corsair sets aside 27.3% of the available capacity. However, with DuraWrite (i.e. compression) they could actually have even more spare area than 35GiB. You're guaranteed 93GiB of storage capacity; if the data happens to compress better than average you'll have more spare area left (and more performance), while with data that doesn't compress well (e.g. movies and JPG images) you'll have less spare area remaining.

    So even at 0% compression you'd still have at least 35GiB of spare and 93GiB of storage, but with an easily achievable 25% compression average you would have as much as ~58GiB of spare area (45% of the total capacity would be "spare"). If you get an even better 33% compression you'd have 66GiB of spare area (51% of total capacity), etc.
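
Jarred's spare-area arithmetic can be reproduced in a few lines (a sketch of the math above only, not the controller's actual accounting; the GiB figures are the ones he quotes):

```python
TOTAL_GIB = 128.0  # raw flash on the drive
USER_GIB = 93.0    # capacity guaranteed to the OS (worst case, 0% compression)

for compression in (0.00, 0.25, 0.33):
    written = USER_GIB * (1 - compression)  # flash actually consumed when the drive is "full"
    spare = TOTAL_GIB - written
    print(f"{compression:.0%} compression -> ~{spare:.0f}GiB spare ({spare / TOTAL_GIB:.0%} of total)")
```
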
  • KaarlisK - Wednesday, April 14, 2010 - link

    Just resize the browser window.
    Margins won't help if you have a 1920x1080 screen anyway.
  • RaistlinZ - Wednesday, April 14, 2010 - link

    I don't see a reason to opt for this over the Crucial C300 drive, which performs better overall and is quite a bit cheaper per GB. Yes, these use less power, but I hardly see that as a determining factor for people running high-end CPUs and video cards anyway.

    If they can get the price down to $299 then I may give it a look. But $410 is just way too expensive considering the competition that's out there.
  • Chloiber - Wednesday, April 14, 2010 - link

    I did test it. When you create the test file, it is compressible to 0 percent of its original size.
    But once you write sequential or random data to the file, you can't compress it at all. So I think Iometer uses random data for the tests. Of course this is a critical point when testing such drives, and I am sure Anand checked it too before running the tests. I hope so at least ;)
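
For anyone who wants to repeat Chloiber's check, a rough sketch along these lines will do (iobw.tst is the file Iometer typically creates on the target drive; adjust the path for your setup):

```python
import sys
import zlib

# Sample an existing benchmark file in 1MB chunks and report how compressible it is.
path = sys.argv[1] if len(sys.argv) > 1 else "iobw.tst"  # example path
CHUNK = 1024 * 1024

orig = comp = 0
with open(path, "rb") as f:
    for _ in range(64):  # sample at most the first 64MB
        chunk = f.read(CHUNK)
        if not chunk:
            break
        orig += len(chunk)
        comp += len(zlib.compress(chunk))

if orig:
    print(f"sampled {orig // CHUNK}MB: compresses to {comp / orig:.0%} of original")
```
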
