The Corsair Force GS

Now that the TRIM issue is out of the way, it's time to take a closer look at Corsair's Force GS SSD. Not much has happened on the SandForce SSD front for a while and the Force GS isn't exactly special either. Like most SandForce based SSDs, it's built around SandForce's SF-2281 controller, although Corsair has made a somewhat uncommon choice of NAND supplier: SanDisk. SanDisk's NAND uses the same Toggle-Mode interface as Toshiba's and Samsung's NAND, which is rarer in SandForce SSDs than ONFi NAND. That's not to say the Force GS is the first Toggle-Mode NAND based SandForce SSD; there are quite a few others that use Toggle-Mode NAND, such as OWC's Mercury 6G and Mushkin's Chronos Deluxe.

Comparison of NAND Interfaces

                | ONFi                                  | Toggle-Mode
Manufacturers   | IMFT (Intel, Micron, SpecTek), Hynix  | Toshiba/SanDisk, Samsung
Version         | 1.0      2.0      2.x      3.0        | 1.0      2.0
Max Bandwidth   | 50MB/s   133MB/s  200MB/s  400MB/s    | 166MB/s  400MB/s

By using Toggle-Mode NAND, Corsair claims slightly higher write speeds than ONFi based SandForce SSDs, although the difference is only about 5MB/s in sequential write and 5K IOPS in 4K random write. And while SanDisk NAND is quite rare in consumer SSDs, there is no reason to expect it to be of lower quality than any other NAND. Toshiba and SanDisk have a NAND joint venture similar to Intel and Micron's IMFT: SanDisk owns 49.9% of the venture and Toshiba owns the remaining 50.1%. As the NAND comes from the same fabs, there is no physical difference between SanDisk and Toshiba NAND, although validation methods may of course differ.

Corsair Force Series GS Specifications

User Capacity             | 180GB    | 240GB    | 360GB    | 480GB
Controller                | SandForce SF-2281
NAND                      | SanDisk 24nm Toggle-Mode MLC NAND
Raw NAND Capacity         | 192GiB   | 256GiB   | 384GiB   | 512GiB
Number of NAND Packages   | 12       | 16       | 12       | 16
Number of Die per Package | 2        | 2        | 4        | 4
Sequential Read           | 555MB/s  | 555MB/s  | 555MB/s  | 555MB/s
Sequential Write          | 525MB/s  | 525MB/s  | 530MB/s  | 455MB/s
Max 4K Random Write       | 90K IOPS | 90K IOPS | 50K IOPS | 50K IOPS

The interesting thing about the Force GS is the available capacities: Corsair isn't offering anything smaller than 180GB, and the lineup also includes a more uncommon 360GB model. As explained in our pipeline article on the Force GS launch, the 180GB and 360GB capacities are achieved by running the SF-2281 controller in 6-channel mode; as the table above shows, both use 12 NAND packages, with either two or four die per package. Corsair only had 240GB review samples available, but they promised to send us a 360GB sample once they have them.
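To make the spare area concrete, here is a quick sketch of the capacity arithmetic; the 8GiB-per-die figure is inferred from the table above rather than specified by Corsair:

    # Sketch: how the raw NAND in the table maps to the advertised user capacity.
    # Assumes 8GiB (2^33 bytes) per die, inferred from the table
    # (192GiB / (12 packages x 2 die) = 8GiB for the 180GB model).
    GIB = 2**30   # NAND capacity is counted in binary gibibytes
    GB = 10**9    # advertised capacity is in decimal gigabytes

    models = {180: (12, 2), 240: (16, 2), 360: (12, 4), 480: (16, 4)}

    for user_gb, (packages, die_per_package) in models.items():
        raw_bytes = packages * die_per_package * 8 * GIB
        spare = 1 - (user_gb * GB) / raw_bytes
        print(f"{user_gb}GB: {raw_bytes // GIB}GiB raw, {spare:.1%} spare area")
    # Every capacity works out to the usual ~12.7% of raw NAND kept as spare area.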

Price Comparison (11/22/2012)

                       | 120/128GB | 180GB | 240/256GB | 360GB | 480/512GB
Corsair Force GS       | N/A       | $160  | $220      | $315  | $400
Corsair Force GT       | $130      | $185  | $220      | N/A   | $390
Corsair Neutron        | $120      | N/A   | $213      | N/A   | N/A
Plextor M5S            | $110      | N/A   | $200      | N/A   | N/A
Crucial m4             | $110      | N/A   | $185      | N/A   | $389
Intel 520 Series       | $130      | $190  | $234      | N/A   | $370
Samsung SSD 830        | $104      | N/A   | $200      | N/A   | $550
OCZ Vertex 3           | $89       | N/A   | $200      | N/A   | $425
OCZ Vertex 4           | $75       | N/A   | $160      | N/A   | $475
Mushkin Chronos Deluxe | $100      | N/A   | $180      | N/A   | N/A

The Force GS is priced competitively against other SSDs at all capacities. Every model comes in noticeably below $1 per GB, even the less common 180GB and 360GB ones. Of course, it should be kept in mind that SSD prices change frequently (some models, like the 480GB Vertex 3, have dropped in price by 30% or more in the past two months!), so you should do your own research before buying. We can only quote prices at the time of writing; there is a good chance our pricing table will be at least somewhat out of date in less than a week.
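As a quick check on the sub-$1/GB claim, the per-gigabyte math from the table works out as follows (a throwaway sketch; prices as of 11/22/2012):

    # Price per GB for the Force GS lineup, straight from the table above.
    force_gs = {180: 160, 240: 220, 360: 315, 480: 400}  # capacity GB: price USD

    for capacity, price in force_gs.items():
        print(f"{capacity}GB: ${price / capacity:.2f}/GB")
    # 180GB: $0.89/GB, 240GB: $0.92/GB, 360GB: $0.88/GB, 480GB: $0.83/GB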

Comments

  • Sivar - Saturday, November 24, 2012 - link

    Do you understand how data deduplication works?
    This is a rhetorical question. Those who have read your comments know the answer.
    Please read the Wikipedia article on data deduplication, or some other source, before making further comments.
  • JellyRoll - Saturday, November 24, 2012 - link

    I am repeating the comments above for you. Since you referenced the Wiki, I would kindly suggest that you have a look at it yourself before commenting further.
    "the intent of storage-based data deduplication is to inspect large volumes of data and identify large sections – such as entire files or large sections of files – that are identical, in order to store only one copy of it."
    This happens without any regard to whether data is compressible or not.
    If you have two matching sets of data, be they incompressible or not, they would be subject to deduplication. It would merely require mapping to the same LBA addresses.
    For instance, if you have two files that consist of largely incompressible data, but they are still carbon copies of each other, they are still subject to data deduplication.
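    As a minimal sketch of the mechanism being described here (a toy content-hash store, purely illustrative and not SandForce's actual design), identical blocks are stored once no matter how incompressible their contents are:

        import hashlib, os

        class DedupStore:
            def __init__(self):
                self.blocks = {}    # content hash -> stored block
                self.lba_map = {}   # LBA -> content hash

            def write(self, lba, data):
                digest = hashlib.sha256(data).digest()
                self.blocks.setdefault(digest, data)  # stored only on first sight
                self.lba_map[lba] = digest

            def read(self, lba):
                return self.blocks[self.lba_map[lba]]

        store = DedupStore()
        block = os.urandom(4096)        # random bytes: incompressible
        for lba in range(100):          # write 100 identical copies...
            store.write(lba, block)
        print(len(store.blocks))        # ...only 1 physical copy is kept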
  • 'nar - Monday, November 26, 2012 - link

    You contradict yourself, dude. You are regurgitating the words, but their meaning isn't sinking in. If you have two identical sets of incompressible data, then you have just made it compressible, i.e. 2=1.

    When the drive is hammered with incompressible data, there is only one set of data. If there were two or more sets of identical data then it would be compressible. De-duplication is a form of compression. If you have incompressible data, it cannot be de-duped.

    Write amplification improvements come from compression, as in 2 files=1 file. Write less, lower amplification. Compressible data exhibits this, but incompressible data cannot, because no two files are identical. Write amp is still high with incompressible data, just like everyone else's. Your conclusion is backwards. De-duplication can only be applied to compressible data.

    The previous article that Anand himself wrote suggested dedupe; it did not state that it was used, as that was not divulged. Either way, dedupe is similar to compression, hence the description. Although vague, it's the best we got from SandForce to describe what they do.

    What SandForce uses is speculation anyhow, since it deals with trade secrets. If you really want to know you will have to ask SandForce yourself. Good luck with that. :)
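    For what it's worth, the write amplification arithmetic both sides are invoking looks like this (the ratios are made up for illustration, not SandForce figures):

        def write_amplification(host_bytes, nand_bytes):
            # WA = bytes physically written to NAND / bytes the host wrote
            return nand_bytes / host_bytes

        host = 100 * 10**9  # host writes 100GB

        # Compressible data: the controller stores less than it received,
        # so WA can drop below 1.0 (the 0.5x ratio is illustrative).
        print(write_amplification(host, host * 0.5))  # 0.5

        # Incompressible data: stored verbatim, plus garbage collection
        # overhead (the 1.1x figure is illustrative too).
        print(write_amplification(host, host * 1.1))  # ~1.1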
  • JellyRoll - Tuesday, November 27, 2012 - link

    If you were to write 100 exact copies of a file, each 100MB in size and consisting of incompressible data, deduplication would only write ONE file and link back to it repeatedly. The other 99 instances of the same file would not be written again.
    That is the very essence of deduplication.
    SandForce processors do not exhibit this characteristic, be it 100 files or even only two identical files.
    Of course SandForce doesn't disclose their methods, but flat-out terming it dedupe is misleading at best.
  • extide - Wednesday, November 28, 2012 - link

    DeDuplication IS a form of compression dude. Period!!
  • FunnyTrace - Wednesday, November 28, 2012 - link

    SandForce presumably uses some sort of differential information update. When a block is modified, you find the difference between the old data and the new data. If the difference is small, you can just encode it over a smaller number of bits in the flash page. If you do the difference encoding, you cannot garbage-collect the old data unless you reassemble and rewrite the new data to a different location.

    Difference encoding requires more time (extra read, processing, etc). So, you must not do it when the write buffer is close to full. You can always choose whether or not you do differential encoding.

    It is definitely not deduplication. You can think of it as compression.

    A while back my prof and some of my labmates tried to guess their "DuraWrite" (*rolls eyes*) technology and this is the best guess we have come up with. We didn't have the resources to reverse engineer their drive. We only surveyed published literature (papers, patents, presentations).

    Oh, and here's their patent: http://www.google.com/patents/US20120054415
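    To illustrate the differential update idea in a toy form (store only the changed bytes when the delta is small; a guess at the general technique, not the patented implementation):

        def encode_update(old, new):
            # Store (offset, byte) pairs where the block changed; fall back
            # to a full rewrite when the delta would not save space.
            delta = [(i, b) for i, (a, b) in enumerate(zip(old, new)) if a != b]
            if 5 * len(delta) < len(new):   # ~5 bytes per encoded pair
                return ("delta", delta)
            return ("full", new)

        def apply_update(old, update):
            kind, payload = update
            if kind == "full":
                return payload
            block = bytearray(old)
            for offset, value in payload:
                block[offset] = value
            return bytes(block)

        old = bytes(4096)                        # a 4KiB block of zeros
        new = old[:100] + b"\x01" + old[101:]    # one byte modified
        update = encode_update(old, new)
        print(update[0], apply_update(old, update) == new)  # delta True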
  • JellyRoll - Friday, November 30, 2012 - link

    Hallelujah!
    Thanks FunnyTrace, I had a strong suspicion that it was data differencing. The linked patent document mentions it 44 times. Maybe that many repetitions will sink in for some who still believe it is deduplication?
    Also, here is a link to data differencing for those who wish to learn:
    http://en.wikipedia.org/wiki/Data_differencing
    Radoslav Danilak is listed as the inventor, which isn't surprising; I believe he was SandForce employee #2. He is now running Skyera, and he is an excellent speaker, btw.
  • extide - Saturday, November 24, 2012 - link

    It's no different than SANs and ZFS and other enterprise level storage solutions doing block level de-duplication. It's not magic, and it's not complicated. Why is it so hard to believe? I mean, you are correct that the drive has no idea what bytes go to what file, but it doesn't have to. As long as the controller sends the same data back to the host for a given read on an LBA as the host sent to write, it's all gravy. It doesn't matter what ends up on the flash.
  • JellyRoll - Saturday, November 24, 2012 - link

    Absolutely correct. However, those solutions have much more powerful processors. Here you are talking about a very low wattage processor that cannot handle deduplication on this scale. SandForce also does not make the statement that they actually DO deduplication.
  • FunBunny2 - Saturday, November 24, 2012 - link

    here: http://thessdreview.com/daily-news/latest-buzz/ken...

    "Speaking specifically on SF-powered drives, Kent is keen to illustrate that the SF approach to real time compression/deduplication gives several key advantages."

    Kent being the LSI guy.
