For the past six months I've been working on research and testing for the next major AnandTech SSD article. I figured I had enough time to line up its release with the first samples of the next generation of high-end SSDs. After all, it seemed like everyone was taking longer than expected to bring out their next-generation controllers. I should've known better.

At CES this year we had functional next-generation SSDs based on Marvell and SandForce controllers. The latter was actually performing pretty close to what we'd expect from final hardware. Although I was told that drives wouldn't be shipping until mid-Q2, it was clear that preview hardware was imminent. It was the timing that I couldn't predict.

A week ago, two days before I hopped on a flight to Barcelona for MWC, a package arrived at my door. OCZ had sent me a preproduction version of their first SF-2500 based SSD: the Vertex 3 Pro. The sample was so early that it didn't even have a housing; all I got was a PCB and a note.

Two days isn't a lot of time to test an SSD. It's enough to get a good idea of overall performance, but not enough to find bugs and truly investigate behavior. Thankfully the final release of the drive is still at least one to two months away, so this article can serve as a preview.

The Architecture

I've covered how NAND Flash works numerous times in the past, but I'll boil it all down to a few essentials.

NAND Flash is non-volatile memory: you can write to it and it'll store a charge even if you remove power from the device. Erase the NAND too many times, however, and it will stop being able to hold a charge. There are two types of NAND that we deal with: single-level cell (SLC) and multi-level cell (MLC). Both are physically the same; you just store more data per cell in the latter, which drives cost down along with performance and reliability. Two-bit MLC is what's currently used in consumer SSDs; the 3-bit stuff you've seen announced is only suitable for USB sticks, SD cards and other similar media.
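A quick way to see why packing more bits into a cell cuts cost per gigabyte but hurts speed and endurance: every extra bit doubles the number of charge states the controller has to program and sense inside the same physical cell. A toy illustration (the state counts are just the arithmetic, not any vendor's spec):

```python
# Each extra bit per cell doubles the number of distinct charge levels that
# must be programmed and sensed in one cell, which is why MLC is cheaper per
# gigabyte but slower and less durable than SLC. Toy arithmetic only.
for name, bits_per_cell in (("SLC", 1), ("2-bit MLC", 2), ("3-bit MLC", 3)):
    states = 2 ** bits_per_cell
    print(f"{name}: {bits_per_cell} bit(s)/cell -> {states} charge states")
```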

Writes to NAND happen at the page level (4KB or 8KB depending on the type of NAND); however, you can't erase a single page. You can only erase groups of pages at a time in a structure called a block (usually 128 or 256 pages). Each cell in NAND can only be erased a finite number of times, so you want to avoid erasing as much as possible. The way you get around this is by keeping data in NAND as long as possible, only erasing when you absolutely have to make room for new data.

SSD controllers have to balance the need to optimize performance with the need to write evenly to all NAND pages. Conventional controllers do this by keeping very large tables that track all data being written to the drive and optimize writes for performance and reliability. The controller will group small random writes together and attempt to turn them into large sequential writes that are easier to burst across all of the NAND devices. Smart controllers will even attempt to reorganize data while writing in order to keep performance high for future writes. All of this requires the controller to keep track of lots of data, which in turn requires large caches and external DRAM to make accessing that data quick. All of this work is done to ensure that the controller only writes data it absolutely needs to write.
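To make the write-coalescing idea concrete, here's a minimal sketch of a controller buffering small random writes and programming them as one full, sequential block. This is not any vendor's firmware; the page/block geometry and all names are assumptions for illustration:

```python
PAGE_SIZE = 4096         # bytes per NAND page (assumed geometry)
PAGES_PER_BLOCK = 128    # pages per erase block (assumed geometry)

class WriteCoalescer:
    """Buffers small random writes and programs them as one full,
    sequential block, so the NAND never sees scattered page writes."""

    def __init__(self):
        self.pending = []      # (lba, data) pairs waiting for a full block
        self.mapping = {}      # logical LBA -> (block, page) after flush
        self.nand = {}         # (block, page) -> data; stands in for the flash
        self.next_block = 0

    def write(self, lba, data):
        self.pending.append((lba, data))
        if len(self.pending) == PAGES_PER_BLOCK:
            self._flush_block()

    def _flush_block(self):
        # Program every page of a fresh block in order: scattered logical
        # addresses become physically sequential, and the mapping table
        # (the "very large tables" mentioned above) records where they went.
        block = self.next_block
        for page, (lba, data) in enumerate(self.pending):
            self.nand[(block, page)] = data
            self.mapping[lba] = (block, page)
        self.next_block += 1
        self.pending.clear()
```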

SandForce's approach has the same end goal, but takes a very different path to get there. Rather than trying to figure out what to do with the influx of data, SandForce's approach simply writes less data to the NAND. Using realtime compression and data deduplication techniques, SandForce's controllers attempt to reduce the size of what the host is writing to the drive. The host still thinks all of its data is being written to the drive, but once the writes hit the controller, the controller attempts to reduce the data as much as possible.

The compression/deduplication is done in realtime and what results is potentially awesome performance. Writing less data is certainly faster than writing everything. Similar technologies are employed by enterprise SAN solutions, but SandForce's algorithms are easily applicable to the consumer world. With the exception of large, highly compressed multimedia files (think videos, MP3s), most of what you write to your HDD/SSD is pretty easily compressible.
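As a rough sketch of what a compress-then-deduplicate write path looks like, here's a toy version built on stock Python libraries. SandForce's actual algorithms are proprietary and certainly far more sophisticated; every name and structure here is invented for illustration:

```python
import hashlib
import zlib

class ReducingWritePath:
    """Toy write path: deduplicate identical pages by content hash,
    compress the rest, and only count the reduced bytes as written."""

    def __init__(self):
        self.store = {}      # content hash -> compressed payload (the "NAND")
        self.lba_map = {}    # logical block address -> content hash

    def write(self, lba, data):
        digest = hashlib.sha256(data).digest()
        new_bytes = 0
        if digest not in self.store:          # dedup: identical data stored once
            self.store[digest] = zlib.compress(data)
            new_bytes = len(self.store[digest])
        self.lba_map[lba] = digest
        return new_bytes                      # bytes that actually hit the NAND

path = ReducingWritePath()
page = b"easily compressible user data " * 136   # ~4KB of redundant input
print(path.write(0, page))   # a small fraction of 4096 bytes actually written
print(path.write(1, page))   # 0: an identical page costs no new NAND writes
```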

You don't get any extra space with SandForce's approach; the drive still has to accommodate the same number of LBAs as it advertises to the OS. After all, you could write purely random data to the drive, in which case it'd behave like a normal SSD without any of its superpowers.

Since the drive isn't storing your data bit for bit but rather storing hashes, it's easier for SandForce to do things like encrypt all of the writes to the NAND (which it does by default). By writing less, SandForce also avoids having to use a large external DRAM; its designs don't have any DRAM cache at all. SandForce also claims its write-less approach lets it get away with less reliable NAND; to ensure reliability, the controller actually writes some amount of redundant data. Data is written across multiple NAND die in parallel along with additional parity data, and the parity data occupies the space of a single NAND die. As a result, SandForce drives set aside more spare area than conventional controllers.
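The redundancy scheme is easy to picture as RAID-like XOR parity striped across NAND die. A minimal sketch, assuming an 8-die stripe with one die's worth of parity; this mirrors the description above, not SandForce's actual implementation:

```python
def xor_pages(pages):
    """Byte-wise XOR of equal-length pages."""
    out = bytes(len(pages[0]))
    for page in pages:
        out = bytes(a ^ b for a, b in zip(out, page))
    return out

def stripe_with_parity(data_pages):
    """Write data_pages across N-1 die and XOR parity on the Nth die.
    The parity consumes exactly one die's worth of space, which is why
    such a drive exposes less capacity than the raw NAND on board."""
    return list(data_pages) + [xor_pages(data_pages)]

def rebuild(stripe, lost_die):
    """Recover the page on a failed die by XOR-ing the survivors."""
    return xor_pages([p for i, p in enumerate(stripe) if i != lost_die])

# 7 data die + 1 parity die, 4KB pages of dummy data:
pages = [bytes([i]) * 4096 for i in range(7)]
stripe = stripe_with_parity(pages)
assert rebuild(stripe, lost_die=3) == pages[3]   # die 3 fails, data survives
```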

What's New

Everything I've described up to this point applies to both the previous generation (SF-1200/1500) and the new generation (SF-2200/2500) of SandForce controllers. Now let's go over what's new:

1) Toggle Mode & ONFI 2 NAND support. Higher bandwidth NAND interfaces mean we should see much better performance without any architectural changes.

2) To accommodate the higher bandwidth NAND, SandForce increased the size of on-chip memories and buffers, and doubled the number of NAND die that can be active at one time. Finally, there's native 6Gbps SATA support to remove any interface bottlenecks. Both 1 and 2 will manifest as much higher read/write speeds (see the rough bandwidth arithmetic after this list).

3) Better encryption. This is more of an enterprise feature but the SF-2000 controllers support AES-256 encryption across the drive (and double encryption to support different encryption keys for separate address ranges on the drive).

4) Better ECC. NAND densities and defect rates are going up, program/erase cycles are going down. The SF-2000 as a result has an improved ECC engine.

All of the other features that were present in the SF-1200/1500 are present in the SF-2000 series.
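Some rough arithmetic on what points 1 and 2 buy you. The per-channel figures are assumptions (ONFI 2.x and Toggle-Mode NAND are specced in roughly the 133-200MB/s range, legacy asynchronous NAND is far slower, and the SF-2000 is, like its predecessor, an 8-channel design), so treat this as a sanity check rather than a spec:

```python
channels = 8                 # assumed 8-channel controller layout
old_mb_per_channel = 40      # assumed legacy asynchronous NAND rate, MB/s
new_mb_per_channel = 166     # assumed ONFI 2.x transfer rate per channel, MB/s
sata3_ceiling = 600          # ~6Gbps minus 8b/10b encoding overhead, MB/s

old_bw = channels * old_mb_per_channel
new_bw = channels * new_mb_per_channel
print(f"legacy NAND aggregate: ~{old_bw} MB/s")
print(f"fast-NAND aggregate:   ~{new_bw} MB/s")
print(f"deliverable over SATA: ~{min(new_bw, sata3_ceiling)} MB/s")
```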

Comments

  • PubicTheHare - Thursday, February 17, 2011 - link

    Anand,

    Would this be the drive to pair with a Sandy Bridge Macbook Pro, or do we still need to wait for native TRIM support in OSX?

    I'm really eager to buy a Sandy Bridge MBP and do this if the V3P is the one to get. I can't recall which controller you recommended for Macs.
  • markjx1 - Thursday, February 17, 2011 - link

    Why no mention of the fact this was originally slated for the SF-2000, and now it's got the enterprise-class SF-2500 with disclaimers of "hardware not final" all over the article?

    The truth is they had major problems with SF-2000 and nobody wants to talk about it - of all people I'd think you would have. Now go ahead and delete my post again.
  • markjx1 - Thursday, February 17, 2011 - link

    Resorting to using the enterprise-class SF-2500 means I worry about cost competitiveness against other upcoming Gen3 offerings from Intel and Crucial/Micron.

    OCZ took a $25 mil bank loan recently so they need a winner. I worry about the "hardware is not final" disclaimers in the article. Sending these samples out for some buzz smells like buying time while they work out production dilemmas since originally they were going to use a different SF controller.
  • bplewis24 - Friday, February 18, 2011 - link

    Why are you trolling? You have 3 of essentially the same posts on this page. We get it, you think he "sold out" for not mentioning that it's using a different controller.

    Brandon
  • vol7ron - Thursday, February 17, 2011 - link

    Does this mean a higher QD might improve performance?

    It looks like a QD of 16 might be the sweet spot in terms of latency vs. IOPS.
  • DJMiggy - Thursday, February 17, 2011 - link

    Thanks for the great article and thank you OCZ for making my Vertex 1 even more obsolete. That's ok though! Not a complaint! lol Just looking forward to my next upgrade!
  • Rasterman - Thursday, February 17, 2011 - link

    Since the drive is 400MB/s read, if you use two in RAID-0 are you going to get 800MB/s, or the realistic 6Gbps max of ~700MB/s?

    If you use RAID-0 X4 are you going to get 1600MB/s or ~700MB/s?
  • jwilliams4200 - Thursday, February 17, 2011 - link

    With a decent RAID card, RAID 0 of N drives will give you slightly less than N times the performance of a single drive.

    For on-motherboard RAID, you will also get approximately N times the performance of a single drive, but there is usually a ceiling to total performance with motherboard RAID. It is around 600 MB/s for the ICH10R.
  • myterrybear - Thursday, February 17, 2011 - link

    I've been a loyal reader of this site, but what really mystifies me is why you use EVERY other controller but those found on the AMD platform... especially with a SATA 6Gbps SSD like this. It would match up well for those of us AMD users who have paid money to upgrade to a motherboard with the SB8xx controller onboard, to utilize hard drives and other devices that run on the emerging SATA 6Gbps standard.
  • Per Hansson - Thursday, February 17, 2011 - link

    Hi, has the much talked-about hibernation problem with the SandForce drives been solved in the 2xxx series?
    Losing hibernation is pretty much a deal killer for me in a laptop :(

    http://www.google.se/search?hl=sv&safe=off&...
