For the past six months I've been working on research and testing for the next major AnandTech SSD article. I figured I had enough time to line up its release with the first samples of the next generation of high-end SSDs. After all, it seemed like everyone was taking longer than expected to bring out their next-generation controllers. I should've known better.

At CES this year we had functional next-generation SSDs based on Marvell and SandForce controllers. The latter was actually performing pretty close to what we'd expect from final hardware. Although I was told that drives wouldn't be shipping until mid-Q2, it was clear that preview hardware was imminent. It was the timing I couldn't predict.

A week ago, two days before I hopped on a flight to Barcelona for MWC, a package arrived at my door. OCZ had sent me a preproduction version of their first SF-2500 based SSD: the Vertex 3 Pro. The sample was so early that it didn't even have a housing; all I got was a PCB and a note.

Two days isn't a lot of time to test an SSD. It's enough to get a good idea of overall performance, but not enough to find bugs and truly investigate behavior. Thankfully the final release of the drive is still at least one to two months away, so this article can serve as a preview.

The Architecture

I've covered how NAND Flash works numerous times in the past, but I'll boil it all down to a few essentials.

NAND Flash is non-volatile memory: you can write to it and it'll store a charge even if you remove power from the device. Erase the NAND too many times, however, and it will stop being able to hold a charge. There are two types of NAND that we deal with: single-level cell (SLC) and multi-level cell (MLC). Both are physically the same; the latter simply stores more bits per cell, which drives cost down along with performance and reliability. Two-bit MLC is what's currently used in consumer SSDs; the 3-bit NAND you've seen announced is only suitable for USB sticks, SD cards and other similar media.
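The tradeoff comes down to margins: every extra bit stored per cell doubles the number of distinct charge states the cell has to keep apart, which is why each step from SLC to 2-bit to 3-bit NAND costs speed and endurance. A trivial illustration:

```python
# Each additional bit per cell doubles the number of charge states the
# NAND must reliably distinguish, shrinking the margin between states.
for bits, name in ((1, "SLC"), (2, "2-bit MLC"), (3, "3-bit MLC")):
    print(f"{name}: {bits} bit(s)/cell -> {2**bits} voltage states")
```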

Writes to NAND happen at the page level (4KB or 8KB depending on the type of NAND), but you can't erase a single page; you can only erase a group of pages at a time in a structure called a block (usually 128 or 256 pages). Each cell in NAND can only be erased a finite number of times, so you want to avoid erasing as much as possible. The way you get around this is by keeping data in NAND as long as possible, only erasing it when you absolutely must make room for new data.

SSD controllers have to balance the need to optimize performance against the need to write evenly to all NAND pages. Conventional controllers do this by keeping very large tables that track all data being written to the drive and optimizing those writes for performance and reliability. The controller will group small random writes together and attempt to turn them into large sequential writes that are easier to burst across all of the NAND devices. Smart controllers will even attempt to reorganize data while writing in order to keep performance high for future writes. All of this requires the controller to keep track of a lot of metadata, which in turn requires large caches and external DRAM to make accessing that metadata quick. And all of this work is done to ensure that the controller only writes data it absolutely needs to write.
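To make the bookkeeping concrete, here's a toy flash-translation-layer sketch. Everything in it is an assumption for illustration (page size, block size, flush threshold), and real firmware is vastly more sophisticated, but it shows the two mechanisms described above: a logical-to-physical map that lets the controller place data wherever wear leveling prefers, and a buffer that turns scattered random writes into one sequential programming pass.

```python
PAGE_SIZE = 4096          # bytes per NAND page (assumed)
PAGES_PER_BLOCK = 128     # pages per erase block (assumed)

class ToyFTL:
    def __init__(self):
        self.map = {}         # logical page -> (block, page) on "NAND"
        self.buffer = []      # pending (logical_page, data) writes
        self.next = (0, 0)    # next free physical location

    def write(self, logical_page, data):
        # Buffer the write instead of programming NAND immediately.
        self.buffer.append((logical_page, data))
        if len(self.buffer) >= 8:   # flush threshold (assumed)
            self.flush()

    def flush(self):
        # One sequential run of page programs; overwritten logical pages
        # are simply remapped, never erased in place.
        for logical_page, data in self.buffer:
            block, page = self.next
            self.map[logical_page] = (block, page)
            page += 1
            if page == PAGES_PER_BLOCK:   # block full: move to the next one
                block, page = block + 1, 0
            self.next = (block, page)
        self.buffer.clear()

ftl = ToyFTL()
for lba in (7, 3, 99, 12, 7, 55, 2, 81):   # scattered logical writes
    ftl.write(lba, b"x" * PAGE_SIZE)
print(ftl.map)   # lba 7 points at its newest copy; the stale one awaits GC
```

The stale copy of logical page 7 is exactly the kind of garbage a real controller has to clean up later, which is why erase counts and spare area matter so much.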

SandForce's approach has the same end goal, but takes a very different path to get there. Rather than trying to figure out what to do with the influx of data, SandForce's controllers simply write less data to the NAND. Using realtime compression and data deduplication techniques, they attempt to reduce the size of what the host is writing to the drive. The host still thinks all of its data is being written to the drive, but once the writes hit the controller, the controller attempts to reduce the data as much as possible.
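Here is a heavily simplified sketch of that write path, with zlib and SHA-256 standing in for SandForce's proprietary (and undisclosed) algorithms; everything below is an assumption for illustration only:

```python
import hashlib
import zlib

class ReducingWritePath:
    def __init__(self):
        self.store = {}        # content hash -> compressed bytes ("NAND")
        self.lba_to_hash = {}  # logical address -> content hash

    def host_write(self, lba, data):
        digest = hashlib.sha256(data).digest()
        if digest not in self.store:      # new content: compress and keep it
            self.store[digest] = zlib.compress(data)
        self.lba_to_hash[lba] = digest    # duplicate content just remaps

    def bytes_programmed(self):
        return sum(len(blob) for blob in self.store.values())

path = ReducingWritePath()
chunk = b"The quick brown fox jumps over the lazy dog. " * 100
for lba in range(10):                     # host sends ten identical chunks
    path.host_write(lba, chunk)

print("host sent:      ", 10 * len(chunk), "bytes")
print("NAND programmed:", path.bytes_programmed(), "bytes")
```

Run it and the ten identical host writes collapse into a single compressed copy: far fewer bytes programmed means faster writes and fewer program/erase cycles burned.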

The compression/deduplication is done in realtime, and what results is potentially awesome performance. Writing less data is certainly faster than writing everything. Similar technologies are employed by enterprise SAN solutions, but SandForce's algorithms are easily applicable to the consumer world. With the exception of large, highly compressed multimedia files (think videos, MP3s), most of what you write to your HDD/SSD is pretty easily compressible.
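You can get a feel for that claim with any general-purpose compressor; zlib here is just a stand-in, since what the controller actually implements in hardware isn't public:

```python
import os
import zlib

# Repetitive data (documents, logs, binaries) shrinks dramatically;
# already-compressed media looks like random bytes and doesn't shrink.
samples = {
    "repetitive text": b"user=anand; theme=dark; cache=on; " * 2000,
    "incompressible (like video/MP3 payloads)": os.urandom(68000),
}
for name, data in samples.items():
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: {ratio:.1%} of original size after compression")
```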

You don't get any extra space with SandForce's approach; the drive still has to accommodate the same number of LBAs as it advertises to the OS. After all, you could write purely random data to the drive, in which case it'd behave like a normal SSD without any of its superpowers.

Since the drive isn't storing your data bit for bit but rather storing hashes, it's easier for SandForce to do things like encrypt all of the writes to the NAND (which it does by default). By writing less, SandForce also avoids having to use a large external DRAM: its designs don't have any DRAM cache. SandForce also claims its write-less approach lets it get away with less reliable NAND; to ensure reliability, the controller actually writes some amount of redundant data. Data is written across multiple NAND die in parallel, along with additional parity data. The parity data occupies the space of a single NAND die. As a result, SandForce drives set aside more spare area than conventional controllers.
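Conceptually this is RAID-5-style parity across NAND die. A toy XOR sketch (die count and page contents below are made up; the actual SF-2000 scheme isn't documented publicly) shows how one failed die can be rebuilt from the survivors:

```python
NUM_DATA_DIES = 4

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def write_stripe(pages):
    # XOR all data pages together; the result lives on the parity die.
    parity = pages[0]
    for page in pages[1:]:
        parity = xor_bytes(parity, page)
    return pages + [parity]

def recover(stripe, lost_die):
    # XORing every surviving page reproduces the lost one.
    survivors = [p for i, p in enumerate(stripe) if i != lost_die]
    rebuilt = survivors[0]
    for page in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, page)
    return rebuilt

stripe = write_stripe([bytes([i]) * 8 for i in range(NUM_DATA_DIES)])
assert recover(stripe, lost_die=2) == stripe[2]
print("die 2 rebuilt from parity:", recover(stripe, lost_die=2).hex())
```

The price is exactly what's described above: one die's worth of capacity goes to parity, which is part of why SandForce drives reserve more spare area.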

What's New

Everything I've described up to this point applies to both the previous generation (SF-1200/1500) and the new generation (SF-2200/2500) of SandForce controllers. Now let's go over what's new:

1) Toggle Mode & ONFI 2 NAND support. Higher bandwidth NAND interfaces mean we should see much better performance without any architectural changes.

2) To accommodate the higher bandwidth NAND, SandForce increased the size of on-chip memories and buffers, and doubled the number of NAND die that can be active at one time. Finally, there's native 6Gbps SATA support to remove any interface bottlenecks (some quick bandwidth math follows this list). Both 1 and 2 will manifest as much higher read/write speeds.

3) Better encryption. This is more of an enterprise feature but the SF-2000 controllers support AES-256 encryption across the drive (and double encryption to support different encryption keys for separate address ranges on the drive).

4) Better ECC. NAND densities and defect rates are going up; program/erase cycles are going down. The SF-2000 as a result has an improved ECC engine.
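To put numbers on the interface bottleneck mentioned in point 2, here's some back-of-envelope math. It accounts only for SATA's 8b/10b line encoding; real-world throughput lands somewhat below these ceilings due to protocol overhead:

```python
# SATA's line rate minus 8b/10b encoding overhead sets the theoretical
# ceiling on host transfer speed.
for gen, line_rate_gbps in (("SATA 3Gbps", 3.0), ("SATA 6Gbps", 6.0)):
    payload_bits_per_s = line_rate_gbps * 1e9 * 8 / 10   # 8b/10b
    print(f"{gen}: ~{payload_bits_per_s / 8 / 1e6:.0f} MB/s ceiling")
```

First-generation SandForce drives were already brushing the ~300MB/s ceiling of 3Gbps SATA on compressible data, so without native 6Gbps support the faster NAND interfaces would never show up in benchmarks.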

All of the other features that were present in the SF-1200/1500 are present in the SF-2000 series.

Comments

  • slickr - Thursday, February 17, 2011 - link

    Now this is what I'm talking about when it comes to reviews/previews. Tons of benchmarks at various settings and loads. You can really tell the drives apart now and see how they perform.

    I would also like a good old-fashioned test with Starcraft 2: how long it takes to load a 5-6MB custom map.

    I would also like another test where you select 30 files and open them all at the same time and see how long it takes. I'm talking about selecting a few 3-5MB images, a few MP4 360p videos, a few H.264 720p videos, a dozen office documents from 500KB up to 3MB, several applications like GPU-Z, Skype, Live, Xfire, Firefox etc... and opening a few highly compressed script files.
  • MamiyaOtaru - Thursday, February 17, 2011 - link

    smaller process, less reliability, and higher price? We've been waiting for years for prices to become reasonable next to magnetic storage but there's been barely a drop at all, and the drops that do come (from smaller processes, supposedly) reduce reliability. At this point I don't see myself ever getting one for my desktop.

    Laptops sure, hard drives die there all the time, and I don't use them as my primary machine. Smaller storage requirements + hard drives dying far more often in laptops makes SSDs the better choice for me there.
  • Chloiber - Thursday, February 17, 2011 - link

    I do generally agree. I don't want faster drives, I want cheaper drives. They are already very fast. Of course, faster is always better, but at the moment I prefer low price and reliability over speed.
  • seapeople - Friday, February 18, 2011 - link

    Seriously... I don't need a brand new controller that might or might not be reliable and is so fast that it would still seem fast if I taped myself using the computer and replayed it in slow motion. What I want is an X25-M-like drive at 160GB for under $200. Still extremely fast, with legendary (for SSDs) reliability, and affordable.

    The reason I don't buy Ferraris right now is not because I don't think they're fast enough; it's because they COST too much.
  • RU482 - Thursday, February 17, 2011 - link

    This might be the problem with OCZ. They are an SSD marketing company with a manufacturing division.
  • TimK - Thursday, February 17, 2011 - link

    Damn, Anand, so this is what an engineering degree will get you, not to mention some heavy duty skill at writing. Comprehensive and comprehensible. Thanks very much. At your recommendation I bought an early Vertex 30GB SSD for my unibody MacBook. From time to time I take it out, thinking to have everything in one place on a bigger drive, but I just can't let go of the speed. It's still working great.
  • dlang1234 - Thursday, February 17, 2011 - link

    The Samsung 470 seems to be in a lot of the benchmarks but not all, and seems to do well in every one it's listed in.

    I can't seem to find a review of it here, but I'd be interested in one.
  • markjx1 - Thursday, February 17, 2011 - link

    No mention of the fact that this thing was originally slated to use the SF-2000 controller, which proved to be plagued with problems in the lab: the dirty little secret no vendor would discuss at CES, and the reason no one had anything SF-2000 based up and running. And now OCZ has had to resort to slapping SandForce's enterprise-class SF-2500 controller on it.

    Great, except it's going to be hella expensive and not cost competitive with the Crucial C400 unless OCZ bleeds margin, and given they took a $25 million bank loan recently, well, let's just say OCZ isn't a company I'd rely on to fulfill a warranty replacement a couple years down the road when your drive dies.

    Lastly, notice the "hardware isn't final" disclaimers all over the article. This is nothing more than OCZ trying to get some buzz, and they've painted themselves into a corner now if they go switching back to the SF-2000, since they've already set expectations high.
  • jwilliams4200 - Thursday, February 17, 2011 - link

    I thought the article was fairly well done. The only problem I have with it is a passing mention of the SSD being unusable on a MacBook Pro, and yet not a single benchmark shows any problems with the SSD. It seems the benchmark suite Anand is using needs some more components added. Perhaps a latency test?
  • Anand Lal Shimpi - Sunday, February 20, 2011 - link

    Check back on the site by the end of the week ;)

    Take care,
    Anand
