Capacities and Hella Overprovisioning

SandForce’s attention is focused on the enterprise, which makes sense given that’s where the money is. As a result, its first drives are aimed at enterprise capacity points: 50, 100, 200 and 400GB. Those are decimal gigabytes; in terms of user space that works out to 46.6GiB, 93.1GiB, 186.3GiB and 372.5GiB.

On top of the ~7% spare area you get from the GB-to-GiB conversion, SandForce specifies that an additional 20% of the flash be set aside as spare area. The table below sums up the relationship between total flash, advertised capacity and user capacity on these four drives:

Advertised Capacity   Total Flash   User Space
50GB                  64GiB         46.6GiB
100GB                 128GiB        93.1GiB
200GB                 256GiB        186.3GiB
400GB                 512GiB        372.5GiB
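The numbers in the table fall out of two simple conversions: the advertised decimal gigabytes shrink by about 7% when expressed in binary GiB, and the remainder of the (binary-sized) NAND is held back as spare area. A quick back-of-the-envelope sketch in Python (the function names are mine, purely illustrative, not anything SandForce ships):

```python
# Sketch of the spare-area arithmetic behind the capacity table.

def user_space_gib(advertised_gb: float) -> float:
    """Advertised decimal gigabytes exposed to the user, expressed in GiB."""
    return advertised_gb * 1_000_000_000 / 2**30

def spare_fraction(advertised_gb: float, total_flash_gib: float) -> float:
    """Fraction of the total NAND held back as spare area."""
    return 1 - user_space_gib(advertised_gb) / total_flash_gib

for adv, flash in [(50, 64), (100, 128), (200, 256), (400, 512)]:
    print(f"{adv}GB drive: {user_space_gib(adv):.1f}GiB user, "
          f"{spare_fraction(adv, flash):.1%} spare")
```

Every row works out to the same ratio: roughly 27% of the NAND is reserved, which is the ~7% conversion loss stacked on top of SandForce’s extra 20%.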


This is more spare area than even Intel sets aside on its enterprise X25-E drive. It makes sense when you consider that SandForce has to store more data in its spare area (all of that DuraWrite and RAISE redundancy stuff).

Dedicating almost a third of the flash capacity to spare area is bound to improve performance, but it also seriously screws up costs. That doesn’t really matter in the enterprise market (who’s going to complain about a $1500 drive vs. a $1000 drive?), but it’s a much bigger problem in the client space, where desktop and notebook buyers are far more price sensitive. To stay competitive there, SandForce’s partners will need to use cheaper, lower-grade NAND flash. Let’s hope SandForce’s redundancy and error correction technology actually works.

There’s another solution for client drives. We’re getting these odd capacity points today because the majority of SandForce’s work went into the enterprise technology; the client version of the firmware, which sets aside less spare area, is simply further behind. We’ll eventually see 60GB, 120GB, 240GB and 480GB drives. Consult the helpful table below for the lowdown:

Advertised Capacity   Total Flash   User Space
60GB                  64GiB         55.9GiB
120GB                 128GiB        111.8GiB
240GB                 256GiB        223.5GiB
480GB                 512GiB        447.0GiB


That’s still nearly 13% spare area on a consumer drive, almost twice what Intel sets aside. SandForce believes this is the unavoidable direction all SSDs are headed in. Intel would definitely benefit from nearly twice the spare area, but how much more are you willing to pay for a faster SSD? SandForce’s conclusion only works if you can lower the cost of the flash itself (possibly by going with cheaper NAND).
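To put the “almost twice” claim in numbers: a drive that reserves only the GB-to-GiB conversion loss (the Intel-style approach, assuming its binary flash capacity matches the advertised decimal figure) keeps back about 7%, while the SandForce client configuration keeps back about 13%. A minimal sketch under those assumptions:

```python
# Comparing spare-area percentages across configurations (names are illustrative).

GIB = 2**30

def spare_pct(advertised_gb: float, total_flash_gib: float) -> float:
    """Percent of total NAND reserved as spare area."""
    user_gib = advertised_gb * 1_000_000_000 / GIB
    return 100 * (1 - user_gib / total_flash_gib)

intel_like = spare_pct(80, 80)   # spare from the GB->GiB conversion alone
sf_client  = spare_pct(60, 64)   # SandForce client firmware
sf_server  = spare_pct(50, 64)   # SandForce enterprise firmware

print(f"conversion-only: {intel_like:.1f}%, "
      f"SF client: {sf_client:.1f}%, SF enterprise: {sf_server:.1f}%")
```

The client firmware’s ~12.7% is indeed nearly double the ~6.9% you get from the unit conversion alone, and less than half the enterprise firmware’s ~27%.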

Comments

  • blowfish - Friday, January 1, 2010 - link

    80GB? You really need that much? I'm not sure how much space current games take up, but you'd hope that if they shared the same engine, you could have several games installed in significantly less space than the sum of their separate installs. On my XP machines, my OS plus programs partitions are all less than 10GB, so I reckon 40GB is the sweet spot for me and it would be nice to see fast drives of that capacity at a reasonable price. At least some laptop makers recognise the need for two drive slots. Using a single large SSD for everything, including data, seems like extravagant overkill.
  • Gasaraki88 - Monday, January 4, 2010 - link

    Just as an FYI, Conan takes 30GB. That's one game. Most new games are around 6GB. WoW takes something like 13GB. 80GB runs out real fast.
  • DOOMHAMMADOOM - Friday, January 1, 2010 - link

    I wouldn't go below 160 GB for a SSD. The games in just my Steam folder alone go to 170 GB total. Games are big these days. The thought of putting Windows and a few programs and games onto an 80GB hard drive is not something I would want to do.
  • Swivelguy2 - Thursday, December 31, 2009 - link

    This is very interesting. Putting more processing power closer to the data is what has improved the performance of these SSDs over current offerings. That makes me wonder: what if we used the bigger, faster CPU on the other side of the SATA cable to similarly compress data before storing it on an X25-M? Could that possibly increase the effective capacity of the drive while addressing the X25-M's major shortcoming in sequential write speed? Also, compressing/decompressing on the CPU instead of in the drive sends less data through SATA, relieving the effects of the 3Gb/s ceiling.

    Also, could doing processing on the data (on either end of SATA) add more latency to retrieving a single file? From the random r/w performance, apparently not, but would a simple HDTune show an increase in access time, or might it be apparent in the "seat of the pants" experience?

    Happy new year, everyone!
  • jacobdrj - Friday, January 1, 2010 - link

    The race to the true 'Isolinear Chip' from Star Trek is afoot...
  • Fox5 - Thursday, December 31, 2009 - link

    This really does look like something that should have been solved with smarter file systems, not smarter controllers, imo (though some would disagree).

    Reiser4 does support gzip compression of the file system, and it's a big win for performance. I don't know if NTFS's compression is a win too; I know in the past it had a negative impact, but I don't see why it wouldn't perform better with more CPU performance available.
  • blagishnessosity - Thursday, December 31, 2009 - link

    I've wondered this myself. It would be an interesting experiment. There are several filesystems that support transparent compression (NTFS, Btrfs, ZFS and Reiser4). In windows, I suppose this could be tested by just right clicking all your files and checking "compress" and then running your benchmarks as usual. In linux, this would be interesting to test with btrfs's SSD mode paired with a low-overhead io scheduler like noop or deadline.

    What interests me the most though is SSD performance on a log-based filesystem, as they theoretically should never have random reads or writes. In the linux realm, there are several log-based filesystems (JFFS2, UBIFS, LogFS, NILFS2) though none seem to perform ideally in real world usage. Hopefully that'll change in the future :-)
  • blagishnessosity - Thursday, December 31, 2009 - link

    There are several filesystems that support transparent compression (NTFS, Btrfs, ZFS and Reiser4).

    What interests me the most though is SSD performance on a log-based filesystem as they theoretically should never have random reads or writes.

    (note to web admin: the comment WYSIWYG does not appear to work for me)
  • themelon - Thursday, December 31, 2009 - link

    Note that ZFS now also has native DeDupe support as of build 128.

  • grover3606 - Saturday, November 13, 2010 - link

    Is the "used" performance measured with TRIM enabled?
