Enter the SandForce

OCZ actually announced its SandForce partnership in November. The companies first met over the summer, and after giggling at the controller maker’s name the two decided to work together.


Use the SandForce

Now this isn’t strictly an OCZ thing; far from it. SandForce has inked deals with some pretty big players in the enterprise SSD market. The public ones are clear: A-DATA, OCZ and Unigen have all announced that they’ll be building SandForce drives. I suspected that Seagate might be using SandForce as the basis for its Pulsar drives back when I was first briefed on those SSDs. I won’t be able to confirm for sure until early next year, but based on some of the preliminary performance and reliability data I’m guessing that SandForce is a much bigger player in the market than its short list of public partners would suggest.

SandForce isn’t an SSD manufacturer; rather, it’s a controller maker. SandForce produces two controllers: the SF-1200 and SF-1500. The SF-1200 is the client controller, while the SF-1500 is designed for the enterprise market. Both support MLC flash, while the SF-1500 also supports SLC. SandForce’s claim to fame is its extremely low write amplification: thanks to it, MLC-enabled drives can be used in enterprise environments (more on this later).
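For reference, write amplification is just the ratio of what the controller physically writes to NAND versus what the host asked it to write. A minimal sketch of the standard formula (the sample figures below are illustrative, not SandForce’s published numbers):

```python
# Write amplification factor (WAF): NAND bytes written / host bytes written.
# A WAF above 1.0 means the controller writes more than the host sent
# (garbage collection overhead); below 1.0 means it writes less.

def write_amplification(nand_bytes: float, host_bytes: float) -> float:
    return nand_bytes / host_bytes

# Illustrative numbers only: an MLC drive might see ~10x under sustained
# random writes, while SandForce claims roughly 0.5x.
print(write_amplification(nand_bytes=10e9, host_bytes=1e9))   # 10.0
print(write_amplification(nand_bytes=0.5e9, host_bytes=1e9))  # 0.5
```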

Both the SF-1200 and SF-1500 use a Tensilica DC_570T CPU core. As SandForce is quick to point out, the CPU honestly doesn’t matter - it’s everything around it that determines the performance of the SSD. The same is true for Intel’s SSDs. Intel licenses the CPU core for the X25-M from a third party; it’s everything else that makes the drive so impressive.

SandForce also develops the firmware for its controllers exclusively in house. There’s a reference design that SandForce can supply, but it’s up to its partners to buy flash, lay out the PCBs and ultimately build and test the SSDs.

Page Mapping with a Twist

We talked about LBA mapping techniques in The SSD Relapse. LBAs (logical block addresses) are used by the OS to tell your HDD/SSD where data is located in a linear, easy-to-look-up fashion. The SSD is in charge of mapping those LBAs to physical locations in flash. Block level mapping is the easiest to implement and requires very little memory to track; it delivers great sequential performance but sucks hard at random access. Page level mapping is a lot more difficult and requires more memory, but delivers great sequential and random access performance.
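To see why the trade-off exists, here’s a deliberately simplified sketch of the two granularities - not any vendor’s actual implementation, and the geometry constant is made up for illustration:

```python
# Simplified mapping-table comparison. Assumed geometry: 128 pages per
# block, purely for illustration.
PAGES_PER_BLOCK = 128

# Block-level: one entry per logical *block*. The table is tiny, but a
# random write to a single page forces a read-modify-write of the whole
# block, which is why random performance suffers.
block_map: dict[int, int] = {}   # logical block -> physical block

# Page-level: one entry per logical *page*, so the table is
# PAGES_PER_BLOCK times larger - but any page can be redirected to any
# free flash page, making random writes cheap.
page_map: dict[int, int] = {}    # logical page -> physical page

def write_page(lba: int, physical_page: int) -> None:
    # Just repoint the entry; no read-modify-write of a whole block.
    page_map[lba] = physical_page

write_page(lba=4242, physical_page=17)
print(page_map[4242])  # 17
```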

Intel and Indilinx use page level mapping. Intel uses an external DRAM to cache page mapping tables and block history, while Indilinx uses it to do all of that plus cache user data.

SandForce’s controller implements a page level mapping scheme, but forgoes the use of an external DRAM. SandForce believes it’s not necessary because its controllers simply write less to the flash.
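SandForce hasn’t said exactly how it writes less (the next page gets into its claimed 0.5x write amplification), but the general idea of shrinking data before it hits flash is easy to sketch. Here zlib stands in for whatever proprietary reduction the controller actually performs:

```python
import zlib

def bytes_written_to_flash(host_data: bytes) -> int:
    """Return how many bytes would hit flash if we compress first.
    zlib is only a stand-in; SandForce's real scheme is proprietary."""
    compressed = zlib.compress(host_data)
    # Store the compressed form only if it actually saves space.
    return min(len(compressed), len(host_data))

# Typical user data is often highly compressible...
host_data = b"config=1;config=2;" * 2048
print(len(host_data), bytes_written_to_flash(host_data))
# ...while already-compressed data (e.g. JPEGs) would see no benefit.
```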

Comments

  • blowfish - Friday, January 1, 2010 - link

    80GB? You really need that much? I'm not sure how much space current games take up, but you'd hope that if they shared the same engine, you could have several games installed in significantly less space than the sum of their separate installs. On my XP machines, my OS plus programs partitions are all less than 10GB, so I reckon 40GB is the sweet spot for me and it would be nice to see fast drives of that capacity at a reasonable price. At least some laptop makers recognise the need for two drive slots. Using a single large SSD for everything, including data, seems like extravagant overkill.
  • Gasaraki88 - Monday, January 4, 2010 - link

    Just as an FYI, Conan takes 30GB. That's one game. Most new games are around 6GB. WoW takes something like 13GB. 80GB runs out real fast.
  • DOOMHAMMADOOM - Friday, January 1, 2010 - link

    I wouldn't go below 160 GB for a SSD. The games in just my Steam folder alone go to 170 GB total. Games are big these days. The thought of putting Windows and a few programs and games onto an 80GB hard drive is not something I would want to do.
  • Swivelguy2 - Thursday, December 31, 2009 - link

    This is very interesting. Putting more processing power closer to the data is what has improved the performance of these SSDs over current offerings. That makes me wonder: what if we used the bigger, faster CPU on the other side of the SATA cable to similarly compress data before storing it on an X25-M? Could that possibly increase the effective capacity of the drive while addressing the X25-M's major shortcoming in sequential write speed? Also, compressing/decompressing on the CPU instead of in the drive sends less data through SATA, relieving the effects of the 3Gbps ceiling.

    Also, could doing processing on the data (on either end of SATA) add more latency to retrieving a single file? From the random r/w performance, apparently not, but would a simple HDTune show an increase in access time, or might it be apparent in the "seat of the pants" experience?

    Happy new year, everyone!
  • jacobdrj - Friday, January 1, 2010 - link

    The race to the true 'Isolinear Chip' from Star Trek is afoot...
  • Fox5 - Thursday, December 31, 2009 - link

    This really does look like something that should have been solved with smarter file systems, and not smarter controllers imo. (though some would disagree)

    Reiser4 does support gzip compression of the file system though, and it's a big win for performance. I don't know if NTFS's compression is a win too; I know it had a negative impact in the past, but I don't see why it wouldn't perform better given more CPU power.
  • blagishnessosity - Thursday, December 31, 2009 - link

    I've wondered this myself. It would be an interesting experiment. There are several filesystems that support transparent compression (http://en.wikipedia.org/wiki/Comparison...systems#...): NTFS, Btrfs, ZFS and Reiser4. In windows, I suppose this could be tested by just right clicking all your files and checking "compress" and then running your benchmarks as usual. In linux, this would be interesting to test with btrfs's SSD mode paired with a low-overhead io scheduler like noop or deadline.

    What interests me the most though is SSD performance on a log-structured filesystem (http://en.wikipedia.org/wiki/Log-structured_file_s...), as they theoretically should never have random reads or writes. In the linux realm, there are several log-based filesystems (JFFS2, UBIFS, LogFS, NILFS2), though none seem to perform ideally in real world usage. Hopefully that'll change in the future :-)
  • themelon - Thursday, December 31, 2009 - link

    Note that ZFS now also has native DeDupe support as of build 128

    http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup

  • grover3606 - Saturday, November 13, 2010 - link

    Is the "used" performance measured with TRIM enabled?
