Security: AES-256 and Double Encryption

The SF-1200/1500 controllers have a real-time AES-128 encryption engine. Set a BIOS password and the drive should remain locked until you supply that password again (note: I haven't actually tried this). The SF-2000 implements an AES-256 engine and adds support for double encryption. The former enables stronger encryption, while the latter lets you set a different encryption key for each of multiple address ranges on the drive.
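As a rough sketch of what per-range keying means (the range boundaries and key names here are hypothetical; real key management lives inside the controller and hasn't been detailed by SandForce):

```python
# Hypothetical illustration of per-range key selection; the ranges and
# key identifiers are invented, not SandForce's actual design.

RANGES = [
    # (first LBA, last LBA, key identifier)
    (0,         999_999,   "key_A"),  # e.g., OS partition
    (1_000_000, 1_999_999, "key_B"),  # e.g., user data partition
]

def key_for_lba(lba: int) -> str:
    """Return which AES-256 key encrypts the sector at this LBA."""
    for first, last, key in RANGES:
        if first <= lba <= last:
            return key
    raise ValueError(f"LBA {lba} is outside every configured range")

print(key_for_lba(512))        # key_A
print(key_for_lba(1_500_000))  # key_B
```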

Enhanced ECC

As NAND densities go up, so do error rates, and in response SandForce beefed up the error correction engine in its controller. The SF-2000 checks and corrects errors at a 512-byte granularity using a BCH code that can correct 55 bits per 512-byte sector, up from 24 bits per 512 bytes in the SF-1200/1500.
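As a back-of-the-envelope sanity check on those numbers (my arithmetic, not SandForce's spec): a binary BCH code that corrects t bit errors over GF(2^m) needs at most m·t parity bits, and a 4096-bit (512-byte) payload forces m = 13, so 55-bit correction costs roughly 715 parity bits, about 90 bytes of spare area per sector:

```python
import math

def bch_parity_bits(data_bits: int, t: int) -> int:
    """Upper bound on parity bits for a binary BCH code correcting t errors.

    Rough estimate only: a t-error-correcting BCH code over GF(2^m) uses
    at most m*t parity bits, where 2^m - 1 must cover data + parity.
    """
    m = 1
    while (1 << m) - 1 < data_bits + m * t:  # grow m until the code length fits
        m += 1
    return m * t

sector_bits = 512 * 8  # 4096 data bits per ECC codeword
for t in (24, 55):     # SF-1200/1500 vs. SF-2000 correction strength
    parity = bch_parity_bits(sector_bits, t)
    print(f"t={t}: ~{parity} parity bits (~{math.ceil(parity / 8)} bytes)")
# t=24: ~312 parity bits (~39 bytes)
# t=55: ~715 parity bits (~90 bytes)
```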

The Family

SandForce is announcing three parts today: the SF-2300, SF-2500 and SF-2600. All three controllers have the same performance specs but differ in features.

The SF-2500 is the base enterprise controller. For industrial use there's the SF-2300, which can operate at more ridiculous temperatures. The SF-2600 adds an external SAS bridge and a special firmware revision to enable support for non-512B sectors.

Many enterprise storage systems use larger-than-512B sectors to store error correction information, among other things. The sizes can be awkward: 520 bytes, 524 bytes, 528 bytes, or even a 4K sector with an additional data integrity field. The current SF-1200/1500 controllers support these non-standard sector sizes, but you run into performance issues since writes aren't aligned to how the drive is organized internally. The SF-2600 adds firmware support for any one of these sector types; the drive handles alignment in hardware, presumably without any performance overhead. SandForce indicated that you'd need to configure the drive's sector size in firmware, meaning the adjustment isn't dynamic.
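To see why odd sector sizes hurt without hardware help, here's a small sketch with illustrative numbers (the 512-byte internal chunk is a stand-in; the controller's real internal organization isn't public). If the NAND is managed in 512-byte-aligned chunks, every 520-byte host sector straddles a chunk boundary, turning a single host write into a read-modify-write:

```python
# Illustrative only: assumes the drive manages NAND in 512-byte-aligned
# chunks, a stand-in for whatever SandForce actually uses internally.
CHUNK = 512

def chunks_touched(lba: int, sector_size: int) -> range:
    """Which internal 512B chunks a host sector of the given size lands on."""
    start = lba * sector_size
    end = start + sector_size - 1  # last byte of the sector
    return range(start // CHUNK, end // CHUNK + 1)

for lba in range(3):
    print(f"520B sector {lba} -> chunks {list(chunks_touched(lba, 520))}")
# 520B sector 0 -> chunks [0, 1]
# 520B sector 1 -> chunks [1, 2]
# 520B sector 2 -> chunks [2, 3]

for size in (512, 520):
    spans = sum(len(chunks_touched(lba, size)) > 1 for lba in range(64))
    print(f"{size}B sectors: {spans} of 64 straddle a chunk boundary")
# 512B sectors: 0 of 64 straddle a chunk boundary
# 520B sectors: 64 of 64 straddle a chunk boundary
```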

Since this is a very particular type of enterprise SSD feature that's usually seen in SAS devices, the SF-2600 is paired with a native SAS-to-SATA bridge. The controller is still SATA internally, but the SF-2600 reference design will feature a SAS bridge on-board.

All of the enterprise SF-2000 controllers support TRIM. They also support performance throttling based on the remaining program/erase cycles of the drive's NAND (slowing the drive down so the NAND lasts longer), as well as power-based performance throttling (slowing the drive down to reduce power consumption). SandForce hasn't announced power specs for the SF-2000 drives, but given that Intel's drive power went up with the 3rd generation X25s, I would expect something similar here.
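SandForce hasn't said how the lifetime throttle is implemented. Purely to illustrate the concept (all numbers are invented), a budget-based version might look like this: take the write volume the remaining P/E cycles can absorb, spread it over the desired remaining lifespan, and cap sustained writes at that rate.

```python
# Naive illustration of lifetime-based write throttling; every number
# here is made up. The real SF-2000 policy is not public.

SECONDS_PER_DAY = 86_400

def allowed_write_rate(remaining_pe_cycles: int,
                       capacity_bytes: int,
                       days_remaining: int) -> float:
    """Max average write rate (bytes/s) that still meets the endurance target."""
    budget_bytes = remaining_pe_cycles * capacity_bytes  # writes the NAND can absorb
    return budget_bytes / (days_remaining * SECONDS_PER_DAY)

# 100GB drive, 3,000 P/E cycles left, 5-year target:
rate = allowed_write_rate(3_000, 100 * 10**9, 5 * 365)
print(f"~{rate / 10**6:.1f} MB/s average write budget")
# ~1.9 MB/s: the throttle would only engage under heavy sustained writing
```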

The consumer member of the SF-2000 family will be announced early next year. Hopefully we'll see a fairly high-end consumer part that drops only the enterprise-specific features while retaining all of the performance.

Comments

  • karndog - Thursday, October 7, 2010 - link

    Put two of these babies in RAID 0 for 1GB/s reads AND writes. Very nice IF it lives up to expectations!
  • Silenus - Thursday, October 7, 2010 - link

    Indeed. We will have to wait and see. Hopefully the numbers are not too optimistic. Hopefully there are not too many firmware pains. Still...it's an exciting time for SSD development. Beginning of next year is when I will be ready to buy an SSD for my desktop (have one in my laptop already). Should be nice new choices by then!
  • Nihility - Thursday, October 7, 2010 - link

    It'll be 1 GB/s only on non-compressed / non-random data.
    Still, very cool.
  • mailman65er - Thursday, October 7, 2010 - link

    better yet, put that behind Nvelo's "Dataplex" software, and use it as a cache for your disk(s). Seems like a waste to use it as a storage drive, most bits sitting idle most of the time...
  • vol7ron - Thursday, October 7, 2010 - link

    "most bits sitting idle most of the time... "

    Thus, the extended life.
  • mailman65er - Thursday, October 7, 2010 - link

    Thus, the extended life.

    Well yes, you could get infinite life out of it (or any other SSD) if you never actually used it...
    The point is that if you are going to spend the $$'s for the SSD that uses this controller (I assume both NAND and controller will be spendy), then you want to actually "use" it, and get the max efficiency out of it. Using it as a storage drive means that most bits are sitting idle, using it as a cache drive keeps it working more. Get that Ferrari out of the barn and drive it!
  • mindless1 - Tuesday, October 19, 2010 - link

    Actually no, the last thing you want to use an MLC flash SSD for is mere constant-write caching.
  • Havor - Friday, October 8, 2010 - link

    I really don't get the obsession with RAID, especially RAID 0.

    It's the IOPS that count for how fast your PC boots or starts programs, and with 60K IOPS I think you're covered.

    Putting these drives in RAID 0 could actually slow them down for some data patterns: data is divided over two drives, so it has to arrive at the same time, or one drive has to wait for the other to catch up.

    Yes, you will see a huge boost in sequential reads/writes, but with small random data the benefit could be negative, and the overall benefit would be up to around 5%. The downside is the higher risk of data loss if one of the drives breaks down.
  • mindless1 - Tuesday, October 19, 2010 - link

    No it isn't. Typical PC boot and app loading is linear in nature; it's mostly benchmarks that try to do several things simultaneously (IOPS). Only a limited set of apps or servers need IOPS significantly more than sequential read/write performance.

    You are also incorrect about drives slowing down to wait for each other: if not the drives' DRAM cache, there is the system's main memory cache, and on some RAID controllers (mid to higher-end discrete cards) there is even a *third* level of cache on the card.

    Overall benefit 5%? LOL, if you are going to make up numbers at least try harder to get close or, get ready for it, actually try it: RAID two drives and run benchmarks representative of typical PC usage.

    Overall the benefit will depend highly on the task. To put it another way, you probably don't need to speed up things that are already reasonably quick; focus on the slowest or most demanding tasks on that PC.
  • Golgatha - Thursday, October 7, 2010 - link

    DO WANT!!!
