Security: AES-256 and Double Encryption

The SF-1200/1500 controllers have a real-time AES-128 encryption engine. Set a BIOS password and the drive should remain locked unless you supply that password once again (note that I haven't actually tried this). The SF-2000 implements an AES-256 engine and adds support for double encryption. The former enables stronger encryption, while the latter lets you set a different encryption key for each of multiple address ranges on the drive.
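
To illustrate what double encryption buys you, here's a minimal sketch (entirely hypothetical, since SandForce hasn't published any programming interface for this) of a controller mapping separate AES keys onto separate LBA ranges, so that different regions of the drive can be locked and unlocked independently.

    # Hypothetical sketch: per-LBA-range key selection to illustrate "double
    # encryption". Range boundaries, key names and the lookup itself are made up;
    # SandForce has not documented how the SF-2000 exposes this feature.

    KEY_RANGES = [
        # (first LBA, last LBA, key identifier)
        (0,           99_999_999,  "key-A"),   # e.g. an OS partition
        (100_000_000, 199_999_999, "key-B"),   # e.g. a user-data partition
    ]

    def key_for_lba(lba: int) -> str:
        """Return the identifier of the key used to encrypt a given LBA."""
        for first, last, key in KEY_RANGES:
            if first <= lba <= last:
                return key
        raise ValueError(f"LBA {lba} is not covered by any key range")

    print(key_for_lba(42))            # key-A
    print(key_for_lba(150_000_000))   # key-B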

Enhanced ECC

As NAND densities go up, so do error rates, and in response SandForce has beefed up the error correction engine in its controller. The SF-2000 checks and corrects errors at a 512-byte granularity using a 55-bit BCH code, up from 24 bits per 512 bytes in the SF-1200/1500.
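
To get a rough feel for what the stronger code costs in spare area: a t-error-correcting binary BCH code needs at most m*t parity bits, where the full codeword has to fit within 2^m - 1 bits. The back-of-the-envelope estimate below is my own, not a SandForce figure; it simply compares the parity overhead of 24-bit and 55-bit correction over a 512-byte sector.

    import math

    def bch_parity_bits(data_bits: int, t: int) -> int:
        """Approximate parity bits for a binary BCH code correcting t bit errors.

        Such a code uses at most m*t parity bits, where the full codeword
        (data + parity) must fit within 2**m - 1 bits.
        """
        m = math.ceil(math.log2(data_bits + 1))
        while data_bits + m * t > 2**m - 1:
            m += 1
        return m * t

    sector_bits = 512 * 8  # one 512-byte ECC codeword
    for t in (24, 55):
        parity = bch_parity_bits(sector_bits, t)
        print(f"t={t:2d}: ~{parity} parity bits (~{parity / 8:.0f} bytes per 512 B sector)")

By this estimate the jump from 24-bit to 55-bit correction roughly doubles the ECC overhead per sector (about 39 bytes to about 89 bytes), which is the price of keeping higher-density NAND usable.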

The Family

SandForce is announcing three parts today: the SF-2300, SF-2500 and SF-2600. All three controllers have the same performance specs but differ in features.

The SF-2500 is the base enterprise part. For industrial use there's the SF-2300, which can operate at more extreme temperatures. The SF-2600 is paired with an external SAS bridge and a special firmware revision to enable support for non-512B sectors.

Many enterprise storage systems use larger-than-512B sectors to store error correction information, among other things. The sizes can be awkward: 520 bytes, 524 bytes, 528 bytes, or even a 4K sector with an additional data integrity field. The current SF-1200/1500 controllers support these non-standard sector sizes, but you run into performance issues since writes aren't aligned to how the drive is organized internally. With the SF-2600 there's firmware support for any one of these sector types; the drive handles alignment in hardware, presumably without any performance overhead. SandForce indicated that you'd need to configure the sector size in the firmware, meaning the adjustment isn't dynamic.
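
To see why unaligned sector sizes hurt on the SF-1200/1500, consider 520-byte host sectors landing on flash that the controller manages in larger power-of-two units: a fraction of host writes straddle an internal boundary and turn into read-modify-write operations. The sketch below is purely illustrative; the 4KB internal page size is an assumption, since SandForce doesn't disclose the SF-2000's internal organization.

    # Illustrative only: how 520 B host sectors land on flash pages managed in
    # 4 KB units (the 4 KB internal page size is an assumption, not a published figure).
    HOST_SECTOR = 520    # bytes per host sector, one common enterprise format
    FLASH_PAGE = 4096    # assumed internal page size

    def pages_touched(sector_index: int) -> int:
        """How many internal pages a single host-sector write lands on."""
        start = sector_index * HOST_SECTOR
        end = start + HOST_SECTOR - 1
        return end // FLASH_PAGE - start // FLASH_PAGE + 1

    total = 10_000
    straddling = sum(1 for i in range(total) if pages_touched(i) > 1)
    print(f"{straddling / total:.1%} of 520 B sector writes straddle an internal "
          f"page boundary and force a read-modify-write")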

Since this is a very particular type of enterprise SSD feature that's usually seen in SAS devices, the SF-2600 is paired with a native SAS-to-SATA bridge. The controller is still SATA internally, but the SF-2600 reference design will feature a SAS bridge on-board.

All of the enterprise SF-2000 controllers support TRIM. They also support performance throttling based on the remaining program/erase cycles of the drive's NAND (slow down the drive so the NAND lasts longer), as well as power-based performance throttling (slow down the drive to reduce power consumption). SandForce hasn't announced power specs for the SF-2000 drives, but given that Intel's drive power went up with the 3rd generation X25s, I would expect something similar here.
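
SandForce hasn't described how the lifetime throttle works, but conceptually it's a feedback loop: compare the wear consumed so far against the wear budgeted for the elapsed fraction of the warranty period, and cap write throughput when the drive is ahead of budget. The sketch below is my own guess at such a heuristic, not SandForce's algorithm, and every constant in it is made up.

    # Hypothetical lifetime-throttling heuristic; NOT SandForce's algorithm.
    # Idea: if the NAND is wearing out faster than the warranty period allows,
    # scale write throughput back until wear is on schedule again.

    RATED_PE_CYCLES = 3_000    # assumed rated program/erase cycles per cell
    WARRANTY_DAYS = 5 * 365    # assumed warranty window
    MAX_WRITE_MBPS = 500       # unthrottled sequential write speed

    def allowed_write_mbps(avg_pe_used: float, days_in_service: float) -> float:
        """Cap write speed when wear runs ahead of the warranty schedule."""
        wear_fraction = avg_pe_used / RATED_PE_CYCLES               # share of rated life used
        time_fraction = max(days_in_service / WARRANTY_DAYS, 1e-6)  # share of warranty elapsed
        if wear_fraction <= time_fraction:
            return MAX_WRITE_MBPS                                   # on or under budget
        # Ahead of budget: throttle in proportion to how far ahead we are.
        return MAX_WRITE_MBPS * time_fraction / wear_fraction

    print(allowed_write_mbps(avg_pe_used=300,  days_in_service=365))   # on budget -> 500.0
    print(allowed_write_mbps(avg_pe_used=1200, days_in_service=365))   # ahead -> 250.0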

The consumer member of the SF-2000 family will be announced sometime early next year. We will hopefully see a fairly high-end version of the consumer part, missing only the enterprise-specific features but retaining all of the performance.

Comments

  • Rasterman - Sunday, October 17, 2010

    Anyone have any guesses on sizes and prices?
  • ABR - Tuesday, October 19, 2010

    I don't give much of a hoot about 6Gbps controllers and sequential performance. I'd be happier to see a 1.5Gbps interface actually utilized. Real desktop use and a lot of server use consist of overlapping, highly random reads and moderately random writes. Loading an application with tons of resource files, saving documents and state to 3 or 4 directories at once, things like that. This is the whole big win of SSDs: eliminating the seek time. So I don't care about some 500MB/s number if you happen to be copying a giant file from one freshly written drive to a new blank drive. I do care about random access to a heavily used disk, and this is still sitting back in the realm of tens of MB/s.

    In fact this whole hoopla about ever-increasing pure sequential transfers reminds me of the megahertz wars -- everyone shouted 10 years ago about this several hundred MHz or that GHz, meanwhile memory (and hence most actual computing) poked along at tens of MHz. Most of the progress in CPUs since that time has been in using all kinds of fancy branch prediction and pipelining structures exactly for the purpose of dealing with memory latency.
  • Powersupply - Wednesday, October 27, 2010

    Good post!
  • kevith - Wednesday, October 20, 2010

    It certainly looks as if we are approaching some amazing speeds in the months to come, lovely.

    But what is it with RAID that makes it pop up every now and then in storage reviews?

    "... the sort of performance you’ll be able to get through a multi-drive array will be staggering..."

    In every single test of RAID-0 vs. single drives I've ever read, Anandtech's included, it appears that there is no real difference in speed with a RAID-0 setup.

    In synthetic benchmarks, there is a measurable effect, but in everyday use there is none.

    A lot of folks in various forums claim they have doubled their transfer speed, some even tripled it, but every time a serious magazine like your own, Tom's, BitTech or others tries to find out just how great it is, it always ends up the same way: no bang for the buck at all.

    Are there maybe new aspects when we're talking SSDs that I am not aware of?
