Original Link: http://www.anandtech.com/show/2899
OCZ's Vertex 2 Pro Preview: The Fastest MLC SSD We've Ever Tested
by Anand Lal Shimpi on December 31, 2009 12:00 AM EST
One thing AMD has taught me is that you can never beat Intel at its own game. Simply trying to do what Intel does will leave you confined to whatever low margin market Intel deems too unattractive to pursue. It’s exactly why AMD’s most successful CPU architectures are those that implement features that Intel doesn’t have today, but perhaps will have in a few years. Competing isn’t enough, you must innovate. Trying to approach the same problem in the same way but somehow do it better doesn’t work well when your competition makes $9B a quarter.
We saw this in the SSD space as well. In the year since Intel’s X25-M arrived, the best we’ve seen is a controller that can sort-of do what Intel’s can just at a cheaper price. Even then, the cost savings aren’t that great because Intel gets great NAND pricing. We need companies like Indilinx to put cost pressure on Intel, but we also need the equivalent of an AMD. A company that can put technological pressure on Intel.
That company, at least today, is SandForce. And its disciple? OCZ. Yep, they’re back.
Why I Hate New SSDs
I’ll admit, I haven’t really been looking forward to this day. Around the time when OCZ and Indilinx finally got their controller and firmware to acceptable levels, OCZ CMO Alex Mei dropped a bombshell on me - OCZ’s Vertex 2 would use a new controller by a company I’d never heard of. Great.
You may remember my back and forth with OCZ CEO Ryan Petersen about the first incarnation of the Vertex drive before it was released. Needless to say, what I wrote in the SSD Anthology was an abridged (and nicer) version of the back and forth that went on in the months prior to that product launch. After the whole JMicron fiasco, I don’t trust these SSD makers or controller manufacturers to deliver products that are actually good.
Aw, sweet. You'd never hurt me would you?
Which means that I’ve got to approach every new drive and every new controller with the assumption that it’s either going to somehow suck, or lose your data. And I need to figure out how. Synonyms for daunting should be popping into your heads now.
Ultimately, the task of putting these drives to the test falls on the heads of you all - the early adopters. It’s only after we collectively put these drives through hundreds and thousands of hours of real world usage that we can determine whether or not they’re sponge-worthy. Even Intel managed to screw up two firmware releases and they do more in-house validation than any company I’ve ever worked with. The bugs of course never appeared in my testing, but only in the field in the hands of paying customers. I hate that it has to be this way, but we live in the wild west of solid state storage. It’ll be a while before you can embrace any new product with confidence.
And it only gets more complicated from here on out. The old JMicron drives were easy to cast aside. They behaved like jerks when you tried to use them. Now the true difference between SSDs rears its head after months or years of use.
I say that because unlike my first experience with OCZ’s Vertex, the Vertex 2 did not disappoint. Or to put it more directly: it’s the first drive I’ve seen that’s actually better than Intel’s X25-M G2.
If you haven't read any of our previous SSD articles, I'd suggest brushing up on The Relapse before moving on. The background will help.
Enter the SandForce
OCZ actually announced its SandForce partnership in November. The companies first met over the summer, and after giggling at the controller maker’s name the two decided to work together.
Use the SandForce
Now this isn’t strictly an OCZ thing, far from it. SandForce has inked deals with some pretty big players in the enterprise SSD market. The public ones are clear: A-DATA, OCZ and Unigen have all announced that they’ll be building SandForce drives. I suspected that Seagate may be using SandForce as the basis for its Pulsar drives back when I was first briefed on the SSDs. I won’t be able to confirm for sure until early next year, but based on some of the preliminary performance and reliability data I’m guessing that SandForce is a much bigger player in the market than its small list of public partners would suggest.
SandForce isn’t an SSD manufacturer; rather, it’s a controller maker. SandForce produces two controllers: the SF-1200 and the SF-1500. The SF-1200 is the client controller, while the SF-1500 is designed for the enterprise market. Both support MLC flash, while the SF-1500 also supports SLC. SandForce’s claim to fame is that, thanks to their extremely low write amplification, MLC enabled drives can be used in enterprise environments (more on this later).
Both the SF-1200 and SF-1500 use a Tensilica DC_570T CPU core. As SandForce is quick to point out, the CPU honestly doesn’t matter - it’s everything around it that determines the performance of the SSD. The same is true for Intel’s SSD. Intel licenses the CPU core for the X25-M from a third party; it’s everything else that makes the drive so impressive.
SandForce also exclusively develops the firmware for the controllers. There’s a reference design that SandForce can supply, but it’s up to its partners to buy flash, lay out the PCBs and ultimately build and test the SSDs.
Page Mapping with a Twist
We talked about LBA mapping techniques in The SSD Relapse. LBAs (logical block addresses) are used by the OS to tell your HDD/SSD where data is located in a linear, easy to look up fashion. The SSD is in charge of mapping the specific LBAs to locations in Flash. Block level mapping is the easiest to do, requires very little memory to track, and delivers great sequential performance but sucks hard at random access. Page level mapping is a lot more difficult, requires more memory but delivers great sequential and random access performance.
Intel and Indilinx use page level mapping. Intel uses an external DRAM to cache page mapping tables and block history, while Indilinx uses it to do all of that plus cache user data.
SandForce’s controller implements a page level mapping scheme, but forgoes the use of an external DRAM. SandForce believes that it’s not necessary because their controllers simply write less to the flash.
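To make the block vs. page level distinction concrete, here’s a minimal sketch of a page-mapped flash translation layer. All of the structures and numbers are hypothetical illustrations, not actual controller internals:

```python
# Minimal sketch of page-level LBA mapping. Because every logical page maps
# independently, a 4KB random write never forces a read-modify-write of an
# entire erase block - the key advantage over block-level mapping.
PAGE_SIZE = 4096          # typical NAND page size (assumption)
PAGES_PER_BLOCK = 128     # typical erase-block geometry (assumption)

class PageMappedFTL:
    def __init__(self, num_blocks):
        self.mapping = {}          # logical page -> (block, page) in flash
        self.next_free = (0, 0)    # naive append-only allocator
        self.num_blocks = num_blocks

    def write_page(self, logical_page, data):
        block, page = self.next_free
        # Writes always land on the next free page; the old location is just
        # marked stale and reclaimed later by garbage collection.
        self.mapping[logical_page] = (block, page)
        page += 1
        if page == PAGES_PER_BLOCK:
            block, page = block + 1, 0
        self.next_free = (block, page)
        return self.mapping[logical_page]

ftl = PageMappedFTL(num_blocks=1024)
ftl.write_page(0, b"x" * PAGE_SIZE)
ftl.write_page(7, b"y" * PAGE_SIZE)   # random write: no block rewrite needed
ftl.write_page(0, b"z" * PAGE_SIZE)   # overwrite: just remap, old page is stale
```

The cost of this flexibility is the size of the mapping table itself, which is why page-mapped controllers like Intel’s and Indilinx’s lean on an external DRAM - and why writing less data, as SandForce does, shrinks the problem.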
The Secret Sauce: 0.5x Write Amplification
The downfall of all NAND flash based SSDs is the dreaded read-modify-write scenario. I’ve explained this a few times before. Basically your controller goes to write some amount of data, but because of a lot of reorganization that needs to be done it ends up writing a lot more data. The ratio of how much you write to how much you wanted to write is write amplification. Ideally this should be 1. You want to write 1GB and you actually write 1GB. In practice this can be as high as 10 or 20x on a really bad SSD. Intel claims that the X25-M’s dynamic nature keeps write amplification down to a manageable 1.1x. SandForce says its controllers write a little less than half what Intel does.
SandForce states that a full install of Windows 7 + Office 2007 results in 25GB of writes from the host, yet only 11GB of writes are passed on to the flash. In other words, 25GB of files are written and available on the SSD, but only 11GB of flash is actually occupied. Clearly it’s not bit-for-bit data storage.
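Those figures work out to the sub-1x write amplification SandForce advertises:

```python
# Write amplification from SandForce's quoted Windows 7 + Office 2007 install.
host_writes_gb = 25   # what the OS asked to write
nand_writes_gb = 11   # what was actually committed to flash
write_amplification = nand_writes_gb / host_writes_gb
print(f"write amplification: {write_amplification:.2f}x")  # 0.44x
```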
What SF appears to be doing is some form of real-time compression on data sent to the drive. SandForce told me that it’s not strictly compression but a combination of several techniques that are chosen on the fly depending on the workload.
SandForce referenced data deduplication as a type of data reduction algorithm that could be used. The principle behind data deduplication is simple. Instead of storing every single bit of data that comes through, simply store the bits that are unique and references to them instead of any additional duplicates. Now presumably your hard drive isn’t full of copies of the same file, so deduplication isn’t exactly what SandForce is doing - but it gives us a hint.
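Block-level deduplication is easy to illustrate. The sketch below is purely a toy - SandForce hasn’t disclosed its actual DuraWrite algorithms - but it shows how duplicate blocks can be collapsed into references:

```python
import hashlib

# Toy block-level deduplication: store each unique 4KB block once, and keep
# only a reference for any duplicate. Illustrative only; not SandForce's
# actual implementation.
BLOCK = 4096

class DedupStore:
    def __init__(self):
        self.blocks = {}   # sha256 digest -> unique block contents
        self.refs = []     # logical layout: one digest per logical block

    def write(self, data):
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            digest = hashlib.sha256(chunk).digest()
            self.blocks.setdefault(digest, chunk)  # store only if unique
            self.refs.append(digest)

    def physical_bytes(self):
        return sum(len(c) for c in self.blocks.values())

store = DedupStore()
store.write(b"A" * BLOCK * 3 + b"B" * BLOCK)  # 16KB logical, 2 unique blocks
print(store.physical_bytes())                 # 8192: only 8KB actually stored
```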
Straight up data compression is another possibility. The idea behind lossless compression is to use fewer bits to represent a larger set of bits. There’s additional processing required to recover the original data, but with a fast enough processor (or dedicated logic) that part can be negligible.
Assuming this is how SandForce works, it means that there’s a ton of complexity in the controller and firmware. Much more than what even a good SSD controller needs to deal with. Not only does SandForce have to manage bad blocks, block cleaning/recycling, LBA mapping and wear leveling, but it also needs to manage this tricky write optimization algorithm. It’s not a trivial matter; SandForce must ensure that the data remains intact while tossing away nearly half of it. After all, the primary goal of storage is to store data.
The whole write-less philosophy has tremendous implications for SSD performance. The less you write, the less you have to worry about garbage collection/cleaning and the less you have to worry about write amplification. This is how the SF controllers get by without any external DRAM; there’s just no need. There are fairly large buffers on chip though, most likely on the order of a couple of MBs (more on this later).
Manufacturers are rarely honest enough to tell you the downsides of their technologies. Representing a collection of bits with fewer bits works well if you have highly compressible data or a ton of duplicates. Data that is already well compressed, however, shouldn’t work so nicely with the DuraWrite engine. That means compressed images, videos or file archives will most likely exhibit higher write amplification than SandForce’s claimed 0.5x. Presumably that’s not the majority of writes your SSD will see on a day to day basis, but it’s going to be some portion of it.
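You can see why with any general-purpose compressor. In the sketch below, random bytes stand in for already-compressed payloads like JPGs or MP3s:

```python
import os
import zlib

# Already-compressed data doesn't shrink further, so a compression-based
# write-reduction engine gains nothing on it. Random bytes approximate the
# entropy of JPG/MP3/RAR payloads.
text = b"the quick brown fox jumps over the lazy dog " * 1000
random_like = os.urandom(len(text))

print(len(zlib.compress(text)) / len(text))                # far below 1.0
print(len(zlib.compress(random_like)) / len(random_like))  # roughly 1.0, often slightly above
```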
Controlling Costs with no DRAM and Cheaper Flash
SandForce is a chip company. They don’t make flash, they don’t make PCBs and they definitely don’t make SSDs. As such, they want the bulk of the BOM (Bill of Materials) cost in an SSD to go to their controllers. Because the controller writes less to the flash, there’s less data to track and smaller tables to manage on the fly. The end result is that SF promises its partners they don’t need any external DRAM alongside the SF-1200 or SF-1500. That helps justify SandForce charging more for its controller than a company like Indilinx does.
By writing less to flash SandForce also believes its controllers allow SSD makers to use lower grade flash. Most MLC NAND flash on the market today is built for USB sticks or CF/SD cards. These applications have very minimal write cycle requirements. Toss some of this flash into an SSD and you’ll eventually start losing data.
Intel and other top tier SSD makers tackle this issue by using only the highest grade NAND available on the market. They take it seriously because most users don’t back up and losing your primary drive, especially when it’s supposed to be on more reliable storage, can be catastrophic.
SandForce attempts to internalize the problem in hardware, again driving up the cost/value of its controller. By simply writing less to the flash, a whole new category of cheaper MLC NAND flash can be used. In order to preserve data integrity the controller writes some redundant data to the flash. SandForce likens it to RAID-5, although the controller doesn’t generate parity data for every bit written. Instead there’s some element of redundancy, the extent of which SF isn’t interested in delving into at this point. The redundant data is striped across all of the flash in the SSD. SandForce believes it can correct errors as large as a full block.
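The RAID-5 comparison boils down to XOR parity. SandForce hasn’t disclosed how its RAISE scheme actually works, but the following toy shows the principle of rebuilding a lost block from striped redundancy:

```python
# Toy XOR parity across NAND dies, in the spirit of the RAID-5 comparison.
# Not SandForce's actual RAISE implementation - just the underlying idea.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

stripe = [b"\x11" * 8, b"\x22" * 8, b"\x33" * 8]  # data on three dies
parity = xor_blocks(stripe)                        # parity stored on a fourth

# Die 1 returns garbage: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```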
There’s ECC and CRC support in the controller as well. The controller has the ability to return correct data even if it comes back with errors from the flash. Presumably it can also mark those flash locations as bad and remember not to use them in the future.
I can’t help but believe the ability to recover corrupt data, DuraWrite technology and AES-128 encryption are somehow related. If SandForce is storing some sort of hash of the majority of data on the SSD, it’s probably not too difficult to duplicate that data, and it’s probably not all that difficult to encrypt it either. By doing the DuraWrite work up front, SandForce probably gets the rest for free (or close to it).
Capacities and Hella Overprovisioning
On top of the ~7% spare area you get from the GB to GiB conversion, SandForce specifies an additional 20% flash be set aside for spare area. The table below sums up the relationship between total flash, advertised capacity and user capacity on these four drives:
| Advertised Capacity | Total Flash | User Space |
|---|---|---|
| 50GB | 64GB | 46.6GiB |
| 100GB | 128GB | 93.1GiB |
| 200GB | 256GB | 186.3GiB |
| 400GB | 512GB | 372.5GiB |
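The arithmetic is straightforward: advertised capacity is in decimal gigabytes while flash ships in binary gigabytes, and the enterprise firmware holds back a further 20% on top of that conversion gap. A quick sketch (NAND marked "128GB" is treated as 128GiB here):

```python
# Spare-area arithmetic behind the capacity tables.
GB, GiB = 10**9, 2**30

def capacities(advertised_gb, flash_gib):
    user_gib = advertised_gb * GB / GiB   # what the OS actually sees
    spare = 1 - user_gib / flash_gib      # fraction of flash held back
    return round(user_gib, 1), round(spare * 100, 1)

# Enterprise firmware: 100GB drive on 128GiB of flash
print(capacities(100, 128))   # (93.1, 27.2) -> ~7% conversion + 20% extra
# Client firmware: 120GB drive on the same 128GiB of flash
print(capacities(120, 128))   # (111.8, 12.7) -> the ~13% client figure
```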
This is more spare area than even Intel sets aside on its enterprise X25-E drive. It makes sense when you consider that SandForce does have to store more data in its spare area (all of that DuraWrite and RAISE redundancy stuff).
Dedicating almost a third of the flash capacity to spare area is bound to improve performance, but also seriously screw up costs. That doesn’t really matter for the enterprise market (who’s going to complain about a $1500 drive vs. a $1000 drive?), but for the client space it’s a much bigger problem. Desktop and notebook buyers are much more price sensitive. This is where SandForce’s partners will need to use cheaper/lower grade NAND flash to stay competitive, at least in the client space. Let’s hope SandForce’s redundancy and error correction technology actually works.
There’s another solution for client drives. We’re getting these odd capacity points today because the majority of SF’s work was on enterprise technology, the client version of the firmware with less spare area is just further behind. We’ll eventually see 60GB, 120GB, 240GB and 480GB drives. Consult the helpful table below for the lowdown:
| Advertised Capacity | Total Flash | User Space |
|---|---|---|
| 60GB | 64GB | 55.9GiB |
| 120GB | 128GB | 111.8GiB |
| 240GB | 256GB | 223.5GiB |
| 480GB | 512GB | 447.0GiB |
That’s nearly 13% spare area on a consumer drive! Almost twice what Intel sets aside. SandForce believes this is the unavoidable direction all SSDs are headed in. Intel would definitely benefit from nearly twice the spare area, but how much more are you willing to pay for a faster SSD? It would seem that SandForce’s conclusion only works if you can lower the cost of flash (possibly by going with cheaper NAND).
Inside the Vertex 2 Pro
This time there were no stickers telling me that I’d love this SSD, just a brown ESD bag and a plain looking SSD inside.
Pop the top off and you are greeted with a 90mF capacitor. Its duty is to deliver enough power to the controller to commit any buffered data to flash if there’s ever a sudden loss of power.
I asked SandForce why they needed such a large capacitor as Intel can get away with much smaller caps. It actually has to do with the amount of data buffered. Intel’s X25-M buffers somewhere in the low hundreds of KB of data (with a 512KB L2 cache I’m guessing it’s somewhere below that). The SF controllers buffer a couple of megabytes of data, hence the much larger capacitor.
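A back-of-envelope calculation suggests the 90mF capacitor has margin to spare. Note that the supply voltage and the controller/NAND power draw below are my assumptions, not vendor figures:

```python
# Energy stored in the capacitor: E = 1/2 * C * V^2, assuming it sits on
# the drive's 5V rail (assumption).
C, V = 0.090, 5.0
energy_j = 0.5 * C * V**2             # ~1.1 J available

# Flushing a ~2MB on-chip buffer at ~250MB/s takes about 8ms; assume the
# controller + NAND draw ~3W during the flush (hypothetical figure).
flush_time_s = 2e6 / 250e6
energy_needed_j = 3.0 * flush_time_s  # ~0.024 J

print(energy_j, energy_needed_j)      # orders of magnitude of headroom
```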
SandForce did point out that the capacitor is a feature of the SF-1500 design, despite OCZ’s use of it on the Vertex 2 Pro.
That brings us to the controller used in the Vertex 2 Pro. Ultimately SandForce is going to have two controllers - the SF-1200 and the SF-1500. Currently the two controllers have a unified firmware and feature set, which is why both OCZ and SF refer to the Vertex 2 Pro as being somewhere in between a 1200 and a 1500. It’s a SF-1200 controller with the firmware of the SF-1500 as far as I can tell. The final shipping version will be a full-fledged SF-1500.
The cost of the Vertex 2 Pro is going to be high. Higher than Intel’s X25-M and any other consumer level SSD on the market today. OCZ is targeting it at the very high end desktop/workstation user or perhaps even entry level enterprise customer.
We won’t see the Vertex 2 Pro available in the channel until March. But this isn’t the only SandForce based SSD we’ll get from OCZ. At some point in the future we’ll have an SF-1200 based SSD that’s priced around the same level as the top-bin Indilinx based Vertex drives. It’s too early to talk about timing on that one though.
The OCZ Toolbox
With the original Vertex all you got was a command line wiper tool to manually TRIM the drive. While the Vertex 2 Pro supports Windows 7 TRIM, you also get a nifty little toolbox crafted by SandForce and OCZ:
The coolest part of the toolbox as far as I’m concerned? Single click Secure Erase from within Windows. I’m not sure how much that helps end users, but it makes my life a lot easier.
You also get an indication of flash health, bad blocks, etc... The Vertex is all grown up now. Say goodbye to Indilinx.
I should preface the benchmarks with the following spoiler: the SandForce based OCZ Vertex 2 Pro is the fastest single-controller MLC SSD I’ve ever tested. You can post higher numbers with internally RAIDed solutions like the OCZ Colossus, but for a single drive using MLC flash - I haven’t seen anything faster than the Vertex 2 Pro.
To be honest, this was my first experience with a pre-release non-Intel SSD that went flawlessly. The Indilinx drives always had issues that had to be worked out, but SandForce is operating in a completely different class. The SF engineers and marketing folks I spoke with kept calling their technology enterprise-class. Given the crap that I’ve seen in the SSD market, I think I’d tend to agree. It just works and it seems to work well.
| Component | Configuration |
|---|---|
| CPU | Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled) |
| Motherboard | Intel DX58SO (Intel X58) |
| Chipset | Intel X58 |
| Chipset Drivers | Intel 18.104.22.1685 + Intel IMSM 8.9 |
| Memory | Qimonda DDR3-1066 4 x 1GB (7-7-7-20) |
| Video Card | eVGA GeForce GTX 285 |
| Video Drivers | NVIDIA ForceWare 190.38 64-bit |
| Desktop Resolution | 1920 x 1200 |
| OS | Windows 7 x64 |
New vs. Used Performance - Hardly an Issue
With the X25-M G2 Intel managed to virtually eliminate the random-write performance penalty on a sequentially filled drive. In other words, if you used an X25-M G2 as a normal desktop drive, 4KB random write performance wouldn’t really degrade over time. Even without TRIM.
Intel accomplished this by doubling its external DRAM size from 16MB to 32MB and simply using more historical data in its write placement algorithms. SandForce accomplished virtually the same thing, but thanks to its write-less (DuraWrite) technology:
| OCZ Vertex 2 Pro 100GB | "New" Performance | "Used" Performance |
|---|---|---|
| 4KB Random Write | 50.9 MB/s | 45.2 MB/s |
| 2MB Sequential Write | 252 MB/s | 252 MB/s |
I saw 4KB random write speed drop from 50MB/s down to 45MB/s. Sequential write speed remained completely untouched. But now I’ve gone and ruined the surprise.
Sequential Performance - Virtually Bound by 3Gbps SATA
Most high end SSDs have sequential read speeds that are pretty much bound by existing 3Gbps SATA interfaces:
OCZ’s Vertex 2 Pro is no different. At around 265MB/s, we’d need to have a redesigned version of the controller with 6Gbps SATA support to go any faster.
It’s the sequential write speed that’s just freakin’ ridiculous:
At 252MB/s the Vertex 2 Pro delivers more sequential write speed than even the best SLC based SSDs. We’re almost bound by 3Gbps SATA here!
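The interface ceiling is easy to work out: 3Gbps SATA uses 8b/10b encoding, so only 8 of every 10 bits on the wire carry payload. That leaves 300MB/s before protocol overhead, which in practice lands in the 270 - 280MB/s range:

```python
# Theoretical payload ceiling of a 3Gbps SATA link with 8b/10b encoding.
line_rate_bps = 3e9
payload_Bps = line_rate_bps * (8 / 10) / 8   # usable bytes per second
print(payload_Bps / 1e6)                     # 300.0 MB/s before framing overhead
```

Against that ceiling, 265MB/s reads and 252MB/s writes are knocking on the interface limit rather than the controller’s.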
Random Performance - Better than Intel
I hear some OCZ employees nearly cried when they saw the random write performance of the Vertex 2 Pro:
At 50.9MB/s in my desktop 4KB random write test, the Vertex 2 Pro is 36% faster than Intel’s X25-M G2. Looking at it another way, the Vertex 2 Pro has nearly 4x the 4KB random write performance of today’s OCZ Vertex Turbo.
Random read performance is quite good at 51.3MB/s, but still lower than the X25-M G2’s whopping 64MB/s.
PCMark Vantage - A New Leader
The Vertex 2 Pro’s dominance doesn’t stop in the synthetic tests - we have a new winner in PCMark Vantage.
While I don’t like Vantage much as a CPU benchmark, it is one of the best real world indicators of SSD performance. Far better than a lot of the synthetic tests that are used by most. Performance in Vantage isn’t all that matters, but as a part of a suite it’s very important.
Luckily for OCZ and SandForce, the Vertex 2 Pro doesn’t disappoint here either. As a testament to how much they have their act together, I didn’t have to tell SandForce what their Vantage scores were - they already use it as a part of their internal test suite. This is in stark contrast to other newcomers to the SSD market that were surprised when I told them that their drives don’t perform well in the real world.
The Vertex 2 Pro is 6% faster than the X25-M G2 in the overall PCMark Vantage test and 12% faster in the HDD specific suite. It’s at the borderline for what’s noticeable in the real world for most users but the advantage is there.
The memories suite includes a test involving importing pictures into Windows Photo Gallery and editing them, a fairly benign task that easily falls into the category of being very influenced by disk performance.
The TV and Movies tests focus on video transcoding, which is mostly CPU bound, but one of the tests involves Windows Media Center which tends to be disk bound.
The gaming tests are very well suited to SSDs since they spend a good portion of their time focusing on reading textures and loading level data. All of the SSDs dominate here, but as you'll see later on in my gaming tests the benefits of an SSD really vary depending on the game. Take these results as a best case scenario of what can happen, not the norm.
In the Music suite the main test is a multitasking scenario: the test simulates surfing the web in IE7, transcoding an audio file and adding music to Windows Media Player (the most disk intensive portion of the test).
The Communications suite is made up of two tests, both involving light multitasking. The first test simulates data encryption/decryption while running message rules in Windows Mail. The second test simulates web surfing (including opening/closing tabs) in IE7, data decryption and running Windows Defender.
I love PCMark's Productivity test; in this test there are four tasks going on at once, searching through Windows contacts, searching through Windows Mail, browsing multiple webpages in IE7 and loading applications. This is as real world of a scenario as you get and it happens to be representative of one of the most frustrating HDD usage models - trying to do multiple things at once. There's nothing more annoying than trying to launch a simple application while you're doing other things in the background and have the load take forever.
The final PCMark Vantage suite is HDD specific and this is where you'll see the biggest differences between the drives:
AnandTech Storage Bench
I introduced our storage suite in our last SSD article and it’s back, now with more data :)
Of the MLC SSDs represented here, there’s just nothing faster than the SandForce based OCZ Vertex 2 Pro.
Intel’s SLC based X25-E actually does very well, especially for a controller that’s as old as it is. It is worth noting however that the only thing separating Intel from SandForce-level performance is the X25-M’s low sequential write speed...
The first in our benchmark suite is a light usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader among other applications. With Firefox we browse web pages like Facebook, AnandTech, Digg and other sites. Outlook is also running and we use it to check emails, create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet, graphs and save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.
There’s some level of multitasking going on here but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing which may happen in between one of the other tasks.
The recording is played back on all of our drives here today. Remember that we’re isolating disk performance, all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are absolutely sequential in nature. Average queue depth is 6.09 IOs.
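For readers wondering what "average queue depth" means here: one common way to compute it is the average number of outstanding IOs over the trace, i.e. the sum of per-IO service times divided by the trace’s wall-clock span. A sketch with a hypothetical mini-trace (this is the standard definition, not necessarily the exact tooling used for these numbers):

```python
# Average queue depth of an IO trace: total busy time of all IOs divided by
# the wall-clock span of the trace.
def avg_queue_depth(ios):
    """ios: list of (issue_time, completion_time) pairs, in seconds."""
    start = min(t0 for t0, _ in ios)
    end = max(t1 for _, t1 in ios)
    busy = sum(t1 - t0 for t0, t1 in ios)
    return busy / (end - start)

# Three overlapping IOs: on average two are in flight at any moment.
trace = [(0.0, 2.0), (0.5, 2.5), (1.0, 3.0)]
print(avg_queue_depth(trace))   # 2.0
```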
The performance results are reported in average I/O Operations per Second (IOPS):
If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same as they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, Windows updates are also installed. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.
The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.
Our final test focuses on actual gameplay in four 3D games: World of Warcraft, Batman: Arkham Asylum, FarCry 2 and Risen, in that order. The games are launched and played, altogether for a total of just under 30 minutes. The benchmark measures game load time, level load time, disk accesses from save games and normal data streaming during gameplay.
The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.
SandForce’s Achilles’ Heel
I surmised that SandForce’s DuraWrite technology would only be able to reduce the number of flash writes on data that was easily compressible - documents, libraries and executables - as evidenced by the Windows 7 + Office 2007 install achieving a very low write amplification of 0.44x.
To see how bad the drive’s performance would suffer if we dealt primarily with compressed files I created a test that exclusively copied compressed files to the drive - MP3s, JPGs, x264s, RARs and DivX movies. I wrote roughly 20GB of compressed data to the drive and measured average IOs per second and disk bandwidth.
| Drive | Average IOPS | Average Bandwidth |
|---|---|---|
| OCZ Vertex 2 Pro 100GB | 241 IOPS | 145.9 MB/s |
| OCZ Vertex 256GB | 181 IOPS | 181.5 MB/s |
| Intel X25-M G2 160GB | 110 IOPS | 109.9 MB/s |
While the Vertex 2 Pro achieves a competitive average IOPS, its effective bandwidth is actually lower than the older Indilinx based Vertex SSD. What this tells us is that some of those write requests took much longer to complete on the Vertex 2. It’s akin to having two video cards with the same average frame rate, but one with a much lower minimum frame rate.
I wouldn’t characterize its performance as unacceptable by any means. Remember this is pre-production hardware and it’s still faster than the X25-M G2 thanks to Intel’s ~100MB/s write limit. What it does show however is a weakness in the DuraWrite technology. Presumably the majority of your file writes aren’t going to be compressed files, so your performance shouldn’t be gated by this issue; even then, I’ve shown that you shouldn’t be any worse off than you would be with Intel’s X25-M.
SandForce started work on its controllers back in 2007. The late start meant a late arrival to the party. Even today we’re looking at another 1 - 3 months before we see widespread availability of SSDs based on the SF-1200 or SF-1500. I guess it’s more fashionably late than anything else, because this thing is good.
The OCZ Vertex 2 Pro is the fastest single-controller MLC SSD I’ve ever tested, and it’s not even running final firmware. It’s quite telling that SandForce decided to make its first public showing with OCZ. Perhaps all of the initial hard work with Indilinx in the Vertex days paid off.
The controller and product both look very good. The only concern for the majority of users seems to be price. For the enterprise market I doubt it’ll be much of an issue. The Vertex 2 Pro should come in cheaper than Intel’s X25-E in a cost per GB sense, but for high end desktop and notebook users it may be a tough pill to swallow. Especially for a controller whose reliability will only be proven over time. I’m curious to see what the cheaper SF-1200 based SSDs will perform like. I’m hearing that they offer the same sequential read/write speed, but have lower random write performance.
OCZ says they will continue to work with Indilinx to bring out new products based on its controllers. SandForce simply adds to the stack.
Then there's Intel. Current roadmaps put the next generation of Intel SSDs out in Q4 2010, although Intel tells me it will be a "mid-year" refresh. For the mainstream market the capacities are 160GB, 300GB and 600GB. I'm guessing we'll see 160GB down at $225, 600GB at $500+ and the 300GB drive somewhere in between. The X25-E also gets a much needed upgrade with 100GB, 200GB and 400GB capacities.
Competition is a good thing. We need companies to not only keep Intel aggressive on price, but competitive on features as well. Indilinx did the former and it looks like SandForce is going to do the latter.