A few weeks ago a very smart friend of mine sent me an email asking why we haven’t seen more PCIe SSDs by now. While you can make the argument for keeping SATA around as an interface for traditional hard drives, it ends up being a bottleneck when it comes to SSDs. The move to 6Gbps SATA should alleviate that bottleneck for a short period, but NAND is easy enough to put in parallel that you could quickly saturate that interface as well. So why not a higher bandwidth interface like PCIe?
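Some rough back-of-the-envelope numbers illustrate the gap. These are theoretical link rates after 8b/10b encoding overhead, not drive performance figures:

```python
# Theoretical interface bandwidth after 8b/10b encoding (10 bits on the
# wire per data byte). Real-world throughput is lower still.

def sata_mb_per_s(gbps):
    return gbps * 1e9 / 10 / 1e6

def pcie_gen1_mb_per_s(lanes):
    # PCIe 1.x runs at 2.5 GT/s per lane, also 8b/10b encoded.
    return lanes * 2.5e9 / 10 / 1e6

print(f"SATA 3Gbps: {sata_mb_per_s(3):.0f} MB/s")        # ~300 MB/s
print(f"SATA 6Gbps: {sata_mb_per_s(6):.0f} MB/s")        # ~600 MB/s
print(f"PCIe 1.x x4: {pcie_gen1_mb_per_s(4):.0f} MB/s")  # ~1000 MB/s
```

Stripe enough NAND in parallel and 600 MB/s stops being a ceiling for long, which is the crux of the argument for PCIe.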

The primary reason appears to be cost. While PCIe can offer much more bandwidth than SATA, the amount of NAND you’d need to get there and the controllers necessary would be cost prohibitive. The unfortunate reality is that good SSDs launched at the worst possible time. The market would’ve been ripe in 2006 - 2007, but in the post-recession period getting companies to spend even more money on PCs wasn’t very easy. A slower-than-expected SSD ramp put the brakes on a lot of development of exotic PCIe SSDs.

We have seen a turnaround, however. At last year’s IDF Intel showed off a proof-of-concept PCIe SSD that could push 1 million IOPS. And with the consumer SSD market dominated by a few companies, the smaller players turned to building their own PCIe SSDs to go after the higher margin enterprise market. Enterprise customers had the budget and the desire to push even more bandwidth. Throw a handful of Indilinx controllers on a PCB, give it a good warranty, and you had something you could sell to customers for over a thousand dollars.

OCZ was one of the most eager in this space. We first met their Z-Drive last year:

The PCIe x8 card was made up of four Indilinx Barefoot controllers configured in RAID-0, delivering up to four times the performance of a single Indilinx SSD but on a single card. That single card would set you back anywhere between $900 and $3500 depending on capacity and configuration.

With the SSD controllers sitting behind an LSI Logic RAID controller, there was no way to pass TRIM commands through to the drives. OCZ instead relied on idle garbage collection to keep Z-Drive owners happy. Even today the company is still working on bringing a TRIM driver to Z-Drive owners.
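To see why that matters, consider the bookkeeping an SSD controller does for garbage collection. The sketch below is a deliberately simplified illustration of the idea; it is not Indilinx's or OCZ's actual firmware logic:

```python
# Simplified illustration of why TRIM matters. Without TRIM, the controller
# never learns which logical blocks the OS has deleted, so garbage
# collection keeps copying stale data forward.

class TinyFTL:
    """Tracks which logical blocks the controller believes hold live data."""
    def __init__(self):
        self.live = set()

    def write(self, lba):
        self.live.add(lba)

    def trim(self, lba):
        # TRIM tells the controller the OS no longer needs this block,
        # so GC can reclaim the NAND behind it without copying it.
        self.live.discard(lba)

    def gc_copy_work(self):
        # Pages that must be relocated before NAND blocks can be erased.
        return len(self.live)

no_trim, with_trim = TinyFTL(), TinyFTL()
for lba in range(1000):
    no_trim.write(lba)
    with_trim.write(lba)

# The filesystem deletes half the data, but only one controller hears about it.
for lba in range(0, 1000, 2):
    with_trim.trim(lba)

print("GC copy work without TRIM:", no_trim.gc_copy_work())   # 1000
print("GC copy work with TRIM:   ", with_trim.gc_copy_work()) # 500
```

Idle garbage collection works around this by consolidating blocks when the drive isn't busy, but the controller still has to treat every previously written LBA as live data.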

The Z-Drive apparently sold reasonably well. Well enough for OCZ to create a follow-on drive: the Z-Drive R2. This card uses custom NAND cards that allow users to upgrade their drive capacity down the line. The cards are SO-DIMMs populated with NAND, available only through OCZ. The new Z-Drive still carries the hefty price tag of the original.

Ryan Petersen, OCZ’s CEO, hopes to change that with a new PCIe SSD: the OCZ RevoDrive. Announced at Computex 2010, the RevoDrive uses SandForce controllers instead of the Indilinx controllers of the Z-Drives. The first incarnation uses two SandForce controllers in RAID-0 on a PCIe x4 card. As far as attacking price: how does $369 for 120GB sound? And it is of course bootable.

OCZ sent us the more expensive $699.99 240GB version but the sort of performance scaling we'll show here today should apply to the smaller, more affordable card as well. Below is a shot of our RevoDrive sample:

The genius isn’t in the product, but in how OCZ made it affordable. Looking at the RevoDrive you’ll see the two SandForce SF-1200 controllers that drive the NAND, but you’ll also see a Silicon Image RAID controller and a Pericom PI7C9X130 bridge chip.

The Silicon Image chip is a SiI3124, a PCI-X to four-port 3Gbps SATA controller. Since it supports up to four SATA devices, OCZ could make an even faster version of the RevoDrive with four SF-1200 controllers in RAID.
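As a refresher, RAID-0 simply interleaves fixed-size stripes across its member drives, so transfers get split between the two SandForce controllers. Here is a minimal sketch of that address mapping; the stripe size is an arbitrary example, not the RevoDrive's actual setting:

```python
# Minimal RAID-0 address-mapping sketch. The 64KB stripe size is
# illustrative, not the RevoDrive's actual configuration.

STRIPE_SIZE = 64 * 1024  # bytes per stripe
NUM_DRIVES = 2           # the RevoDrive pairs two SF-1200 controllers

def map_offset(byte_offset):
    stripe = byte_offset // STRIPE_SIZE
    drive = stripe % NUM_DRIVES  # stripes alternate between drives
    offset_on_drive = (stripe // NUM_DRIVES) * STRIPE_SIZE + byte_offset % STRIPE_SIZE
    return drive, offset_on_drive

for off in (0, 64 * 1024, 128 * 1024, 200 * 1024):
    drive, drive_off = map_offset(off)
    print(f"host offset {off:>7} -> drive {drive}, drive offset {drive_off}")
```

Large sequential transfers touch both controllers at once, which is where the near-doubling of bandwidth comes from; a small random read still lands on a single drive.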

Astute readers will note that I said the SiI3124 chip is a PCI-X to SATA controller. The Pericom bridge converts PCI-X to a PCIe x4 interface, which is what you see at the bottom of the card.



The Pericom PCI-X to PCIe Bridge

Why go from SATA to PCI-X then to PCIe? Cost. These Silicon Image PCI-X controllers are dirt cheap compared to native PCIe SATA controllers, and the Pericom bridge chip doesn’t add much either. Bottom line? OCZ is able to offer a single card at very little premium compared to a standalone drive. A standard OCZ Vertex 2 E 120GB (13% spare area instead of 22%) will set you back $349.99. A 120GB RevoDrive will sell for $369.99 ($389.99 MSRP), but deliver much higher performance thanks to its two SF-1200 controllers in RAID on the card.
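As an aside, those spare-area percentages translate into usable capacity roughly as follows. The 128GiB raw NAND figure below is my assumption for a 120GB-class SandForce drive, not an OCZ specification:

```python
# Rough spare-area arithmetic. The 128GiB of raw NAND is an assumption;
# exact reserves vary by firmware and model.

raw_gb = 128 * 2**30 / 1e9  # 128GiB of NAND is ~137.4 decimal GB

for spare in (0.13, 0.22):
    print(f"{spare:.0%} spare area -> ~{raw_gb * (1 - spare):.0f}GB usable")
# 13% spare -> ~120GB usable (the Vertex 2 E / RevoDrive configuration)
# 22% spare -> ~107GB usable (more NAND held in reserve for the controller)
```

The extra reserve on standard SandForce drives buys the controller more room for garbage collection and wear leveling at the cost of usable capacity.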

You’ll also notice that at $369.99 a 120GB RevoDrive is barely any more expensive than a single SF-1200 SSD, and it’s actually cheaper than two smaller capacity drives in RAID. If OCZ is actually able to deliver the RevoDrive at these prices then the market is going to have a brand new force to reckon with. Do you get a standard SATA SSD or pay a little more for a much faster PCIe SSD? I suspect that many will choose the latter, especially because unlike the Z-Drive the RevoDrive is stupidly fast in desktop workloads.

If you’re wondering how this is any different from a pair of SF-1200 based SSDs in RAID-0 using your motherboard’s RAID controller, it’s largely not. The OCZ RevoDrive will offer lower CPU utilization than an on-board software based RAID solution thanks to its Silicon Image RAID controller, but the advantage isn’t huge. The only reasons you’d opt for this over a standard RAID setup are cost and, to a lesser extent, simplicity.

What’s that Connector?

When I first published photos of the Revo a number of readers wondered what the little connector next to the Silicon Image RAID controller was. Those who guessed it was for expansion were right: it is.

Unfortunately that connector won’t be present on the final RevoDrive shipped for mass production. At some point we may see another version of the Revo with that connector. The idea is to be able to add a daughterboard with another pair of SF-1200 controllers and NAND to increase the capacity and performance of the Revo down the line. Remember, the Silicon Image controller has four native SATA ports stemming off of it; only two are currently in use.

Installation and Early Issues
Comments

  • MrBrownSound - Wednesday, August 25, 2010 - link

    Should I be worried putting my OS on this drive? Also I have two steamy hot graphics cards, will a fan be needed?
  • diqster - Sunday, September 26, 2010 - link

    While you claim these PCIe SSDs are aimed at the enterprise market (they are), you didn't hit very many enterprise benchmarks or concerns. I'd like to see these things reviewed in any PCIe SSD review:

    1) Form factor. Can they fit in a half height or half length PCI slot? Putting this in tandem with spinning metal HDs in a 1U server would be ideal. Flashcache setups come to mind. The previous OCZ offerings failed miserably in this department as they're as long as some GPU cards.

    2) You mentioned RAID controller, but no mention of a BBWC. A BBWC (like on the old OCZ R-Drive) would drastically speed up random writes. Enterprises are looking at flash to solve 2 problems, either random reads or random writes.

    3) Enterprises don't care much about sequential I/O here. Very few things in a datacenter environment would use sequential I/O. For things like databases or key value stores, it's all random. Sure, video editing is sequential, but it's neither enterprise (in most senses) nor is it very popular (the number of DBs installed worldwide dwarfs the number of video editors).

    4) Addressing write lifetimes. Consumers can swap and replace these cards one at a time if they fail every 2 years. Doing that over installations of hundreds or thousands of these cards is rather hard. People want to know if they'll last. Again, a BBWC would help address some of these issues -- only letting the last write of 100 writes to a block go through.

    If you want to be taken seriously, start reviewing stuff in an enterprise manner. As of now, these are consumer-based reviews of enterprise gear.
