A few weeks ago a very smart friend of mine sent me an email asking why we haven’t seen more PCIe SSDs by now. While you can make the argument for keeping SATA around as an interface for traditional hard drives, it ends up being a bottleneck when it comes to SSDs. The move to 6Gbps SATA should alleviate that bottleneck for a short period, but it’s easy enough to put NAND in parallel that you could quickly saturate the new interface as well. So why not a higher bandwidth interface like PCIe?

The primary reason appears to be cost. While PCIe can offer much more bandwidth than SATA, the amount of NAND you’d need to saturate it and the controllers required to get there would be cost prohibitive. The unfortunate reality is that good SSDs launched at the worst possible time. The market would’ve been ripe in 2006 - 2007, but in the post-recession period getting companies to spend even more money on PCs wasn’t very easy. A slower-than-expected SSD ramp put the brakes on a lot of development of exotic PCIe SSDs.

We have seen a turnaround, however. At last year’s IDF Intel showed off a proof-of-concept PCIe SSD that could push 1 million IOPS. And with the consumer SSD market dominated by a few companies, the smaller players turned to building their own PCIe SSDs to go after the higher-margin enterprise market. Enterprise customers had the budget and the desire to push even more bandwidth. Throw a handful of Indilinx controllers on a PCB, give it a good warranty and you had something you could sell to customers for over a thousand dollars.

OCZ was one of the most eager in this space. We first met their Z-Drive last year:

The PCIe x8 card was made up of four Indilinx Barefoot controllers configured in RAID-0, delivering up to four times the performance of a single Indilinx SSD on a single card. That card would set you back anywhere between $900 and $3500 depending on capacity and configuration.
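For readers less familiar with why striping multiple controllers multiplies throughput, below is a minimal sketch of RAID-0 address mapping. The stripe size, member count and helper function are illustrative assumptions on our part, not details of OCZ's firmware; the point is simply that consecutive stripes rotate across the member controllers, so large transfers keep all of them busy at once.

# Minimal RAID-0 address-mapping sketch (illustrative assumptions only,
# not the Z-Drive's or RevoDrive's actual layout).

STRIPE_SIZE = 64 * 1024          # bytes per stripe; 64KB is a common default
SECTOR_SIZE = 512                # bytes per LBA sector
MEMBERS = 4                      # e.g. four controllers on one card

SECTORS_PER_STRIPE = STRIPE_SIZE // SECTOR_SIZE

def map_lba(lba):
    """Map an array-level LBA to (member index, LBA on that member)."""
    stripe = lba // SECTORS_PER_STRIPE       # which stripe the sector lives in
    member = stripe % MEMBERS                # stripes rotate across members
    local_stripe = stripe // MEMBERS         # stripe index local to that member
    return member, local_stripe * SECTORS_PER_STRIPE + lba % SECTORS_PER_STRIPE

# A long sequential transfer touches every member, so bandwidth adds up:
members_hit = {map_lba(lba)[0] for lba in range(0, 8 * SECTORS_PER_STRIPE, SECTORS_PER_STRIPE)}
print(members_hit)   # {0, 1, 2, 3}

The same mapping is also why small, low-queue-depth random reads don't scale nearly as well: a single outstanding 4KB request still lands on only one member.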

With the SSD controllers sitting behind an LSI Logic RAID controller there was no way to pass TRIM commands through to the drives. OCZ instead relied on idle garbage collection to keep Z-Drive owners happy. Even today the company is still working on bringing a TRIM driver to Z-Drive owners.
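If you're curious whether your own OS is even issuing TRIM, Windows 7 exposes the setting through fsutil. Here's a small sketch of wrapping that check in Python; the wrapper is ours, and note that it only tells you whether the OS sends TRIM at all, not whether the command survives the trip through a RAID layer like the one on these cards, so a Z-Drive can still miss out even when this reports TRIM as enabled.

# Sketch: read Windows' OS-level TRIM setting via fsutil (run from an
# elevated prompt). "DisableDeleteNotify = 0" means the OS sends TRIM;
# a RAID controller in the path can still drop the command before it
# reaches the SSDs behind it.
import subprocess

def os_trim_enabled():
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "= 0" in out

if __name__ == "__main__":
    print("OS issues TRIM:", os_trim_enabled())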

The Z-Drive apparently sold reasonably well. Well enough for OCZ to create a follow-on drive: the Z-Drive R2. This card uses custom NAND modules that let users upgrade their drive’s capacity down the line. The modules are SO-DIMMs populated with NAND, available only through OCZ. The new Z-Drive still carries the hefty price tag of the original.

Ryan Petersen, OCZ’s CEO, hopes to change that with a new PCIe SSD: the OCZ RevoDrive. Announced at Computex 2010, the RevoDrive uses SandForce controllers instead of the Indilinx controllers of the Z-Drives. The first incarnation uses two SandForce controllers in RAID-0 on a PCIe x4 card. As for attacking price: how does $369 for 120GB sound? And it is, of course, bootable.

OCZ sent us the more expensive $699.99 240GB version but the sort of performance scaling we'll show here today should apply to the smaller, more affordable card as well. Below is a shot of our RevoDrive sample:

The genius isn’t in the product, but in how OCZ made it affordable. Looking at the RevoDrive you’ll see the two SandForce SF-1200 controllers that drive the NAND, but you’ll also see a Silicon Image RAID controller and a Pericom PI7C9X130 bridge chip.

The Silicon Image chip is a SiI3124 PCI-X to 4-port 3Gbps SATA controller. The controller supports up to four SATA devices, which means OCZ could make an even faster version of the RevoDrive with four SF-1200 controllers in RAID.

Astute readers will note that I said the SiI3124 chip is a PCI-X to SATA controller. The Pericom bridge converts PCI-X to the PCIe x4 interface you see at the bottom of the card.



The Pericom PCI-X to PCIe Bridge

Why go from SATA to PCI-X and then to PCIe? Cost. These Silicon Image PCI-X controllers are dirt cheap compared to native PCIe SATA controllers, and the Pericom bridge chip doesn’t add much either. Bottom line? OCZ is able to offer a single card at very little premium over a standalone drive. A standard OCZ Vertex 2 E 120GB (13% spare area instead of 22%) will set you back $349.99. A 120GB RevoDrive will sell for $369.99 ($389.99 MSRP), but deliver much higher performance thanks to its two SF-1200 controllers in RAID on the card.
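To put the premium in context, here's the quick cost-per-gigabyte math on the street prices quoted in this review (a snapshot, obviously; prices move around):

# Quick $/GB comparison using the prices quoted in the review.
drives = {
    "OCZ Vertex 2 E 120GB": (349.99, 120),
    "RevoDrive 120GB":      (369.99, 120),
    "RevoDrive 240GB":      (699.99, 240),
}
for name, (price, capacity_gb) in drives.items():
    print(f"{name}: ${price / capacity_gb:.2f}/GB")

# OCZ Vertex 2 E 120GB: $2.92/GB
# RevoDrive 120GB: $3.08/GB
# RevoDrive 240GB: $2.92/GB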

You’ll also notice that at $369.99 a 120GB RevoDrive is barely any more expensive than a single SF-1200 SSD, and it’s actually cheaper than two smaller-capacity drives in RAID. If OCZ is actually able to deliver the RevoDrive at these prices then the market is going to have a brand new force to reckon with. Do you get a standard SATA SSD or pay a little more for a much faster PCIe SSD? I suspect many will choose the latter, especially because, unlike the Z-Drive, the RevoDrive is stupidly fast in desktop workloads.

If you’re wondering how this is any different from a pair of SF-1200 based SSDs in RAID-0 on your motherboard’s RAID controller, it largely isn’t. The RevoDrive’s Silicon Image RAID controller will give it lower CPU utilization than an on-board, software-based RAID solution, but the advantage isn’t huge. The main reasons to opt for this over a standard RAID setup are cost and, to a lesser extent, simplicity.

What’s that Connector?

When I first published photos of the Revo a number of readers wondered what the little connector next to the Silicon Image RAID controller was. Those who guessed it was for expansion were right.

Unfortunately that connector won’t be present on the RevoDrive that ships for mass production. At some point we may see another version of the Revo with the connector in place. The idea is to be able to add a daughterboard with another pair of SF-1200 controllers and NAND, increasing the capacity and performance of the Revo down the line. Remember, the Silicon Image controller has four native SATA ports stemming off of it; only two are currently in use.

Comments

  • Demon-Xanth - Friday, June 25, 2010

    That connector is called a MICTOR (not sure on the spelling). It's made to hook a logic analyzer up to and generally not useful for most people.
  • Trisagion - Saturday, June 26, 2010

    Doesn't look like a MICTOR to me. A MICTOR has contacts aligned along the center; this one has contacts aligned on opposite sides of the center.
  • flgt - Saturday, June 26, 2010

    It looks like the Samtec version of the MICTOR. QSH series maybe. Same concept though. High speed, impedance controlled debug or board-to-board connector.
  • Trisagion - Sunday, June 27, 2010

    Yes, I think you're right. Thanks!
  • mrmike_1949 - Friday, June 25, 2010

    Whenever you test SSDs, you should still include a fast HDD as a reference point!
  • mckirkus - Friday, June 25, 2010

    Seconded. A VRaptor would have been a good idea. Also, can you RAID two of these like SLI vid cards?

    Intel clearly has RAID figured out. I'm guessing they're going to drop their own version of this thing in Q4 with 22nm flash and blow everybody else out of the water. I also wonder what the latency is like going through all of those bridges and controllers. PCIe is supposed to be lower latency than SATA, right?
  • Voo - Saturday, June 26, 2010

    The problem with that is that even the fastest 15K RPM SCSI drive would still be nothing more than a bar in most benchmarks, so it's not really that useful, and if you're interested you could always use Bench.

    Though you have a point that it'd be a helpful reminder of the huge difference between HDDs and SSDs, and would show that the differences even between the fastest and slowest SSDs aren't that important compared to HDDs.
  • chemist1 - Friday, June 25, 2010

    OK, when can they shrink one of these onto an ExpressCard, so I can plug it into the PCIe slot on my early-2008 MacBook Pro (whose SATA interface is limited to 150 MB/s)?
  • aya2work - Friday, June 25, 2010

    Anand,

    Your storage bench is very interesting and looks like the most adequate storage test. Do you have any plans to make it available to other users (for personal use)?

    PS: sorry for poor English
  • Breit - Friday, June 25, 2010

    Is it possible to change the stripe size on the RevoDrive's internal RAID-0 in the SI BIOS? I did a little research myself regarding stripe sizes in SSD RAID-0 arrays and found that a 16KB stripe size is ideal for overall performance instead of the default 64KB (at least on Intel ICH10-R). With that configured, a Vertex LE RAID-0 (x2) could easily go from around 40K to 80-90K in the Vantage HDD Suite.
