Garbage Collection & The TRIM Bug

As with any RAID array of SSDs, there's currently no way to pass TRIM along to the member drives. Once data is written to the RAID array it remains tracked and accounted for until its addresses are overwritten. This is the same degradation of performance over time that we talked about at the beginning of our SSD coverage a couple of years ago.
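
For reference, here is what gets lost behind the RAID layer: on a single SATA SSD the operating system hands TRIM down as a discard request against the block device, and a RAID controller would have to translate and re-issue those requests to each member drive. Below is a minimal Linux sketch of issuing a discard directly; the device path and byte range are placeholders, BLKDISCARD destroys the data in that range, and a device sitting behind a RAID controller will typically just reject the request.

    /* trim_range.c - issue a discard (TRIM) for a byte range on a block device.
     * Minimal sketch: the device path and range are placeholders, and BLKDISCARD
     * destroys the data it covers. Run as root. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>      /* BLKDISCARD */
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/sdX", O_WRONLY);        /* placeholder device */
        if (fd < 0) { perror("open"); return 1; }

        /* {start offset, length} in bytes: here the first 1 GiB of the device */
        uint64_t range[2] = { 0, 1ULL << 30 };

        if (ioctl(fd, BLKDISCARD, range) < 0)
            perror("BLKDISCARD");                   /* RAID/PCIe cards usually reject this */
        else
            printf("discarded %llu bytes\n", (unsigned long long)range[1]);

        close(fd);
        return 0;
    }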

SSD controller manufacturers have worked around the TRIM issue by equipping drives with idle garbage collection. The principle behind idle garbage collection is simple: when the SSD controller detects a period of no activity, it can parse the drive's file system structures for addresses that are no longer in use, and then internally mark those addresses for cleaning.

The RevoDrive x2 features idle garbage collection, which for most users should be sufficient to keep the drive running at full speed. There is a problem, however: current SandForce drives have a bug that impacts TRIM (and idle garbage collection). If you fill all available NAND on a SandForce drive by writing incompressible data to all user-accessible LBAs and all spare area, the drive will no longer be able to restore itself to full performance via TRIM or idle garbage collection. This impacts the RevoDrive x2 as well as standard SandForce SSDs.
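
To put the trigger condition in concrete terms, it amounts to streaming data the controller cannot compress across every user-accessible LBA until the drive is full. The sketch below illustrates that kind of fill pass; the target path is a placeholder and this is only an illustration of the scenario, not the test methodology used here. Be aware that doing this to a SandForce drive may leave it in the degraded state described above.

    /* fill_incompressible.c - fill a volume with random (incompressible) data.
     * Rough illustration of the worst-case fill described above; the target
     * path is a placeholder and the program stops when the volume is full. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCK (1 << 20)                         /* 1 MiB per write */

    int main(void)
    {
        int rnd = open("/dev/urandom", O_RDONLY);
        int out = open("/mnt/ssd/fill.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        char *buf = malloc(BLOCK);
        if (rnd < 0 || out < 0 || !buf) { perror("setup"); return 1; }

        unsigned long long written = 0;
        for (;;) {
            ssize_t got = read(rnd, buf, BLOCK);    /* incompressible payload */
            if (got <= 0) break;
            ssize_t put = write(out, buf, got);
            if (put < 0) {
                if (errno != ENOSPC) perror("write");
                break;                              /* stop when the volume is full */
            }
            written += (unsigned long long)put;
        }

        printf("wrote %llu MiB of incompressible data\n", written >> 20);
        free(buf);
        close(out);
        close(rnd);
        return 0;
    }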

I'm still in the early stages of measuring the real-world impact of this bug; I made SandForce aware of it two weeks ago. At this point I'd simply caution against running a SandForce drive near capacity when it's filled primarily with incompressible data (e.g. compressed videos, photos). The problem is even more acute on a card like the RevoDrive x2, since there's no way to secure erase the drive to fully recover from a reduced performance state.

Expect to hear more about this in the next major SSD article on AnandTech.

Final Words

The RevoDrive x2, like its predecessor, continues to be a very niche product. Both sequential and random performance are measured in many hundreds of megabytes per second. Like all SandForce drives, the RevoDrive x2's performance is highly dependent on the type of data you're moving: highly compressible data will see speeds as high as 800MB/s, while incompressible data may behave more like it would on a single SSD from Crucial. The beauty of SandForce's technology is that you get great performance on traditional desktop workloads, but as always you need to be mindful of what you're storing on the drive when you decide to go with SF.
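
One crude way to see the compressibility effect for yourself is to time a zero-fill write pass against a random-data write pass on the same drive; on a SandForce controller the all-zero pass should come out dramatically faster. The sketch below does exactly that, with placeholder paths and transfer size, and it is a rough comparison rather than a rigorous benchmark.

    /* compress_bench.c - compare write throughput for all-zero vs random data.
     * Sketch only: paths and transfer size are placeholders, and the numbers
     * are a rough comparison, not a rigorous benchmark. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #define BLOCK (1 << 20)                         /* 1 MiB per write */
    #define TOTAL (256LL << 20)                     /* 256 MiB per pass */

    static double write_pass(const char *path, const char *buf)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); exit(1); }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long long done = 0; done < TOTAL; done += BLOCK)
            if (write(fd, buf, BLOCK) != BLOCK) { perror("write"); exit(1); }
        fsync(fd);                                  /* include the flush in the timing */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        close(fd);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        return (double)(TOTAL >> 20) / secs;        /* MB/s */
    }

    int main(void)
    {
        char *zeros = calloc(1, BLOCK);             /* highly compressible */
        char *noise = malloc(BLOCK);                /* incompressible */
        int rnd = open("/dev/urandom", O_RDONLY);
        if (!zeros || !noise || rnd < 0) return 1;
        if (read(rnd, noise, BLOCK) != BLOCK) return 1;
        close(rnd);

        printf("zero-fill: %.1f MB/s\n", write_pass("/mnt/ssd/zeros.bin", zeros));
        printf("random:    %.1f MB/s\n", write_pass("/mnt/ssd/noise.bin", noise));

        free(zeros);
        free(noise);
        return 0;
    }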

The reality for OCZ, however, is that there's no performance benefit to purchasing a RevoDrive x2 over four Vertex 2s and creating your own RAID array. We're talking about simple RAID-0 here, something any performance mainstream motherboard supports right out of the box. To make for an even tougher sell, the RevoDrive x2 will likely cost more than four Vertex 2s. The original Revo was supposed to be slightly more affordable than an array of Vertex 2s, but a quick look at Newegg shows that you pay a $40 premium for the Revo.

What OCZ (and other companies) ultimately need to do is introduce an SSD controller with a native PCI Express interface (or something else other than SATA). SandForce's recent SF-2000 announcement showed us that SATA is an interface that simply can't keep up with SSD controller evolution. At peak read/write speeds of 500MB/s, even 6Gbps SATA is barely enough; after 8b/10b encoding, a 6Gbps link works out to roughly 600MB/s of usable bandwidth. It took us years to get to 6Gbps SATA, yet in about a year SandForce will have gone from maxing out 3Gbps SATA on sequential reads to nearing the limits of 6Gbps SATA.

On the surface, OCZ is offering us what we want: a high performance PCIe SSD. However, the RevoDrive needs to offer something more than ease of installation.

All of the major controller players are hard at work on PCIe based SSDs, but I haven’t been made aware of immediate plans to release any of them. The focus continues to be winning 2.5” SATA SSD market share at this point. 

Comments

  • jav6454 - Thursday, November 4, 2010

    800MB/s Sequential and almost 600MB/s random?! I now wonder where my piggy bank is?

    In all seriousness, OCZ has got a winner here; the only thing I regret is having too few PCIe ports... hopefully the HD6900 series will help open up a port.
  • DanNeely - Saturday, November 6, 2010

    AFAIK a 32-deep IO queue isn't something you're going to see outside of a heavily loaded server. The 150/200 on random and 320/380 on sequential are more in line with what a typical end user will get.
  • Out of Box Experience - Wednesday, November 17, 2010

    I think these Indilinx controllers might be faster than SandForce for REAL workloads like COPY & PASTE under XP!

    I wish Anand would directly compare copy/paste speeds of both SATA and PCIe SSDs under XP as that IS the Number One Operating System for the foreseeable future!

    I think that how an SSD handles non-compressible data, or data already on the drive, makes for the most enlightening tests one could do to directly compare SSD controllers under common workloads.

    Now, if OCZ could just make their stuff plug and play under XP without all the endless tweaks or OS upgrades, we'd have a winner until Intel starts making PCIe SSDs.

    Chant in unison....
    Plug & Play Plug & Play Plug & Play Plug & Play Plug & Play
  • mr woodstock - Friday, November 19, 2010

    XP is still number one .... for now.
    Windows 7 is selling very fast, and people are upgrading constantly.
    Within 3-4 years XP will be all but a memory.
  • boe - Thursday, November 18, 2010

    I agree about PCIe ports. I could swing one x4 or faster PCIe port; however, since I need about 2TB of storage I'll be needing a lot more slots!
  • mianmian - Thursday, November 4, 2010

    Using such a small connector to mount the daughter card doesn't seem that reliable. It looks like it's going to fall apart someday.
  • puplan - Thursday, November 4, 2010

    There is nothing wrong with the connector. The daughter board is held by 4 screws.
  • GeorgeH - Thursday, November 4, 2010

    I don't see any real reason to doubt that 4x Vertex 2s would perform identically, especially with a discrete RAID card of reasonable quality, but has it actually been verified on the integrated RAID that comes with a "performance mainstream motherboard", both AMD and Intel?

    It wouldn't be incredibly surprising to me to see some previously unknown performance reducing bugs crop up when you start pushing the kinds of numbers we're seeing here with integrated RAID solutions.
  • Minion4Hire - Thursday, November 4, 2010

    Yea, I thought that the max combined bandwidth from ICH10 was something like 660MB/s...?
  • disappointed1 - Thursday, November 4, 2010

    "ICH10 implements the 10Gbit/s bidirectional DMI interface to the "northbridge" device."

    That's 1.25GB/s bidirectional
