Random Read/Write Performance

Our random tests use Iometer to sprinkle random reads/writes across an 8GB span of the drive for 3 minutes, somewhat approximating the random workload a high-end desktop/workstation would see.

We present our default results at a queue depth of 3, as well as more stressful results at a queue depth of 32. The latter is necessary to really stress a four-way RAID 0 of SF-1200s, but it's also quite unrealistic for a desktop (more of a workstation/server workload at that point).

We also run each test with Iometer's standard pseudo-random data as well as with fully random data, to show the minimum and maximum performance for SandForce-based drives. The performance you actually see depends on the nature of the data you're writing.
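
For readers who want to approximate this workload themselves, a rough fio job file along the lines of the sketch below should come close. This is an approximation on our part rather than the Iometer configuration we actually run, and the device path is a placeholder; the write jobs will destroy whatever data is on it.

    [global]
    ; sketch of the 4KB random tests: 8GB span, 3 minutes, queue depth of 3
    filename=/dev/sdX
    ioengine=libaio
    direct=1
    bs=4k
    size=8g
    runtime=180
    time_based
    iodepth=3

    [random-read]
    rw=randread

    [random-write-compressible]
    ; highly compressible write buffers, roughly the pseudo-random data case
    rw=randwrite
    buffer_compress_percentage=100
    stonewall

    [random-write-incompressible]
    ; fresh random buffers on every request, roughly the fully random data case
    rw=randwrite
    refill_buffers
    stonewall

Raising iodepth to 32 approximates the heavier queue depth of 32 runs.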

Iometer - 4KB Random Read

At best a single RevoDrive x2 (or four SF-1200 drives in RAID-0) can achieve over 500MB/s of 4KB random reads/writes. At worst? 286MB/s of random writes. 

Iometer - 4KB Random Write

Sequential Read/Write Performance

Our standard sequential tests write 128KB blocks to the drive, with a queue depth of 1, for 60 seconds straight. As was the case above, we present the default results as well as results with a 32-deep I/O queue. Pseudo-random as well as fully random data is used to give us an idea of minimum and maximum performance.
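
As with the random tests, a rough fio sketch of the sequential runs is below; again this is an approximation rather than our Iometer setup, and the placeholder device path will be overwritten by the write jobs.

    [global]
    ; sketch of the 128KB sequential tests: queue depth of 1, 60 seconds
    filename=/dev/sdX
    ioengine=libaio
    direct=1
    bs=128k
    runtime=60
    time_based
    iodepth=1

    [sequential-read]
    rw=read

    [sequential-write-compressible]
    ; highly compressible write buffers
    rw=write
    buffer_compress_percentage=100
    stonewall

    [sequential-write-incompressible]
    ; fresh random buffers on every request
    rw=write
    refill_buffers
    stonewall

Bumping iodepth to 32 approximates the 32-deep queue results.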

Iometer - 128KB Sequential Read

The RevoDrive x2, like the IBIS, can read at up to 800MB/s. Write speed is an impressive 677MB/s. That's peak performance; worst-case performance is down at 196MB/s for light workloads and 280MB/s for heavy ones. With SandForce, so much of your performance depends on the type of data you're moving around. Highly compressible data won't find a better drive to live on, but data that's already compressed won't move around anywhere near as quickly.

Iometer - 128KB Sequential Write

Comments

  • disappointed1 - Thursday, November 4, 2010 - link

    Of course, DMI itself is "a (perhaps modified) PCI-E x4 v1.1 interface"
  • disappointed1 - Thursday, November 4, 2010 - link

    oops, Intel block diagrams depict that as 2GB/s
  • Stuka87 - Thursday, November 4, 2010 - link

    That bandwidth isn't dedicated to the drives though. It's shared with other devices.
  • leexgx - Friday, November 5, 2010 - link

    Intel's ICH maxes out at around 700-800MB/s with RAID 0; adding more SSDs doesn't add much more data rate (maybe 20-50MB/s per extra SSD), though random access speed might be higher. It has been tested.
  • Googer - Thursday, November 4, 2010 - link

    Fusion IO seems to have the right idea by (according to them) getting rid of the RAID controller (aka the middle man) and feeding the data directly to the bus. Perhaps it's time OCZ and others started looking at their approach to things.

    http://www.fusionio.com/
  • disappointed1 - Thursday, November 4, 2010 - link

    They're not bootable and cost thousands of dollars - totally different market segment.
  • disappointed1 - Thursday, November 4, 2010 - link

    Thanks for a more straightforward review Anand. Since this is largely the same as the previously-reviewed IBIS, I think in addition to rehashing those benchmarks, it would be more interesting to get hard data on your claim that...

    "The reality for OCZ however is there’s no performance benefit to purchasing a RevoDrive x2 over four Vertex 2s and creating your own RAID array"

    I'd love to see what 4x SF-1200 drives do on an ICH10R, or a discrete RAID card running off the IOH.
  • RealMurphy - Thursday, November 4, 2010 - link

    * Drive is able to sustain massive read bandwidth - check
    * Drive is able to sustain massive write bandwidth - check
    * a lot of IOps - check
    * but why the heck PCI-X? Unless I'm totally mistaken and this is one of the few PCI-X 2.0 devices, you have NO error correction whatsoever on this path and only puny even-parity checking, which will eventually fail and lets you read/write junk :( (caveat, maybe I misinterpret the PCI(-X) specs, if so please correct me)...
  • Powerlurker - Thursday, November 4, 2010 - link

    Supposedly, it's because OCZ realized that a PCI-X controller plus a PCI-X to PCIe bridge costs significantly less than a native PCIe RAID controller.
  • ypsylon - Thursday, November 4, 2010 - link

    Bad investment and misguided. You can easily achieve these results with a basic software RAID card (like a Highpoint Rocket RAID 6xx) and 4 cheap SSDs. And if you buy a proper RAID controller (say an Areca 1261ML) and plug anything between 4 and 16 small SSDs in RAID 0 or 10, you will simply trounce the Revo in every possible way. While it is more expensive short term, you can always freely expand storage space with bigger SSDs without too much hassle. You can't do that with the Revo.

    Honestly, SSDs are still a premium product. And for a premium product the Revo is not good enough. I would certainly buy the Photofast GM-Powerdrive, not the Revo. Of course it is more expensive (2.5x), but it offers twice the performance (PCIe x8, not x4 like the RevoDrive) and full hardware RAID support from RAID 0 to 60 (not some software rubbish from SilImg, brrr).

    I like the idea of the FusionIO, but it is not for normal PC users. It is a fine investment though if you own a gold mine or an oil derrick, your first name is $heikh, you're a footballer, you work for the mafia, or you're a banker. Take your pick! ;)
