Random Read/Write Performance

Our random tests use Iometer to scatter random reads/writes across an 8GB span of the drive for 3 minutes, roughly approximating the random workload a high-end desktop/workstation would see. 

We present our default results at a queue depth of 3, as well as more stressful results at a queue depth of 32. The latter is necessary to really stress a four-way RAID 0 of SF-1200s, but it's also quite unrealistic for a desktop (more of a workstation/server workload at this point). 

We also use Iometer's standard pseudo-random data for each request as well as fully random data to show the minimum and maximum performance for SandForce-based drives. The performance you'll see really depends on the nature of the data you're writing. 
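For readers curious what this access pattern looks like in code, here is a minimal synchronous sketch of the random 4KB read test in Python. A small scratch file stands in for the 8GB span, and Iometer's asynchronous queue depths of 3 and 32 are not modeled; this is only a rough stand-in for the access pattern, not the tool itself:

```python
import os
import random
import tempfile
import time

def random_read_test(path, span_bytes, block=4096, seconds=1.0):
    # Issue block-sized reads at random, block-aligned offsets within
    # the first span_bytes of the file for a fixed duration.
    # Iometer drives QD3/QD32 with async I/O; this is a QD1 loop.
    ops = 0
    deadline = time.monotonic() + seconds
    with open(path, "rb", buffering=0) as f:
        while time.monotonic() < deadline:
            off = random.randrange(span_bytes // block) * block
            f.seek(off)
            f.read(block)
            ops += 1
    return ops, ops * block / seconds / 1e6  # total ops, MB/s

# Demo on a small scratch file (the real test spans 8GB of the raw drive).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(1 << 22))  # 4MB stand-in for the 8GB span
ops, mbps = random_read_test(tmp.name, 1 << 22, seconds=0.2)
os.unlink(tmp.name)
print(f"{ops} reads, {mbps:.1f} MB/s")
```

Against a file this small the OS page cache serves most reads, so the numbers mean little; the point is only the shape of the workload.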

Iometer - 4KB Random Read

At best a single RevoDrive x2 (or four SF-1200 drives in RAID-0) can achieve over 500MB/s of 4KB random reads/writes. At worst? 286MB/s of random writes. 

Iometer - 4KB Random Write

Sequential Read/Write Performance

Our standard sequential tests write 128KB blocks to the drive, with a queue depth of 1, for 60 seconds straight. As was the case above, we present default results as well as results with a 32-deep I/O queue. Pseudo-random as well as fully random data is used to give us an idea of minimum and maximum performance.
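The sequential test can be sketched the same way. This simplified stand-in writes 128KB blocks back to back at queue depth 1 against a scratch file (the real test runs for 60 seconds against the drive itself):

```python
import os
import tempfile
import time

def sequential_write_test(path, total_bytes, block=128 * 1024):
    # Write 128KB blocks back to back at queue depth 1 and report
    # throughput, loosely mirroring the sequential test described above.
    buf = os.urandom(block)  # fully random data: worst case for SandForce
    written = 0
    start = time.monotonic()
    with open(path, "wb") as f:
        while written < total_bytes:
            f.write(buf)
            written += block
        f.flush()
        os.fsync(f.fileno())  # flush so the page cache doesn't flatter us
    elapsed = max(time.monotonic() - start, 1e-9)
    return written / elapsed / 1e6  # MB/s

# Short demo run against a scratch file.
fd, path = tempfile.mkstemp()
os.close(fd)
mbps = sequential_write_test(path, 8 * 1024 * 1024)
os.unlink(path)
print(f"{mbps:.1f} MB/s")
```

A filesystem file adds overhead the raw-device test doesn't have, so treat any number it prints as illustrative only.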

Iometer - 128KB Sequential Read

The RevoDrive x2, like the IBIS, can read at up to 800MB/s. Write speed is an impressive 677MB/s. That's peak performance; worst-case performance is down at 196MB/s for light workloads and 280MB/s for heavy ones. With SandForce, so much of your performance depends on the type of data you're moving around. Highly compressible data won't find a better drive to live on, but data that's already in reduced form won't move around anywhere near as quickly.
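The compressibility effect is easy to demonstrate: repetitive data shrinks dramatically while fully random data doesn't compress at all. Here zlib merely stands in for whatever the SandForce controller does internally; the real compression scheme is proprietary:

```python
import os
import zlib

block = 128 * 1024
# Repetitive text compresses extremely well; os.urandom barely at all.
compressible = (b"AnandTech RevoDrive x2 review " * 5000)[:block]
incompressible = os.urandom(block)

for name, data in [("repetitive", compressible), ("fully random", incompressible)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name:12s}: compresses to {ratio:.0%} of original size")
```

A controller that only has to commit a fraction of each write to NAND finishes sooner, which is why the drive's pseudo-random results are so much higher than its fully random ones.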

Iometer - 128KB Sequential Write

Comments

  • Chloiber - Thursday, November 4, 2010 - link

    IMHO the RevoDrives are useless products. You gain nothing except high sequential bandwidth, which most users never need.
    In REAL world applications, the CPU is the limit anyway in high-IOPS scenarios. You won't see a big gain (if any) if you move from 1 Vertex 2 to 4 Vertex 2s in typical situations.
  • jonup - Thursday, November 4, 2010 - link

    Anand, this is not directly related to the article, but when do you expect SSD prices to take a big hit? With the next generation of drives around the corner and talk of increased flash manufacturing capacity, do you think it is reasonable to buy an SSD (regardless of the interface) now, simply from a $/GB perspective?

  • theagentsmith - Thursday, November 4, 2010 - link

    Hey Anand
    could you shed light on an annoying bug that's plaguing several (but not all) owners of SandForce-based SSDs?
    It happens when there is not a lot of I/O activity, like when idle or under light usage. The drive disappears and you see all the open programs failing one at a time, until a couple of minutes later Windows gives up with a BSOD. Since the drive has disappeared, the kernel can't even write a memory dump, and if you press reset the drive isn't recognized by the BIOS; you have to cycle power to see it working again.
    There is also a resume-from-sleep bug, but it's tolerable as you can use hibernation instead of sleep.
    Here is a topic on the Corsair forums about this; they just released a 2.0 firmware, but there is no changelog and of course no word from SandForce.
  • mark53916 - Thursday, November 4, 2010 - link

    How do this and other SSDs handle the container files of encrypted and other virtual disks?

    Typically, for best performance the container files should be stored "densely" on the underlying device, but the space is always in
  • Shadowmaster625 - Thursday, November 4, 2010 - link

    It is not on companies like OCZ to release a faster SSD controller. As I've been saying for ages now, it is up to AMD/Intel to release an SSD controller integrated directly into the CPU. It makes as much sense as having an integrated memory controller. It's actually pretty much the same exact thing, except the memory is nonvolatile. It should be in the same form factor to reduce costs (keyed differently, of course), i.e. a 64GB SSD DIMM would cost half of a 64GB SSD. Perhaps even less.
  • FunBunny2 - Thursday, November 4, 2010 - link

    The maths of SSD controllers aren't yet settled, which is why SandForce is different from Intel, which is different from Indilinx, and so forth. If the SSD controller is in the CPU, you're stuck with it unless you buy a new CPU. Hmmm. Frequent, planned obsolescence; maybe Intel will do it, then.
  • larijoona - Thursday, November 4, 2010 - link

    I'm also interested in seeing some benchmarks of virtual PC performance run from an SSD!
  • jhbodle - Thursday, November 4, 2010 - link

    I am not aware of any motherboard-integrated RAID controller that can handle the bandwidth of these 4 Sandforce SSDs. I use 3 X-25E's in RAID0 on the ICH10R, generally regarded as the best integrated RAID controller, and it is maxed at 660MB/sec.

    So I like this card and am pleased that companies such as OCZ are working on this kind of thing!
  • Chloiber - Thursday, November 4, 2010 - link

    Yep. The ICH10R gets to 500-650MB/s - never seen more.
    It may be that the ICH10R is connected via a 2GB/s bus or whatever, but that's theory. Or the controller itself cannot handle more (which is a reasonable explanation - if you look at controllers from Areca, they also max out at 1-2GB/s, and they are way more expensive).
  • sonofgodfrey - Thursday, November 4, 2010 - link

    The LSI SAS controllers (which are on some server boards) can easily hit 1GB/s with 4 SSDs. Did this with the first generation Intel X-25M drives.
