Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are the average MB/s over the entire run. We use both standard pseudo-randomly generated data for each write and fully random data to show you the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely fall somewhere between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
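
If you want to approximate this access pattern yourself, the sketch below is a minimal Python stand-in for the Iometer configuration described above: 4KB writes at random offsets within an 8GB span, three IOs in flight, for 3 minutes, reporting average MB/s. The scratch file name is hypothetical, and because it writes through the OS page cache (no O_DIRECT, which would require aligned buffers) the numbers it produces will be optimistic; treat it as an illustration of the workload, not a replacement for Iometer. Raising IO_DEPTH mimics the higher queue depth runs on the following charts.

```python
import os
import random
import threading
import time

PATH = "iometer_scratch.bin"  # hypothetical scratch file standing in for the test drive
SPAN = 8 * 1024**3            # 8GB LBA space, as in the Iometer test
BLOCK = 4096                  # 4KB transfer size
IO_DEPTH = 3                  # three concurrent IOs
SECONDS = 180                 # 3 minute run

# SandForce controllers compress/dedupe writes, so buffer contents matter:
# a repeating pattern is the best case, os.urandom() the worst case.
buf = os.urandom(BLOCK)

counts = [0] * IO_DEPTH

def worker(idx, fd, deadline):
    rng = random.Random()  # per-thread RNG to avoid shared-state contention
    n = 0
    while time.time() < deadline:
        # pick a random 4KB-aligned offset inside the 8GB span
        offset = rng.randrange(SPAN // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)
        n += 1
    counts[idx] = n

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
deadline = time.time() + SECONDS
threads = [threading.Thread(target=worker, args=(i, fd, deadline))
           for i in range(IO_DEPTH)]
for t in threads:
    t.start()
for t in threads:
    t.join()
os.close(fd)

print(f"average: {sum(counts) * BLOCK / SECONDS / 1e6:.1f} MB/s")
```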

Desktop Iometer - 4KB Random Read (4K Aligned)

As we saw in our RevoDrive 3 X2 review, low queue depth random read performance doesn't really show much of an advantage on these multi-controller PCIe RAID SSDs. The Z-Drive R4 comes in a little faster than the RevoDrive 3 X2 but not by much at all. Even a single Vertex 3 does just fine here.

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Random write performance tells a similar story: at such low queue depths most of the controllers aren't doing much work at all. Let's see what happens when we start ramping up the queue depth:

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Surprisingly enough, even at a queue depth of 32 the Z-Drive R4 is no faster than the RevoDrive 3 X2. In fact, it's a bit slower (presumably due to the extra overhead of having to split the workload between 8 controllers vs. just 4). In our RevoDrive review we ran a third random write test with two QD=32 threads in parallel; it's here that we can start to see a difference between these drives:

It's only at ultra-high queue depths that the Z-Drive can begin to distance itself from the RevoDrive 3 X2. It looks like we may need some really stressful tests to tax this thing. The chart below presents the same data as above, but in IOPS instead of MB/s:

271K IOPS...not bad.
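
For reference, converting between the two charts is simple arithmetic at a fixed 4KB transfer size: bytes per second = IOPS × 4096. A quick sanity check on the peak figure (assuming decimal megabytes, which is how we report throughput):

```python
BLOCK = 4096            # 4KB transfer size used in the test
iops = 271_000          # peak read off the chart above

mb_per_s = iops * BLOCK / 1e6
print(f"{mb_per_s:.0f} MB/s")  # ~1110 MB/s, the same data point as the MB/s chart
```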

Comments

  • lorribot - Wednesday, September 28, 2011

    You can't RAID it in software, but you could RAID several of them if you have deep pockets.
    The point is: why buy a 1.6 or 3.2TB SSD when you can buy 10 x 320GB SSDs and (possibly) get better performance for less cost?
  • bjacobson - Tuesday, September 27, 2011

    I think I've mentioned this before, but can you load up a Windows 7 installation with 30 or so startup programs and compare the startup time difference between this and a hard drive?
    A video of this would be even more impressive.
  • ckryan - Tuesday, September 27, 2011

    I've been going through some issues with a 2281 drive with Toggle NAND. I'm basically writing 11TB a day to it, and under these conditions I can't get too many hours in between crashes. I'm of the opinion that the latest FW has helped most out, but clearly my experience shows that the 2281, when perfected, will be unstoppable in certain workloads; for now, though, all SF users are going to have some problems. If the problems are predictable you can compensate, but if they're random, well, SF controllers aren't the only things that have problems with randomness.

    I knew it was a possibility, and that normal users won't abuse their drives as much, but I have to wonder: if OCZ can make an enterprise drive problem-free, why can't they make consumer SF drives better? The SF problem is the OCZ problem... OWC doesn't have the same perception issues, but is using the same hardware (Mushkin, Patriot, etc.).

    As much as I like OCZ, they've done some questionable things in the past, and not just swapping cheap flash into SF1200 drives. Hopefully they can overcome the problems they're having with SandForce and their Arrowana stuff, release a problem-free next-gen Indilinx controller, and then call it a day. Oh yeah, and quit using those stupid plastic chassis.
  • jalexoid - Tuesday, September 27, 2011

    Considering these devices are more likely to find themselves in a machine running something other than a desktop system, why not test them on another OS?
  • sanguy - Wednesday, September 28, 2011

    OCZ's standard line "It's only affecting 0.01% of the deployed units and we're working on a fix....." doesn't work in the enterprise market.
  • josephjpeters - Wednesday, September 28, 2011

    These are PCIe. Most of the "issues" come from SATA drives because mobo makers are having issues with their SATA ports.
  • p05esto - Wednesday, September 28, 2011

    I'll admit, I'm now too lazy to even read... it's getting bad. I just want to push the "play" button while I sit back eating Cheetos and rubbing my tummy. Get into my tummy, little Cheeto; get into my brain, little SSD review... same line of thinking really, whatever is easiest.

    Great review though, seriously.
  • alpha754293 - Wednesday, September 28, 2011

    If you want to really test it and validate its long-term reliability, you pretty much need to do what enterprise customers do: run the SSD, but always keep a backup of it somewhere, like you said.

    That being said, if you've got TWO backup copies of it, you can actually run a parity check on it (a pseudo-checksum) and determine its error rate.

    Also, you didn't run HDTach on it. Given that it's tied together with a Marvell SAS controller and can't accept TRIM, I would presume it will have performance issues in the long run.

    To do the error checking, you'll probably have to put this thing in a Solaris system running ZFS so you can mimic the CERN test. And if you actually read and write to it continuously, at the same sheer volume of data, other SSD/NAND-specific issues might start to pop up, like wear levelling. I would probably just run the read/write cycle for an entire month, periodically deleting some data, rewriting new data, and so on. At the end of the month, make the two mirror backups of it. And then run it again. Hopefully you'd be able to end up at some identical endpoint after PBs of read/write ops that you can run both the block-level and volume-level checksums on (see the sketch below).

    But as a swap drive, this would be BLAZINGLY fast.
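
    A minimal sketch of that block-level check, assuming two raw image backups of the drive (the file names here are hypothetical): hash both images in fixed-size blocks and count mismatches to estimate an error rate.

    ```python
    import hashlib

    BLOCK = 4096  # compare the images in 4KB blocks

    def block_hashes(path, block=BLOCK):
        """Yield a SHA-256 digest for each fixed-size block of an image file."""
        with open(path, "rb") as f:
            while chunk := f.read(block):
                yield hashlib.sha256(chunk).digest()

    # hypothetical paths to the two mirror backups
    mismatched = total = 0
    for a, b in zip(block_hashes("backup_a.img"), block_hashes("backup_b.img")):
        total += 1
        if a != b:
            mismatched += 1

    if total:
        print(f"{mismatched} of {total} blocks differ "
              f"(error rate {mismatched / total:.2e})")
    ```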
  • perrydoell - Wednesday, September 28, 2011

    You say "We have a preproduction board that has a number of stability & compatibility issues."

    This is the enterprise space. Things MUST WORK RELIABLY. How can you even review unstable products? I expect better from Anandtech.

    I cannot take OCZ seriously either. An unstable product is NOT for the enterprise. Also, check the negative reviews at NewEgg. Ouch.
  • josephjpeters - Wednesday, September 28, 2011

    Where is the R4 listed on Newegg?
