Making Random Performance Look Sequential

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our random write test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access you'd see on an OS drive (even this is more stressful than a normal desktop user would see). Our random read test spans the entirety of the drive. I perform three concurrent IOs and run the test for 3 minutes. The results reported are average MB/s over the entire run.
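
To give a sense of what that access pattern looks like, here is a minimal Python sketch (my own approximation, not the actual Iometer configuration): it issues 4KB writes at 4KB-aligned random offsets within an 8GB span, one at a time, whereas the real test keeps three I/Os outstanding and runs for three minutes. The device path is a placeholder.

    import os, random

    BLOCK = 4 * 1024        # 4KB transfer size
    SPAN = 8 * 1024**3      # 8GB region targeted by the random write test
    PATH = "/dev/sdX"       # placeholder; point this at a scratch device or file

    fd = os.open(PATH, os.O_WRONLY)
    data = os.urandom(BLOCK)
    for _ in range(100_000):
        # pick a 4KB-aligned offset anywhere inside the 8GB span
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        os.pwrite(fd, data, offset)
    os.close(fd)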

Iometer - 4KB Random Write

In case you didn't do the math in your head, 510MB/s of 4KB random writes translates to roughly 130,000 IOPS. That's insane. The IBIS can deliver faster random writes than the RevoDrive can manage sequential writes. A fully taxed SandForce drive manages 200MB/s; the performance advantage here is huge. Again, I can't stress enough how fast four of these things must be.
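
For those who want to check the conversion themselves, the arithmetic is straightforward (assuming 1MB = 1024KB, which is how Iometer reports throughput):

    # convert 4KB random write throughput to IOPS
    throughput_mb_s = 510
    iops = throughput_mb_s * 1024 / 4   # KB per second divided by 4KB per I/O
    print(round(iops))                  # ~130,560, i.e. roughly 130K IOPS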

Iometer - 4KB Random Read

Reading randomly across the drive's entire LBA space drops peak performance a bit, but we're still well beyond what 3Gbps SATA can deliver (although technically 6Gbps SATA would be enough here). You'll note that in a desktop workload (QD=3) there's no advantage to the IBIS drive; this thing really only makes sense for very I/O intensive workloads.

74 Comments

  • Johnsy - Wednesday, September 29, 2010 - link

    I would like to echo the comments made by disappointed1, particularly with regard to OCZ's attempt to introduce a proprietary standard when a cabling spec for PCIe already exists.

    It's all well and good having intimate relationships with representatives of the companies whose products you review, but having read this (and a couple of other) articles, I do find myself wondering who the real beneficiary is...
  • 63jax - Wednesday, September 29, 2010 - link

    Although I am amazed by those numbers, you should put the ioDrive in there as a point of reference.
  • iwodo - Wednesday, September 29, 2010 - link

    I recently posted on the AnandTech forums about SSDs and when we hit the law of diminishing returns:

    http://forums.anandtech.com/showthread.php?t=21068...

    Less than 10 days later, Anand seems to have answered every question we discussed in that thread, from the connection port to software usage.

    The review pretty much proves my point: after the current-gen SandForce SSDs, we are already hitting the law of diminishing returns. A SATA 6Gbps SSD, or even a quad-SandForce SSD like the IBIS, won't give us any perceptible speed improvement in 90% of our day-to-day usage.

    Until software or the OS takes advantage of the massive I/O an SSD can deliver, a current SandForce SSD remains the best investment in terms of upgrades.
  • iwodo - Wednesday, September 29, 2010 - link

    I forgot to mention: with next-gen SSDs hitting 550MB/s and even slightly higher IOPS, there is absolutely NO NEED for HSDL in the consumer space.

    While SATA is only half duplex, benchmarks show no evidence that this limitation causes any latency problems.
  • davepermen - Thursday, September 30, 2010 - link

    Indeed. The next-gen Intel SSD on SATA 6Gbps will most likely deliver the same as this SSD, but without all the proprietary crap. Sure, the numbers will be lower, but the actual performance will most likely be the same, much cheaper, and very flexible (just RAID them if you want, or JBOD them, or whatever).

    This stuff is bullshit for customers. It sounds like some geek created a funky setup to combine his SSDs for great performance, and that's it.

    Oh, other than that, I bet latency will be higher on these OCZ drives just because of all the indirection. And latency is the number one thing that makes you feel the difference between SSDs.

    In short, that product is absolutely useless crap.

    So far I'm still happy with my Intel gen1 and gen2. I'll wait a bit to find a new device that gives me a real noticeable difference, and doesn't take away any of the flexibility I have right now with my simple single-SATA-drive setups.

    Anand and OCZ, always a strange combination :)
  • viewwin - Wednesday, September 29, 2010 - link

    I wonder what Intel thinks about a new competing cable design?
  • davepermen - Thursday, September 30, 2010 - link

    I bet they don't even know. Not that they care. Their SSDs will deliver much more for the customer: an easy, standards-based connection that exists in ANY current system, RAID-ability, TRIM, and most likely about the same performance experience as this device, but at a much, much lower cost.
  • tech6 - Wednesday, September 29, 2010 - link

    Since this is really just a cable-attached SSD card, I don't see the need for yet another protocol/connection standard. The concept of RAID upon RAID also seems somewhat redundant.

    I am also unclear as to what market this is aimed at. The price excludes the mass desktop market, and yet it isn't aimed at the enterprise data center either; that only leaves workstation power users, which are not a large market. Given the small target audience, motherboard makers will most likely not invest their resources in supporting HSDL on their motherboards.
  • Stuka87 - Wednesday, September 29, 2010 - link

    It's a very interesting concept, and the performance is of course incredible. But like you mentioned, I just can't see the money being worth it at this point. It is simpler than building your own RAID, though, as you just plug it in and you're done.

    But if motherboard makers can get on board and the interface gains some traction, then I could certainly see it taking over from SAS/SATA as the interface of choice in the future. I think OCZ is smart to offer it as a free and open standard. Offering a new standard for free has worked very well for other companies in the past, especially when they are small.
  • nirolf - Wednesday, September 29, 2010 - link

    <<Note that peak low queue-depth write speed dropped from ~233MB/s down to 120MB/s. Now here’s performance after the drive has been left idle for half an hour:>>

    Isn't this a problem in a server environment? Maybe some servers never get half an hour of idle.
