Meet the IBIS

OCZ sent us the basic IBIS kit. Every IBIS drive will come with a free 1-port PCIe card. Drive capacities range from 100GB all the way up to 960GB:

OCZ IBIS Lineup
Part Number           Capacity   MSRP
OCZ3HSD1IBS1-960G     960GB      $2799
OCZ3HSD1IBS1-720G     720GB      $2149
OCZ3HSD1IBS1-480G     480GB      $1299
OCZ3HSD1IBS1-360G     360GB      $1099
OCZ3HSD1IBS1-240G     240GB      $739
OCZ3HSD1IBS1-160G     160GB      $629
OCZ3HSD1IBS1-100G     100GB      $529

Internally the IBIS is a pretty neat design. There are two PCBs, each carrying two SF-1200 controllers and their associated NAND. They plug into a backplane that holds a RAID controller and a chip that muxes the four PCIe lanes branching off that controller into the HSDL signal. It's all custom OCZ PCB work, and it's quite impressive.


This is the sandwich of PCBs inside the IBIS chassis


Pull the layers apart and you get the on-drive RAID/HSDL board (left) and the actual SSD cards (right)


Four SF-1200 controllers in parallel: this thing is fast

There's a standard SATA power connector and an internal mini-SAS connector. The pinout of the connector is proprietary, however, so plugging the drive into a SAS card won't work. OCZ chose the SAS connector to make part sourcing easier and keep launch costs to a minimum; designing a brand-new connector wouldn't have made things any easier.

The IBIS bundle includes an HSDL cable, which is simply a high-quality standard SAS cable; apparently OCZ found signal integrity problems with cheaper SAS cables. OCZ has validated HSDL cables at lengths up to half a meter, which it believes should be enough for most applications today. Using the SAS connector for HSDL may obviously cause some confusion, but I suspect that if the standard ever catches on, OCZ could easily switch to a proprietary connector.

The 1-port PCIe card only supports PCIe 1.1, while the optional 4-port card supports PCIe 1.1 and 2.0 and will auto-negotiate speed at POST.
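
To put the 1.1-versus-2.0 distinction in perspective, here's a rough back-of-the-envelope calculation. This is only a sketch: the per-lane figures are the usual PCIe numbers after 8b/10b encoding overhead, and the x4/x8 link widths are assumptions for illustration, not OCZ's published specs.

    # Approximate one-direction PCIe bandwidth in MB/s, after 8b/10b encoding overhead.
    PCIE_MB_PER_LANE = {"1.1": 250, "2.0": 500}

    def link_bandwidth_mb(gen: str, lanes: int) -> int:
        return PCIE_MB_PER_LANE[gen] * lanes

    # Hypothetical link widths, purely for illustration:
    print("1-port card, PCIe 1.1 x4:", link_bandwidth_mb("1.1", 4), "MB/s")  # 1000 MB/s
    print("4-port card, PCIe 1.1 x8:", link_bandwidth_mb("1.1", 8), "MB/s")  # 2000 MB/s
    print("4-port card, PCIe 2.0 x8:", link_bandwidth_mb("2.0", 8), "MB/s")  # 4000 MB/s

Under those assumed widths, the move to PCIe 2.0 is what keeps a fully populated 4-port card from being starved by its host link.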


The bundled 1-port PCIe card

Comments (74)

  • jwilliams4200 - Wednesday, September 29, 2010 - link

    Anand:

    I suspect your resiliency test is flawed. Doesn't HD Tach essentially write a string of zeros to the drive? And a Sandforce drive would compress that and only write a tiny amount to flash memory. So it seems to me that you have only proved that the drives are resilient when they are presented with an unrealistic workload of highly compressible data.

    I think you need to do two things to get a good idea of resiliency:

    (1) Write a lot of random (incompressible) data to the drive to get it "dirty"

    (2) Measure the write performance of random (incompressible) data while the SSD is "dirty"

    It is also possible to combine (1) and (2) in a single test. Start with a "clean" SSD, then configure IO meter to write incompressible data continuously over the entire SSD span, say random 4KB 100% write. Measure the write speed once a minute and plot the write speed vs. time to see how the write speed degrades as the SSD gets dirty. This is a standard test done by Calypso system's industrial SSD testers. See, for example, the last graph here:

    http://www.micronblogs.com/2010/08/setting-a-new-b...
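
    A minimal sketch of that combined test, just to make the procedure concrete (Linux-only and illustrative: the device path, block size, run length, and sampling interval are placeholder assumptions, and it overwrites the entire target device):

        import mmap, os, random, time

        DEV = "/dev/sdX"        # hypothetical target device -- the test destroys its contents
        BLOCK = 4096            # 4KB random writes, as described above
        SAMPLE_SECONDS = 60     # report throughput once a minute
        RUN_SECONDS = 3600      # total run length (assumption)

        # O_DIRECT bypasses the page cache; it needs a page-aligned buffer, which mmap provides.
        fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT | os.O_SYNC)
        size = os.lseek(fd, 0, os.SEEK_END)
        buf = mmap.mmap(-1, BLOCK)

        start = last = time.time()
        written = 0
        while time.time() - start < RUN_SECONDS:
            buf[:] = os.urandom(BLOCK)                          # fresh incompressible payload every write
            offset = random.randrange(size // BLOCK) * BLOCK    # 100% random, full-span
            os.pwrite(fd, buf, offset)
            written += BLOCK
            now = time.time()
            if now - last >= SAMPLE_SECONDS:
                print(f"{now - start:6.0f}s  {written / (now - last) / 1e6:7.1f} MB/s")
                written, last = 0, now
        os.close(fd)

    Plotting the printed samples against time shows how quickly the write speed degrades as the drive gets dirty.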

    Also, there is a strange problem with Sandforce-controlled "dirty" SSDs having degraded write speed which is not recovered after TRIM, but it only shows up with incompressible data. See, for example:

    http://www.bit-tech.net/hardware/storage/2010/08/1...
  • Anand Lal Shimpi - Wednesday, September 29, 2010 - link

    It boils down to write amplification. I'm working on an article now to quantify exactly how low SandForce's WA is in comparison to other controller makers using methods similar to what you've suggested. In the case of the IBIS I'm simply trying to confirm whether or not the background garbage collection works. In this case I'm writing 100% random data sequentially across the entire drive using iometer, then peppering it with 100% random data randomly across the entire drive for 20 minutes. HDTach is simply used to measure write latency across all LBAs.

    I haven't seen any issues with SF drives not TRIMing properly when faced with random data. I will augment our HDTach TRIM test with another iometer pass of random data to see if I can duplicate the results.

    Take care,
    Anand
  • jwilliams4200 - Wednesday, September 29, 2010 - link

    What I would like to see is SSDs with a standard mini-SAS2 connector. That would give a bandwidth of 24 Gbps, and it could be connected to any SAS2 HBA or RAID card. Simple, standards-compliant, and fast. What more could you want?

    Well, inexpensive would be nice. I guess putting a 4x SAS2 interface in an SSD might be expensive. But at high volume, I would guess the cost could be brought down eventually.
  • LancerVI - Wednesday, September 29, 2010 - link

    I found the article to be interesting. OCZ introducing a new interconnect that is open for all is interesting. That's what I took from it.

    It's cool to see what these companies are trying to do to increase performance, create new products and possibly new markets.

    I think most of you missed the point of the article.
  • davepermen - Thursday, September 30, 2010 - link

    problem is, why?

    there is NO use for this. there are enough interconnects already, and they're fast enough too. so, again, why?

    oh, and open and all doesn't matter. there won't be any products besides some ocz stuff.
  • jwilliams4200 - Wednesday, September 29, 2010 - link

    Anand:

    After reading your response to my comment, I re-read the section of your article with HD Tach results, and I am now more confused. There are 3 HD Tach screenshots that show average read and write speeds in the text at the bottom right of the screen. In order, the avg read and writes for the 3 screenshots are:

    read: 201.4 MB/s
    write: 233.1 MB/s

    read: 125.0 MB/s
    write: 224.3 MB/s

    "Note that peak low queue-depth write speed dropped from ~233MB/s down to 120MB/s"

    read: 203.9 MB/s
    write: 229.2 MB/s

    I also included your comment from the article about write speed dropping. But are the read and write rates from HD Tach mixed up?
  • Anand Lal Shimpi - Wednesday, September 29, 2010 - link

    Ah, good catch, that's a typo. On most drives the HDTach pass shows an impact on write latency, but on SF drives the impact is actually on read speed (the writes appear to be mostly compressed/deduped), as there's much more data to track and recover since what's being read was originally stored in its entirety.

    Take care,
    Anand
  • jwilliams4200 - Wednesday, September 29, 2010 - link

    My guess is that if you wrote incompressible data to a dirty SF drive, that the write speed would be impacted similarly to the impact you see here on the read speed.

    In other words, the SF drives are not nearly as resilient as the HD Tach write scans show, since, as you say, the SF controller is just compressing/deduping the data that HD Tach is writing. And HD Tach's writes do not represent a realistic workload.

    I suggest you do an article revisiting the resiliency of dirty SSDs, paying particular attention to writing incompressible data.
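
    As a quick illustration of how wide the compressibility gap is (a toy sketch; zlib is obviously not SandForce's algorithm, it just shows how different the two data patterns are):

        import os, zlib

        BLOCK = 1024 * 1024                 # 1MB test buffer

        patterns = {
            "zeros (benchmark-style fill)": bytes(BLOCK),
            "random (incompressible)":      os.urandom(BLOCK),
        }

        for name, data in patterns.items():
            ratio = len(zlib.compress(data)) / len(data)
            print(f"{name:<30} compresses to {ratio:6.1%} of its original size")

        # A controller that compresses/dedupes internally barely has to touch NAND for the
        # zero-fill case, so a zero-fill write pass says little about how the drive behaves
        # once it is full of incompressible data.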
  • greggm2000 - Wednesday, September 29, 2010 - link

    So how will Lightpeak factor into this? Is OCZ working on a Lightpeak implementation of this? One hopes that OCZ and Intel are communicating here...
  • jwilliams4200 - Wednesday, September 29, 2010 - link

    The first Lightpeak cables are only supposed to run at 10 Gbps. A mini-SAS2 cable has four lanes of 6 Gbps for a total of 24 Gbps. Lightpeak loses.
