The Vision

I spoke with OCZ’s CEO Ryan Petersen, and he outlined his vision for me. He wants HSDL and its associated controllers to be present on motherboards. Instead of using PCIe SSDs, you’ll have HSDL connectors that can give you the bandwidth of PCIe. Instead of being limited to 3Gbps or 6Gbps, as is the case with SATA/SAS today, you get gobs of bandwidth. We’re talking 2GB/s of bandwidth per drive (1GB/s up and 1GB/s down) on a PCIe 2.0 motherboard. To feed that sort of bandwidth, all OCZ has to do is RAID more SSD controllers inside each drive (or move to faster drive controllers). Eventually, if HSDL takes off, controller makers wouldn’t have to target SATA; they could simply build native PCIe controllers. That would shave off some component cost and some latency.
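
To put those numbers in context, here’s a quick back-of-the-envelope calculation of theoretical PCIe bandwidth per direction. The line rates and encoding overheads below are the standard published PCIe figures; treating a single HSDL port as an x4 link is an assumption for illustration, not an OCZ spec.

    # Rough theoretical PCIe bandwidth per direction, before protocol overhead.
    # Generation line rates and encodings are the published PCIe figures; the
    # x4 link width used in the example is an assumption, not an OCZ spec.

    PCIE_GENERATIONS = {
        # generation: (line rate in GT/s per lane, encoding efficiency)
        "1.1": (2.5, 8 / 10),     # 8b/10b encoding
        "2.0": (5.0, 8 / 10),     # 8b/10b encoding
        "3.0": (8.0, 128 / 130),  # 128b/130b encoding
    }

    def per_direction_gbytes(generation: str, lanes: int) -> float:
        """Usable bandwidth in GB/s, one direction, for a given link."""
        rate_gt, efficiency = PCIE_GENERATIONS[generation]
        return rate_gt * efficiency * lanes / 8  # GT/s -> GB/s after encoding

    for gen in ("1.1", "2.0"):
        print(f"PCIe {gen} x4: {per_direction_gbytes(gen, 4):.2f} GB/s each way")
    # PCIe 1.1 x4: 1.00 GB/s each way
    # PCIe 2.0 x4: 2.00 GB/s each way

An x4 Gen1 link works out to roughly 1GB/s in each direction, which lines up with the per-drive figure above; the same four lanes at Gen2 rates would double that.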


You can even have a multi-port IBIS drive

The real win for HSDL appears to be the high-end workstation and server markets. The single-port HSDL/IBIS solution is interesting for those who want a lot of performance in a single drive, but honestly you could roll your own with a RAID controller and four SandForce drives for less money. The real potential comes once you start designing systems with multiple IBIS drives. With four of these drives you should be able to push multiple gigabytes per second of data, which is just unheard of in something that’s still relatively attainable.
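
As a rough sketch of that scaling argument, the lines below assume a hypothetical 800MB/s of sequential throughput per IBIS drive (a placeholder figure, not a measured spec) and cap the total at what a PCIe 2.0 x16 host card can move in one direction.

    # Back-of-the-envelope scaling for multiple IBIS drives. The per-drive
    # throughput is a hypothetical placeholder, not a measured result; the
    # host-side cap corresponds to a PCIe 2.0 x16 card (16 lanes x ~500MB/s
    # per direction after 8b/10b encoding).

    ASSUMED_DRIVE_MB_S = 800      # hypothetical sequential throughput per drive
    HOST_CAP_MB_S = 16 * 500      # PCIe 2.0 x16, one direction

    def aggregate_mb_s(drive_count: int) -> int:
        """Aggregate sequential throughput, limited by the host link."""
        return min(drive_count * ASSUMED_DRIVE_MB_S, HOST_CAP_MB_S)

    for drives in (1, 2, 4):
        print(f"{drives} drive(s): ~{aggregate_mb_s(drives) / 1000:.1f} GB/s")
    # 1 drive(s): ~0.8 GB/s
    # 2 drive(s): ~1.6 GB/s
    # 4 drive(s): ~3.2 GB/s

Under those assumptions, four drives land in the multi-GB/s range described above while still fitting comfortably within an x16 (or even x8) Gen2 host link.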

The Test

Note that our AnandTech Storage Bench doesn't always play well with RAIDed drives, and thus we weren't able to run it on the IBIS.

CPU: Intel Core i7 975 running at 3.33GHz (Turbo & EIST Disabled)
Motherboard: Intel DX58SO (Intel X58)
Chipset: Intel X58 + Marvell SATA 6Gbps PCIe
Chipset Drivers: Intel 9.1.1.1015 + Intel IMSM 8.9
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Comments

  • MRFS - Wednesday, September 29, 2010 - link

    > A mini-SAS2 cable has four lanes of 6 Gbps for a total of 24 Gbps.

    That was also my point (posted at another site today):

    What happens when 6G SSDs emerge soon to comply
    with the current 6G standard? Isn't that what many of
    us have been waiting for?

    (I know I have!)

    I should think that numerous HBA reviews will tell us which setups
    are best for which workloads, without needing to invest
    in a totally new cable protocol.

    For example, the Highpoint RocketRAID 2720 is a modest SAS/6G HBA
    we are considering for our mostly sequential workload,
    i.e. updating a static database that we upload to our Internet website,
    archiving 10GB+ drive images, plus lots of low-volume updates.

    http://www.newegg.com/Product/Product.aspx?Item=N8...

    That RR2720 uses an x8 Gen2 edge connector, and
    it provides us with lots of flexibility concerning the
    size and number of SSDs we eventually will attach
    i.e. 2 x SFF-8087 connectors for a total of 8 SSDs and/or HDDs.

    If we want, we can switch to one or two SFF-8087 cables
    that "fan out" to discrete HDDs and/or discrete SSDs
    instead of the SFF-8087 cables that come with the 2720.
    $20-$30 USD, maybe?

    Now, some of you will likely object that the RR2720
    is not preferred for highly random high I/O environments;
    so, for a little more money there are lots of choices that will
    address those workloads nicely e.g. Areca, LSI etc.

    Highpoint even has an HBA with a full x16 Gen2 edge connector.

    What am I missing here? Repeating, once again,
    this important observation already made above:

    "A mini-SAS2 cable has four lanes of 6 Gbps for a total of 24 Gbps."

    If what I am reading today is exactly correct, then
    the one-port IBIS is limited to PCI-E 1.1: x4 @ 2.5 Gbps = 10 Gbps.

    p.s. I would suggest, as I already have, that the SATA/6G protocol
    be modified to do away with the 10/8 transmission overhead, and
    that the next SATA/SAS channel support 8 Gbps -- to sync it optimally
    with PCI-E Gen3's 128/130 "jumbo frames". At most, this may
    require slightly better SATA and SAS cables, which is a very
    cheap marginal cost, imho.

    MRFS
  • blowfish - Wednesday, September 29, 2010 - link

    I for one found this to be a great article, and I always enjoy Anand's prose - and I think he surely has a book or two in him. You certainly can't fault him for politeness, even in the face of some fierce criticism. So come on critics, get real, look what kind of nonsense you have to put up with at Tom's et al!
  • sfsilicon - Wednesday, September 29, 2010 - link

    Hi Anand,

    Thank you for writing the article and shedding some light on this new product and standard from OCZ. Good to see you addressing the objectivity questions posted by some of the other posters. I enjoyed reading the additional information and analysis from some of the more informed posters. I do wonder whether the bias in your article comes from not being aware of some of the developments in the SSD space or just from being exposed to too much vendor Kool-Aid. Not sure which one is the case, but hopefully my post will help generate a more balanced article.

    ---

    I agree with several of the previous posters that both the IBIS drives and the HSDL interface are nothing new (no matter how hard OCZ marketing might want to make them look like it). As one of the previous posters said, it is an SSD-specific implementation of a PCIe extender solution.

    I'm not sure how HSDL started, but I see it as a bridging solution because OCZ does not have 6G technology today. Recently released 6G dual-port solutions will allow single SSD drives to transfer up to a theoretical 1200 MB/s per drive.

    It does allow scaling beyond 1200 MB/s per drive through channel bundling, but the SAS standardization committee is already looking into that option in case 12Gbps SAS ends up becoming too difficult to do. Channel bundling is inherent to SAS and addresses the bandwidth threat brought up by PCIe.

    The PCIe channel bundling / IBIS drive solution from OCZ also looks a bit like an uncomfortable balancing act. Why do you need to extend the PCIe interface to the drive level? Is it just to maintain the more familiar "drive"-based use model? Or is it really a way to package 2 or more 3Gbps drives to get higher performance? Why not stick with a pure PCIe solution?

    Assuming you don't buy into the SAS channel bundling story, or you need a drive today that has more bandwidth: why another proprietary solution? The SSD industry is working on NVMHCI, which will address the concern of proprietary PCIe card solutions and will allow PCIe-based cards to be addressed as storage devices (Intel-backed and evolved from AHCI).

    While OCZ's efforts are certainly to be applauded, especially given their aggressive roadmap plans, a more balanced article should include references to the above developments to put the OCZ solution into perspective. I'd love to see some follow-up articles on multi-port SAS and NVMHCI as a primer on how the SSD industry is addressing the technology limitations of today's SSDs. In addition, it might be interesting to talk about the recent SNIA performance spec (soon to include client-specific workloads) and JEDEC's endurance spec.

    ---

    I continue to enjoy your in-depth reporting on the SSD side.
  • don_k - Wednesday, September 29, 2010 - link

    "..or have a very fast, very unbootable RAID."

    Not quite true, actually. PCIe drives of this type, meaning drives that are essentially multiple SSDs attached to a RAID controller on a single board, show up on *nix OSes as individual drives as well as the array device itself.

    So under *nix you don't have to use the onboard RAID (which does not provide any performance benefit in this case; there is no battery-backed cache), and you can then create a single RAID0 across all the individual drives on all your PCIe cards, however many those are.
  • eva2000 - Thursday, September 30, 2010 - link

    Would love to see some more tests on a non-Windows platform, i.e. CentOS 5.5 64-bit with MySQL 5.1.50 database benchmarks - particularly on writes.
  • bradc - Thursday, September 30, 2010 - link

    With the PCI-X converter this is really limited to 1066MB/s, but minus overhead it's probably in the 850-900MB/s range, which is what we see in one of the tests: just above 800MB/s.

    While this has the effect of being a single SSD for cabling and so on, I really don't see the appeal. Why didn't they just use something like this: http://www.newegg.com/Product/Product.aspx?Item=N8...

    I hope the link works; if it doesn't, I linked to an LSI 8204ELP for $210. It has a single 4x SAS connector good for 1200MB/s. OCZ could link that straight into a 3.5" bay device with 4x SandForce SSDs on it. This would be about the same price as or cheaper than this IBIS device, while giving 1200MB/s, which is going to be about 40% faster than the IBIS.

    It would make MUCH MUCH more sense to me to simply have a 3.5" bay device with a 4x SAS port on it that can connect to just about any hard drive controller. The HSDL interface is completely unnecessary with the current PCI-X converter chip on the SSD.
  • sjprg2 - Friday, October 1, 2010 - link

    Why are we concerning ourselves with memory controllers instead of just going directly to DMA? Just because these devices are static RAM doesn't change the fact that they are memory. Just treat them as such. It's called KISS.
  • XLV - Friday, October 1, 2010 - link

    One clarification: these devices and controllers use multi-lane SAS internal ports and cables, but are electrically incompatible. What will happen if one by mistake attaches an IBIS to a SAS controller, or a SAS device to the HSDL controller? Do the devices get fried, or is there at least some keying on the connectors so such a mistake can be avoided?
  • juhatus - Sunday, October 3, 2010 - link

    "Even our upcoming server upgrade uses no less than fifty two SSDs across our entire network, and we’re small beans in the grand scheme of things."

    Why didn't you go for a real SAN? Something like an EMC CLARiiON?

    You just like peddling with disks, don't you? :)
  • randomloop - Tuesday, October 5, 2010 - link

    In the beginning, we based our aerial video and image recording system on OCZ SSD drives, chosen on the strength of their specs.

    They've all failed after several months of operation.

    Aerial systems endure quite a bit of jostling. Hence the desire to use SSDs.

    We had 5 of 5 OCZ 128GB SSDs fail during our tests.

    We now use other SSD drives.
