9 Comments

  • nutgirdle - Wednesday, February 26, 2014 - link

    Why couldn't it be used as an interconnect? If this is such a low-latency physical layer, why wouldn't an MPI-enabled stack be possible? I suppose MPI is tuned for ethernet and/or infiniband, and the overhead might be a bit high, but it still seems like a rather obvious application. I'd appreciate continued coverage of this particular technology.
  • xakor - Wednesday, February 26, 2014 - link

    I think that they are implying interconnection is an easy problem with respect to their technology, not that you can't do it.
  • antoanto - Wednesday, February 26, 2014 - link

    Absolutely yes, RONNIEE Express can be used as an interconnect, and we have a native MPI interface ready to use.
    The OSU 4.0 benchmark shows a point-to-point latency of 1 us.
    Some universities are using RONNIEE Express for specific applications, and soon we can provide those results too.
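    For context, the OSU latency benchmark antoanto cites reports one-way latency as half the average round-trip time of a small-message ping-pong. A minimal sketch of that methodology, using a Python multiprocessing pipe in place of MPI (my own illustration of the measurement technique, not a3cube's or OSU's code; all names are hypothetical):

    ```python
    # Sketch of an OSU-style ping-pong latency measurement.
    # A child process echoes every message back; one-way latency is
    # estimated as half the average round-trip time.
    import time
    from multiprocessing import Pipe, Process

    ITERATIONS = 10_000
    MESSAGE = b"x"  # 1-byte payload, like the smallest OSU message size

    def echo_server(conn):
        # Echo each message straight back to the sender.
        for _ in range(ITERATIONS):
            conn.send_bytes(conn.recv_bytes())
        conn.close()

    def measure_latency():
        parent, child = Pipe()
        server = Process(target=echo_server, args=(child,))
        server.start()
        start = time.perf_counter()
        for _ in range(ITERATIONS):
            parent.send_bytes(MESSAGE)
            parent.recv_bytes()
        elapsed = time.perf_counter() - start
        server.join()
        # One-way latency is half the average round-trip time.
        return elapsed / ITERATIONS / 2

    if __name__ == "__main__":
        print(f"one-way latency: {measure_latency() * 1e6:.1f} us")
    ```

    A pipe between local processes will of course show much lower numbers than any network; the point is only the half-round-trip methodology behind figures like the quoted 1 us.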
  • BMNify - Wednesday, February 26, 2014 - link

    antoanto, I realize that a3cube is there to make a profit, but it would be nice if you industrial vendors finally made the effort to provide affordable mass-market kits for the home/SOHO market, where masses of end consumers with an average of four machines on site have been desperate for something faster than antiquated 1GbE at a reasonable all-in price for more than a decade.
  • antoanto - Wednesday, February 26, 2014 - link

    BMNify, one of the reasons we started developing RONNIEE Express is to provide a disruptive solution at an affordable price, including for small installations (4 machines).
    One of our considerations was that small teams of researchers, engineers, and graphics professionals don't have enough money to buy faster solutions; RONNIEE may be the answer to that need.
  • Kevin G - Wednesday, February 26, 2014 - link

    I'm not understanding how they're able to exceed the speed of an 8x PCI-e 3.0 link. With a bit of network tunneling hardware to create a virtual TCP/IP interface and an IOMMU, network transfers between two nodes across the 8x PCI-e 3.0 link should operate at near DMA speeds. For generic IO, it doesn't get faster than that (well, other than adding more PCI-e lanes to the setup).

    I do have to give credit to a3cube for the true mesh topology. That could be the secret to how they're able to reach such performance claims. It would also come with all the negatives of a true mesh: extensive cabling and limited scalability as the node count increases.
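    Kevin G's two points can be put in numbers: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, and a true (full) mesh needs one cable per node pair. A quick back-of-the-envelope calculation (my own arithmetic, not a3cube's figures):

    ```python
    # Rough ceiling of a PCIe 3.0 link, and cable counts for a full mesh.

    def pcie3_bandwidth_gbs(lanes: int) -> float:
        """Usable PCIe 3.0 bandwidth in GB/s per direction:
        8 GT/s per lane, 128b/130b line encoding, 8 bits per byte."""
        return 8.0 * lanes * (128 / 130) / 8

    def full_mesh_links(nodes: int) -> int:
        """Cables needed for a true (full) mesh: one per node pair."""
        return nodes * (nodes - 1) // 2

    if __name__ == "__main__":
        print(f"PCIe 3.0 x8: {pcie3_bandwidth_gbs(8):.2f} GB/s per direction")
        for n in (4, 8, 16, 32):
            print(f"{n:2d} nodes -> {full_mesh_links(n)} cables")
    ```

    The quadratic cable growth (6 cables for 4 nodes, 496 for 32) is exactly the scalability concern raised above.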
  • antoanto - Wednesday, February 26, 2014 - link

    The article simplifies too much of what we explained.
    Please watch http://www.youtube.com/watch?v=YIGKks78Cq8
    And if you want more clarification, feel free to contact us (www.a3cube-inc.com).
  • gsvelto - Wednesday, February 26, 2014 - link

    The use of sockperf to compare this interconnect's performance with InfiniBand solutions is shoddy at best. sockperf uses regular IP sockets to perform its tests, which in an InfiniBand setup most likely run over the IPoIB layer. In my personal experience, using the native InfiniBand API (verbs) usually yields 2-4x higher effective bandwidth and 10x higher message throughput (which for small messages is mostly bottlenecked by the CPU used, not the interconnect).
  • antoanto - Wednesday, February 26, 2014 - link

    We know, but we want to compare TCP/UDP socket performance because we want to run completely unmodified applications at the maximum possible performance.
    If you want to use IB verbs you need a modified application, not a plain TCP/IP one, and of course you will get extraordinary performance; but if you don't have the application's source code, or the money and time to port it, you cannot do that.
    In any case, RONNIEE Express also has a powerful native API that gives you even more performance if you choose to port your application. What we show is our in-memory TCP/IP socket compared with the standard one, so sockperf and netperf (for UDP) make it very easy to show the difference.

    We don't want to compete with IB; we want to run standard socket-based, unmodified applications at the maximum possible speed, with no code changes.
