So, lots of people were asking really good questions about Lucid and their Hydra engine after we posted the initial story on it. We had the opportunity to sit down with them and ask some of those questions, along with a few of our own. They had quite a lot of interesting things to say.

From a mile-high view, and with hardware promised not this year but next, it is tough to get a good understanding of exactly what's going on and what the implications of this hardware could be if Lucid can deliver on its claims. We'll do our best to explain what we know and also where the pitfalls could be.

First, let's address the issue of the box we showed off in the previous coverage. No, it will not need an external device. Lucid has designed this to be a solution that can be dropped onto a motherboard or a graphics card, so integration and the user experience should be seamless.

This would be even more transparent than SLI and CrossFire, because not even an internal bridge would be needed. Just plug in any two cards from the same vendor (they may also need to use the same driver version, though this is less than clear), and performance will scale linearly with the capabilities of each card.

They did mention that they can implement the solution in an external box for notebooks. For those who need something portable but want high-end graphics at home, they could just attach graphics cards linked by a Hydra 100 (via PCIe over cables) to the notebook. Not ideal, but it still offers some advantages over high-end internal graphics (especially in the area of heat) that you might not need when you're on the road.

Sound too good to be true? Yes. Did we see it working? Sure. Do we have performance numbers? Not yet. So there's the rub for us. We really want to put this thing through its paces before we sign off on it. Running on both UT3 and Crysis (DX9 only for now -- DX10 before the product ships though) is cool, but claiming application agnostic linear scaling over an arbitrary number of GPUs of differing capability is a tough pill to swallow without independent confirmation.
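To put the linear-scaling claim in concrete terms, here is a rough sketch of our own (the function names and capability numbers are made up for illustration; this is not Lucid's algorithm) of what capability-weighted load splitting across unequal GPUs would look like if it worked perfectly:

```python
# Illustrative only: the arithmetic behind "linear scaling over GPUs of
# differing capability." Capabilities are relative speeds (1.0 = baseline).

def split_workload(capabilities):
    """Divide one frame's work in proportion to each GPU's relative speed."""
    total = sum(capabilities)
    return [c / total for c in capabilities]

def combined_fps(capabilities, baseline_fps):
    """Throughput under perfect scaling, where baseline_fps is what a
    capability-1.0 card achieves on its own."""
    return baseline_fps * sum(capabilities)

# Example: one card rated 1.0 paired with a slower card rated 0.6
shares = split_workload([1.0, 0.6])   # roughly a 0.625 / 0.375 split
fps = combined_fps([1.0, 0.6], 60)    # roughly 96 fps if scaling really is linear
```

Real-world results will fall short of this ideal whenever the work can't be divided cleanly or compositing adds overhead, which is exactly why we want to benchmark it ourselves.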

We asked them for hardware, and we really hope they'll get us some sooner rather than later; they seemed interested in letting us test it as well. Even if we can't publish numbers, running our own benchmarks just to see for ourselves would go a long way toward making us more excited about the product.

  • jeff4321 - Sunday, August 24, 2008 - link

    If you think that NVIDIA and AMD have been stagnant, you haven't seen the graphics industry change. The basic graphics pipeline hasn't changed; it simply got smaller. A current NVIDIA or ATI GPU probably has as much computational power as an SGI workstation from the '90s. GPGPU is a natural extension of graphics hardware: once the graphics hardware becomes powerful enough, it starts to resemble a general-purpose machine, so you build it that way. It's possible because Moore's Law keeps expanding the GPU's design space.

    Since it's early in the deployment of using a GPU as an application-defined co-processor, I would expect there to be competing APIs. Believe it or not, in the late eighties, x87 wasn't the only floating point processor available for x86's. Intel's 387 was slower than Weitek's floating point unit. Weitek lost because the next generation CPUs at the time started integrating floating point. Who will win? The team that has better development tools or the team that exclusively runs the next killer app.

    Dynamically changing between AFR and splitting the scene is hard to do. I'm sure that ATI and NVIDIA have experimented with this in-house, and they are either doing it now or they have decided that it kills performance because of the overhead to change it on the fly. How Lucid can do better than the designers of the device drivers and ASICs, I don't know.

    Lucid Hydra is not competition for either NVIDIA or ATI. The Lucid Hydra chip is a mechanism for the principals of the company to get rich when Intel buys them to get access to Multi-GPU software for Larrabee. It'll be a good deal for the principals, but probably a bad deal for Intel.

    Licensing Crossfire and SLI is a business decision. Both technologies cost a bundle to develop. Both companies want to maximize return.
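The difficulty of switching between AFR and split-frame rendering on the fly, raised above, can be sketched with a toy heuristic. This is entirely hypothetical (the thresholds, inputs, and function are ours, not anything Lucid, NVIDIA, or ATI has described):

```python
# Hypothetical per-frame mode chooser, to illustrate why dynamic switching
# is hard rather than to describe any shipping implementation.

def choose_mode(frame_deps, gpu_load_imbalance):
    """Pick a multi-GPU mode for the next frame.

    frame_deps: fraction of this frame's inputs reused from the previous
                frame (inter-frame dependencies stall AFR).
    gpu_load_imbalance: how unevenly a screen split would load the GPUs.
    """
    if frame_deps > 0.5:
        return "split-frame"   # AFR would wait on last frame's results
    if gpu_load_imbalance > 0.3:
        return "AFR"           # a naive screen split would leave one GPU idle
    return "split-frame"

# The hard part the comment alludes to: actually switching modes mid-stream
# means redistributing textures and render targets between GPUs, and that
# copy overhead can eat the gains.
```

Measuring `frame_deps` and `gpu_load_imbalance` accurately, in real time, for an arbitrary game is itself the unsolved problem.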
  • AnnonymousCoward - Saturday, August 23, 2008 - link

    I'm afraid this solution will cause unacceptable lag. If the lag isn't inherent, maybe the solution will require a minimum "max frames to render ahead / Prerender limit". I don't buy their "negligible" BS answer.

    Does SLI require a minimum? From what I've read in the past, I got the impression it does. I don't have SLI, and I use RivaTuner to set mine to "1".
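For a rough sense of the scale of the lag in question: each frame the driver queues ahead of display adds about one frame-time of input latency. A quick back-of-the-envelope calculation (our illustration, not a measurement of Hydra or SLI):

```python
# Approximate extra input lag introduced by a render-ahead queue.

def added_latency_ms(prerender_limit, fps):
    """Each queued frame adds roughly one frame-time of input latency."""
    frame_time_ms = 1000.0 / fps
    return prerender_limit * frame_time_ms

# At 60 fps, a prerender limit of 3 adds roughly 50 ms of input lag,
# while a limit of 1 adds roughly 17 ms.
lag = added_latency_ms(3, 60)
```

Whether Hydra's inter-GPU scheduling needs a deeper queue than a single card is exactly the kind of thing only independent testing can settle.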
  • Aethelwolf - Saturday, August 23, 2008 - link

    Let's pretend, if only for a moment, that I was a GPU company interested in giving a certain other GPU company a black eye. And let's say I have this strategy where I design for the middle range and then scale up and down. I would be seriously haggling with Lucid right now to get them to supply me, and pretty much only me (besides Intel), with their Hydra engine.
  • DerekWilson - Saturday, August 23, 2008 - link

    that'd be cool, but lucid will sell more parts if they work with everyone.

    they're interested in making lots of money ... maybe amd and intel could do that for them, but i think the long term solution is to support as much as possible.
  • Sublym3 - Saturday, August 23, 2008 - link

    Correct me if I am wrong, but isn't this technology still dependent on making the hardware specifically for each DirectX version?

    So when a new DirectX or OpenGL version comes out, not only will we have to update our video cards but also our motherboards at the same time?

    Not to mention this will probably jack up the price on already expensive motherboards.

    Seems like a step backwards to me...
  • DerekWilson - Saturday, August 23, 2008 - link

    you are both right and wrong --

    yes, they need to update the technology for each new directx and opengl release.

    BUT

    they don't need to update the hardware at all. the hardware is just a smart switch with a compositor.

    to support a new directx or opengl version, you would only need to update the driver / software for the hydra 100 ...

    just like a regular video card.
  • magao - Saturday, August 23, 2008 - link

    There seems to be a strong correlation between Intel's claims about Larrabee, and Lucid's claims about Hydra.

    This is pure speculation, but I wouldn't be surprised if Hydra is the behind-the-scenes technology that makes Larrabee work.
  • Aethelwolf - Saturday, August 23, 2008 - link

    I think this is the case. Hydra and Larrabee appear to be made for each other. I won't be surprised if they end up mating.

    From a programmer's view, Larrabee is very, very exciting tech. If it fails in the PC space, it might be resurrected when next-gen consoles come along, since it is fully programmable and claims linear performance (thanks to hydra?).
  • DerekWilson - Saturday, August 23, 2008 - link

    i'm sure intel will love hydra for allowing their platforms to support linear scaling with multigpu solutions.

    but larrabee won't have anything near the same scaling issues that nvidia and amd have in scaling to multi-gpu -- larrabee may not even need this to get near linear scaling in a multi-gpu situation.

    essentially they just need to build an smp system and it will work -- shared mem and all ...

    their driver would need to optimize differently, but that would be about it.
  • GmTrix - Saturday, August 23, 2008 - link

    If larrabee doesn't need hydra to get near linear scaling isn't hydra just providing a way for amd and nvidia to compete with it?
