Let's Talk About Applications

Obviously it will accelerate games. What about GPGPU? That isn't Lucid's focus right now. They said they want to target the largest market for the part first, and gaming is certainly where that is. It is physically possible for the hardware and software to load-balance other kinds of work across the GPUs, but that isn't something Lucid is currently exploring or developing.

It will also accelerate games using multiple GPUs while outputting to multiple displays. Imagine 4 GPUs sharing the load over 3 monitors for a flight sim. Neither NVIDIA nor AMD can pull something like this off right now with their technology.

This chip can end up both on graphics cards and on motherboards, and the chips can be cascaded. There is a limit to how many levels you can cascade before you start introducing latency (though Lucid didn't define that limit), but one level deep is apparently reasonable. That means it seems possible (power requirements aside) to build a motherboard with four slots, each holding a card that pairs two GPUs (let's say GTX 280s) behind its own Hydra 100 chip.
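Tallying up the fan-out of such a cascade is simple arithmetic. A minimal sketch, with the caveat that the port counts here are illustrative assumptions, not published Lucid specifications:

```python
# Fan-out of a cascaded Hydra topology (port counts are illustrative
# assumptions, not published Lucid specs).

def total_gpus(ports_per_hydra, cascade_levels, gpus_per_leaf=1):
    """Each Hydra fans out to `ports_per_hydra` downstream links; after
    `cascade_levels` levels of cascading, each leaf link carries
    `gpus_per_leaf` GPUs."""
    return (ports_per_hydra ** cascade_levels) * gpus_per_leaf

# The scenario above: a motherboard Hydra feeding 4 slots, each card
# carrying its own Hydra that pairs 2 GPUs -- one level of cascading.
print(total_gpus(ports_per_hydra=4, cascade_levels=1, gpus_per_leaf=2))  # 8
```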

And if scaling really is linear, 8x GTX 280 would certainly deliver far more performance than we could possibly need for quite a while. We'd be CPU and system limited until the cows come home (or at least a good two or three hardware generations into the future). Either that, or developers would catch on that they could expose ridiculous features for the kind of super ultra mega (filthy rich) users who would pick up such a crazy solution.

Upgrading hardware would be stupidly simple. Forget PhysX or anything like that: leave your older card in the system, add the latest generation, and both will contribute to rendering frames in proportion to their performance (and since graphics is usually the largest bottleneck in the system, this will improve performance more than any other use of the old card). If we added a GTX 280 to a card with half its performance, we'd see a 50% performance improvement over a single GTX 280. Not bad at all. There would be less downside in buying a high end part because it could continue to serve you for much longer than usual. And low end parts would still contribute as well (with a proportionally smaller gain, but a gain nonetheless).
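The arithmetic behind that 50% figure is just proportional load balancing: under Lucid's ideal-scaling claim, the combined throughput is simply the sum of the parts. A quick sketch, assuming perfect balancing (which is the claim, not a measured result):

```python
# Ideal proportional load balancing: combined throughput is the sum of
# each GPU's relative throughput (Lucid's linear-scaling claim).

def combined_throughput(gpu_throughputs):
    return sum(gpu_throughputs)

gtx280 = 1.0     # normalize a GTX 280 to 1.0
half_card = 0.5  # an older card with half the performance

total = combined_throughput([gtx280, half_card])
print(total)                       # 1.5
print((total - gtx280) / gtx280)   # 0.5 -> 50% over a single GTX 280
```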

Lucid also makes what seems like a ridiculous claim: in some cases they could see higher than linear scaling. The reasoning is that their hardware offloads work from the CPU, so overall system performance can rise by more than the added GPU power alone would suggest. We have our doubts, and hearing such claims makes us nervous. Lucid did state that this would be the exception rather than the norm; if it happens at all it would have to be. Even so, it still seems too far out there for us to buy.
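For what it's worth, there is a toy model under which the claim isn't mathematically absurd: if a frame is ready only when both the CPU and GPU work for it are done, and the Hydra also shaves CPU-side work, the measured speedup can exceed the GPU count. The numbers below are invented purely to illustrate the mechanism, not drawn from Lucid:

```python
# Toy frame-time model: a frame completes when both the CPU side and the
# GPU side finish, so frame_time = max(cpu, gpu). All numbers invented.

def frame_time(cpu_ms, gpu_ms, n_gpus=1, cpu_offload_ms=0.0):
    return max(cpu_ms - cpu_offload_ms, gpu_ms / n_gpus)

base = frame_time(cpu_ms=10.0, gpu_ms=9.0)                  # 10.0 ms, CPU-bound
dual = frame_time(10.0, 9.0, n_gpus=2, cpu_offload_ms=6.0)  # max(4.0, 4.5) = 4.5 ms

print(base / dual)  # ~2.22x from 2 GPUs: "super-linear", but only because
                    # the CPU bottleneck moved, not because the GPUs did more.
```

Note that the trick only works when the system was CPU-bound to begin with, which fits Lucid's own framing of it as the exception rather than the norm.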

Aside from utterly invalidating SLI and CrossFire, this thing opens up a whole realm of possibilities. If Intel adopts it for their high end motherboards, they would have the ultimate solution for gaming. Period. And if it's left up to board vendors, the chipset will matter less to multi-GPU performance than whether or not the board includes a Lucid Hydra 100.

But can they really do it? And how do they even attempt to do it? They've told us a little bit, and we'll brainstorm a bit and see what we can come up with.


  • jeff4321 - Sunday, August 24, 2008 - link

    If you think that NVIDIA and AMD have been stagnant, you haven't seen the graphics industry change. The basic graphics pipeline hasn't changed. It simply got smaller. A current NVIDIA or ATI GPU probably has as much computation power as an SGI workstation from the 90's. GPGPU is a natural extension of graphics hardware. Once the graphics hardware becomes powerful enough, it starts to resemble a general purpose machine, so you build it that way. It's possible because the design space for the GPU can do more (Moore's Law).

    Since it's early in the deployment of using a GPU as an application-defined co-processor, I would expect there to be competing APIs. Believe it or not, in the late eighties, x87 wasn't the only floating point processor available for x86's. Intel's 387 was slower than Weitek's floating point unit. Weitek lost because the next generation CPUs at the time started integrating floating point. Who will win? The team that has better development tools or the team that exclusively runs the next killer app.

    Dynamically changing between AFR and splitting the scene is hard to do. I'm sure that ATI and NVIDIA have experimented w/ this in-house and they are either doing it now, or they have decided that it kills performance because of the overhead to change it on the fly. How Lucid can do better than the designers of the device drivers and ASICs, I don't know.

    Lucid Hydra is not competition for either NVIDIA or ATI. The Lucid Hydra chip is a mechanism for the principals of the company to get rich when Intel buys them to get access to Multi-GPU software for Larrabee. It'll be a good deal for the principals, but probably a bad deal for Intel.

    Licensing Crossfire and SLI is a business decision. Both technologies cost a bundle to develop. Both companies want to maximize return.
  • AnnonymousCoward - Saturday, August 23, 2008 - link

    I'm afraid this solution will cause unacceptable lag. If the lag isn't inherent, maybe the solution will require a minimum "max frames to render ahead / Prerender limit". I don't buy their "negligible" BS answer.

    Does SLI require a minimum? I got the impression it does, from what I've read in the past. I don't have SLI, and use RivaTuner to set mine to "1".
  • Aethelwolf - Saturday, August 23, 2008 - link

    Let's pretend, if only for a moment, that I was a GPU company interested in giving a certain other GPU company a black eye. And let's say I have this strategy where I design for the middle range and then scale up and down. I would be seriously haggling with lucid right now to become a partner, with them supplying me, and pretty much only me, besides intel, with their hydra engine.
  • DerekWilson - Saturday, August 23, 2008 - link

    that'd be cool, but lucid will sell more parts if they work with everyone.

    they're interested in making lots of money ... maybe amd and intel could do that for them, but i think the long term solution is to support as much as possible.
  • Sublym3 - Saturday, August 23, 2008 - link

    Correct me if i am wrong, but isn’t this technology still dependent on building the hardware specifically for each DirectX version?

    So when a new DirectX or OpenGL version comes out, not only will we have to update our video cards but also our motherboard at the same time?

    Not to mention this will probably jack up the price on already expensive motherboards.

    Seems like a step backwards to me...
  • DerekWilson - Saturday, August 23, 2008 - link

    you are both right and wrong --

    yes, they need to update the technology for each new directx and opengl release.

    BUT

    they don't need to update the hardware at all. the hardware is just a smart switch with a compositor.

    to support a new directx or opengl version, you would only need to update the driver / software for the hydra 100 ...

    just like a regular video card.
  • magao - Saturday, August 23, 2008 - link

    There seems to be a strong correlation between Intel's claims about Larrabee, and Lucid's claims about Hydra.

    This is pure speculation, but I wouldn't be surprised if Hydra is the behind-the-scenes technology that makes Larrabee work.
  • Aethelwolf - Saturday, August 23, 2008 - link

    I think this is the case. Hydra and Larrabee appear to be made for each other. I won't be surprised if they end up mating.

    From a programmers view, Larrabee is very, very exciting tech. If it fails in the PC space, it might be resurrected when next-gen consoles come along, since it is fully programmable and claims linear performance (thanks to hydra?).
  • DerekWilson - Saturday, August 23, 2008 - link

    i'm sure intel will love hydra for allowing their platforms to support linear scaling with multigpu solutions.

    but larrabee won't have anything near the same scaling issues that nvidia and amd have in scaling to multi-gpu -- larrabee may not even need this to get near linear scaling in multi-gpu situations.

    essentially they just need to build an smp system and it will work -- shared mem and all ...

    their driver would need to optimize differently, but that would be about it.
  • GmTrix - Saturday, August 23, 2008 - link

    If larrabee doesn't need hydra to get near linear scaling isn't hydra just providing a way for amd and nvidia to compete with it?
