A year ago Lucid announced the Hydra 100: a physical chip that could enable hardware multi-GPU without any pesky SLI/Crossfire software, game profiles or anything like that.

At a high level, Lucid's technology intercepts OpenGL/DirectX commands on their way from the CPU to the GPU and load balances them across any number of GPUs. The final buffers are read back by the Lucid chip and sent to the primary GPU for display.
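As a rough mental model (all names here are hypothetical; Lucid hasn't published how its balancer actually works), you can think of the chip as routing each intercepted draw call to whichever GPU can finish it soonest, which naturally sends more work to a faster card:

```python
# Toy model of Lucid-style load balancing: draw calls are intercepted
# before they reach a GPU and routed to the least-loaded device.
# Everything here is illustrative; the real balancing happens in
# Lucid's driver and the Hydra 200 silicon.

class GPU:
    def __init__(self, name, relative_speed):
        self.name = name
        self.relative_speed = relative_speed  # e.g. a GTX 285 vs. a 9800 GTX
        self.queued_work = 0.0

    def estimated_time(self, cost):
        # Time this GPU would need to drain its queue plus the new call
        return (self.queued_work + cost) / self.relative_speed

def dispatch(gpus, draw_call_cost):
    # Route the draw call to whichever GPU would finish it soonest
    target = min(gpus, key=lambda g: g.estimated_time(draw_call_cost))
    target.queued_work += draw_call_cost
    return target

gpus = [GPU("GTX 285", relative_speed=2.0), GPU("9800 GTX", relative_speed=1.0)]
for cost in [1.0] * 9:  # nine equally expensive draw calls
    dispatch(gpus, cost)

# The card modeled as twice as fast ends up with twice the work
print(gpus[0].queued_work, gpus[1].queued_work)
```

With mismatched cards, this kind of cost-aware routing is what would let a GTX 285 + 9800 GTX pairing see a real speedup instead of being dragged down to the slower card's pace.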

The technology sounds flawless. You don't need to worry about game profiles or driver support; you just add more GPUs and they should be perfectly load balanced. Even more impressive is Lucid's claim that you can mix and match GPUs of different performance levels. For example, you could put a GeForce GTX 285 and a GeForce 9800 GTX in parallel and the two would be perfectly load balanced by Lucid's hardware; you'd get a real speedup. Eventually, Lucid will also enable multi-GPU configurations from different vendors (e.g. one NVIDIA GPU + one AMD GPU).

At least on paper, Lucid's technology has the potential to completely eliminate all of the multi-GPU silliness we've been dealing with for the past several years. Today, Lucid is announcing the final set of hardware that will be shipping within the next ~30 days.


The MSI Big Bang, a P55 motherboard with Lucid's Hydra 200

It's called the Hydra 200 and it will first be featured on MSI's Big Bang P55 motherboard. Unlike the Hydra 100 we talked about last year, the Hydra 200 is built on a 65nm process node instead of 130nm, and the architecture is vastly improved thanks to much more experience with the chip on Lucid's part.

There are three versions of the Hydra 200: the LT22114, the LT22102 and the LT24102. The only difference between the chips is the number of PCIe lanes. The lowest end LT22114 has a x8 connection to the CPU/PCIe controller and two x8 connections to GPUs. The midrange LT22102 has a x16 connection to the CPU and two x16 connections for GPUs. And the highest end LT24102, the one being used on the MSI board, has a x16 to the CPU and then a configurable 32 lanes to GPUs. You can operate this controller in 4 x8 mode, 1 x16 + 2 x8, or 2 x16; it's all auto-sensing and auto-configuring. The high end product will be launching in October, with the other two versions shipping into mainstream and potentially mobile systems some time later.
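The high-end part's lane budget can be sanity-checked with a quick sketch (the configuration labels come from the article; the validation logic is purely illustrative, since the actual chip auto-senses its configuration in hardware):

```python
# The high-end Hydra 200 pairs a x16 uplink to the CPU with 32
# downstream lanes that can be split among GPUs. Every advertised
# mode should account for exactly those 32 downstream lanes.
# This check is illustrative only.

DOWNSTREAM_LANES = 32

configs = {
    "4 x8":         [8, 8, 8, 8],
    "1 x16 + 2 x8": [16, 8, 8],
    "2 x16":        [16, 16],
}

for name, split in configs.items():
    assert sum(split) == DOWNSTREAM_LANES
    print(f"{name}: {len(split)} GPUs at {split} lanes")
```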

Lucid wouldn't tell us the exact cost added to a motherboard, but gave us guidance of around $1.50 per PCIe lane. The high end chip has 48 total PCIe lanes, which puts the premium at $72. The low end chip has 24 lanes, translating into a $36 cost for the Hydra 200 chip. Note that since the Hydra 200 has an integrated PCIe switch, there's no need for extra chips on the motherboard (and of course no SLI licensing fees). The first implementation of the Hydra 200 will be on MSI's high end P55 motherboard, so we can expect prices to be at the upper end of the spectrum. With enough support, we could see that fall into the upper mainstream segment.
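The arithmetic behind those estimates, using Lucid's ~$1.50-per-lane guidance:

```python
# Ballpark BOM cost of the Hydra 200, per Lucid's ~$1.50/lane guidance
COST_PER_LANE = 1.50

high_end_lanes = 48  # x16 to the CPU + 32 downstream lanes
low_end_lanes = 24   # x8 to the CPU + two x8 downstream links

print(f"High end: ${high_end_lanes * COST_PER_LANE:.2f}")  # $72.00
print(f"Low end:  ${low_end_lanes * COST_PER_LANE:.2f}")   # $36.00
```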

Lucid specs the Hydra 200 at a 6W TDP.

Also unlike last year, we actually got real seat time with the Hydra 200 and MSI's Big Bang. Even better: we got to play on a GeForce GTX 260 + ATI Radeon HD 4890 running in multi-GPU mode.

Of course with two different GPU vendors, we need Windows 7 to allow both drivers to work at the same time. Lucid's software runs in the background and lets you enable/disable multi-GPU mode:

If for any reason Lucid can't run a game in multi-GPU mode, it will always fall back to working on a single GPU without any interaction from the end user. Lucid claims to be able to accelerate all DX9 and DX10 games, although things like AA become easier in DX10 since all hardware should resolve the same way.
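That fallback behavior can be sketched like this (the function names are hypothetical; the real logic lives in Lucid's driver, which decides per-game whether the multi-GPU path is safe):

```python
# Sketch of the described fallback: if the multi-GPU path can't handle
# a game, rendering silently continues on a single GPU with no user
# interaction. All names here are illustrative.

def render_multi(frame, gpus):
    # Placeholder multi-GPU path; raises when the workload is unsupported
    if frame.get("unsupported"):
        raise RuntimeError("multi-GPU path failed")
    return f"frame {frame['id']} on {len(gpus)} GPUs"

def render_single(frame, gpu):
    return f"frame {frame['id']} on {gpu}"

def render_frame(frame, gpus):
    if len(gpus) > 1:
        try:
            return render_multi(frame, gpus)
        except RuntimeError:
            return render_single(frame, gpus[0])  # silent fallback
    return render_single(frame, gpus[0])

print(render_frame({"id": 1}, ["GTX 260", "HD 4890"]))
print(render_frame({"id": 2, "unsupported": True}, ["GTX 260", "HD 4890"]))
```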


NVIDIA and ATI running in multi-GPU mode on a single system

There are a lot of questions about performance and compatibility, but honestly we can't say much on that until we get the hardware ourselves. We were given some time to play with the system and can say that it at least works.

Lucid only had two games installed on the cross-vendor GPU setup: Bioshock and FEAR 2. There are apparently more demos on the show floor; we'll try to bring more impressions from IDF later this week.


94 Comments


  • ValiumMm - Thursday, September 24, 2009 - link

    But doesnt ATI and Nvidia use different colours? Like two greens arent exactly the same colour, we see it as the same, but its technically two diff colours, how would this work with both? and does it mean if an older card only supports dx9 and a new dx11 card will be only doing dx9 to work with the older card ?
  • bh192012 - Wednesday, September 23, 2009 - link

    72$ will almost buy you a ATI 4850

    using your old X850 with a 4850 = no DX10 and no shader 3.0 (no bioshock?) Plus the proportion of extra electricity v.s. extra FPS = meh

    Throwing a pair of 5870's for perfect scaling might still be a no go, as I'd think because you're limited to 16x to the cpu, it's still effectively 8x by 8x. Plus on an X58 mobo, you actually get 16x by 16x. I'd hope the "high end model" would actually by 32x to the cpu. Maybe I'm wrong, benchies will be interesting.
  • hamunaptra - Wednesday, September 23, 2009 - link

    I may consider this, when prices come down and if / when it becomes mainstream technology for multi gpu! It sounds amazing so far! the only thing I dont want it stuttering / microstuttering that SLI / Xfire has!
  • araczynski - Wednesday, September 23, 2009 - link

    if this doesn't bring the AMD/Nvidia lawyers out I don't know what will. Cuz you know, they have to protect the consumer and all that...
  • scooterlibby - Wednesday, September 23, 2009 - link

    Still confused as to why there are two monitors plugged in but only one shown. Does Hydra have some sort of proprietary bridge to connect the two cards?
  • MrRuckus - Wednesday, September 23, 2009 - link

    Thats my question. If you use SLI or have ever browsed slizone forums, you'd notice SLI can be used without a bridge, but the downside is that it runs over the PCIe bus and can run slow. On cards like the 8600GT and such, they sometimes recommend or need to run without a bridge and it runs fine, but on quicker cards, like a 9800GTX or 200 series, a bridge is required because its too much to push over the PCIe bus without a large performance hit.

    I'll just be curious to see when more information comes out how it actually works. Also using only 2 somewhat older games to demonstrate it is questionable too. It seems like there's going to be A LOT of different variables and MANY different configurations that people could use. I can see this being hit and miss depending on what hardware you use and what they support.
  • IcePickFreak - Wednesday, September 23, 2009 - link

    It's at least something to be excited about, but of course I think everyone has their doubts. It's been quite a long time since anything big came across the PC scene.

    What gives me a good feeling about this is they haven't been hyping the hell out of it - a company of few words and a lot of action maybe?

    At any rate, whether this thing flops or not, at least for now it's nice to see something exciting in the immediate future. If it does indeed work as they claim it'll be a major milestone in PC gaming hardware.
  • werfu - Wednesday, September 23, 2009 - link

    If it scales linearly then it would beat NVidia and ATI's own multi-GPU solutions. This claim is IMHO way too ambitious. ATI and NVidia haven't been able to do so without working under the constraints that the Hydra gets.
  • JonnyDough - Wednesday, September 23, 2009 - link

    You not want to run it in 4x8 mode? I mean, I could take a few of my old cards and put them in ONE PC. I guess we'll finally find a use for these ridiculous PSUs. :)
  • JonnyDough - Wednesday, September 23, 2009 - link

    I love Lucy.

    Oops, Lucid. :)
