A year ago Lucid announced the Hydra 100: a physical chip that could enable hardware multi-GPU without any pesky SLI/Crossfire software, game profiles or anything like that.

At a high level, Lucid's technology intercepts OpenGL/DirectX commands on their way from the CPU to the GPU and load balances them across any number of GPUs. The final buffers are read back by the Lucid chip and sent to the primary GPU for display.
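
To make that flow a bit more concrete, here's a minimal sketch of how intercepted draw calls might be spread across two GPUs of unequal speed. This is purely illustrative: Lucid hasn't published its balancing algorithm, and every name in the snippet is hypothetical.

```python
# Conceptual sketch only -- Lucid hasn't disclosed its actual balancing
# algorithm, and every name below is hypothetical.
from collections import namedtuple

DrawCall = namedtuple("DrawCall", ["name", "estimated_cost"])

def split_frame(draw_calls, gpu_weights):
    """Assign intercepted draw calls to GPUs of unequal speed.

    gpu_weights maps a GPU name to a relative performance figure,
    e.g. {"GTX 285": 1.0, "9800 GTX": 0.6}. Returns a dict of
    {gpu: [draw calls assigned to it]}.
    """
    queues = {gpu: [] for gpu in gpu_weights}
    load = {gpu: 0.0 for gpu in gpu_weights}
    for call in draw_calls:
        # Hand each call to the GPU that would finish its queue soonest,
        # so the faster card naturally ends up with more of the work.
        target = min(gpu_weights, key=lambda g: load[g] / gpu_weights[g])
        queues[target].append(call)
        load[target] += call.estimated_cost
    return queues

# The chip then reads the partial frame buffers back and hands them to the
# primary GPU, which composites and displays the final frame.
if __name__ == "__main__":
    calls = [DrawCall(f"batch_{i}", c) for i, c in enumerate([5, 3, 8, 2, 6, 4])]
    for gpu, work in split_frame(calls, {"GTX 285": 1.0, "9800 GTX": 0.6}).items():
        print(gpu, [c.name for c in work])
```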

The technology sounds flawless. You don't need to worry about game profiles or driver support; you just add more GPUs and they should be perfectly load balanced. Even more impressive is Lucid's claim that you can mix and match GPUs of different performance levels. For example, you could put a GeForce GTX 285 and a GeForce 9800 GTX in parallel and the two would be perfectly load balanced by Lucid's hardware; you'd get a real speedup. Eventually, Lucid will also enable multi-GPU configurations from different vendors (e.g. one NVIDIA GPU + one AMD GPU).

At least on paper, Lucid's technology has the potential to completely eliminate all of the multi-GPU silliness we've been dealing with for the past several years. Today, Lucid is announcing the final set of hardware that will be shipping within the next ~30 days.


The MSI Big Bang, a P55 motherboard with Lucid's Hydra 200

It's called the Hydra 200 and it will first be featured on MSI's Big Bang P55 motherboard. Unlike the Hydra 100 we talked about last year, the Hydra 200 is built on a 65nm process instead of 130nm. The architecture has also been significantly improved, thanks to Lucid having much more experience with the chip.

There are three versions of the Hydra 200: a low-end part, the midrange LT22102 and the high-end LT22114. The only difference between the chips is the number of PCIe lanes. The lowest end chip has a x8 connection to the CPU/PCIe controller and two x8 connections to GPUs. The midrange LT22102 has a x16 connection to the CPU and two x16 connections for GPUs. And the highest end solution, the one being used on the MSI board, has a x16 to the CPU and then a configurable 32 lanes to the GPUs. You can operate this controller in 4 x8 mode, 1 x16 + 2 x8 or 2 x16. It's all auto-sensing and auto-configurable. The high end product will be launching in October, with the other two versions shipping into mainstream and potentially mobile systems sometime later.
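
To illustrate what that auto-configuration amounts to, here's a trivial, hypothetical sketch of dividing the high-end chip's 32 GPU-facing lanes based on how many cards are detected (the real auto-sensing logic is Lucid's and hasn't been disclosed).

```python
# Hypothetical illustration of splitting the high-end Hydra 200's 32
# GPU-facing lanes among detected cards; not Lucid's actual logic.
def gpu_lane_split(num_gpus):
    configs = {
        1: [16],            # a single card gets a full x16
        2: [16, 16],        # 2 x16
        3: [16, 8, 8],      # 1 x16 + 2 x8
        4: [8, 8, 8, 8],    # 4 x8
    }
    return configs[num_gpus]

print(gpu_lane_split(3))  # [16, 8, 8]
```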

Lucid wouldn't tell us the exact cost it adds to a motherboard, but it gave us guidance of around $1.50 per PCIe lane. The high end chip has 48 total PCIe lanes, which puts the premium at $72. The low end chip has 24 lanes, translating into a $36 cost for the Hydra 200 chip. Note that since the Hydra 200 has an integrated PCIe switch, there's no need for extra chips on the motherboard (and of course no SLI licensing fees). The first implementation of the Hydra 200 will be on MSI's high end P55 motherboard, so we can expect prices to be at the upper end of the spectrum. With enough support, we could see the technology fall into the upper mainstream segment.
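
For what it's worth, the arithmetic behind those figures is nothing more than lane count times Lucid's guidance, as the quick sketch below shows (the $1.50/lane number is guidance, not confirmed pricing).

```python
# Rough premium estimate from Lucid's ~$1.50-per-PCIe-lane guidance.
COST_PER_LANE = 1.50

for part, lanes in [("high-end Hydra 200 (MSI Big Bang)", 48), ("low-end Hydra 200", 24)]:
    print(f"{part}: {lanes} lanes -> ~${lanes * COST_PER_LANE:.0f}")
# high-end Hydra 200 (MSI Big Bang): 48 lanes -> ~$72
# low-end Hydra 200: 24 lanes -> ~$36
```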

Lucid specs the Hydra 200 at a 6W TDP.

Also unlike last year, we actually got real seat time with the Hydra 200 and MSI's Big Bang. Even better: we got to play on a GeForce GTX 260 + ATI Radeon HD 4890 running in multi-GPU mode.

Of course, with two different GPU vendors you need Windows 7, which allows both vendors' drivers to be loaded at the same time. Lucid's software runs in the background and lets you enable/disable multi-GPU mode.

If for any reason Lucid can't run a game in multi-GPU mode, it will always fall back to working on a single GPU without any interaction from the end user. Lucid claims to be able to accelerate all DX9 and DX10 games, although things like AA become easier in DX10 since all hardware should resolve the same way.


NVIDIA and ATI running in multi-GPU mode on a single system

There are a lot of questions about performance and compatibility, but honestly we can't say much on that until we get the hardware ourselves. We were given some time to play with the system and can say that it at least works.

Lucid only had two games installed on the cross-vendor GPU setup: Bioshock and FEAR 2. There are apparently more demos on the show floor; we'll try to bring more impressions from IDF later this week.

Comments

  • ainarssems - Monday, September 28, 2009 - link

    I do not think it will use memory like you want it to, because a lot of data needs to be preloaded locally on each card for fast access and high bandwidth. That would require the same set of data on both cards, like with Crossfire or SLI. Look at the HD 5870's memory bandwidth - 153 GB/s, or 159 GB/s for the GTX 285; the PCIe bus with its 16 GB/s of bandwidth does not come even close to being able to feed that data on demand, not even talking about the increased latency. I do not think they can predict which data will be needed on each card, and even if they could, that would mean a lot of loading and unloading of data on each card as circumstances change, which would require a lot of bandwidth as well.

    If you want to use all of the video memory effectively, that would require one card to be able to directly access data in the other card's memory, like in multi-CPU setups. That would mean a change in the video cards themselves. And even then I expect it to appear on X2 cards first, if ever, because with two physical cards that access would still need to be done over PCIe with low bandwidth and increased latency. On the X2 cards they could introduce another connection between GPUs, kind of like QPI, which would allow access to the other GPU's memory. This could later become part of the PCIe connection for interconnecting multiple cards.
    But honestly I think the answer is two GPU dies in the same package, kind of like the Q6600 is two E6600s in one package, to be able to use all of the memory effectively.

    All of this is just my somewhat educated guess; I am not a GPU engineer, so I could be wrong.
  • shin0bi272 - Thursday, October 1, 2009 - link

    That could be true, but we will have to wait and see I guess. It's not like they are releasing this info or anything LOL. On another site I saw them playing UT3, and one of the cards was plugged into one monitor while the other was plugged into a second monitor, and each of them was drawing a portion of the screen. I've not used SLI or Crossfire (too expensive for my blood) so I'm not sure if you can do that with the current SLI/Crossfire tech or not. If not, then it could be that the cards are working independently and sending the data through the Hydra and out the primary card to the display. But you could be right too, I have no clue... The biggest tell that it won't work at all like everyone is hoping is that they aren't waving their arms in the air saying "look at what we built! It doubles your FPS! Buy it buy it buy it!" you know?
  • shin0bi272 - Friday, September 25, 2009 - link

    Can we please please please see some proof of the framerate increase? I'm happy they are putting out actual silicon, but if it doesn't do what it's intended to do I don't wanna have to go out and get an entirely new motherboard to replace my Asus P6T Deluxe V2 just to find out. You've got a working one with an ATI and an NVIDIA card in it and Bioshock... show us the FPS count with it on and with it off, PLEEEEEASE???
  • mutarasector - Thursday, September 24, 2009 - link

    I'm curious as to just how PAP-enabled gfx cards will handle HD audio on Hydra-equipped systems. Will Hydra be transparent to this?
  • mindless1 - Thursday, September 24, 2009 - link

    ... but geeze, yet another thing to add an additional $75 onto the price of motherboards? It's getting pretty crazy, the gulf in price between a high-spec system and the box your average gamer uses (which has what, a 3 year old single GPU and an early C2D?).

    Personally I think more work on this front needs to be done in the drivers, not by dedicating another hunk of silicon and board real estate to the task. $75 seems a bit on the high side even then; I suppose we're paying for a large chunk of R&D at the outset.
  • Hiravaxis - Thursday, September 24, 2009 - link

    My biggest concern is that the technology is somehow dependent on the ability of the monitor to accept dual input.

    Now I've heard that SLI/x-fire synchronization can occur over the PCIe bus. Is this a possibility with this technology? If so, does it have any drawbacks, i.e. reduced bandwidth?
    What about the new QPI bus, if the chip were put on an X58 motherboard?
  • haplo602 - Thursday, September 24, 2009 - link

    The only credible piece of information is that MSI got involved. I don't think they'd buy into a hoax. So the technology does something.

    However I do not think linear scaling is possible, and there are some other limits I can think about.

    So basically, let's wait for the first detailed reviews before jumping to conclusions.
  • ilinithili - Thursday, September 24, 2009 - link

    I'd be very surprised if this does not do near enough what it says on the box. For a start, Intel was the main investor in this technology, and as mentioned previously MSI have now got involved too. Given that NVIDIA/AMD already have their own (but inefficient) multi-GPU methods, there'd be no place for this if it wasn't any better than SLI/Crossfire, so I really don't think they'd have bothered. 100% scaling may be a little optimistic, but around ~90% is probably more likely. Also, this tech was already demo'd working last year (anyone remember the photos of the two Unreal III scenes split across two monitors?) and this is now an improved version, so I do have fairly high hopes that it's going to work.
  • LevelUser - Thursday, September 24, 2009 - link

    Wouldn't this add massive input lag?
  • Mugur - Thursday, September 24, 2009 - link

    There are so many questions regarding how this thing will work in various situations...

    I think we'll see. But the tests had better be thorough :-).
