A year ago Lucid announced the Hydra 100: a physical chip that could enable hardware multi-GPU without any pesky SLI/Crossfire software, game profiles or anything like that.

At a high level, what Lucid's technology does is intercept OpenGL/DirectX commands from the CPU to the GPU and load balance them across any number of GPUs. The final buffers are read back by the Lucid chip and sent to the primary GPU for display.
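Lucid hasn't disclosed how its balancer actually works, but the basic idea of farming out rendering work to whichever GPU will finish first can be sketched in a few lines (all names and the scheduling policy below are our own hypothetical illustration, not Lucid's algorithm):

```python
# Conceptual sketch of Hydra-style load balancing across unequal GPUs.
# This is NOT Lucid's actual algorithm; the greedy policy and all names
# here are hypothetical illustrations of the idea.

class GPU:
    def __init__(self, name, relative_speed):
        self.name = name
        self.relative_speed = relative_speed  # e.g. a GTX 285 vs. a 9800 GTX
        self.queued = 0.0  # pending work, in arbitrary cost units

    def time_to_finish(self):
        return self.queued / self.relative_speed

def dispatch(tasks, gpus):
    """Assign each rendering task to whichever GPU would finish it first."""
    assignments = {gpu.name: [] for gpu in gpus}
    for cost in tasks:
        target = min(gpus, key=lambda g: g.time_to_finish())
        target.queued += cost
        assignments[target.name].append(cost)
    return assignments

gpus = [GPU("GTX 285", relative_speed=2.0), GPU("9800 GTX", relative_speed=1.0)]
work = [1.0] * 30  # thirty equally expensive draw batches
result = dispatch(work, gpus)
# The twice-as-fast card ends up with roughly twice the batches,
# which is what "perfectly load balanced" would have to mean here.
```

Under this toy model the GTX 285 receives 20 of the 30 batches and the 9800 GTX the other 10, so both finish at the same time; the real chip additionally has to read the finished buffers back and composite them.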

The technology sounds flawless. You don't need to worry about game profiles or driver support, you just add more GPUs and they should be perfectly load balanced. Even more impressive is Lucid's claim that you can mix and match GPUs of different performance levels. For example you could put a GeForce GTX 285 and a GeForce 9800 GTX in parallel and the two would be perfectly load balanced by Lucid's hardware; you'd get a real speedup. Eventually, Lucid will also enable multi-GPU configurations from different vendors (e.g. one NVIDIA GPU + one AMD GPU).

At least on paper, Lucid's technology has the potential to completely eliminate all of the multi-GPU silliness we've been dealing with for the past several years. Today, Lucid is announcing the final set of hardware that will be shipping within the next ~30 days.

The MSI Big Bang, a P55 motherboard with Lucid's Hydra 200

It's called the Hydra 200 and it will first be featured on MSI's Big Bang P55 motherboard. Unlike the Hydra 100 we talked about last year, the Hydra 200 is built on a 65nm process instead of 130nm, and the architecture is much improved thanks to Lucid's added experience with the chip.

There are three versions of the Hydra 200: the LT22114, the LT22102 and the LT24102. The only difference between the chips is the number of PCIe lanes. The lowest end chip has a x8 connection to the CPU/PCIe controller and two x8 connections to GPUs. The midrange LT22102 has a x16 connection to the CPU and two x16 connections for GPUs. And the highest end solution, the LT24102 being used on the MSI board, has a x16 to the CPU and then a configurable pair of x16s to GPUs. You can operate this controller in 4 x8 mode, 1 x16 + 2 x8 or 2 x16; it's all auto-sensing and auto-configurable. The high end product will launch in October, with the other two versions shipping into mainstream and potentially mobile systems some time later.
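A quick sanity check on the high-end part's lane math (our own arithmetic, not anything from Lucid's documentation): every one of its downstream modes spends the same 32-lane GPU budget, and adding the x16 uplink gives the 48-lane total used in the pricing below.

```python
# Downstream PCIe lane budget of the high-end Hydra 200: 32 GPU-facing
# lanes that can be split three ways. Our own arithmetic, for illustration.
configs = {
    "4 x8": [8, 8, 8, 8],
    "1 x16 + 2 x8": [16, 8, 8],
    "2 x16": [16, 16],
}
for name, lanes in configs.items():
    assert sum(lanes) == 32, name  # every mode spends the same budget

# Add the x16 uplink to the CPU and you get the chip's total lane count.
total_lanes = 32 + 16  # 48 lanes
```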

Lucid wouldn't tell us how much the chip adds to a motherboard's cost, but it gave us guidance of around $1.50 per PCIe lane. The high end chip has 48 total PCIe lanes, which puts the premium at $72. The low end chip has 24 lanes, translating into a $36 cost for the Hydra 200 chip. Note that since the Hydra 200 has an integrated PCIe switch, there's no need for extra chips on the motherboard (and of course no SLI licensing fees). The first implementation of the Hydra 200 will be on MSI's high end P55 motherboard, so we can expect prices to be at the upper end of the spectrum. With enough support, we could see that fall into the upper mainstream segment.
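Lucid's guidance reduces to simple per-lane arithmetic:

```python
# Lucid's pricing guidance: roughly $1.50 per PCIe lane of the Hydra 200.
price_per_lane = 1.50

high_end_lanes = 48  # x16 uplink + 32 GPU-facing lanes
low_end_lanes = 24   # x8 uplink + two x8 GPU connections

high_end_premium = price_per_lane * high_end_lanes  # $72
low_end_premium = price_per_lane * low_end_lanes    # $36
```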

Lucid specs the Hydra 200 at a 6W TDP.

Also unlike last year, we actually got real seat time with the Hydra 200 and MSI's Big Bang. Even better: we got to play on a GeForce GTX 260 + ATI Radeon HD 4890 running in multi-GPU mode.

Of course with two different GPU vendors, we need Windows 7 to allow both drivers to work at the same time. Lucid's software runs in the background and lets you enable/disable multi-GPU mode:

If for any reason Lucid can't run a game in multi-GPU mode, it will always fall back to working on a single GPU without any interaction from the end user. Lucid claims to be able to accelerate all DX9 and DX10 games, although things like AA become easier in DX10 since all hardware should resolve the same way.
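The fallback behavior Lucid describes amounts to a transparent try-then-degrade path; a minimal sketch of that contract, with all function names being our own stand-ins rather than anything in Lucid's software:

```python
# Hypothetical sketch of Lucid's transparent fallback: if a game can't be
# distributed across GPUs, render everything on the primary GPU with no
# user intervention. All names here are our own illustrations.

class MultiGPUError(Exception):
    """Raised when a frame can't be split across GPUs."""

def render_multi_gpu(draw_calls, gpus):
    if len(gpus) < 2:
        raise MultiGPUError("need at least two GPUs")
    # ...split draw_calls across gpus, read back, composite...
    return f"{len(draw_calls)} calls on {len(gpus)} GPUs"

def render_single_gpu(draw_calls, gpu):
    return f"{len(draw_calls)} calls on {gpu}"

def render_frame(draw_calls, gpus):
    try:
        return render_multi_gpu(draw_calls, gpus)
    except MultiGPUError:
        # Silent fallback: the user never has to do anything.
        return render_single_gpu(draw_calls, gpus[0])
```

The key property is that the failure path produces a correct frame, just without the speedup, which is why the end user never needs to intervene.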

NVIDIA and ATI running in multi-GPU mode on a single system

There are a lot of questions about performance and compatibility, but honestly we can't say much on that until we get the hardware ourselves. We were given some time to play with the system and can say that it at least works.

Lucid only had two games installed on the cross-vendor GPU setup: BioShock and FEAR 2. There are apparently more demos on the show floor; we'll try to bring you more impressions from IDF later this week.



Comments

  • Moopiz - Wednesday, September 23, 2009 - link

    So will this affect X2 cards in any way?
  • poohbear - Wednesday, September 23, 2009 - link

    wow, i remember asking about this chip a few weeks back as it completely fell off the radar after they announced it. If it works as advertised, it'll make SLI/Crossfire obsolete.
  • InternetGeek - Wednesday, September 23, 2009 - link

    i hope it works and that they plan to put it on laptops as well...
  • james jwb - Wednesday, September 23, 2009 - link

    IF this ends up working with almost optimal scaling, it'll be the most exciting thing to happen in gaming (and others) for a long, long time.

    I wonder how much CPU horsepower you need to really get something like optimal.
  • ninjakamster - Wednesday, September 23, 2009 - link

    I suppose we can assume that we can never upgrade our current motherboards with this exciting chip.

    Right now, it is only slated to be available on this particular MSI P55 board? What about the higher end X58 boards?

    It would be a shame to have to purchase a new board, but this technology sounds so exciting.

    My brother upgraded from a Radeon X300 to a Geforce 7900GT to his current Radeon HD4850. It would be very nice if he could use his two older cards collecting dust in the closet to gain a few extra precious frames per second.

    I wonder how this will compare to CrossFire and SLI?
  • andrewaggb - Friday, September 25, 2009 - link

    Honestly I think everybody needs to settle down and wait. I see all sorts of reasons why this won't work the way everybody hopes.

    Memory sizes? DirectX feature sets? Image filtering, hue, etc? I mean if I mixed an ATI and an NVIDIA card how would my colors look? If I mixed a 5870 and a 2900 how would my AA or AF look?

    What are the overheads? I don't buy linear scaling, sorry people. Half of the work is being done on another card (in reality some of the work is going to be duplicated to keep the same textures and scene data in both cards), and then you have to copy it from one card to the other for rendering over the PCIe bus. That's latency, and parts of that operation could be blocking and definitely require synchronization. That means slower. I would guess pairing an x300 and 4850 would probably be slower than just using the 4850.

    But we'll see. I hope I'm wrong.
  • MamiyaOtaru - Wednesday, September 23, 2009 - link

    I'm picturing a 5870 plus a geforce 8*00 for Physx. awesome
  • Lakku - Wednesday, September 23, 2009 - link

    My thoughts exactly. I already have a gtx 260 216, and was thinking of upgrading to a P55 and Lynnfield anyway. I can get all that plus a 5870 for single GPU DX11, or get a free performance boost using two different cards for DX10 titles, plus PhysX!
  • faxon - Wednesday, September 23, 2009 - link

    would be interested in this myself. i just ordered an HD5870 to replace my 9800GTX, so physx support would be leet if it works. it would fix that whole physx-not-working-if-an-ATI-card-is-detected thing; disabling it when it's running in this type of mode would be shooting themselves in the foot, especially if it scales better than their own technology, at no cost to them beyond simply optimizing their drivers for it.
  • silverblue - Tuesday, September 29, 2009 - link

    Hehe I just read your post properly; I suppose it takes a few days to filter through from website to website.

    That, and I've not woken up yet :)
