What Does This Thing Actually Do?

From a high level, Lucid's technology intercepts DirectX or OpenGL API calls, analyzes them, and organizes them into distinct tasks. Based on that analysis, combined with how each card has historically handled previous frames' workloads, it distributes the tasks evenly across all the GPUs in the system.

After the workload is distributed, the buffers are read back to the Hydra chip and composited before the final scene is sent to the proper graphics card for display. Looking a bit deeper, here is a block diagram of the process itself from Lucid's whitepaper.
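To make that flow a little more concrete, here is a rough sketch in C++ of the pipeline as described: intercept a frame's API calls, split them into tasks, hand the tasks to the GPUs, and composite the partial results. Every type and function name below is invented for illustration; this is not Lucid's code, and the round-robin split is just a placeholder for the real balancer.

```cpp
// Hypothetical sketch of the Hydra-style flow: intercept API calls, group them
// into tasks, hand each task to a GPU, then composite the partial results.
#include <cstddef>
#include <cstdio>
#include <vector>

struct DrawTask   { int id; float estimatedCost; };      // one batch of intercepted API calls
struct PartialBuf { int gpu; std::vector<DrawTask> work; };

// Stage 1: the interception layer turns a frame's API calls into discrete tasks.
std::vector<DrawTask> interceptFrame() {
    return { {0, 1.0f}, {1, 2.5f}, {2, 0.7f}, {3, 1.8f} };
}

// Stage 2: distribute tasks across GPUs (round-robin placeholder; the real
// balancer would weight by each card's measured performance).
std::vector<PartialBuf> distribute(const std::vector<DrawTask>& tasks, int gpuCount) {
    std::vector<PartialBuf> out(gpuCount);
    for (int g = 0; g < gpuCount; ++g) out[g].gpu = g;
    for (std::size_t i = 0; i < tasks.size(); ++i)
        out[i % gpuCount].work.push_back(tasks[i]);
    return out;
}

// Stage 3: read the partial buffers back and composite the final frame for the
// display card.
void composite(const std::vector<PartialBuf>& parts) {
    for (const auto& p : parts)
        std::printf("GPU %d rendered %zu task(s)\n", p.gpu, p.work.size());
    std::puts("composited frame sent to display GPU");
}

int main() {
    composite(distribute(interceptFrame(), /*gpuCount=*/2));
}
```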

The current implementation takes an x16 PCIe link in and can switch it out to either two x16 PCIe links or as many as four x16 PCIe links. This gives it support for one to four cards, depending on how the motherboard or graphics card handles things. Lucid also has the flexibility to scale down to x8 in and two x8 links out, making lower cost motherboards feasible as well. Future products may support more graphics cards and more PCIe lanes, but right now four is what makes sense. Lucid says the hardware can scale up to any number of cards with linear performance improvement.
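As a quick illustration of those lane configurations, the little table below simply restates the options described above; it is not taken from a Hydra datasheet, and the struct is our own.

```cpp
// Restating the lane configurations described in the article: one upstream
// link switched out to some number of downstream links.
#include <cstdio>

struct LaneConfig { int upstreamLanes; int downstreamLinks; int lanesPerLink; };

int main() {
    const LaneConfig configs[] = {
        {16, 2, 16},   // x16 in, two x16 out
        {16, 4, 16},   // x16 in, up to four x16 out
        { 8, 2,  8},   // scaled-down x8 in, two x8 out for cheaper boards
    };
    for (const auto& c : configs)
        std::printf("x%d upstream -> %d downstream link(s) at x%d each\n",
                    c.upstreamLanes, c.downstreamLinks, c.lanesPerLink);
}
```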

One implication of this process is that if any graphics card in the system has other work being done on it (say physics or video, for example), the load will be dynamically rebalanced and you'll still be able to squeeze as much juice out of all the hardware in your system as possible. Pretty cool, huh? If it works as advertised, that is.
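Here is a minimal sketch of how that kind of dynamic balancing could look: each card's share of the next frame is weighted by how quickly it finished recent work, so a card that is also busy decoding video naturally gets fewer rendering tasks. The numbers and names are made up for illustration; Lucid has not published its algorithm.

```cpp
// Weight each GPU's share of the next frame by its recently observed speed.
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    // Observed ms per unit of work over the last few frames (higher = slower,
    // e.g. because the card is also decoding video).
    std::vector<double> recentMsPerUnit = {1.0, 1.6};

    // Convert to throughput weights and normalize.
    std::vector<double> weight;
    double total = 0.0;
    for (double ms : recentMsPerUnit) { weight.push_back(1.0 / ms); total += weight.back(); }

    const int tasksThisFrame = 100;
    for (std::size_t g = 0; g < weight.size(); ++g) {
        int share = static_cast<int>(tasksThisFrame * weight[g] / total + 0.5);
        std::printf("GPU %zu gets ~%d of %d tasks\n", g, share, tasksThisFrame);
    }
}
```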

The demo we saw behind closed doors with Lucid did show a video playing on one 9800 GT while the combination of it and one other 9800 GT worked together to run Crysis DX9 with the highest possible settings at 40-60 fps (in game) with a resolution of 1920x1200. Since I've not tested Crysis DX9 mode on 9800 GT I have no idea how good this is, but it at least sounds nice.

Since Lucid is analyzing the data, they can even do things like skip hidden "tasks": if an entire object is occluded, rather than sending it to a graphics card, the Hydra engine simply doesn't send it down at all. I asked about dependent texturing and shaders that modify depth, and apparently they also build something like a dependency graph; if a modification to one piece of work affects something else, they are able to adjust for that on the fly as well.
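A toy example of the occlusion idea follows. The visibility flag is a stand-in for whatever analysis the Hydra engine actually performs; the point is only that a fully hidden object's draw call never reaches a GPU.

```cpp
// If a whole object fails the visibility test, its draw call is never
// forwarded to any GPU. The "fullyOccluded" flag is a placeholder.
#include <cstdio>
#include <vector>

struct Object { int id; bool fullyOccluded; };

int main() {
    std::vector<Object> scene = { {0, false}, {1, true}, {2, false} };
    for (const auto& obj : scene) {
        if (obj.fullyOccluded) {
            std::printf("object %d occluded, draw call dropped\n", obj.id);
            continue;                 // never dispatched to a GPU
        }
        std::printf("object %d dispatched to a GPU\n", obj.id);
    }
}
```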

In theory, tracking and adjusting to dependencies on the fly completely avoids the issues that keep NVIDIA and AMD from running AFR in all games. Lucid even claims that this can deliver higher than linear scaling when using their hardware with more than one card.
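To show the kind of bookkeeping a dependency graph implies, here is a toy example in which one task reads a texture rendered by another and therefore cannot be dispatched until that result exists. This is our own illustration of the concept, not Lucid's scheduler; a real implementation would also decide whether results need to be copied between GPUs.

```cpp
// Toy dependency graph: task 2 reads a texture rendered by task 0, so it is
// only dispatched once task 0 has completed.
#include <cstdio>
#include <map>
#include <set>

int main() {
    // edges: task -> tasks it depends on
    std::map<int, std::set<int>> deps = { {0, {}}, {1, {}}, {2, {0}} };

    std::set<int> done;
    while (done.size() < deps.size()) {
        for (const auto& [task, needs] : deps) {
            if (done.count(task)) continue;
            bool ready = true;
            for (int d : needs) if (!done.count(d)) { ready = false; break; }
            if (ready) {
                std::printf("task %d is ready to dispatch\n", task);
                done.insert(task);
            }
        }
    }
}
```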

We asked what the latency of their implementation is, and they said it is negligible. Of course, that's not a real answer, especially for guys like us who want to know the details so we can better understand what's going on. We don't just want to see the end result; we want to know how we get there. Playing Crysis didn't feel laggy, but there is no way this solution doesn't introduce some processing time.

An explanation for this is the fact that the Hydra software can keep requesting and queuing up tasks beyond what graphics cards could do, so that the CPU is able to keep going and send more graphics API calls than it would normally. This seems like it would introduce more lag to us, but they assured us that the opposite is true. If the Hydra engine speeds things up over all, that's great. But it certainly takes some time to do its processing and we'd love to know what it is.
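One way to picture the queuing claim is a simple producer/consumer buffer: the CPU keeps submitting work without waiting for a GPU to free up, and the queued tasks are drained at whatever rate the cards can sustain. This is purely speculative on our part; Lucid has not detailed how its buffering actually works.

```cpp
// Speculative sketch of the queuing idea: the CPU side submits tasks without
// blocking, while the GPU side drains them as capacity frees up.
#include <cstdio>
#include <queue>

int main() {
    std::queue<int> pending;              // tasks the CPU has already submitted

    // CPU side: submit a whole frame's worth of tasks without waiting.
    for (int task = 0; task < 8; ++task) pending.push(task);
    std::printf("CPU queued %zu tasks and moves on\n", pending.size());

    // GPU side: drain tasks as the cards catch up.
    while (!pending.empty()) {
        std::printf("dispatching task %d\n", pending.front());
        pending.pop();
    }
}
```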

Comments

  • pool1892 - Saturday, August 23, 2008 - link

    I think it is possible to build a solution like this, but this thing has a lot to do: on-the-fly QoS, scheduling, optimizing, and so on, with data in the gigabits per second. Sounds like a heavy duty Cisco switch.
    I can imagine this working, but the chip will be a heavyweight - and it will be power consuming and expensive.
    And it only has potential in the marketplace if the price premium for a mainboard with Hydra beats the faster graphics you could buy for that premium. That will be tough.
    Larrabee is, as usual, a totally different animal; Hydra could very well be a software feature for it (esp. with QPI in gen 2).
  • pool1892 - Saturday, August 23, 2008 - link

    Gotta correct myself - after a little digging: the Hydra is a Tensilica Diamond based programmable RISC controller with custom logic around it, running at 225 MHz. It uses about 5 watts. This is a tiny chip; it might be affordable. (But how is Lucid going to earn money? And they have to optimize their driver and the programmable parts of the chip for different rendering techniques in different games - who is paying for that?)
  • Goty - Saturday, August 23, 2008 - link

    I don't see this as a bad thing for GPU makers, personally. Since ATI no longer has anything like the "master card" for crossfire, as long as they're selling two GPUs to people running multi-card systems, they're not losing out. Sure, they may lose a bit of money on the mainboard side of things since consumers will be able to use any chipset they want with this technology, but the margin on the GPU silicon is probably higher than that on the chipset side, anyhow.
  • yyrkoon - Saturday, August 23, 2008 - link

    "Lucid also makes what seems like a ridiculous claim. They say that in some cases they could see higher than linear scaling. The reason they claim this should be possible is that the CPU will be offloaded by their hardware and doesn't need to worry about as much so that overall system performance will go up. We sort of doubt this, and hearing such claims makes us nervous. They did state that this was not the norm, but rather the exception. If it happens at all it would have to be the exception, but it still seems way too out there for me to buy it."

    Come now guys . . . if a CPU-dependent game such as World in Conflict could offload the CPU by 10%, would it not make sense that the CPU could do an additional 10%, thus offering more performance? I am not saying I believe this is possible myself, but taking Lucid at their word, this just makes sense to me.

    "The demo we saw behind closed doors with Lucid did show a video playing on one 9800 GT while the combination of it and one other 9800 GT worked together to run Crysis DX9 with the highest possible settings at 40-60 fps (in game) with a resolution of 1920x1200. Since I've not tested Crysis DX9 mode on 9800 GT I have no idea how good this is, but it at least sounds nice."

    Just going from this review, and assuming you meant a 9800GTX/GTX+: 47-41 FPS average with 16x AF/ 0x AA.

    "An explanation for this is the fact that the Hydra software can keep requesting and queuing up tasks beyond what graphics cards could do, so that the CPU is able to keep going and send more graphics API calls than it would normally. This seems like it would introduce more lag to us, but they assured us that the opposite is true. If the Hydra engine speeds things up over all, that's great. But it certainly takes some time to do its processing and we'd love to know what it is."

    Wait a minute . . . did you not just mention on a previous page somewhere that the number of cards implemented was limited due to latency implications? . . .

    "Of course, while it seems like an all or nothing situation that would serve no purpose but to destroy the experience of end users, NVIDIA and ATI have lots of resources to work on this sort of "problem" and I'm sure they'll try their best to come up with something. Maybe one day they'll wake up and realize (especially if one starts to dominate over the other other) that Microsoft and Intel got slammed with antitrust suits for very similar practices."

    OR, they could just purchase the company outright, which seems to me what Lucid may have been aiming for to begin with. After that the buying company could do whatever they please, such as kill the project, or completely decimate the opposite camp *if* the hardware truly does what it claims. At least where gaming is concerned . . . and we all know that IGPs make up a very large portion of home systems.

    Now what I have to say is that this totally smells like the gaming physics "fiasco". Buy the hardware now, and the hardware is dead in a year or two. Sure, a few games implemented features that leveraged those cards, but do you think developers are going to write code for hardware that has gone the way of the dodo? Probably not.

    The idea is interesting yes, but I will believe it when I see the hardware on sale at the egg . . .
  • DerekWilson - Saturday, August 23, 2008 - link

    It was not 9800 GTX cards -- they were GT cards ... lower performance, single slot.

    Also, game devs won't have to optimize for it, so there is no problem with them ignoring the situation -- if it works, it works.
  • yyrkoon - Saturday, August 23, 2008 - link

    9800GTX/GTX+ benchmarks ---> http://www.guru3d.com/article/geforce-9800-gtx-512...
  • JarredWalton - Saturday, August 23, 2008 - link

    9800 GT FTW! (http://www.newegg.com/Product/ProductList.aspx?Sub...)

    Basically, performance is closer (identical) to that of 8800 GT. You know, this goes along with the whole "let's rename 8800 GT and 8800 GTS 512MB to 9800 parts, because after all G92 is GeForce 9 hardware." Why the 8800 GT was ever launched with that name remains something of a mystery... well, except that performance was about the same as 8800 GTX.
  • yyrkoon - Saturday, August 23, 2008 - link

    So basically just an 8800 GTS with fewer ROPs? nVidia's naming convention definitely leaves a lot to be desired : /
  • Lakku - Saturday, August 23, 2008 - link

    Who are nVidia and AMD/ATi supposed to strong-arm in this situation? I don't think they would be in any kind of position to strong-arm ANYONE, if this works as advertised. Why? Because they'd have to strong-arm Intel (apparently a very big investor in this tech and company) to do so, and that's just not going to happen. Intel need only put this on their own Intel branded gaming or consumer boards, and/or Intel can strong-arm Asus and the others into putting this chip onto their motherboards if they want Intel chipsets, still by far the best-selling PC chipsets. If this works as advertised, it's probably Intel who will be the biggest winner... and maybe us end users in some way, provided Intel and this company don't charge outrageous prices for this tech.
  • djc208 - Monday, August 25, 2008 - link

    Easy: like the author stated, nVidia just writes in some code that looks for the Hydra software or hardware and shuts down parts of the driver. Therefore you can't use their hardware on a system running or equipped with Hydra. If it were a unified front, then Intel would have only Larrabee to use with this for gaming.

    The problem I see is that it could upset the market if the boycott isn't universal. If ATI let their hardware work with this and nVidia didn't, it could seriously hurt nVidia, as there would be even less reason to go with their chipsets or graphics cards at the high end, where nVidia likes to play.

    More likely is that ATI/nVidia will quickly push out something along the same lines, and then we'll have three competing solutions, and ATI and nVidia will lock out Hydra since they offer an alternative, just like now.

    All this assumes that Hydra works the way it's said to; if not, then all bets are off.
