Well, the title is a little misleading. It isn't about enabling SLI or CrossFire: it is about making them irrelevant by replacing them with a better, GPU-independent multi-GPU solution. Lucid is a company with lots of funding from Intel, and it is a chip designer gearing up to sell hardware that enables truly scalable multi-GPU rendering.

While graphics is inherently scalable, and native solutions could in principle extract near-linear performance gains in every case, NVIDIA and AMD have opted not to go down this path yet. It is quite difficult, as it involves sharing resources and distributing the workload at a much finer grain. SLI and CrossFire are band-aid solutions that don't scale well past three GPUs. Their very coarse-grained load balancing, and the tricks that must be done to handle certain rendering techniques, really hobble the inherent parallelism of graphics.
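To make "coarse-grained" concrete: alternate frame rendering, the most common SLI/CrossFire mode, simply hands whole frames to GPUs in round-robin order with no insight into what any individual frame costs. The toy simulation below is our own illustration, not driver code; the frame costs and GPU count are made up. It just shows how one heavy frame leaves the GPUs unevenly loaded, with no way for AFR to rebalance the work.

```cpp
// Illustrative only: alternate frame rendering (AFR) assigns whole frames
// to GPUs round-robin, regardless of how expensive each frame is.
#include <cstdio>
#include <vector>

int main() {
    const int gpuCount = 2;
    // Hypothetical per-frame render costs in milliseconds (one spike at frame 2).
    std::vector<double> frameCostMs = {16.0, 16.0, 40.0, 16.0, 16.0, 16.0};

    std::vector<double> gpuBusyMs(gpuCount, 0.0);
    for (int frame = 0; frame < static_cast<int>(frameCostMs.size()); ++frame) {
        int gpu = frame % gpuCount;  // coarse-grained: a whole frame per GPU
        gpuBusyMs[gpu] += frameCostMs[frame];
        std::printf("frame %d -> GPU %d\n", frame, gpu);
    }
    // The heavy frame lands entirely on one GPU, so the loads end up uneven.
    for (int g = 0; g < gpuCount; ++g)
        std::printf("GPU %d total work: %.1f ms\n", g, gpuBusyMs[g]);
    return 0;
}
```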

Lucid, with its Hydra Engine and the Hydra 100 chip, is going in a different direction. With a background in large data set analysis, these guys are capable of intercepting the DirectX or OpenGL command stream before it hits the GPU, analyzing the data, and dividing up the scene at an object level. Rather than rendering alternating frames, or splitting the screen horizontally, this part is capable of load balancing things like groups of triangles associated with a single group of textures, sending each of these tasks to whichever GPU it makes the most sense to render on. The scene is composited after all the GPUs finish rendering their parts and send the data back to the Lucid chip.
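To illustrate the concept, here is a rough sketch of object-level load balancing along the lines Lucid describes: batches of geometry tied to a group of textures are assigned to whichever GPU currently has the least estimated work, and the finished pieces are composited into the final frame. Lucid hasn't disclosed its actual algorithm, so the cost model, batch structure, and greedy scheduler here are purely our own assumptions.

```cpp
// Sketch of object-level load balancing across GPUs, based on Lucid's public
// description. The cost estimates and greedy assignment are assumptions.
#include <algorithm>
#include <cstdio>
#include <vector>

// One batch of geometry bound to a single group of textures.
struct DrawBatch {
    int id;
    int triangleCount;
    double estimatedCostMs;  // hypothetical cost estimate from command-stream analysis
};

int main() {
    // Made-up scene: six batches with different estimated costs.
    std::vector<DrawBatch> batches = {
        {0, 120000, 4.0}, {1, 30000, 1.0}, {2, 90000, 3.0},
        {3, 60000, 2.0}, {4, 150000, 5.0}, {5, 45000, 1.5},
    };

    const int gpuCount = 3;
    std::vector<double> gpuLoadMs(gpuCount, 0.0);
    std::vector<std::vector<int>> assignment(gpuCount);

    // Greedy balance: send each batch to the GPU with the least estimated work so far.
    for (const DrawBatch& b : batches) {
        int gpu = static_cast<int>(
            std::min_element(gpuLoadMs.begin(), gpuLoadMs.end()) - gpuLoadMs.begin());
        gpuLoadMs[gpu] += b.estimatedCostMs;
        assignment[gpu].push_back(b.id);
    }

    // Composite step: the frame is ready once the slowest GPU returns its portion.
    double frameTimeMs = *std::max_element(gpuLoadMs.begin(), gpuLoadMs.end());
    for (int g = 0; g < gpuCount; ++g) {
        std::printf("GPU %d renders batches:", g);
        for (int id : assignment[g]) std::printf(" %d", id);
        std::printf("  (%.1f ms of work)\n", gpuLoadMs[g]);
    }
    std::printf("composited frame ready after %.1f ms\n", frameTimeMs);
    return 0;
}
```

The point of the exercise is that balancing at this granularity lets work flow to whichever GPUs are idle, regardless of how many there are, which is where the claimed near-linear scaling would come from.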

The tasks they load balance are even dynamically defined, though they haven't gone into a lot of detail with us at this point. We have another meeting scheduled with them today, so we'll see what happens there.

These guys say they always get near-linear scaling regardless of application, and that the scaling is not limited to a particular number of GPUs: four GPUs would actually see nearly 4x scaling, and 10 GPUs would see nearly 10x scaling. The implications, if this actually works as advertised, are insane.

So why is Intel interested in this? Well, they could offer a platform solution through motherboards with this chip on them that delivers better multi-GPU scaling than either NVIDIA or AMD is capable of offering natively on their own platforms. With the issues in getting NVIDIA SLI onto Intel systems, this would really be a slap in the face for NVIDIA.

We will definitely bring you more when we know it. At this point it seems like a great idea, but theory doesn't always line up with execution. If they pull off what they say they can, it will revolutionize multi-GPU rendering. We'll see what happens.

35 Comments

  • MrHanson - Thursday, August 21, 2008 - link

    Having a separate box with its own power supply (or supplies) is ideal. That way, if you want to add 2 or 3 more GPUs to your Hydra system, you don't have to rip apart your computer and put in a different motherboard and power supply. I imagine this system will probably come with its own mainboard and power supply, with several separate PCIe x16 slots for scalability. Time to put that external PCI Express specification to good use!
  • Goty - Wednesday, August 20, 2008 - link

    Course =/= Coarse
  • steveyballme - Wednesday, August 20, 2008 - link

    They haven't sent our labs an evaluation unit!
    This may be another BigFoot in a freezer!


    http://fakesteveballmer.blogspot.com
  • TonyB - Wednesday, August 20, 2008 - link

    need benchmarks on Crysis on a quad 280 setup.
  • hooflung - Wednesday, August 20, 2008 - link

    There is next to no latency, according to Lucid. The problem is that Vista doesn't allow two graphics card drivers to work at one time, so you cannot mix ATI and NVIDIA cards in a Hydra setup until Microsoft fixes that, if that's even possible.

    The catch is how much the Hydra is going to run. If you look at DailyTech's preview you will see how the graphics are rendered. It splits up the scene before it ever hits the graphics driver, so there are none of the latency issues you get with AFR, or the bandwidth issues.
  • Ananke - Wednesday, August 20, 2008 - link

    The Lucid chip recognizes the instruction pattern, i.e. rendering triangles, textures, etc. What about recognizing CUDA or other computational applications designed to run on graphics? If it can't split those instructions effectively, then NVIDIA's idea of heavy computation done on multiple GPUs is compromised, and that is exactly what Intel wants :), thus they fund the research at Lucid. Intel needs to wash away the competition in heavy computing and eventually make a dominant market space for Larrabee, i.e. multiple x86 Intel chips.
  • UnlimitedInternets36 - Friday, August 22, 2008 - link

    Given Intel's orientation with Larrabee, being just compute in hardware with DX and GL implemented in the API, I think they have this in mind. Gaming graphics are always a useful showcase, but you can bet they want GPGPU sales as well.
  • legoman666 - Wednesday, August 20, 2008 - link

    There has to be some catch. There's no way ATI or NVIDIA, with their respective R&D budgets, wouldn't have implemented this if it works as promised. I'm sure both companies have put many times more money into researching this sort of thing than this small company that no one has heard of.
  • Dark Legion - Wednesday, August 20, 2008 - link

    Yes, maybe more money than a small company that nobody has heard of (until now), but not more than a company that has "loads of funding from Intel" (3rd sentence of the article, come on).
  • Rev1 - Wednesday, August 20, 2008 - link

    "Lucid is a company with lots of funding from Intel,"

    Which means they have plenty of money for R&D.
