Well, the title is a little misleading: Lucid isn't enabling SLI or CrossFire so much as making them irrelevant by replacing them with a better, GPU-independent multi-GPU solution. Lucid, a chip designer with significant funding from Intel, is gearing up to sell hardware that enables truly scalable multi-GPU rendering.

While graphics is completely scalable, and a native solution could in principle extract near-linear performance gains in every case, NVIDIA and AMD have opted not to go down this path yet. It is quite difficult, as it involves sharing resources and distributing the workload at a much finer granularity. SLI and CrossFire are band-aid solutions that don't scale well past three GPUs: their very coarse-grained load balancing, and the tricks required to handle certain rendering techniques, really hobble the inherent parallelism of graphics.
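To make "coarse-grained" concrete, here is a minimal sketch (my own illustration, not vendor code) of the two schemes SLI and CrossFire rely on: alternate-frame rendering, which deals whole frames out round-robin, and split-frame rendering, which cuts one frame into horizontal bands. The GPU count, frame count, and screen height are made-up numbers.

```cpp
// Illustrative sketch of coarse-grained multi-GPU load balancing.
// Not NVIDIA or AMD code; all constants are arbitrary.
#include <cstdio>

int main() {
    const int kGpus = 3;
    const int kFrames = 6;

    // Alternate Frame Rendering (AFR): whole frames dealt out round-robin.
    // One slow frame stalls the chain, and GPUs can't help each other.
    for (int frame = 0; frame < kFrames; ++frame)
        std::printf("AFR: frame %d -> GPU %d\n", frame, frame % kGpus);

    // Split Frame Rendering (SFR): one frame cut into horizontal bands.
    // Geometry still has to be processed by every GPU, so it scales poorly.
    const int kScreenHeight = 1080;
    for (int gpu = 0; gpu < kGpus; ++gpu) {
        int top = gpu * kScreenHeight / kGpus;
        int bottom = (gpu + 1) * kScreenHeight / kGpus;
        std::printf("SFR: GPU %d renders scanlines %d-%d\n", gpu, top, bottom - 1);
    }
    return 0;
}
```

Neither scheme lets the GPUs cooperate within a unit of work: AFR stalls on frame-to-frame dependencies, and SFR still pushes most of the geometry through every GPU, which is why scaling falls apart past two or three cards.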

Lucid, with their Hydra Engine and the Hydra 100 chip, are going in a different direction. With a background in large data set analysis, these guys are capable of intercepting the DirectX or OpenGL command stream before it hits the GPU, analyzing the data, and dividing up the scene at an object level. Rather than rendering alternating frames, or splitting the screen horizontally, this part can load balance things like groups of triangles associated with a single group of textures, sending each task to whichever GPU it makes the most sense to render on. The scene is composited after all the GPUs finish rendering their parts and send the data back to the Lucid chip.
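Lucid hasn't published how its analysis or assignment actually works, so the following is only a sketch of the general idea: object groups with estimated costs handed greedily to the least-loaded GPU, then composited. The scene contents, the triangle-count cost model, and the greedy policy are all my assumptions.

```cpp
// Hedged sketch of object-level load balancing as described above.
// The cost model (triangle count as a proxy for render time) and the
// greedy assignment are assumptions, not Lucid's published algorithm.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct ObjectGroup {          // triangles sharing one set of textures
    std::string name;
    long triangles;           // stand-in for estimated render cost
};

int main() {
    std::vector<ObjectGroup> scene = {
        {"terrain", 900000}, {"buildings", 600000},
        {"characters", 400000}, {"particles", 250000}, {"skybox", 50000}};

    const int kGpus = 2;
    std::vector<long> load(kGpus, 0);

    // Longest-task-first greedy: sort expensive groups first, then send
    // each one to the GPU with the least estimated work so far.
    std::sort(scene.begin(), scene.end(),
              [](const ObjectGroup& a, const ObjectGroup& b) {
                  return a.triangles > b.triangles;
              });
    for (const auto& g : scene) {
        int gpu = std::min_element(load.begin(), load.end()) - load.begin();
        load[gpu] += g.triangles;
        std::printf("%s -> GPU %d\n", g.name.c_str(), gpu);
    }

    // The frame is done when the slowest GPU finishes; the chip then
    // composites the partial renders back into one image.
    std::printf("frame time ~ max load = %ld triangles\n",
                *std::max_element(load.begin(), load.end()));
    return 0;
}
```

The appeal of working at this granularity is that the balancer can react to what is actually in the frame, instead of hoping that alternating frames or fixed screen halves happen to cost the same.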

The tasks that they load balance are even dynamically defined, but Lucid hasn't gone into much detail with us at this point. We have another meeting scheduled with them today, so we'll see what comes of it.

These guys say they always get near-linear scaling regardless of the application, and that the scaling isn't limited to a particular number of GPUs: four GPUs would actually see nearly 4x scaling, and ten GPUs nearly 10x. The implications, if this actually works as advertised, are insane.
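As a back-of-the-envelope sanity check on that claim (my own math, not Lucid's numbers): Amdahl's law says that whatever fraction s of frame time stays serial, here the command-stream analysis, the transfers back to the chip, and the final composite, caps the speedup achievable on n GPUs:

```latex
S(n) = \frac{1}{s + \frac{1 - s}{n}}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{s}
```

Even an assumed s of just 2% caps ten GPUs at S(10) ≈ 8.5x rather than 10x, so "near-linear at any GPU count" implies the Hydra 100's per-frame overhead is genuinely tiny.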

So why is Intel interested in this? Well, Intel could offer a platform solution, motherboards with this chip on them, that delivers better multi-GPU scaling than either NVIDIA or AMD can offer natively on their own platforms. Given the trouble involved in getting NVIDIA SLI onto Intel systems, this would be a real slap in the face for NVIDIA.

We will definitely bring you more when we know it. At this point it seems like a great idea, but theory doesn't always line up with execution. If they pull off what they say they can, it will revolutionize multi-GPU rendering. We'll see what happens.

Comments

  • ImmortalZ - Wednesday, August 20, 2008 - link

    All that back-and-forth is bound to create some latency. It will be interesting to see how they work around that.
  • JonnyDough - Thursday, August 21, 2008 - link

    You know, the reason that NVidia and ATI aren't too enthused about this idea is that once it's implemented with a wide bus, you wouldn't have to buy a new architecture. You could just use the old architecture and scale it. Of course, newer architecture uses less energy overall, but it won't be that long before energy use is so low that it becomes insignificant.

    In other words, this could REALLY hurt the sales of new graphics. ATI/NVidia may soon have to shift towards becoming motherboard manufacturers.

    What's so strange is that the technology companies generally have a broader view of the future when it comes to drumming up ideas to make money, while the manufacturers are somewhat more limited in what they can innovate on. The chip designers walk a shakier tightrope, perhaps, because if they fail to perform or innovate they die. On the other hand, manufacturers have a harder time dealing with economic fluctuations.

    This little dazzler could change all that, forcing manufacturers to become even stronger innovators while making much of the GPU market obsolete. If you have three solid chip designs that use very little power and scale linearly, what wide demand would there be for new tech? Just add another GPU instead of buying an expensive new one with the recent R&D costs figured in.
  • Inkjammer - Thursday, August 21, 2008 - link

    Yep. After all, why would you pay $600 for the hot new gaming graphics card when you could drop in another $250 card and get better performance for less? Some people would still spend the money, sure, but there comes a point where buying big yields ever-diminishing performance per dollar (think "Extreme Edition").
