Barriers to Entry and Final Words

Depending on the patents Lucid holds, neither NVIDIA nor ATI may be able to build competing hardware or software for use in their own solutions. And then there is the question: what will NVIDIA and ATI attempt to do in order to be anticompetitive (err, I mean to continue to promote their own multi-GPU solutions and platforms)?

Because both NVIDIA and ATI already engage in anticompetitive practices by artificially limiting the functionality of their hardware on competing platforms, it doesn't seem like a stretch to think they'll try something here as well. But can they break it?

Maybe and maybe not. At a really crappy level, they could detect whether the Hydra hardware is in the system and refuse to do anything 3D. If they're a little nicer, they could detect whether the Hydra driver is running and refuse to render 3D while it is active. Beyond that, there doesn't seem to be much room for the sort of tricks they've pulled in the past. The Lucid software and hardware are completely transparent to the game, the graphics driver, and the GPU: none of those components need to know anything about Hydra for this to work.
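To put that second scenario in concrete terms, the check really would be trivial, which is part of why it would be such a transparently hostile move. Here's a minimal sketch (C++ against the Win32 Psapi API) of how a driver's user-mode helper could look for a loaded Hydra kernel module. The module name "hydra.sys" is purely our placeholder; Lucid hasn't published anything about its driver internals.

```cpp
// Illustrative sketch only: one way a vendor could check for a loaded
// Hydra kernel module on Windows. "hydra.sys" is a hypothetical module
// name, not a published Lucid identifier. Build with: cl detect.cpp psapi.lib
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <cstring>

static bool HydraDriverLoaded() {
    LPVOID bases[1024];
    DWORD bytesNeeded = 0;
    // Ask the kernel for the load addresses of every device driver.
    if (!EnumDeviceDrivers(bases, sizeof(bases), &bytesNeeded))
        return false;

    size_t count = bytesNeeded / sizeof(LPVOID);
    if (count > 1024) count = 1024;  // we only fetched the first 1024

    for (size_t i = 0; i < count; ++i) {
        char name[MAX_PATH] = {0};
        // Map each load address back to a module name (e.g. "nvlddmkm.sys").
        if (GetDeviceDriverBaseNameA(bases[i], name, sizeof(name)) &&
            _stricmp(name, "hydra.sys") == 0) {  // hypothetical name
            return true;
        }
    }
    return false;
}

int main() {
    printf("Hydra driver %s\n", HydraDriverLoaded() ? "detected" : "not found");
    return 0;
}
```

Nothing about a check like this requires cooperation from Lucid, which is exactly why a driver-level lockout is the easiest lever NVIDIA or ATI could pull.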

As AMD and NVIDIA have to work closely with graphics card and motherboard vendors, they could try to strong-arm Lucid out of the market by threatening (overtly or not) the supply of their silicon to certain OEMs. This could be devastating to Lucid, as we've already seen what even an implied threat can do to software companies in the Assassin's Creed situation (faced with the choice between applying an already available fix and pulling support for DX10.1, which only AMD supports, the developer pulled it). This type of pressure seems like the largest unknown to us.

Of course, while it seems like an all-or-nothing situation that would serve no purpose but to degrade the experience of end users, NVIDIA and ATI have lots of resources to throw at this sort of "problem," and I'm sure they'll try their best to come up with something. Maybe one day they'll wake up and realize (especially if one starts to dominate the other) that Microsoft and Intel got slammed with antitrust suits for very similar practices.

Beyond this, Lucid still needs to get motherboard OEMs to put the Hydra 100 on their boards, or to get graphics hardware vendors to build cards with the chip on them. This increases cost, and OEMs are very sensitive to cost increases. At the same time, a platform that can run both AMD and NVIDIA solutions in multi-GPU configurations has added value, as does a single-card multi-GPU solution that gets better performance than even the ones from AMD and NVIDIA.

The parts these guys sell will still have to compete in the retail market, so they can't price themselves out of contention. More performance is great, but they have to worry about price/performance and their own costs. We think this will be more attractive to high-end motherboard vendors than to anyone else. And we really hope Intel adopts it and uses it instead of nForce 100 or nForce 200 chips to enable flexible multi-GPU. Assuming it works, of course.

Anyway, Lucid's Hydra 100 is a really cool idea, and we really hope it works like Lucid says it will. Most of the theory seems sound, and while we've seen it in action, we need to put it to the test and look hard at latency and scaling. And we really, really want to get excited. So we really, really need hardware.
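When hardware does arrive, the scaling half of that testing is straightforward arithmetic: divide the Hydra frame rate by N times the single-GPU frame rate and see whether Lucid's near-linear (or super-linear) claims survive. Here's a minimal sketch of that math; the FPS values are invented placeholders standing in for real benchmark data.

```cpp
// A minimal sketch of the scaling math we'd run against real benchmark data.
// The FPS numbers below are invented placeholders, not measured results.
#include <cstdio>

int main() {
    const double fpsSingle = 32.0;  // hypothetical: one 9800 GT, Crysis DX9
    const double fpsHydra  = 58.0;  // hypothetical: two 9800 GTs via Hydra
    const int    gpuCount  = 2;

    const double scaling    = fpsHydra / fpsSingle;        // 1.81x here
    const double efficiency = scaling / gpuCount * 100.0;  // 90.6% here

    printf("Scaling: %.2fx (%.1f%% efficiency)\n", scaling, efficiency);
    if (scaling > gpuCount)
        printf("Super-linear result: CPU offload would have to explain it.\n");
    return 0;
}
```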

Comments

  • pool1892 - Saturday, August 23, 2008 - link

    i think it is possible to build a solution like this, but this thing has a lot to do, on-the-fly qos and scheduling and optimizing and so on. with data in the gigabits/s. sounds like a heavy duty cisco switch.
    i can imagine this working, but the chip will be a heavyweight - and it will be power consuming and expensive.
    and it only has potential in the marketplace if the price premium for a mainboard with hydra beats the faster graphics you can buy for this premium. that will be tough.
    larrabee is as usual a totally different animal, hydra could very well be a software feature for it (esp. with qpi in gen 2)
  • pool1892 - Saturday, August 23, 2008 - link

    gotta correct myself - after a little digging: the hydra is a tensilica diamond based programmable risc controller with custom logic around it, running at 225mhz. it uses about 5 watts. this is a tiny chip, it might be affordable. (but how is lucid going to earn money? and: they have to optimize their driver and the programmable parts of the chip for different rendering techniques in different games - who is paying for that?)
  • Goty - Saturday, August 23, 2008 - link

    I don't see this as a bad thing for GPU makers, personally. Since ATI no longer has anything like the "master card" for crossfire, as long as they're selling two GPUs to people running multi-card systems, they're not losing out. Sure, they may lose a bit of money on the mainboard side of things since consumers will be able to use any chipset they want with this technology, but the margin on the GPU silicon is probably higher than that on the chipset side, anyhow.
  • yyrkoon - Saturday, August 23, 2008 - link

    "Lucid also makes what seems like a ridiculous claim. They say that in some cases they could see higher than linear scaling. The reason they claim this should be possible is that the CPU will be offloaded by their hardware and doesn't need to worry about as much so that overall system performance will go up. We sort of doubt this, and hearing such claims makes us nervous. They did state that this was not the norm, but rather the exception. If it happens at all it would have to be the exception, but it still seems way too out there for me to buy it."

    Come now guys . . . if a CPU-dependent game such as World in Conflict could offload the CPU by 10%, would it not make sense that the CPU could then do an additional 10% of work, thus offering more performance? I am not saying I believe this is possible myself, but taking Lucid at their word, this just makes sense to me.

    "The demo we saw behind closed doors with Lucid did show a video playing on one 9800 GT while the combination of it and one other 9800 GT worked together to run Crysis DX9 with the highest possible settings at 40-60 fps (in game) with a resolution of 1920x1200. Since I've not tested Crysis DX9 mode on 9800 GT I have no idea how good this is, but it at least sounds nice."

    Just going from this review, and assuming you meant a 9800GTX/GTX+: 47-41 FPS average with 16x AF/ 0x AA.

    "An explanation for this is the fact that the Hydra software can keep requesting and queuing up tasks beyond what graphics cards could do, so that the CPU is able to keep going and send more graphics API calls than it would normally. This seems like it would introduce more lag to us, but they assured us that the opposite is true. If the Hydra engine speeds things up over all, that's great. But it certainly takes some time to do its processing and we'd love to know what it is."

    Wait a minute . . . did you not just mention on a previous page somewhere that the number of cards implemented was limited due to latency implications? . . .

    "Of course, while it seems like an all or nothing situation that would serve no purpose but to destroy the experience of end users, NVIDIA and ATI have lots of resources to work on this sort of "problem" and I'm sure they'll try their best to come up with something. Maybe one day they'll wake up and realize (especially if one starts to dominate over the other other) that Microsoft and Intel got slammed with antitrust suits for very similar practices."

    OR, they could just purchase the company outright, which seems to me to be what Lucid may have been aiming for to begin with. After that, the buying company could do whatever they please, such as kill the project, or completely decimate the opposite camp *if* the hardware truly does what it claims. At least where gaming is concerned . . . and we all know that IGPs make up a very large portion of home systems.

    Now what I have to say is that this totally smells like the gaming physics "fiasco". Buy the hardware now, and the hardware is dead in a year or two. Sure, a few games implemented features that leveraged those cards, but do you think developers are going to write code for hardware that has gone the way of the dodo? Probably not.

    The idea is interesting, yes, but I will believe it when I see the hardware on sale at the egg . . .
  • DerekWilson - Saturday, August 23, 2008 - link

    they were not 9800 gtx cards -- they were GT cards ... lower performance, single slot.

    also game devs won't have to optimize for it, so there is no problem with them ignoring the situation -- if it works it works
  • yyrkoon - Saturday, August 23, 2008 - link

    9800GTX/GTX+ benchmarks ---> http://www.guru3d.com/article/geforce-9800-gtx-512...
  • JarredWalton - Saturday, August 23, 2008 - link

    9800 GT FTW! (http://www.newegg.com/Product/ProductList.aspx?Sub...)

    Basically, performance is closer (identical) to that of 8800 GT. You know, this goes along with the whole "let's rename 8800 GT and 8800 GTS 512MB to 9800 parts, because after all G92 is GeForce 9 hardware." Why the 8800 GT was ever launched with that name remains something of a mystery... well, except that performance was about the same as 8800 GTX.
  • yyrkoon - Saturday, August 23, 2008 - link

    So basically just an 8800GTS with fewer ROPs? nVidia's naming convention definitely leaves a lot to be desired : /
  • Lakku - Saturday, August 23, 2008 - link

    Who are nVidia and AMD/ATi supposed to strong-arm in this situation? I don't think they would be in any position to strong-arm ANYONE if this works as advertised. Why? Because they'd have to strong-arm Intel (apparently a very big investor in this tech and company) to do so, and that's just not going to happen. Intel need only put this on its own Intel-branded gaming or consumer boards, and/or Intel can strong-arm Asus and the others into putting this chip onto their motherboards if they want Intel chipsets, still by far the best-selling PC chipsets. If this works as advertised, Intel is probably the biggest winner... and maybe us end users in some way, provided Intel and this company don't charge outrageous prices for this tech.
  • djc208 - Monday, August 25, 2008 - link

    Easy, like the author stated: nVidia just writes in some code that looks for the Hydra software or hardware and shuts down parts of the driver. Therefore you can't use their hardware on a system running or equipped with Hydra. If it were a unified front, then Intel would have only Larrabee to use with this for gaming.

    The problem I see is that it could upset the market if the boycott isn't universal. If ATI let their hardware work with this and nVidia didn't, it could seriously hurt nVidia, as there would be even less reason to go with their chipsets or graphics cards at the high end, where nVidia likes to play.

    More likely, ATI/nVidia will quickly push out something along the same lines, and then we'll have three competing solutions, with ATI and nVidia locking out Hydra since they offer an alternative, just like now.

    All this assumes that Hydra works the way it's said to, if not then all bets are off.
