
  • etriky - Saturday, October 31, 2009 - link

    10/29 has come and gone and still no board....
  • Canadian87 - Tuesday, October 06, 2009 - link

    is on lock down.... I'm not spending a dime; I'm waiting for a new Christmas build. I'll just survive with my E5300, GTS250, and 2GB DDR2-800 for now.

    Come chrimuh *inside joke* I'm gonna have enough dough to drop on a Core i7, 8GB DDR3, a multi-GPU board, and any 2 graphics cards around. It's go time, hope this isht works.
  • YOMO - Thursday, October 01, 2009 - link

    It looks to me like there are two cables coming out of the back of the monitor, one for each video card, so what's the point of having it with one monitor?
  • MonicaS - Tuesday, September 29, 2009 - link

    I hate to knock a great idea, but this is something that should have been invented 3-4 years ago, not now. As far as I know, there's quite literally no reason to SLI anything. Single cards with multiple cores do the job more than well enough, and nothing out there is going to require some crazy multiple card setup.

    Beyond that, I can't see any use for this. It's a great idea, but not very useful.

    Monica S
  • shin0bi272 - Wednesday, September 30, 2009 - link

    What this is supposed to do is bring up the fps from what you get with SLI or CrossFire. Since people run dual-GPU cards in SLI (4 GPUs total) to get games like Crysis to a better frame rate than they could with a single dual-GPU card, there is definitely a reason for this chip... IF (and that's a BIG if) it performs as claimed, or even close to it. Your average SLI or CrossFire bump is 40% or so, and that's IF the game supports the solution you have (if not, you have up to 600 dollars' worth of paperweight in your rig). This is supposed to get you double (or close to it) the fps when you put in a second card, rather than 40% more. So yeah, there's definitely a need for this technology. Now if we can just get games that have more replay value....
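    As a back-of-the-envelope check on those numbers (all figures illustrative: a made-up 30 fps single-card baseline, the ~40% multi-GPU bump cited above, and Lucid's near-linear claim):

    ```python
    def scaled_fps(single_fps, num_cards, efficiency):
        """FPS with extra cards, where `efficiency` is the fraction of each
        additional card's throughput actually realized (1.0 = linear scaling)."""
        return single_fps * (1 + (num_cards - 1) * efficiency)

    baseline = 30.0                              # hypothetical single-card fps
    typical_sli = scaled_fps(baseline, 2, 0.40)  # typical 40% bump -> ~42 fps
    lucid_claim = scaled_fps(baseline, 2, 1.00)  # near-linear claim -> ~60 fps
    ```

    The gap between those two results is the entire selling point being debated here.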
  • ValiumMm - Wednesday, September 30, 2009 - link

    crysis, kthnxbye
  • ValiumMm - Tuesday, September 29, 2009 - link

    What's the point of having two x16 lanes with two gfx cards when the upstream link is only x16? Wouldn't it be exactly the same if you had x8 on two lanes, since the upstream is exactly the same?
  • Sunagwa - Saturday, September 26, 2009 - link

    With this setup, will both cards' memory be utilized?

    Honestly, the only thing that has kept me from going multi-GPU so far is that I'd be paying for 1GB of video memory that isn't going to be used.
  • shin0bi272 - Sunday, September 27, 2009 - link

    If I'm reading the schematics correctly, the threads are sent to the video card as they would be if the Hydra wasn't there, except the Hydra only sends a thread to the card if it's ready for one. So as far as I can tell it would use the memory of both cards (which would really be an improvement, and probably where they get the scalability claim). Think of the Hydra chip as a traffic cop. It sends data down the appropriate channel when that channel is ready for more data, and lets the card handle the rendering using all of its tools.
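    The "traffic cop" idea can be sketched as a toy dispatcher. This is only a guess at the behaviour described above, not Lucid's actual algorithm; the card names and relative speeds are invented:

    ```python
    import heapq

    def dispatch(tasks, gpu_speeds):
        """Greedily hand each work unit to the GPU that will be free soonest.
        tasks: list of relative costs; gpu_speeds: name -> relative throughput."""
        # heap of (time_when_free, gpu_name); everyone starts idle
        ready = [(0.0, name) for name in sorted(gpu_speeds)]
        heapq.heapify(ready)
        assignment = {name: [] for name in gpu_speeds}
        for i, cost in enumerate(tasks):
            free_at, gpu = heapq.heappop(ready)
            assignment[gpu].append(i)
            # this GPU is busy until it finishes the unit at its own speed
            heapq.heappush(ready, (free_at + cost / gpu_speeds[gpu], gpu))
        return assignment

    # A fast and a slow card: the fast one naturally absorbs more of the work.
    work = dispatch([1.0] * 8, {"fast": 2.0, "slow": 1.0})
    ```

    With equal-cost units, the 2x-faster card ends up with roughly two thirds of them, which is the load-balancing behaviour the comment is describing.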
  • ainarssems - Monday, September 28, 2009 - link

    I do not think it will use memory like you want it to, because a lot of data needs to be preloaded locally on each card for fast access and high bandwidth. That would require the same set of data on both cards, like with CrossFire or SLI. Look at the HD5870's memory bandwidth: 153GB/s, or 159GB/s for the GTX 285; the PCIe bus with its 16GB/s of bandwidth does not come even close to being able to feed that data on demand, not even talking about the increased latency. I do not think they can predict which data will be needed on each card, and even if they could, that would mean a lot of loading and unloading of data on each card as circumstances change, which would require a lot of bandwidth as well.

    If you want to use all of the video memory effectively, that would require one card to be able to directly access data in the other card's memory, like in multi-CPU setups. That would mean a change in the video cards themselves. And even then I expect it to appear on X2 cards first, if ever, because for two physical cards that access would still need to be done over PCIe, with low bandwidth and increased latency. On the X2 cards they could introduce another connection between the GPUs, kind of like QPI, which would allow access to the other GPU's memory. This could later become part of the PCIe connection for multiple-card interconnection.
    But honestly I think the answer is two GPU dies in the same package, kind of like the Q6600 is two E6600s in one package, to be able to use all the memory effectively.

    All of this is just my somewhat educated guess, I am not a GPU engineer so I could be wrong.
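    The bandwidth gap in that argument is easy to sanity-check with quick arithmetic. The bandwidth numbers are the ones quoted above; the 512MB per-frame working set is an assumed figure for illustration:

    ```python
    pcie_gbps   = 16.0    # PCIe x16 bandwidth quoted above, GB/s
    local_gbps  = 153.0   # HD 5870 local memory bandwidth, GB/s
    working_set = 0.512   # GB of data a frame might touch (assumption)

    pcie_ms  = working_set / pcie_gbps * 1000   # ~32 ms just to move the data
    local_ms = working_set / local_gbps * 1000  # ~3.3 ms from local memory
    frame_budget_ms = 1000 / 60                 # ~16.7 ms per frame at 60 fps
    ```

    Moving that working set over PCIe would blow the entire 60 fps frame budget roughly twice over, while local memory handles it with room to spare, which is the point being made about preloading data on both cards.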
  • shin0bi272 - Thursday, October 01, 2009 - link

    That could be true, but we will have to wait and see, I guess. It's not like they are releasing this info or anything, LOL. On another site I saw them playing UT3, and one of the cards was plugged into one monitor and the other was plugged into a second monitor, and each of them was drawing a portion of the screen. I've not used SLI or CrossFire (too expensive for my blood), so I'm not sure if you can do that with the current SLI/CrossFire tech or not. If not, then it could be that the cards are working independently and sending the data through the Hydra and out the primary card to the display. But you could be right too; I have no clue... The biggest tell that it won't work like everyone is hoping is that they aren't waving their arms in the air saying "look at what we built! It doubles your FPS! Buy it buy it buy it!" you know?
  • shin0bi272 - Friday, September 25, 2009 - link

    Can we please please please see some proof of the framerate increase? I'm happy they are putting out actual silicon, but if it doesn't do what it's intended to do, I don't want to have to go out and get an entirely new motherboard to replace my Asus P6T Deluxe V2 just to find out. You've got a working one with an ATI and an Nvidia card in it and Bioshock... show us the FPS count with it on and with it off, PLEEEEEASE???
  • mutarasector - Thursday, September 24, 2009 - link

    I'm curious as to just how PAP-enabled gfx cards will handle HD audio on Hydra-equipped systems. Will Hydra be transparent to this?
  • mindless1 - Thursday, September 24, 2009 - link

    ... but geeze, yet another thing to add an additional $75 onto the price of motherboards? It's getting pretty crazy, the gulf in prices between what a high-spec system costs and the box your average gamer uses (which has what, a 3-year-old single GPU and an early C2D?).

    Personally I think more work on this front needs to be done in the drivers, not by dedicating another hunk of silicon and board real estate to the task, and $75 seems a bit on the high side even then. I suppose we're paying for a large chunk of R&D at the onset.
  • Hiravaxis - Thursday, September 24, 2009 - link

    My biggest concern is that the technology is somehow dependent on the ability of the monitor to accept dual input.

    Now, I've heard that SLI/CrossFire synchronization can occur over the PCIe bus. Is this a possibility with this technology? If so, does it have any drawbacks, i.e. reduced bandwidth?
    What about the new QPI bus, if the chip were put on an X58 motherboard?
  • haplo602 - Thursday, September 24, 2009 - link

    The only credible piece of information is that MSI got involved. I don't think they'd give in to a hoax. So the technology does something.

    However, I do not think linear scaling is possible, and there are some other limits I can think of.

    So basically, let's wait for the first detailed reviews before jumping to conclusions.
  • ilinithili - Thursday, September 24, 2009 - link

    I'd be very surprised if this doesn't do near enough what it says on the box. For a start, Intel was the main investor in this technology, and as mentioned previously, MSI has now got involved too. Given that Nvidia/AMD have their own (but inefficient) multi-GPU methods already, there'd be no place for this if it wasn't any better than SLI/CrossFire, so I really don't think they'd have bothered. 100% scaling may be a little optimistic, but around ~90% is probably more likely. Also, this tech was already demoed working last year (anyone remember the photos of the two Unreal III scenes split across 2 monitors?) and this is now an improved version, so I do have fairly high hopes that it's going to work.
  • LevelUser - Thursday, September 24, 2009 - link

    Wouldn't this add massive input lag?
  • Mugur - Thursday, September 24, 2009 - link

    There are so many questions regarding how this thing will work in various situations...

    I think we'll see. But the tests had better be thorough :-).
  • ValiumMm - Thursday, September 24, 2009 - link

    But don't ATI and Nvidia use different colours? Like, two greens aren't exactly the same colour; we see them as the same, but they're technically two different colours. How would this work with both? And does it mean that if an older card only supports DX9, a new DX11 card will only be doing DX9 to work with the older card?
  • bh192012 - Wednesday, September 23, 2009 - link

    $72 will almost buy you an ATI 4850.

    Using your old X850 with a 4850 = no DX10 and no Shader Model 3.0 (no Bioshock?). Plus the proportion of extra electricity vs. extra FPS = meh.

    Throwing in a pair of 5870's for perfect scaling might still be a no-go, as I'd think that because you're limited to x16 to the CPU, it's still effectively x8 by x8. Plus on an X58 mobo, you actually get x16 by x16. I'd hope the "high end model" would actually be x32 to the CPU. Maybe I'm wrong; benchies will be interesting.
  • hamunaptra - Wednesday, September 23, 2009 - link

    I may consider this when prices come down and if/when it becomes mainstream technology for multi-GPU! It sounds amazing so far! The only thing I don't want is the stuttering/microstuttering that SLI/CrossFire has!
  • araczynski - Wednesday, September 23, 2009 - link

    If this doesn't bring the AMD/Nvidia lawyers out, I don't know what will. Cuz you know, they have to protect the consumer and all that...
  • scooterlibby - Wednesday, September 23, 2009 - link

    Still confused as to why there are two monitors plugged in but only one shown. Does Hydra have some sort of proprietary bridge to connect the two cards?
  • MrRuckus - Wednesday, September 23, 2009 - link

    That's my question. If you use SLI or have ever browsed the SLI Zone forums, you'd notice SLI can be used without a bridge, but the downside is that it runs over the PCIe bus and can run slow. On cards like the 8600GT, they sometimes recommend running without a bridge and it runs fine, but on quicker cards, like a 9800GTX or the 200 series, a bridge is required because it's too much to push over the PCIe bus without a large performance hit.

    I'll be curious to see, when more information comes out, how it actually works. Also, using only 2 somewhat older games to demonstrate it is questionable. It seems like there are going to be A LOT of different variables and MANY different configurations that people could use. I can see this being hit and miss depending on what hardware you use and what they support.
  • IcePickFreak - Wednesday, September 23, 2009 - link

    It's at least something to be excited about, but of course I think everyone has their doubts. It's been quite a long time since anything big came across the PC scene.

    What gives me a good feeling about this is they haven't been hyping the hell out of it - a company of few words and a lot of action maybe?

    At any rate, whether this thing flops or not, at least for now it's nice to see something exciting in the immediate future. If it does indeed work as they claim it'll be a major milestone in PC gaming hardware.
  • werfu - Wednesday, September 23, 2009 - link

    If it scales linearly, then it would beat NVidia's and ATI's own multi-GPU solutions. This claim is IMHO way too ambitious. ATI and NVidia haven't been able to do so even without the constraints the Hydra has to work under.
  • JonnyDough - Wednesday, September 23, 2009 - link

    Would you not want to run it in 4x8 mode? I mean, I could take a few of my old cards and put them in ONE PC. I guess we'll finally find a use for these ridiculous PSUs. :)
  • JonnyDough - Wednesday, September 23, 2009 - link

    I love Lucy.

    Oops, Lucid. :)
  • Moopiz - Wednesday, September 23, 2009 - link

    So will this affect X2 cards in any way?
  • poohbear - Wednesday, September 23, 2009 - link

    Wow, I remember asking about this chip a few weeks back, as it completely fell off the radar after they announced it. If it works as advertised, it'll make SLI/Crossfire obsolete.
  • InternetGeek - Wednesday, September 23, 2009 - link

    I hope it works and that they plan to put it in laptops as well...
  • james jwb - Wednesday, September 23, 2009 - link

    IF this ends up working with almost optimal scaling, it'll be the most exciting thing to happen in gaming (and others) for a long, long time.

    I wonder how much CPU horsepower you need to really get something like optimal.
  • ninjakamster - Wednesday, September 23, 2009 - link

    I suppose we can assume that we can never upgrade our current motherboards with this exciting chip.

    Right now, it is only slated to be available on this particular MSI P55 board? What about the higher end X58 boards?

    It would be a shame to purchase a new board but this technology sounds so exciting.

    My brother upgraded from a Radeon X300 to a Geforce 7900GT to his current Radeon HD4850. It would be very nice if he could use his two older cards collecting dust in the closet to gain a few extra precious frames per second.

    I wonder how this will compare to CrossFire and SLI?
  • andrewaggb - Friday, September 25, 2009 - link

    Honestly I think everybody needs to settle down and wait. I see all sorts of reasons why this won't work the way everybody hopes.

    Memory sizes? DirectX feature sets? Image filtering, hue, etc.? I mean, if I mixed an ATI and an Nvidia card, how would my colors look? If I mixed a 5870 and a 2900, how would my AA or AF look?

    What are the overheads? I don't buy linear scaling, sorry people. Half of the work is being done on another card (in reality some of the work is going to be duplicated to keep the same textures and scene data on both cards), and then you have to copy it from one card to the other for rendering over the PCIe bus; that's latency, and parts of that operation could be blocking and definitely require synchronization. That means slower. I would guess pairing an X300 and a 4850 would probably be slower than just using the 4850.

    But we'll see. I hope I'm wrong.
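    The pairing worry above can be put into a toy model. The throughput figures (a 60 fps card paired with a 6 fps card) and the 3 ms per-frame synchronization cost are invented purely for illustration:

    ```python
    def paired_fps(fast_fps, slow_fps, slow_share, sync_ms):
        """Frame rate when the slow card renders `slow_share` of each frame and
        both cards must synchronize before the frame can be displayed."""
        fast_ms = (1.0 - slow_share) * 1000.0 / fast_fps
        slow_ms = slow_share * 1000.0 / slow_fps
        # the frame is done when the slower of the two finishes, plus sync cost
        return 1000.0 / (max(fast_ms, slow_ms) + sync_ms)

    # Even with an ideal split (work proportional to speed, 6/66 of the frame
    # to the slow card), a 3 ms sync cost drags the pair below the fast card
    # running alone at 60 fps.
    pair = paired_fps(60.0, 6.0, 6.0 / 66.0, 3.0)
    ```

    With zero sync cost the same split would give about 66 fps, so the whole question is whether the real overhead is closer to zero or to a few milliseconds.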
  • MamiyaOtaru - Wednesday, September 23, 2009 - link

    I'm picturing a 5870 plus a GeForce 8*00 for PhysX. Awesome.
  • Lakku - Wednesday, September 23, 2009 - link

    My thoughts exactly. I already have a GTX 260 216, and was thinking of upgrading to a P55 and Lynnfield anyway. I can get all that plus a 5870 for single-GPU DX11, or get a free performance boost using two different cards for DX10 titles, plus PhysX!
  • faxon - Wednesday, September 23, 2009 - link

    I would be interested in this myself. I just ordered an HD5870 to replace my 9800GTX, so PhysX support would be leet if it works, since it would fix that whole "PhysX not working if an ATI card is detected" thing. Disabling it when it's running in this type of mode would be Nvidia shooting themselves in the foot, especially if it scales better than their own technology, at no cost to them other than simply optimizing their drivers for it.
  • silverblue - Tuesday, September 29, 2009 - link

    Hehe I just read your post properly; I suppose it takes a few days to filter through from website to website.

    That, and I've not woken up yet :)
  • silverblue - Tuesday, September 29, 2009 - link

    Unfortunately, there's no chance of that...

    Although Toms didn't touch on it, I expect it's partly in response to Lucid.
  • ValiumMm - Tuesday, September 22, 2009 - link

    So if you have two 4870's, for example, would you leave them out of CrossFire in the ATI settings and just let the Hydra chip do its work, or would you still put them in CrossFire?
  • petteyg359 - Tuesday, September 22, 2009 - link

    [quote]There are three versions of the Hydra 200: the LT22114, the LT22102 and the LT22114[/quote]

    I count two plus an evil twin.
  • chizow - Tuesday, September 22, 2009 - link

    Anand, how were you able to verify Bioshock was running in mixed-GPU mode? I ask because of this bit from the article:

    If for any reason Lucid can't run a game in multi-GPU mode, it will always fall back to working on a single GPU without any interaction from the end user.

    It seems it would've been difficult to determine if you were running in single-GPU or mixed-mode without comparing to single-GPU performance for either Nvidia part. Not to mention Bioshock does run quite well on any single GT200 or RV670 part. Just seems VERY misleading to claim you saw Bioshock running in mixed-mode without expanding on how you came to that conclusion.


    Lucid claims to be able to accelerate all DX9 and DX10 games, although things like AA become easier in DX10 since all hardware should resolve the same way.

    Beyond vendor specific custom AA modes, they also handle texture filtering differently. Big question marks here imo.

    Which brings us to price... a $72 premium for what is already provided for free, or for a very small premium, is a lot to ask. My main concern besides compatibility would of course be latency and input lag. I'd love to see the comparisons there, especially given many LCDs already suffer 1-2 frames of input lag.
  • AnnonymousCoward - Friday, September 25, 2009 - link

    All good points, chizow. Lag is my main concern, and if this adds 1 frame I won't even consider it.
  • LTG - Tuesday, September 22, 2009 - link

    You said Anand was "VERY misleading to claim you saw Bioshock running in mixed-mode".

    For all the losers who think they detect moments of misleading journalism on Anand's part here's a clue:

    If you ever did actually find "VERY misleading" journalism here, then many other people would echo your sentiment. The fact that no one else is agreeing with your charges most likely means you are wrong.

    How old are you?
  • youjinbou - Thursday, September 24, 2009 - link

    What an ugly and overused troll.
    Someone detected an issue, but since he's the only one, this has to be an error on his part.
  • chizow - Tuesday, September 22, 2009 - link

    No, it just means they were mislead to believe the configuration was properly running in mixed-GPU mode, which is my point.

    I'm not saying Anand was purposefully misleading, its quite possible he was also mislead to believe multi-GPU was functioning properly when there's really no way he could've known otherwise without doing some validation of his own.

    Now grow up and stop worrying about MY age. Heh.

  • glennpratt - Tuesday, September 22, 2009 - link

    This isn't a review, it's a preview of unreleased hardware. At some level, Anand can accept their word for it. If they are lying, they'll be found out soon enough.
  • chizow - Tuesday, September 22, 2009 - link

    I never claimed it was a review or anything comprehensive, but if a product is highly anticipated for a few features, say:

    1) Vendor Agnostic Multi-GPU


    2) Close to 100% scaling

    And the "preview" directly implies they've observed one of those major selling points functioning properly without actually verifying that's the case, that'd be misleading, imo, especially given the myriad questions regarding the differences in vendor render outputs.

    But getting back to the earlier fella's question, I guess I'm old enough to engage in critical thinking and know better than to take everything I read on the internet at face value, even on a reputable site like Anandtech. As people who seem genuinely interested in the technology I'd think you'd want these questions answered as well, or am I wrong again? ;)
  • petteyg359 - Tuesday, September 22, 2009 - link

    Both of you grow up and learn to use the proper tense of misled, please.
  • Dante80 - Tuesday, September 22, 2009 - link

    ...until the fat lady sang...XD

    I'd written this off, thinking that it was nothing more than smoke and mirrors... from the looks of it, I'm wrong... and also glad about it...^^

    A couple of questions:

    1> Will we be able to see this on AMD systems in the future, or is it an Intel exclusive?
    2> Regarding the optional monitors in the pics, would this work with Eyefinity? (I assume it does)
    3> If this catches on (and it will if it delivers linear graphics scaling without latency), what would this mean for driver development at Nvidia and ATI? I know Hydra is supposed to be driver agnostic, but

    a> It would render most sli-Xfire work as an exercise in futility. (?)
    b> Drivers could be optimized to take advantage of it. (?)

  • F3R4L - Thursday, September 24, 2009 - link

    Well, seeing how Intel is an investor in the company I would take a gamble and say it will be Intel only...
  • Mumrik - Tuesday, September 22, 2009 - link

    Oh my!
  • prophet001 - Tuesday, September 22, 2009 - link

    quit playin
  • yacoub - Tuesday, September 22, 2009 - link

    Interesting, but I wonder how much latency it adds to the process, and how that impacts FPS games where low input latency is crucial.
  • andrihb - Tuesday, September 29, 2009 - link

    This is my biggest worry. Latency is often a problem for me as it is.
  • Ben90 - Tuesday, September 22, 2009 - link

    I'm sure the latency isn't going to be that big of a problem... remember all the hype about 1156 having an integrated PCIe controller and the massive frame rates from how the latency was going to be so low?

    How about 1366 vs 1156: 1366 has an IMC while 1156 doesn't, yet memory performance is basically the same if both are run in dual channel (yeah, 1366 is slightly faster in dual channel, but it's like less than 1%).
  • adam92682 - Wednesday, September 23, 2009 - link

    1156 has an IMC.
  • sc3252 - Tuesday, September 22, 2009 - link

    I don't remember any "massive" hype, but of course I don't buy much into hype until I see something working. I really don't think this will turn into much. I bet a few people are going to buy it like those Killer NICs, and maybe there will be a slight speed increase over just using your one Nvidia 8800GT compared to running an 8800GT and a 3870, but I think there won't be much benefit in Lucid Hydra over SLI or CrossFire.

    Of course, all this speculation can end in 30 days, right? Though of course if it sucks they will say there is a driver issue that needs to be sorted out...
  • scarywoody - Tuesday, September 22, 2009 - link

    Errr, so you can run 2x16 with Lucid and a socket 1156 mobo? If that's the case, it's an interesting move and would explain a bit about why you can only run 2x8 or 1x16 with current P55 boards.
  • Denithor - Wednesday, September 23, 2009 - link

    I was wondering the exact same thing: how do you get support for 32 lanes on a board where there are only 16 lanes total hardwired into the CPU?
  • Triple Omega - Wednesday, September 23, 2009 - link

    You don't. Just like the Nvidia nForce splitter, it is limited to the bandwidth coming from the CPU (x16, or x8 with these) and only creates extra overhead "creating" more lanes.

    The only difference is that the Nvidia solution is completely useless, as all it does is create more overhead, while this isn't, as it has more than just a splitter function.
  • GeorgeH - Wednesday, September 23, 2009 - link

    Because this isn't SLI or Crossfire. :)

    At a very simple level, this is essentially a DIY video card that you plug your own Frankenstein GPU combos into. For example, instead of the "old way" of slapping two 4890's together in Crossfire to have them render alternate frames (which means you "need" an x16 connection for each card), here you plug two 4850s and a 4770 into the Hydra to get one 5870 (minus the DX11) that only requires a single x16 connection.

    Now we just need to find out if it works or not.
  • sprockkets - Tuesday, September 22, 2009 - link

    Both cards have a monitor cable attached to them, but you only showed one monitor. Was it a dual monitor setup?
  • yakuza78 - Tuesday, September 22, 2009 - link

    Taking a look at the pictures, you can see both video cables (one VGA and one DVI) go to the monitor; it's just the easiest way to enable both cards: connect them to a multi-input monitor.
  • jimhsu - Tuesday, September 22, 2009 - link

    Maybe I don't know that much about parallelization, but isn't compartmentalizing complicated scenes a very difficult problem?

    For example, most modern games have surfaces that are at least partially reflective (mirrors, metal, water, etc.). Would that not mean that a reflecting surface and the object it's reflecting need to be rendered on the same GPU? Say you have many such surfaces (a large chrome object). Isn't it a computationally hard problem to decide which surfaces will be visible to each other surface, so as to effectively split up that workload between GPUs of different performance characteristics without "losing anything", every 1/FPS of a second?

    How do they do this exactly?
  • emboss - Saturday, September 26, 2009 - link

    This is pretty much the problem, yes. Modern graphics engines do a *lot* of render-to-texture stuff, which is the crux of the problem. If one operation writes to the texture on one GPU, and then the other operation writes to the texture on the other GPU, there's a delay while the texture is transferred between GPUs. Minimizing these transfers is the big problem, and it's basically impossible to do so since there's no communication between the game and the driver as to how the texture is going to be used in the future.

    SLI/CrossFire profiles deal with this by having the developers at NV/ATI sit down and analyse the operations the game is doing. They then write up some rules from these results, specific to that game, on how to distribute the textures and operations.

    Lucid are going to run into the same problem. Maybe their heuristics for dividing up the work will be better than NV/ATI's, maybe they won't. But the *real* solution is to fix the graphics APIs to allow developers to develop for multiple GPUs in the same way that they develop for multiple CPUs.
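    The transfer-minimization problem described above can be sketched with a toy cost counter. The operation names, the render chain, and the placements are invented for illustration; this has nothing to do with any real driver's heuristics:

    ```python
    def count_transfers(ops, placement):
        """Count cross-GPU texture copies forced by a placement.
        ops: list of (texture_written, [textures_read]);
        placement: one GPU id per op."""
        location = {}                    # texture -> GPU where it currently lives
        transfers = 0
        for (out_tex, in_texs), gpu in zip(ops, placement):
            for t in in_texs:
                if t in location and location[t] != gpu:
                    transfers += 1       # texture must cross the PCIe bus
                    location[t] = gpu    # now resident on the reading GPU
            location[out_tex] = gpu      # output lives where it was rendered
        return ops and transfers

    # Render-to-texture chain: shadow map -> scene -> post-process.
    ops = [("shadow", []), ("scene", ["shadow"]), ("final", ["scene"])]
    alternate = count_transfers(ops, [0, 1, 0])  # naive alternation: 2 copies
    chained   = count_transfers(ops, [0, 0, 0])  # keep the chain together: 0
    ```

    Naively alternating GPUs forces a copy at every step of the chain, while keeping the dependent chain on one GPU forces none (but then that GPU does all the work), which is exactly the trade-off the profiles encode.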
  • andrihb - Tuesday, September 29, 2009 - link

    Wait... how does it work now? Do they have to develop a different rendering engine for each GPU or GPU family? I thought APIs like DX actually took care of that shit and standardized everything :S
  • emboss - Tuesday, September 29, 2009 - link

    DirectX and OpenGL provide a standard *software* interface. The actual hardware has a completely different (and more general) structure. The drivers take the software commands and figure out what the hardware needs to do to draw the triangle or whatever. The "problem" is that DirectX and OpenGL are too general, and the driver has to allow for all sorts of possibilities that will probably never occur.

    So, there's a "general" path in the drivers. This is sort of a reference implementation that follows the API specification as closely as possible. Obviously there's one of these per family. This code isn't especially quick because of having to take into account all the possibilities.

    Now, if a game is important, NV and ATI driver developers will either analyze the calls the game makes, or sit down and talk directly with the developers. From this, they will program up a whole lot of game-specific optimizations. Usually it'll be at the family level, but it's not unheard of for specific models to be targeted. Sometimes these optimizations are safe for general use and speed up everything. These migrate back into the general path.

    Much more often, these optimizations violate the API spec in some way, but don't have any effect on this specific game. For example, the API spec might require that a function does a number of things, but in the game only a portion of this functionality is required. So, a game-specific implementation of this function is made that only does the bare minimum required. Since this can break other software that might rely on the removed functionality, these are put into a game-specific layer that sits on top of the general layer and is only activated when the game is detected.

    This is partially why drivers are so huge nowadays. There's not just one driver, but a reference implementation plus dozens or even hundreds of game- and GPU-specific layers.

    So from the game developer's point of view, yes, DirectX and OpenGL hide all the ugly details. But in reality, all it does is shift the work from the game developers to the driver developers.
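    The layering described above can be caricatured in a few lines: a general, spec-conformant path plus a per-game override layer activated when the game is detected. The function names and the "game" are invented; real drivers are of course nothing like this simple:

    ```python
    def general_clear(surface, flags):
        """Reference path: honors everything the (imaginary) API spec allows."""
        surface["color"] = 0
        if "depth" in flags:
            surface["depth"] = 1.0
        if "stencil" in flags:
            surface["stencil"] = 0
        return surface

    def fastgame_clear(surface, flags):
        """Game-specific shortcut: this title never uses stencil, so skip it
        (an API-spec violation that happens to be harmless for this one game)."""
        surface["color"] = 0
        if "depth" in flags:
            surface["depth"] = 1.0
        return surface

    GENERAL_PATH = {"clear": general_clear}
    GAME_LAYERS = {"FastGame.exe": {"clear": fastgame_clear}}  # on detection

    def driver_call(game, entry_point, *args):
        """Prefer the game-specific layer when one exists for this game."""
        fn = GAME_LAYERS.get(game, {}).get(entry_point, GENERAL_PATH[entry_point])
        return fn(*args)

    # Same API call, different code path depending on the detected game.
    cleared = driver_call("FastGame.exe", "clear", {}, {"depth", "stencil"})
    ```

    Multiply that pattern across hundreds of entry points and titles and you get the driver bloat the comment describes.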
  • Akrovah - Thursday, September 24, 2009 - link

    Generally speaking (off the top of my head; I really have no idea if this is true or not), I think this is accomplished by having all the data needed to render a scene (geometry, textures, shaders, etc.) on both cards, but each card is only given half of the scene to actually render.
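    That split-frame idea can be sketched like this: both cards hold the full scene, but each renders only a band of scanlines sized to its relative speed. The 1080-line screen and the 2:1 speed ratio are arbitrary example values:

    ```python
    def split_scanlines(height, speeds):
        """Divide `height` scanlines among GPUs in proportion to their speed.
        Returns a (start, end) band per GPU."""
        total = sum(speeds)
        bands, start = [], 0
        for i, s in enumerate(speeds):
            # last GPU takes whatever remains so rounding never loses a line
            end = height if i == len(speeds) - 1 else start + round(height * s / total)
            bands.append((start, end))
            start = end
        return bands

    bands = split_scanlines(1080, [2.0, 1.0])  # fast card gets the bigger band
    ```

    A 2x-faster card gets the top two thirds of the screen; in principle both cards then finish their bands at about the same time.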
  • snarfbot - Tuesday, September 22, 2009 - link

    I can't believe they are actually coming to market. Everyone was talking about how this was too good to be true, and here it is.

    I can't wait for the benchmarks. If this is as good as they say it is, I'm buying.

    It's time for an upgrade anyway.
  • formulav8 - Tuesday, September 22, 2009 - link

    Even aside from the performance aspect, for which they apparently didn't give any real-world figures to ponder (I may have overlooked it though), a great many people will not buy it at such a high premium. Over $70 for the higher-lane part is nutty. I will be perfectly content with AMD's and Nvidia's approach, thank you....

  • JonnyDough - Wednesday, September 23, 2009 - link

    Fool.
  • wumpus - Wednesday, September 23, 2009 - link

    I'm having trouble seeing what is so great about this. It looks like paying $75 to $150 just to cut your graphics bandwidth in half. If you throw it on a P55 motherboard, you aren't losing anything versus SLI mode (only one x16 lane anyway), but why pay extra money for less bandwidth?

    I will say this: my hat is off to anyone who buys this and succeeds in making at least one fanboy's head explode (ATI and NVidia in the same board???!!!EleventyOne!!!????).
  • tamalero - Thursday, September 24, 2009 - link

    Who cares about lanes? If they accomplish what they promised of almost linear scaling, that's a hell of a lot more than the 60% average performance increase from single to dual with both Nvidia and ATI.
  • formulav8 - Wednesday, September 23, 2009 - link

    Thanks!
  • DigitalFreak - Tuesday, September 22, 2009 - link

    $72? That's ridiculous. Even an entire chipset rarely costs that much.
  • inighthawki - Tuesday, September 22, 2009 - link

    True, but keep in mind that, assuming it works, it is quite an amazing chip and does far more than a lot of other parts can. Plus, demand for the chip is going to be fairly low: unlike a chipset, which is needed on EVERY computer, this will only be featured in higher-end motherboards. Basic economics...
  • Chlorus - Tuesday, September 22, 2009 - link

    I don't think their target market really cares about price - after all, running four cards in SLI tends to be pretty expensive.
  • Chlorus - Tuesday, September 22, 2009 - link

    I had written this off as vaporware, seeing as how Intel seemed to completely forget about it after last year.
  • prophet001 - Tuesday, September 22, 2009 - link

    wow... just wow

    and :)
  • JAG87 - Tuesday, September 22, 2009 - link

    If it works, this will be the holy grail of PC gaming.
  • gigahertz20 - Wednesday, September 23, 2009 - link

    It's like momma always said: if it sounds too good to be true, it probably is.

    But if it works as advertised, Hydra will be going on all kinds of motherboards. Who would ever want to do SLI or CrossFire when you can buy a motherboard with this chip and get linear performance scaling with each additional video card you add?

    I hope the hype lives up to the expectations, but I'm prepared for disappointment.
  • GeorgeH - Tuesday, September 22, 2009 - link


    That noise you hear is the sound of countless wallets slamming shut for the next 30 days. You'd have to be a fool to buy or build a new gaming PC until we find out how well this actually works (and exactly how much it will cost).
  • ianken - Wednesday, September 23, 2009 - link

    Not mine.

    We know that not all GPUs render the same way. AMD AA looks different than NVidia AA. They have different modes and make different trade-offs for performance.

    I suspect the best results will still come from matched GPUs. The win is (possibly) the ability to get better load balancing than you get with SLI or Crossfire. Some games scale poorly beyond two GPUs; will this fix that? If so, then THAT is the win.

    What I do foresee is a giant pile of app-compat issues. Tons of forum posts that read "Hey, how do you get X working with GPUs FOO and BAR? Anyone? BUMP."
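    [That load-balancing speculation can be pictured with a toy feedback loop. This is entirely my own illustration, not how Hydra actually works: measure each GPU's frame time, then shift work toward whichever card finished first.]

    ```python
    # Toy dynamic load balancer for two mismatched GPUs: after each
    # frame, nudge the work split toward the GPU that finished sooner.

    def rebalance(share, t_a, t_b, step=0.02):
        """share = fraction of the frame given to GPU A (0..1).
        t_a, t_b = measured render times for the last frame."""
        if t_a > t_b:      # GPU A was the bottleneck: give it less work
            share -= step
        elif t_b > t_a:    # GPU B was the bottleneck: give A more work
            share += step
        return min(max(share, 0.1), 0.9)  # keep both GPUs busy

    # Simulate GPU A being twice as fast as GPU B (time = work / speed):
    share = 0.5
    for _ in range(30):
        share = rebalance(share, share / 2.0, (1 - share) / 1.0)
    print(round(share, 2))  # settles near 2/3, matching the speed ratio
    ```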
  • MadMan007 - Wednesday, September 23, 2009 - link

    Countless is way overstating it, but I'm sure some people will wait for it. Ultimately it's still a multi-GPU configuration, which is a small niche.
  • Ratinator - Wednesday, September 23, 2009 - link

    This technology would convert me to multi-GPU. In the past I would just buy one card, and by the time I needed another I would just upgrade to the latest, which was usually significantly faster. I didn't want the hassle of having to get the exact same brand and model. Now I could just buy the latest and greatest and put that one in as well.
  • YGDRASSIL - Wednesday, September 23, 2009 - link

    So you get a new card which is three times as fast as your old one. You keep the old juice-sucking bastard in to get maybe a 20% speedup.
  • Ratinator - Friday, September 25, 2009 - link

    PS: The old juice-sucking bastard??? Have you seen the new ATI card and how much juice it sucks? Just because it is old doesn't mean it sucks more juice.
  • Ratinator - Friday, September 25, 2009 - link

    Last time I checked, speeds didn't triple every year, which is about how often I would get a new card.
  • donjuancarlos - Wednesday, September 23, 2009 - link

    I don't know about that. I have an 8800GT that I am planning to upgrade to a DX11 card when they come out. I would have never considered an SLI setup using an older card, but with this, I would.
  • faxon - Wednesday, September 23, 2009 - link

    lol, not mine. I just ordered an HD 5870 1GB. With that said, if it isn't up to par, I won't have any issues just getting a GT300 as well when it comes out and throwing it on there if it works properly, lmao. I was originally planning on getting a second card in a month anyway, but that perfectly coincides with the expected launch of the Big Bang, so I will know whether to spend my cash there instead or not, since I was going to get an i7 afterward anyway to replace my EP45-UD3P.
  • faxon - Wednesday, September 23, 2009 - link

    Ooh, also: how are single-card multi-GPU solutions handled? They run SLI/CF on the card, so wouldn't that pose some interesting issues? Or could ATI/NV just use this instead of whatever silicon they are using now to bridge the cards? The low-end model would be well suited to what they need to do for a fair bit cheaper, and it would also solve the scaling issues some of these cards can have in some games, lol.
