We continue to hear new details about G72 and G73 here in Taiwan, and the latest batch of info from our vendors is that G72 and G73 will be pin compatible with NV40 and NV43. In other words, your next NVIDIA video card might use the same PCB as the 6600GT, but with a different GPU.

This means lower cost to the manufacturer - there is no need for new R&D or new board designs. It also means G72 and G73 can launch very quickly once the decision comes down from NVIDIA, as vendors can easily switch production from the older chips to the new ones. Two vendors confirmed with us that they are already retooling their PCBs for a six-pin 12V power connector in anticipation that G72 and G73 in SLI might need the additional power, but even NVIDIA won't comment to the manufacturers at this point.

A lot seems to hinge on ATI's future choices with R580, X1600 and X1300. As of now, the launch date for X1600 is still late November, and NVIDIA isn't exactly hurting for new value and midrange SKUs given the success of the 6600GT. The X800GTO and X800GTO2 really give the 6600GT a run for its money, but we digress.

NVIDIA's Secret Flip Chip GPU

Manufacturers seem to think G72 and G73 will be an easy tool-over from NV40/43, but another vendor claims NVIDIA has bigger plans: flip chip GPU sockets for motherboards. Apparently, NVIDIA engineering teams already have several prototypes in which the GPU, rather than the CPU, is the focal point of the motherboard, which carries two sockets: one for the GPU and another for the CPU. Whether such a machine will ever see the light of day is difficult to say right now. However, the idea of pin compatible GPUs already suggests that we are halfway to buying GPUs the same way we buy CPUs: as flip chips. We have plenty of questions, like how the memory interface will work and how that will affect performance, but GPU sockets are likely less a question of "if" than "when".

38 Comments

  • DigitalFreak - Wednesday, October 19, 2005 - link

    I would rather see something like this in a laptop.
  • KristopherKubicki - Wednesday, October 19, 2005 - link

    quote: "I would rather see something like this in a laptop."

    It's called MXM. LOL :)

    Kristopher
  • A554SS1N - Wednesday, October 19, 2005 - link

    I quite like the idea of it myself - it could give people a chance to use larger, CPU-sized coolers. Imagine also how this could help the graphics companies increase GPU frequencies by lowering heat... OK, it might not work out like that.

    For a GPU socket, there would have to be video RAM sockets, where fast video RAM could be swapped in. This could spell more choice for the consumer (or simply make things even more confusing), as a user could potentially choose individual memory and GPU upgrades - in the long run it would mean not having to upgrade everything at once. For example, if you had fast enough video memory installed but could only afford the GPU, you could stick in whatever suits your needs. Basically, more customisation. The biggest downsides are potential cost (not if you intended to buy a new system anyway) and, importantly, as mentioned here already, space on the motherboard PCB - although some of the space required by the GPU socket is gained back by removing the standard graphics slot.

    One potential problem with having different video memory and GPU combinations could likely be in the form of more complex drivers required, and/or more potential for driver error.

    The more I think about the GPU socket, the more it seems like a good idea in theory that may just turn out to be another headache in practice.
  • Visual - Wednesday, October 19, 2005 - link

    this goes against the whole idea of having a separate graphics card...
    but in a way, it might be the right thing to do - do we still need separate cards?

    originally the benefit of separate graphics cards was the significantly faster memory access - system memory just wasn't fast enough, and having the graphics chip with a small amount of ridiculously expensive and fast ram gave quite a boost. look at things now... we get technologies like TurboCache and whatnot that use system memory instead of dedicated memory, and they actually turn out faster than certain budget solutions used on video cards. sure, the high-end still has 1200mhz ram or a 512bit membus, making it still quite a bit faster than our system ram, but that can't last long - dual-channel (maybe even quad-channel) ddr2 800mhz, or one of rambus's quad-pumped alternatives, might soon make our normal ram faster than common videoram. let's face it - ram tech isn't growing quickly now only because there is almost no demand for it. ddr3 prototypes are already working at 1.3ghz and up, but who needs that when the fastest FSB from intel is 1066mhz? we've already offloaded the things that would benefit from fast ram off the cpu, so we're not getting much development in this area :( for a similar reason we're being offered a physics card now, instead of better general-purpose cpus :(
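    The bandwidth gap described above can be put in rough numbers. A quick sketch of peak theoretical memory bandwidth (the bus widths and transfer rates below are illustrative 2005-era assumptions, not figures confirmed anywhere in this thread):

```python
# Peak theoretical memory bandwidth: (bus width in bytes) x (transfers per second).
# Part choices and clocks are illustrative assumptions for 2005-era hardware.

def bandwidth_gb_s(bus_width_bits: int, effective_mt_s: float) -> float:
    """Peak bandwidth in GB/s for a bus of the given width and transfer rate."""
    return (bus_width_bits / 8) * effective_mt_s * 1e6 / 1e9

# Dual-channel DDR2-800 system memory: two 64-bit channels at 800 MT/s.
system_ram = bandwidth_gb_s(2 * 64, 800)    # 12.8 GB/s

# High-end 2005 card: 256-bit bus with GDDR3 at an effective 1200 MT/s.
video_ram = bandwidth_gb_s(256, 1200)       # 38.4 GB/s

print(f"dual-channel DDR2-800: {system_ram:.1f} GB/s")
print(f"256-bit GDDR3 @ 1200 MT/s: {video_ram:.1f} GB/s")
```

    Even with generous system-memory assumptions, a wide dedicated bus keeps a multi-x lead - which is the gap TurboCache-style approaches have to live with.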

    the graphics processor itself is getting more and more general-purpose functions. it only makes sense, then, to stop looking at this processor as something graphics-specific at some point. it'll just be something like a co-processor to help with intensive, highly parallel computations alongside the main cpu - i.e. it may be used for graphics and physics. at some point or other we'll have to be able to upgrade it without the memory (it'll be using our superfast system memory, after all), so it has to get socketed instead of sitting on a slot with memory.

    a whole different question is whether we need this co-processor to be physically separate from the main cpu (maybe for upgrade flexibility or whatever), or whether we'll get it integrated into the cpu - like amd's plans have indicated, something like the cell architecture on steroids.
  • Calin - Wednesday, October 19, 2005 - link

    Leading-class GPUs are "bigger" than CPUs whether you consider thermal power, die area or transistor count. If you chose to put them together, you would get some monstrous result.
    The leading edge will stay with separate GPU and CPU, and on the low end (trailing edge :D) the GPU is in the chipset. That works better for Intel architectures (and it is easier to mix and match), but on AMD architectures graphics stays closer to memory if it is in the processor.
    Would there be different processors based on the GPU capabilities included? I'm not sure about that.
  • Saist - Wednesday, October 19, 2005 - link

    I think ATi's new RingBus could be the solution to the memory question. Since it's already compatible with everything from DDR to GDDR4, the RingBus should allow socketed GPUs to make it to market.

    Something I actually started wondering about with the HyperMemory and TurboCache boards, and something I recall ATi talking about during an E3 demo of Radeon Xpress, is the capability in integrated video mode to have one channel of the memory controller dedicated to graphics and the other to the processor.

    Something else I've also wondered about is whether we would ever see integrated boards with an open memory slot dedicated specifically to graphics.
  • Beh - Wednesday, October 19, 2005 - link

    although it sounds like it may be more of a headache than it's worth. motherboards seem fairly packed already these days; i can't imagine integrating something like a 7800gtx in with it. plus making it upgradable with the sockets and memory slots would take more room, and it would be a heck of a thing to keep all that cooled down. it would also drastically cut down on the number of expansion slots - for one, you wouldn't need as many, and besides, you simply wouldn't have the room! and what about upgrade paths? i already have to worry about the upcoming socket M2 and the changeover to ddr2; this would just add to my hairloss if i also had to take into account an nvidia G70 socket or ati's R520 socket or gddr3/4 slots. factor in SLI/crossfire and we're talking mobos the size of a pizza box. i don't know if i can take it...
  • Griswold - Wednesday, October 19, 2005 - link

    Agreed. Such a move will cause mucho trouble for the customer, but nvidia doesn't care about that. It smells like they're in bed with some mobo makers, due to their chipset ties...

    "You want to use our new G5000? Sure you can, but you'll have to use a board with GDDR10 RAM on it, otherwise the GPU will be memory bandwidth limited... what? Your mobo only has GDDR8? Well, maybe you should buy our latest mainboard with GDDR10 for only $500!"

    I don't think we'll be swapping out video RAM like we do normal RAM now. If you want today's video RAM performance, you can't get it with some sort of DIMM socket.

    I, for one, don't like that idea.
  • bob661 - Wednesday, October 19, 2005 - link

    No more trouble than changing memory and CPUs. I don't see the problem. You can use the space previously occupied by the x16 slot or slots. One really only needs about three x1/x4 slots in a machine anyway (I would only need one or two max). Workstation users can buy a workstation motherboard (a larger board) if they need more slots.
