Final Words

Isn't NVIDIA a Hardware Company?

So, rather than being about the PhysX hardware, this acquisition is all about the software. NVIDIA is relying on CUDA as the vehicle for implementing AGEIA's PhysX software. This doesn't require specialized hardware beyond a graphics card: we aren't going to see a PPU bolted onto the next-generation GPU, and we aren't even going to see much of AGEIA's hardware IP inside NVIDIA's GPUs.

Tony Tamasi stated that, while NVIDIA may use bits and pieces of PhysX hardware technology, this is not the goal or focus of the acquisition. The idea is that, as DirectX and Shader Models evolve and as graphics becomes a problem that requires better handling of dependent code, GPUs will inherently become better at physics calculations. With CUDA at the center and shader hardware continuing to become more capable, eventually everyone will have hardware able to handle a game built around complex physical interactions. This puts NVIDIA's GPU Computing agenda in a position to directly benefit the average gamer.
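To make the idea of the GPU as a massively parallel physics engine concrete, the sketch below shows roughly what a data-parallel CUDA physics step looks like: one thread per particle, a trivial Euler integration, and a host-side launch. It is a minimal illustration of the kind of work CUDA exposes, not AGEIA's or NVIDIA's actual PhysX code, and every name in it is invented for the example.

```cuda
// Minimal sketch of data-parallel physics on CUDA: one thread updates one
// particle. Illustrative only; not actual PhysX code.
#include <cuda_runtime.h>

struct Particle {
    float3 pos;
    float3 vel;
};

__global__ void integrate(Particle* particles, int count,
                          float3 gravity, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    Particle p = particles[i];

    // Explicit Euler step: accelerate, then advect.
    p.vel.x += gravity.x * dt;
    p.vel.y += gravity.y * dt;
    p.vel.z += gravity.z * dt;
    p.pos.x += p.vel.x * dt;
    p.pos.y += p.vel.y * dt;
    p.pos.z += p.vel.z * dt;

    particles[i] = p;
}

// Host-side launch: d_particles already lives in GPU memory.
void stepSimulation(Particle* d_particles, int count, float dt)
{
    const float3 gravity = make_float3(0.0f, -9.81f, 0.0f);
    const int threads = 256;
    const int blocks  = (count + threads - 1) / threads;
    integrate<<<blocks, threads>>>(d_particles, count, gravity, dt);
}
```

The point of the sketch is that nothing here needs a dedicated PPU: it is ordinary shader-class floating point work, which is why GPUs are expected to absorb it.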

But What of the PPU?

By continuing in the direction they are already moving, NVIDIA will eliminate the need for a dedicated physics processor, especially if AMD can get on board and adopt PhysX. As graphics cards evolve naturally, they will become better physics processors. In the meantime, the PPU isn't going to completely vanish; after all, NVIDIA can't port PhysX to their architecture overnight. Our understanding is that current commitments will be met, but beyond that the future is to treat the GPU as a general purpose, massively parallel floating point engine that can be used to process physics.

Unfortunately for AGEIA, CellFactor didn't turn out to be the GLQuake of the physics world. On the flip side, struggling to sell an overpriced product that didn't offer users a huge tangible incentive isn't a good business model either. This move is a logical one for AGEIA, as it keeps PhysX relevant even if the standalone hardware doesn't have much of a future. It also benefits NVIDIA, because it gives them the opportunity to compete with Intel on physics. Here's hoping AMD joins forces with NVIDIA on this one.

Comments

  • Soubriquet - Monday, February 18, 2008 - link

    Ageia was always a dog without a home. They were simply looking for an exit strategy and they must have made nVidia a tempting offer.

    IMHO nVidia didn't need Ageia, but thought it might come in handy for reasons mentioned hereabouts. Not nearly as expensive as some speculative acquisitions which spring to mind (AMD+ATI) but not likely to be particularly revolutionary either, IMHO.

    Physics processing for the mainstream has to be API led. So we can expect that to come from MS, so Intel, AMD and nVidia will all need to get in touch with them and develop their own hardware to suit.

    I get the sense that virtualisation may assist here, since physics on a GPU is not like physics on a CPU unless you have that layer in between software and hardware that can make the distinction insignificant to the software (see the sketch after this comment). In which case AMD have a decision to make: do they add their own version to the GPU (& compete with nVidia), the CPU (and compete with Intel), or both?

    In any case virtualisation is just jargon for the time being and we are heading down the multicore CPU route, so the CPU seems the obvious place for physics. But nVidia don't make CPUs, and I wonder if not a little of their motivation in getting Ageia was to prevent anyone else getting it. A dog in the manger, as it were!
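A rough illustration of the abstraction-layer idea in the comment above: a hypothetical, backend-neutral physics interface that lets game code request a simulation step without knowing whether a CPU, GPU, or PPU does the work. The sketch is host-side C++ (compilable with nvcc), and all of the class and function names are invented for the example.

```cuda
// Hypothetical backend-neutral physics layer; names invented for illustration.
#include <memory>
#include <vector>

struct RigidBodyState {
    float pos[3];
    float vel[3];
};

// The "layer in between": callers never see which device runs the simulation.
class PhysicsBackend {
public:
    virtual ~PhysicsBackend() = default;
    virtual void step(std::vector<RigidBodyState>& bodies, float dt) = 0;
};

class CpuBackend : public PhysicsBackend {
public:
    void step(std::vector<RigidBodyState>& bodies, float dt) override {
        for (auto& b : bodies) {                 // serial reference path
            b.vel[1] += -9.81f * dt;             // gravity
            for (int k = 0; k < 3; ++k) b.pos[k] += b.vel[k] * dt;
        }
    }
};

class GpuBackend : public PhysicsBackend {
public:
    void step(std::vector<RigidBodyState>& bodies, float dt) override {
        // A real backend would copy the bodies to device memory and launch
        // a CUDA kernel; falling back to the CPU path keeps the sketch short.
        CpuBackend fallback;
        fallback.step(bodies, dt);
    }
};

// The game chooses a backend once; everything above it stays device-agnostic.
std::unique_ptr<PhysicsBackend> makeBackend(bool haveCudaDevice) {
    if (haveCudaDevice) return std::make_unique<GpuBackend>();
    return std::make_unique<CpuBackend>();
}
```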
  • goku - Saturday, February 16, 2008 - link

    Thanks for destroying the best thing that could've happened to gaming; now I'll be waiting 10 years for a feature to be added to a game when I could've had that feature in 2 years had there been dedicated physics cards.

    I don't want to have to buy an nvidia GPU just to get add on physics. At least with the PPU card, it didn't matter what video card I had, and if I don't want or need to improve the visual effects of the game I'm playing but would like more interactivity, all I have to do is buy a new PPU and not a whole new video card.
  • perzy - Thursday, February 14, 2008 - link

    Everyone knows that the CPU is dead; it's not developing because of the frequency/heat wall.
    So the foreseeable future is discrete processors: GPU, FPU and maybe SPU, or whatever x-PU you can imagine or come up with.
    The GPU makers want the FPU market too, for sure, but they want to do it on their own product, in the channel they know and trust.
    This is like when GM and Ford bought the bus companies in the USA and closed them down.
    Kill the competition, and cheaply at that... a no-brainer.

    I'm just waiting for Intel to release their high-performance GPUs and, later, FPUs.
    They are in desperate need to branch out.
  • mlambert890 - Thursday, February 14, 2008 - link

    Everyone knows that the CPU is dead? Really? Have you sent a note off to Intel and AMD? I don't think they got the memo.

    GPUs, FPUs and PPUs are all processing units. Any efficiency implemented in those parts can be implemented in a CPU. Any physical challenges in terms of die size, heat, and signaling noise faced by the CPU are also faced by those other transistor-based parts. What are you getting on about?

    All of these parts are a collection of a ton of transistors arranged on a die and coupled with some defined microcode. How they are arranged is a shell game where various sides basically bet on the most commercially viable packaging for any given market segment.

    There is no such thing as "dead" or "alive". With transistor-based electronic ICs, there are simply various ways of solving various problems and an ever-moving landscape of targets based on what end users want to do. Semiconductor firms can adapt pretty easily and the semantics don't matter at all.

    I remember similarly ridiculous comments with the advent of digital media, when people were saying the "CPU is dead" and the future would be "all DSPs". Back then I was equally amazed at just how far some can be from "getting it".
  • forsunny - Wednesday, February 13, 2008 - link

    So basically, from the article it appears that Intel wants PPU hardware to fail so that the CPU is needed for processing physics, and they can continue to push (sell) newer CPUs for higher performance.

    Therefore it appears that Nvidia may want to compete in the CPU market, or even in the PPU market, so that extra performance can be gained beyond the existing CPU power. Nvidia is already in competition with Intel in the general chipset market. Then they could claim better performance than Intel by adding the PPU technology, unless Intel starts to do the same.

    I don't see where AMD fits into the picture.

  • FXi - Wednesday, February 13, 2008 - link

    I'm thinking there might be some near term benefit from adding a few secondary chips to the gpu card. There is bandwidth enough in pci-e 2.0 to handle some additional calculations.

    I'm thinking:
    physics PPU
    sound APU

    Nvidia has experience (some a bit dated) in both, but they now own the harder of the two to provide. Now they may run into power and heat budgets that constrain them from pursuing this route, but there is plenty of pci-e bandwidth to handle all these things on a single or even dual (sli) style card(s).

    With even Asus going down the sound route, that sounds like an easy one to cover (and one that has benefits in the home theater arena as well). Now that they have the physics, I'd think the trio would work well. And when the GPU advances enough to cover one or both of the secondary chips, you just remove them, let the GPU take over the calculations, lower the transistor count, and move along. You continue to keep the same set of calculations all based on the same card and all under the same roof.

    And with their experience in OpenGL acceleration, I'd take a reasonable bet they could do OpenAL acceleration as well. Who knows, maybe we'll eventually see OpenPL? :)

  • haplo602 - Wednesday, February 13, 2008 - link

    I think that the whole PPU market missed its target. Instead of targeting the single-player FPS/RPG etc. market, they should have targeted the MMO developer market.

    Any advanced physics needed for a single-player FPS can be handled by a multicore CPU or alternatively on the GPU shaders with a bit of work.

    However, imagine a large non-instanced MMORPG with tens of thousands of players and NPCs interacting in combat and other tasks. All the collision handling, hit calculations, etc. HAVE to be done on the server side to prevent cheating. This puts a large strain on the server hardware.

    Now imagine a server farm, basically a multi-node cluster, handling a large world with each server hosting an area of the game. It has to handle all the interaction plus the network traffic and backend database handling. A single or dual PPU setup with the proper software could make worlds (if not universes) of difference to the player experience, offloading a bulk of operations from the server CPUs (see the sketch after this comment).

    Huge improvements to the player experience here. Maybe I just miss information on these, but I have yet to hear about an MMO that actually uses this kind of technology.
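The server-side offload the comment describes might look something like the hypothetical sketch below: the server batches up attacker/target pairs and hands the authoritative hit tests to a coprocessor in one launch. It is not based on any real MMO server or on the PhysX API, and all names are invented for the example.

```cuda
// Hypothetical batched, server-authoritative hit test: one thread per
// attacker/target pair. Illustrative only.
#include <cuda_runtime.h>

struct Sphere {
    float3 center;
    float  radius;
};

__global__ void resolveHits(const Sphere* attackers, const Sphere* targets,
                            int pairCount, unsigned char* hit)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= pairCount) return;

    float dx = attackers[i].center.x - targets[i].center.x;
    float dy = attackers[i].center.y - targets[i].center.y;
    float dz = attackers[i].center.z - targets[i].center.z;
    float r  = attackers[i].radius + targets[i].radius;

    // The overlap test stays on the server so clients cannot cheat;
    // only the boolean result goes back out over the network.
    hit[i] = (dx * dx + dy * dy + dz * dz) <= r * r ? 1 : 0;
}
```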
  • DigitalFreak - Tuesday, February 12, 2008 - link

    HA HA HA HA HA HA HA! Chumps!
  • Zan Lynx - Tuesday, February 12, 2008 - link

    Nvidia should start calling their next-gen cards with PhysX support "VRPU"s instead of GPUs. Call it a Virtual Reality Processing Unit.

    The CPU feeds it objects. The VRPU can use the vertex and texture data for both graphics and physics. Add some new data to the textures for physics and off it goes. The CPU can sit back and feed in user and network inputs to update the virtual world state.
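A loose sketch of what the commenter is describing, with every structure and name invented for the example: per-vertex data that serves both rendering and physics lives in one buffer, so the GPU can advance the world state without a round trip through the CPU.

```cuda
// Hypothetical shared vertex layout and update kernel; illustrative only.
#include <cuda_runtime.h>

struct SharedVertex {
    float3 position;   // read by rendering, written by physics
    float3 normal;     // rendering
    float2 uv;         // rendering (texture coordinates)
    float3 velocity;   // physics-only attribute packed alongside
    float  invMass;    // physics-only attribute (0 = pinned in place)
};

__global__ void updateWorldState(SharedVertex* verts, int count, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    SharedVertex v = verts[i];
    if (v.invMass > 0.0f) {
        v.velocity.y += -9.81f * dt;           // gravity
        v.position.x += v.velocity.x * dt;     // integrate in place
        v.position.y += v.velocity.y * dt;
        v.position.z += v.velocity.z * dt;
    }
    verts[i] = v;
}
```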
  • cheburashka - Tuesday, February 12, 2008 - link

    Larrabee
