AGEIA PhysX Technology and GPU Hardware

First off, here is the lowdown on the hardware as we know it. AGEIA, being the first and only consumer-oriented physics processor designer right now, has not given us as much in-depth technical detail as other hardware designers have. We certainly understand the need to protect intellectual property, especially at this stage in the game, but here is what we know.

PhysX Hardware:
125 Million transistors
130nm manufacturing process
128MB 733MHz Data Rate GDDR3 RAM
128-bit memory bus interface
20 giga-instructions per second
2 Tb/sec internal memory bandwidth
"Dozens" of fully independent cores
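A quick back-of-the-envelope check on the published numbers above (the 733MHz figure is the effective GDDR3 data rate from the spec list; everything else is simple arithmetic on AGEIA's claims):

```python
# Sanity-check AGEIA's published PhysX numbers. All inputs come from the spec
# list above; the internal bandwidth is AGEIA's claim, not a measured figure.
bus_width_bits = 128
data_rate_hz = 733e6                                   # effective transfers/sec
external_gbps = bus_width_bits * data_rate_hz / 1e9    # gigabits/sec
external_gBps = external_gbps / 8                      # gigabytes/sec

internal_tbps = 2.0                                    # claimed internal Tb/sec
ratio = internal_tbps * 1000 / external_gbps

print(f"external memory bandwidth: {external_gbps:.1f} Gb/s "
      f"({external_gBps:.1f} GB/s)")
print(f"claimed internal bandwidth is ~{ratio:.0f}x the external figure")
```

In other words, the chip's claimed internal bandwidth is roughly 21 times what its external GDDR3 interface can feed it, which is why keeping the working set on-chip matters so much.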


There are quite a few things to note about this architecture. Even without knowing all the ins and outs, it is quite obvious that this chip will be a force to be reckoned with in the physics realm. A graphics card, even with a 512-bit internal bus running at core speed, has less than 350 Gb/sec internal bandwidth. There are also lots of restrictions on the way data moves around in a GPU. For instance, there is no way for a pixel shader to read a value, change it, and write it back to the same spot in local RAM. There are ways to deal with this when tackling physics, but making highly efficient use of nearly 6 times the internal bandwidth for the task at hand is a huge plus. CPUs aren't able to touch this type of internal bandwidth either. (Of course, we're talking about internal theoretical bandwidth, but the best we can do for now is relay what AGEIA has told us.)

Physics, as we noted in last year's article, generally presents itself as sets of small, highly dependent problems. Graphics, by contrast, has become sets of highly independent, mathematically intense problems. It's not that GPUs can't be used to solve problems where the input to one pixel is the output of another (performing multiple passes and making use of render-to-texture functionality is one obvious solution); it's just that much of the power of a GPU is wasted when attempting to solve this type of problem. Making use of a large number of independent processing units makes sense as well. In a GPU's SIMD architecture, pixel pipelines execute the same instructions on many different pixels. In physics, it is much more often the case that different things need to be done to every physical object in a scene, and it makes much more sense to attack the problem with hardware designed for that kind of independent work.
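The SIMD-utilization point can be made concrete with a toy model (this is an illustration of the principle, not real GPU code): in a lockstep architecture, a batch of mixed objects must step through the union of all code paths, with lanes masked off whenever the current path doesn't apply to them.

```python
# Toy model of SIMD lane utilization on heterogeneous physics work. Assumes,
# for simplicity, that each object type needs one distinct code path of equal
# cost -- a deliberate simplification, not a claim about any real workload.
from collections import Counter

objects = ["rigid", "cloth", "fluid", "rigid", "rigid", "cloth", "fluid", "rigid"]
width = len(objects)              # pretend all 8 objects share one SIMD batch
paths = len(Counter(objects))     # number of distinct code paths needed

# Lockstep SIMD: every lane sits through every path, masked or not
lockstep_lane_steps = paths * width
# Fully independent cores: each object executes only its own path
independent_steps = width

utilization = independent_steps / lockstep_lane_steps
print(f"SIMD lane utilization on this mixed batch: {utilization:.0%}")
```

With three object types in the batch, two-thirds of the lane-steps are wasted on masked-off work, which is the argument for "dozens of fully independent cores" over wide SIMD for this problem.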

To be fair, NVIDIA and ATI are not arguing that they can compete with the physics processing power AGEIA is able to offer in the PhysX chip. The main selling point of physics on the GPU is that everyone who plays games (and would want a physics card) already has a graphics card. Solutions like Havok FX, which uses SM3.0 to implement physics calculations on the GPU, are good ways to augment existing physics engines. These types of solutions will add a little more punch to what developers can do. This won't create a revolution, but it will get game developers to look harder at physics in the future, and that is a good thing. We have yet to see Havok FX or a competing solution in action, so we can't go into any detail on what to expect. However, it is obvious that a multi-GPU platform will be able to benefit from physics engines that make use of GPUs: there are plenty of cases where games are not able to take 100% advantage of both GPUs. In single-GPU cases, there could still be a benefit, but the more graphically intense a scene, the less room there is for the GPU to worry about anything else. We are already seeing titles like Oblivion that can bring everything we throw at them to a crawl, so balance will certainly be an issue for Havok FX and similar solutions.

DirectX 10 will absolutely benefit AGEIA, NVIDIA, and ATI. For physics-on-GPU implementations, DX10 will decrease overhead significantly. State changes will be more efficient, and many more objects can be sent to the GPU for processing every frame. This will obviously make it easier for GPUs to handle things other than graphics more efficiently. A little less obviously, PhysX hardware-accelerated games will also benefit from a graphics standpoint. With the possibility for games to support orders of magnitude more rigid body objects under PhysX, overhead can become an issue when batching these objects to the GPU for rendering. This is a hard thing for us to test explicitly, but it is easy to understand why it will be a problem when developers are already complaining about the overhead issue.
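To see why batching overhead bites, here is a rough illustration. The per-call cost below is an assumed ballpark for the sake of the arithmetic, not a measured figure from any driver:

```python
# Rough illustration of the draw-call batching concern above: if every
# PhysX-driven rigid body became its own draw call, CPU submission cost
# alone would cap the object count. per_call_us is a made-up ballpark.
per_call_us = 20.0            # assumed CPU cost per draw call, microseconds
frame_budget_ms = 16.7        # one frame at 60fps
submission_fraction = 0.5     # fraction of the frame allowed for submission

max_objects = int(frame_budget_ms * 1000 * submission_fraction / per_call_us)
print(f"~{max_objects} individually-drawn objects before "
      f"submission cost dominates the frame")
```

Even under these generous assumptions, a few hundred individually-submitted objects saturate half the frame, which is why lower per-call overhead in DX10 matters to a card that can simulate thousands of rigid bodies.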

While we know the PhysX part can handle 20 GIPS, this figure likely counts simple independent instructions. We would really like to get a better idea of how much actual "work" this part can handle, but for now we'll have to settle for this ambiguous number and some real-world performance. Let's take a look at the ASUS card and then take a look at the numbers.
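To put the ambiguous GIPS figure in some perspective, here is a quick per-frame budget. The object count is our own hypothetical workload, not a number from AGEIA:

```python
# What "20 GIPS" buys per frame, under assumed conditions. The rigid-body
# count is hypothetical, chosen only to make the per-object budget concrete.
gips = 20e9                       # AGEIA's quoted instruction rate
fps = 60
instructions_per_frame = gips / fps

objects = 10_000                  # assumed active rigid bodies
per_object = instructions_per_frame / objects
print(f"{instructions_per_frame / 1e6:.0f}M instructions per frame, "
      f"~{per_object:,.0f} per object with {objects:,} objects")
```

Roughly 33,000 instructions per object per frame sounds generous until you remember that collision detection, constraint solving, and integration all come out of that budget, and that we don't know how much "work" each of those simple instructions represents.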


101 Comments


  • iNsuRRecTiON - Saturday, May 6, 2006 - link

    Hey,

the ASUS PhysX card already has 256 MB RAM instead of 128 MB RAM, compared to the BFG Tech card..

    best regards,

    iNsuRRecTiON
  • fishbits - Friday, May 5, 2006 - link

    I want the physics card to be equivalent to a sound card in terms of standardization and how often I feel compelled to upgrade it. In other words, it would be upgraded far less often than graphics cards are. Putting the physics hardware on a graphics card means you would throw away (or sell at a loss) perfectly good physics capability just to get a faster GPU, or get a second card to go to SLI/Crossfire. This is a bad idea for all the same reasons you'd say putting sound card functionality on a graphics card is a bad idea.
  • Calin - Friday, May 5, 2006 - link

    Yes, you could do all kinds of nice calculations on the physics board. However, moving geometry data from the video card to the physics board to be calculated and then moving it back to the video card would be shooting yourself in both feet.
    I think this could run well as an accelerator for rendering images or for 3D applications... how soon until 3DStudio, Photoshop and so on take advantage?
  • tonjohn - Friday, May 5, 2006 - link

    quote:

    I hope that they won't need a respin to add pcie functionality but fear this may be the case.

    The pre-production cards had both PCI and PCIe support at the same time. You simply flipped the card depending on which interface you wanted to use. So I believe that the PPU offers native PCIe support and that BFG and ASUS could produce PCIe boards today if Ageia would give them permission to.
    quote:

    I agree with the post that in volume, this kind of chip could find its way onto 3d graphics cards for gaming.

    Bad idea. Putting the PPU on board with a GPU means higher costs all around (longer PCBs, possibly more layers, more RAM). Also, the two chips will be fighting for bandwidth, which is never a good thing.

    Higher costs and lower performance = a bad idea.

    FYI: I have a BFG PhysX card.
  • saratoga - Friday, May 5, 2006 - link

    Actually, putting this on the GPU core would be much cheaper. You'd save by getting rid of all the duplicated hardware: DRAMs, memory controller, power circuitry, PCI bridge, cooling, PCB, etc.

    Not to mention you'd likely gain a lot of performance by having a PCI-E 16x slot and an ondie link to the GPU.
  • Calin - Monday, May 8, 2006 - link

    I wonder how much of the 2TB/s internal bandwidth will be used on the Ageia card... if enough of it, then the video card will have very little bandwidth remaining for its operations (graphic rendering). However, if the cooling really needs that heat sink/fan combo, and the card really needs that power connector, you won't be able to put one on the highest end video cards (for power and heat reasons).
  • kilkennycat - Friday, May 5, 2006 - link

    "I have a BFG PhysX card"

    Use it as a door-stop ?

    Pray tell me where you plug one of these if you have the following:-

    Dual 7900GTX512 (or dual 1900XTX)
    and
    Creative X-Fi

    already in your system.
  • Walter Williams - Friday, May 5, 2006 - link

    quote:

    Use it as a door-stop ?

    Actually, I use it to play CellFactor. You're missing out.
    quote:

    Pray tell me where you plug one of these if you have the following:

    SLi and CrossFire are the biggest waste of money unless you are doing intense rendering work.

    I hope people with that setup enjoy their little fps improvement per dollar while I'm playing CellFactor, which requires the PPU to run.
  • kilkennycat - Friday, May 5, 2006 - link

    Cellfactor MP tech demo....

    Cellfactor to be released in Q4 2007.. maybe... Your PhysX is going to be a little old by the time the full game is released...Should be up to quad-core CPUs and lots of cycles available for physics calculations by that time.

    I have recently been playing Oblivion a lot, like several million others. The Havok software physics are just great --- and you NEED the highest-end graphics for optimum visual experience in that game --- see the current Anandtech article. Sorry, I care little about (er) "better particle effects" or "more realistic explosions", even when I play Far Cry. In fact, from my experiences with BF2 and BF1942 I find them more than adequately immersive with their great scenery graphics and their CURRENT physics effects -- even the old and noble BF1942.

    On single-player games, I would far prefer seeing additional hardware, or compute-cycles, being directed at advanced-AI than physics. What point fancy physics-effects if the AI enemy has about as much built-in intelligence as a lump of Swiss cheese? Sure does not help the game's immersive experience at all. And tightly-scripted AI just does not work in open-area scenarios (c.f: Unreal 2 and the dumb enemies easily sneaked from behind -- somebody forgot to script that eventuality amongst many others that can occur in an open play-area). The successful tightly-scripted single-play shooters like Doom3, HL2, FEAR etc all have overt or disguised "corridors". So, the developers of open-area games like Far Cry or Oblivion chose an algorithmic *intelligent-agent AI* approach, with a simple overlay of scripting to set some broad behavioral and/or location boundaries. A distinct move in the right direction but there are some problems with the AI implementation in both games. More sophisticated AI algorithms will require more compute-power, which, if performed on the CPU, will need to be traded off with cycles available for graphics. Dual-core will help, but a general-purpose DSP might help even more... they are not expensive and easily integrated into a motherboard.

    Back to the immediate subject of the Ageia PPU and physics effects:-

    I am far more intrigued by Havok's exercises with Havok FX harnessing both dual-core CPU power and GPU power in the service of physics emulation. Would be great to have action games with a physics-adjustable slider so that one can trade off graphics with physics effects in a seamless manner, just as one can trade-off advanced-graphics elements in games today.... which is exactly where Havok is heading. No need to support marginal added hardware like the PhysX. Now, if the PhysX engine was an option on every high-end motherboard, for say not more than $50 extra, or as an optional motherboard plug-in at say $75, (like the 8087 of yore) and did not take up any additional precious peripheral slots, then I would rate its chances of becoming main-stream to be pretty high. Seems as if Ageia should license their hardware DESIGN as soon as possible to nVidia or ATi at (say) not more than $15 a copy and have them incorporate the design into their motherboard chip-sets.

    The current Ageia has 3 strikes against it for cost, hardware interface (PCI) and software-support reasons. The PhysX PPU certainly has NO hope at all as a peripheral device as long as it stays in PCI form. Must migrate to PCIe asap. Remember that an X1 or X4 PCIe card will happily work in a PCIe X16 slot, and there are still several million SLI and Crossfire motherboards with empty 2nd video slots. Plus, even on a dual-SLI with dual-slot-width video cards and an audio card present, it is more likely to find one PCIe X1 or X4 slot vacant that does not compromise the video-card ventilation than to find a PCI slot that is not either covered up by the dual-width video cards or completely blocking airflow to one or the other of the video cards.

    So if a PCIe version of the PhysX ever becomes available... you will be able to sell your PCI version... at about the price of a doorstop. Few will want the PCI version if a used PCIe version is also available.

    Hard on the wallet being an early adopter at times.....
  • tonjohn - Friday, May 5, 2006 - link

    The developers did a poor job with how they implemented PPU support in GRAW.

    CellFactor is a MUCH better test of what the PhysX card is capable of. The physics in CellFactor are MUCH more intense. When blowing up a load of crap, my fps drop by 2fps at the most, and that is mainly b/c my 9800 Pro is struggling to render the actual effects of a grenade explosion.
