Introduction

A little over a year ago, we first heard about a company called AGEIA, whose goal was to bring high-quality physics processing power to the desktop. Today, they have succeeded in their mission. For a short while now, systems with the PhysX PPU (physics processing unit) have been shipping from Dell, Alienware, and Falcon Northwest, and PhysX add-in cards will soon be available in retail channels. Today also marks the release of the very first PhysX-accelerated game, Tom Clancy's Ghost Recon Advanced Warfighter, and to top off the excitement, ASUS has given us an exclusive look at their hardware.

We have put together a couple of benchmarks designed to illustrate the impact of AGEIA's PhysX technology on game performance, and we will certainly comment heavily on our experience while playing the game. The potential benefits have been discussed quite a bit over the past year, but now we finally get a taste of what the first PhysX-accelerated games can do.

With NVIDIA and ATI starting to dip their toes into physics acceleration as well (with Havok FX and in-house demos of other technology), knowing the playing field is very important for all parties involved. As should be expected, many developers and hardware manufacturers will give this technology some time to prove itself before jumping on the bandwagon. Will our exploration show enough added benefit for PhysX to be worth the investment?

Before we hit the numbers, we want to take another look at the technology behind the hardware.

Comments

  • iNsuRRecTiON - Saturday, May 6, 2006 - link

    Hey,

    the ASUS PhysX card already has 256MB of RAM, compared to the 128MB on the BFG Tech card.

    best regards,

    iNsuRRecTiON
  • fishbits - Friday, May 5, 2006 - link

    I want the physics card to be equivalent to a sound card in terms of standardization and how often I feel compelled to upgrade it. In other words, it would be upgraded far less often than graphics cards are. Putting the physics hardware on a graphics card means you would throw away (or sell at a loss) perfectly good physics capability just to get a faster GPU, or get a second card to go to SLI/Crossfire. This is a bad idea for all the same reasons you'd say putting sound card functionality on a graphics card is a bad idea.
  • Calin - Friday, May 5, 2006 - link

    Yes, you could do all kinds of nice calculations on the physics board. However, moving geometry data from the video card to the physics board for processing and then moving the results back to the video card would be shooting yourself in both feet.
    I think this could run well as an accelerator for rendering images or for 3D applications... how soon until 3D Studio, Photoshop, and so on take advantage?
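
    A back-of-the-envelope sketch of that round trip, using assumed numbers (10,000 objects, 64 bytes of transform data each, 60fps) against the theoretical ~133MB/s peak of a shared 32-bit/33MHz PCI bus; only the PCI figure comes from the spec, the scene size is a guess for illustration:

        # Rough cost of shipping physics results across a shared PCI bus
        # each frame; the object count and size are hypothetical.
        PCI_BANDWIDTH = 133e6    # bytes/s, 32-bit/33MHz PCI theoretical peak
        OBJECTS       = 10_000   # hypothetical physics-driven objects
        BYTES_PER_OBJ = 64       # e.g. a 4x4 float transform per object
        FPS           = 60

        per_second = OBJECTS * BYTES_PER_OBJ * 2 * FPS   # out and back, per second
        print(f"{per_second / 1e6:.0f} MB/s "
              f"({per_second / PCI_BANDWIDTH:.0%} of the PCI bus)")
        # ~77 MB/s -- over half the shared bus consumed before the video
        # card has even seen the results.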
  • tonjohn - Friday, May 5, 2006 - link

    quote:

    I hope that they won't need a respin to add pcie functionality but fear this may be the case.

    The pre-production cards had both PCI and PCIe support at the same time. You simply flipped the card depending on which interface you wanted to use. So I believe that the PPU offers native PCIe support and that BFG and ASUS could produce PCIe boards today if Ageia would give them permission to.
    quote:

    I agree with the post that in volume, this kind of chip could find its way onto 3d graphics cards for gaming.

    Bad idea. Putting the PPU onboard with a GPU means higher costs all around (longer PCBs, possibly more layers, more RAM). Also, the two chips will be fighting for bandwidth, which is never a good thing.

    Higher costs and lower performance = a bad idea.
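
    To put rough numbers on the bandwidth fight, assume the PPU shared a single memory pool with the GPU. The GPU figure below is real (GeForce 7900 GTX: 256-bit at 1600MHz effective is 51.2GB/s); the PPU's demand is a pure guess for illustration:

        # Every byte of memory bandwidth the physics engine touches is a
        # byte the renderer loses; PPU_DEMAND is a hypothetical number.
        GPU_MEM_BW = 51.2   # GB/s, GeForce 7900 GTX local memory bandwidth
        PPU_DEMAND = 8.0    # GB/s, assumed physics working-set traffic

        remaining = GPU_MEM_BW - PPU_DEMAND
        print(f"rendering keeps {remaining:.1f} GB/s "
              f"({remaining / GPU_MEM_BW:.0%} of the card's bandwidth)")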

    FYI: I have a BFG PhysX card.
  • saratoga - Friday, May 5, 2006 - link

    Actually, putting this on the GPU core would be much cheaper. You'd save by getting rid of all the duplicated hardware: DRAMs, memory controller, power circuitry, PCI bridge, cooling, PCB, etc.

    Not to mention you'd likely gain a lot of performance by having a PCIe x16 slot and an on-die link to the GPU.
  • Calin - Monday, May 8, 2006 - link

    I wonder how much of the 2TB/s internal bandwidth is actually used on the Ageia card... if enough of it, then the video card would have very little bandwidth remaining for its own operations (graphics rendering). However, if the cooling really needs that heat sink/fan combo, and the card really needs that power connector, you won't be able to put one on the highest-end video cards (for power and heat reasons).
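
    For scale, compare the 2TB/s figure (taken at face value from the comment above) with the PCI link that feeds the card:

        # The chip can shuffle data internally about four orders of magnitude
        # faster than the bus can deliver it, so on-card work has to dominate.
        INTERNAL_BW = 2e12    # bytes/s, internal PPU bandwidth as quoted above
        PCI_BW      = 133e6   # bytes/s, 32-bit/33MHz PCI theoretical peak

        print(f"internal/bus ratio: {INTERNAL_BW / PCI_BW:,.0f}x")   # ~15,000x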
  • kilkennycat - Friday, May 5, 2006 - link

    "I have a BFG PhysX card"

    Use it as a doorstop?

    Pray tell me where you plug one of these if you have the following:

    Dual 7900GTX512 (or dual 1900XTX)
    and
    Creative X-Fi

    already in your system.
  • Walter Williams - Friday, May 5, 2006 - link

    quote:

    Use it as a door-stop ?

    Actually, I use it to play CellFactor. You're missing out.
    quote:

    Pray tell me where you plug one of these if you have the following:

    SLI and CrossFire are the biggest waste of money unless you are doing intense rendering work.

    I hope people with that setup enjoy their little fps improvement per dollar while I'm playing CellFactor, which requires the PPU to run.
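
    A purely hypothetical sketch of the kind of startup gate a PPU-required title would have; every name here is invented, not the actual AGEIA SDK API:

        # Fall back to CPU simulation when possible; refuse to start when the
        # game (like CellFactor) is built around hardware-only physics loads.
        def detect_ppu() -> bool:
            # Stub: a real game would query the PhysX driver here.
            return False

        def init_physics(require_hardware: bool) -> str:
            if detect_ppu():
                return "hardware scene"            # simulate on the PPU
            if require_hardware:
                raise RuntimeError("This title requires AGEIA PhysX hardware.")
            return "software scene"                # CPU fallback

        print(init_physics(require_hardware=False))   # -> software scene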
  • kilkennycat - Friday, May 5, 2006 - link

    CellFactor MP tech demo....

    CellFactor is to be released in Q4 2007... maybe. Your PhysX card is going to be a little old by the time the full game is released... We should be up to quad-core CPUs, with lots of cycles available for physics calculations, by that time.

    I have recently been playing Oblivion a lot, like several million others. The Havok software physics are just great --- and you NEED the highest-end graphics for the optimum visual experience in that game --- see the current AnandTech article. Sorry, I care little about (er) "better particle effects" or "more realistic explosions", even when I play Far Cry. In fact, from my experiences with BF2 and BF1942, I find them more than adequately immersive with their great scenery graphics and their CURRENT physics effects -- even the old and noble BF1942.

    On single-player games, I would far prefer to see additional hardware, or compute cycles, directed at advanced AI rather than physics. What is the point of fancy physics effects if the AI enemy has about as much built-in intelligence as a lump of Swiss cheese? It sure does not help the game's immersive experience at all. And tightly-scripted AI just does not work in open-area scenarios (cf. Unreal 2 and the dumb enemies easily snuck up on from behind -- somebody forgot to script that eventuality, amongst many others that can occur in an open play area). The successful tightly-scripted single-player shooters like Doom 3, HL2, and FEAR all have overt or disguised "corridors". So the developers of open-area games like Far Cry or Oblivion chose an algorithmic *intelligent-agent AI* approach, with a simple overlay of scripting to set some broad behavioral and/or location boundaries. A distinct move in the right direction, but there are some problems with the AI implementation in both games. More sophisticated AI algorithms will require more compute power, which, if performed on the CPU, will need to be traded off against cycles available for graphics. Dual-core will help, but a general-purpose DSP might help even more... they are not expensive and are easily integrated onto a motherboard.
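
    A minimal caricature of the scripted-versus-agent contrast, with entirely invented names: a scripted trigger only covers the approaches its author anticipated, while even a crude perception check reacts to whatever actually happens:

        import math

        def scripted_guard(player_zone: str) -> str:
            # Only fires for the approach the designer scripted.
            return "turn_and_attack" if player_zone == "front_corridor" else "idle"

        def agent_guard(guard_pos, guard_facing, player_pos) -> str:
            # Reacts to the player anywhere: distance plus field-of-view check.
            dx, dy = player_pos[0] - guard_pos[0], player_pos[1] - guard_pos[1]
            dist = math.hypot(dx, dy)
            angle = abs(math.atan2(dy, dx) - guard_facing)
            if dist < 5 or (dist < 20 and angle < math.pi / 3):
                return "turn_and_attack"   # heard nearby, or saw in ~60 deg cone
            return "idle"

        print(scripted_guard("behind"))            # idle -- the sneak succeeds
        print(agent_guard((0, 0), 0.0, (-3, 0)))   # turn_and_attack -- heard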

    Back to the immediate subject of the Ageia PPU and physics effects:-

    I am far more intrigued by Havok's exercises with Havok FX, harnessing both dual-core CPU power and GPU power in the service of physics emulation. It would be great to have action games with a physics-adjustable slider so that one can trade off graphics against physics effects in a seamless manner, just as one can trade off advanced graphics elements in games today... which is exactly where Havok is heading. No need to support marginal added hardware like the PhysX. Now, if the PhysX engine were an option on every high-end motherboard for, say, no more than $50 extra, or as an optional motherboard plug-in at, say, $75 (like the 8087 of yore), and did not take up any additional precious peripheral slots, then I would rate its chances of becoming mainstream as pretty high. Seems as if Ageia should license their hardware DESIGN as soon as possible to nVidia or ATi at, say, no more than $15 a copy and have them incorporate the design into their motherboard chipsets.
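
    A sketch of what such a slider might scale, with entirely made-up knobs and ranges:

        # A physics-detail setting that scales simulation workload the way
        # graphics sliders scale rendering workload; all values are invented.
        def physics_budget(slider: float) -> dict:
            """slider in [0.0, 1.0]: 0 = minimal effects, 1 = full simulation."""
            slider = max(0.0, min(1.0, slider))
            return {
                "debris_pieces":  int(50 + 950 * slider),    # 50..1000 per explosion
                "particle_count": int(200 + 4800 * slider),  # 200..5000 particles
                "cloth_sim":      slider >= 0.5,             # gate expensive systems
                "fluid_sim":      slider >= 0.75,
            }

        print(physics_budget(0.25))   # low-midrange settings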

    The current Ageia card has three strikes against it: cost, hardware interface (PCI), and software support. The PhysX PPU certainly has NO hope at all as a peripheral device as long as it stays in PCI form. It must migrate to PCIe asap. Remember that an x1 or x4 PCIe card will happily work in a PCIe x16 slot, and there are still several million SLI and Crossfire motherboards with empty second video slots. Plus, even on a dual-SLI system with dual-slot-width video cards and an audio card present, you are more likely to find a PCIe x1 or x4 slot vacant that does not compromise the video-card ventilation than a PCI slot that is not either covered up by the dual-width video cards or positioned so that a card in it blocks airflow to one or other of the video cards.

    So if a PCIe version of the PhysX ever becomes available... you will be able to sell your PCI version... at about the price of a doorstop. Few will want the PCI version if a used PCIe version is also available.

    Being an early adopter is hard on the wallet at times...
  • tonjohn - Friday, May 5, 2006 - link

    The developers did a poor job with how they implemented PPU support in GRAW.

    CellFactor is a MUCH better test of what the PhysX card is capable of. The physics in CellFactor are MUCH more intense. When blowing up a load of crap, my fps drops by 2fps at most, and that is mainly because my 9800 Pro is struggling to render the actual effects of a grenade explosion.
