Final Words

Ideally, we would have a few more games to test in order to get a better understanding of what developers are doing with the hardware. We'd also love a little more flexibility in how the software we test handles hardware usage and physics detail. For example, what sort of performance can be had using multithreaded physics calculations on dual-core or multi-core systems? Can a high-end CPU even handle the same level of physics detail as the PhysX card, or has GRAW downgraded the complexity of the software calculations for a reason? It would also be very helpful if we could dig up some low-level technical details on the hardware. Unfortunately, you can't always get what you want.
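To make the multithreading question concrete: the integration step of a rigid body update is embarrassingly parallel, so a dual-core CPU can, at least in principle, split that part of the work cleanly across threads. What follows is a minimal sketch of the idea only, not anything from GRAW or the PhysX SDK; the Body structure is hypothetical, and collision detection and constraint solving (the genuinely hard parts to parallelize) are omitted entirely.

    #include <cstddef>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Body { float pos[3]; float vel[3]; };

    // Integrate one contiguous slice of the bodies. Slices do not
    // overlap, so the workers need no locking for this step.
    static void integrateRange(std::vector<Body>& bodies,
                               std::size_t begin, std::size_t end, float dt)
    {
        for (std::size_t i = begin; i < end; ++i)
            for (int axis = 0; axis < 3; ++axis)
                bodies[i].pos[axis] += bodies[i].vel[axis] * dt;
    }

    // Split the integration step evenly across numThreads workers
    // (e.g. numThreads = 2 on a dual-core CPU).
    static void integrateAll(std::vector<Body>& bodies, float dt,
                             unsigned numThreads)
    {
        std::vector<std::thread> workers;
        const std::size_t chunk = bodies.size() / numThreads;
        for (unsigned t = 0; t < numThreads; ++t) {
            std::size_t begin = t * chunk;
            std::size_t end = (t + 1 == numThreads) ? bodies.size()
                                                    : begin + chunk;
            workers.emplace_back(integrateRange, std::ref(bodies),
                                 begin, end, dt);
        }
        for (std::thread& w : workers)
            w.join();
    }

Even with perfect scaling on a step like this, the broadphase and constraint solver stages serialize far more stubbornly, which is exactly why the CPU-versus-PPU question is hard to answer without better developer-exposed settings.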

For now, the tests we've run here are quite impressive in terms of visuals, but we can't say for certain whether or not the PPU contributes substantially to the quality. From what GRAW has shown us, and from the list of titles on the horizon, it is clear that developers are taking an interest in this new PPU phenomenon. We are quite happy to see more interactivity and higher levels of realism make their way into games, and we commend AGEIA for their role in speeding up this process.

The added realism and immersion of playing Ghost Recon Advanced Warfighter with hardware physics make it a huge success in this gamer's opinion. Granted, the improved visuals aren't the holy grail of game physics, but this is an excellent first step. In a fast firefight with bullets streaming by, helicopters raining destruction from the heavens, and grenades tearing up the streets, the experience is just that much more hair-raising with a PPU plugged in.

If every game out right now supported some type of physics enhancement with a PPU under the hood, it would be easy to recommend it to anyone who wants higher image quality than the most expensive CPU and GPU can currently offer. For now, one or two games aren't going to earn a recommendation for spending the requisite $300, especially when we don't know the extent of what other developers are doing. For those with money to burn, it's certainly a great part to play with. Whether it actually becomes worth the price of admission remains to be seen. We are cautiously optimistic having seen these first fruits, especially considering how much more can be done.

Obviously, there's going to be some question of whether or not the PPU will catch on and stay around for the long haul. Luckily, software developers need not worry. AGEIA has worked very hard to do everything right, and we think they're on the right track. Their PhysX SDK is an excellent software physics solution in its own right - Sony is shipping it with every PS3 development console, and there are Xbox 360 games around with the PhysX SDK powering them as well. Even if the hardware totally fails to gain acceptance, games can still fall back to a software solution. Unfortunately, it's still up to developers to provide the option for modifying physics quality under software as well as hardware, as GRAW demonstrates.
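As a rough illustration of that fallback path: in the PhysX 2.x SDK, a scene's simulation type is chosen at creation time, so a game can probe for a PPU and degrade gracefully to the CPU solver. The sketch below is our reading of the public API (NxCreatePhysicsSDK, getHWVersion, NxSceneDesc), not code from GRAW or any shipping title, and exact names and defaults may vary between SDK releases.

    #include <NxPhysics.h>

    NxPhysicsSDK* gPhysicsSDK = 0;
    NxScene*      gScene      = 0;

    bool initPhysics()
    {
        // The SDK itself initializes whether or not a PPU is present.
        gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
        if (!gPhysicsSDK)
            return false;

        NxSceneDesc sceneDesc;
        sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);

        // Request hardware simulation only if a PPU was detected;
        // otherwise fall back to the software solver on the CPU.
        if (gPhysicsSDK->getHWVersion() != NX_HW_VERSION_NONE)
            sceneDesc.simType = NX_SIMULATION_HW;
        else
            sceneDesc.simType = NX_SIMULATION_SW;

        gScene = gPhysicsSDK->createScene(sceneDesc);
        return gScene != 0;
    }

The appeal of a single simType switch is that the rest of the game's physics code is identical either way; what GRAW apparently does not expose is a matching knob to scale the amount of physics content when running in software.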

As of now, the PhysX SDK has been adopted by engines such as Unreal Engine 3 (Unreal Tournament 2007), Reality Engine (CellFactor), and Gamebryo (recently used for Elder Scrolls IV: Oblivion, though Havok is implemented there in lieu of PhysX support). This type of developer penetration is good to see, and it will hopefully provide a compelling upgrade argument to consumers in the next 6-12 months.

We are still an incredibly long way off from seeing games that require the PhysX PPU, but it's not outside the realm of possibility. With such easy access to the PhysX SDK for developers, there must be some pressure now for products on one-to-two-year timeframes to get in as many beyond-the-cutting-edge features as possible. Personally, I'm hoping AGEIA PhysX hardware support will make it onto the list. If AGEIA is able to prove their worth on the console middleware side, we may end up seeing a PPU in an Xbox 3 or PS4 down the line as well. There were plenty of skeptics who doubted the PhysX PPU would ever make it out the door, but having passed that milestone, who knows how far they'll go?

We're still a little skeptical about how much the PhysX card is actually doing that couldn't be done on a CPU -- especially a dual core CPU. Hopefully this isn't the first "physics decelerator", rather like the first S3 ViRGE 3D chip was more of a step sideways for 3D than a true enhancement. The promise of high quality physics acceleration is still there, but we can't say for certain at this point how much faster a PhysX card really makes things - after all, we've only seen one shipping title, and it may simply be a matter of making better optimizations to the PhysX code. With E3 on the horizon and more games coming out "real soon now", rest assured that we will have continuing coverage of AGEIA and the PhysX PPU.

Comments

  • iNsuRRecTiON - Saturday, May 6, 2006 - link

    Hey,

    the ASUS PhysX card already has 256MB of RAM instead of the 128MB on the BFG Tech card.

    best regards,

    iNsuRRecTiON
  • fishbits - Friday, May 5, 2006 - link

    I want the physics card to be equivalent to a sound card in terms of standardization and how often I feel compelled to upgrade it. In other words, it would be upgraded far less often than graphics cards are. Putting the physics hardware on a graphics card means you would throw away (or sell at a loss) perfectly good physics capability just to get a faster GPU, or get a second card to go to SLI/Crossfire. This is a bad idea for all the same reasons you'd say putting sound card functionality on a graphics card is a bad idea.
  • Calin - Friday, May 5, 2006 - link

    Yes, you could do all kinds of nice calculations on the physics board. However, moving geometry data from the video card to the physics board to be calculated and then moving it back to the video card would be shooting yourself in both feet.
    I think this could run well as an accelerator for rendering images or for 3D applications... how soon until 3D Studio, Photoshop and so on take advantage?
  • tonjohn - Friday, May 5, 2006 - link

    quote:

    I hope that they won't need a respin to add pcie functionality but fear this may be the case.

    The pre-production cards had both PCI and PCIe support at the same time. You simply flipped the card depending on which interface you wanted to use. So I believe that the PPU offers native PCIe support and that BFG and ASUS could produce PCIe boards today if Ageia would give them permission to.
    quote:

    I agree with the post that in volume, this kind of chip could find its way onto 3d graphics cards for gaming.

    Bad idea. Putting the PPU onboard with a GPU means higher costs all around (longer PCBs, possibly more layers, more RAM). Also, the two chips will be fighting for bandwidth, which is never a good thing.

    Higher costs and lower performance = a bad idea.

    FYI: I have a BFG PhysX card.
  • saratoga - Friday, May 5, 2006 - link

    Actually, putting this on the GPU core would be much cheaper. You'd save by getting rid of all the duplicated hardware: DRAMs, memory controller, power circuitry, PCI bridge, cooling, PCB, etc.

    Not to mention you'd likely gain a lot of performance by having a PCI-E 16x slot and an ondie link to the GPU.
  • Calin - Monday, May 8, 2006 - link

    I wonder how much of the 2TB/s internal bandwidth is actually used on the Ageia card... if enough of it, then the video card would have very little bandwidth remaining for its own operations (graphics rendering). And if the cooling really needs that heat sink/fan combo, and the card really needs that power connector, you won't be able to put one on the highest-end video cards (for power and heat reasons).
  • kilkennycat - Friday, May 5, 2006 - link

    "I have a BFG PhysX card"

    Use it as a door-stop ?

    Pray tell me where you plug one of these if you have the following:-

    Dual 7900GTX512 (or dual 1900XTX)
    and
    Creative X-Fi

    already in your system.
  • Walter Williams - Friday, May 5, 2006 - link

    quote:

    Use it as a door-stop ?

    Actually, I use it to play CellFactor. You're missing out.
    quote:

    Pray tell me where you plug one of these if you have the following:

    SLi and CrossFire are the biggest waste of money unless you are doing intense rendering work.

    I hope people with that setup enjoy their little fps improvement per dollar while I'm playing CellFactor, which requires the PPU to run.
  • kilkennycat - Friday, May 5, 2006 - link

    CellFactor MP tech demo....

    CellFactor is to be released in Q4 2007... maybe. Your PhysX card is going to be a little old by the time the full game is released... we should be up to quad-core CPUs, with lots of cycles available for physics calculations, by that time.

    I have recently been playing Oblivion a lot, like several million others. The Havok software physics are just great --- and you NEED the highest-end graphics for optimum visual experience in that game --- see the current Anandtech article. Sorry, I care little about (er) "better particle effects" or "more realistic explosions", even when I play Far Cry. In fact, from my experiences with BF2 and BF1942 I find them more than adequately immersive with their great scenery graphics and their CURRENT physics effects -- even the old and noble BF1942.

    On single-player games, I would far prefer seeing additional hardware, or compute-cycles, being directed at advanced AI than at physics. What's the point of fancy physics effects if the AI enemy has about as much built-in intelligence as a lump of Swiss cheese? It sure does not help the game's immersive experience at all. And tightly-scripted AI just does not work in open-area scenarios (cf. Unreal 2 and the dumb enemies easily snuck up on from behind -- somebody forgot to script that eventuality, amongst many others that can occur in an open play-area). The successful tightly-scripted single-player shooters like Doom3, HL2, FEAR etc. all have overt or disguised "corridors". So, the developers of open-area games like Far Cry or Oblivion chose an algorithmic *intelligent-agent AI* approach, with a simple overlay of scripting to set some broad behavioral and/or location boundaries. A distinct move in the right direction, but there are some problems with the AI implementation in both games. More sophisticated AI algorithms will require more compute-power, which, if performed on the CPU, will need to be traded off against cycles available for graphics. Dual-core will help, but a general-purpose DSP might help even more... they are not expensive and easily integrated into a motherboard.

    Back to the immediate subject of the Ageia PPU and physics effects:-

    I am far more intrigued by Havok's exercises with Havok FX harnessing both dual-core CPU power and GPU power in the service of physics emulation. Would be great to have action games with a physics-adjustable slider so that one can trade off graphics with physics effects in a seamless manner, just as one can trade-off advanced-graphics elements in games today.... which is exactly where Havok is heading. No need to support marginal added hardware like the PhysX. Now, if the PhysX engine was an option on every high-end motherboard, for say not more than $50 extra, or as an optional motherboard plug-in at say $75, (like the 8087 of yore) and did not take up any additional precious peripheral slots, then I would rate its chances of becoming main-stream to be pretty high. Seems as if Ageia should license their hardware DESIGN as soon as possible to nVidia or ATi at (say) not more than $15 a copy and have them incorporate the design into their motherboard chip-sets.

    The current Ageia card has 3 strikes against it for cost, hardware interface (PCI) and software-support reasons. The PhysX PPU certainly has NO hope at all as a peripheral device as long as it stays in PCI form. It must migrate to PCIe asap. Remember that an X1 or X4 PCIe card will happily work in a PCIe X16 slot, and there are still several million SLI and Crossfire motherboards with empty 2nd video slots. Plus, even on a dual-SLI system with dual-slot-width video cards and an audio card present, it is more likely to find one PCIe X1 or X4 slot vacant that does not compromise the video-card ventilation than to find a PCI slot that is neither covered up by the dual-width video cards nor positioned so that its card blocks airflow to one or other of them.

    So if a PCIe version of the PhysX ever becomes available... you will be able to sell your PCI version... at about the price of a doorstop. Few will want the PCI version if a used PCIe version is also available.

    Hard on the wallet being an early adopter at times.....
  • tonjohn - Friday, May 5, 2006 - link

    The developers did a poor job with how they implemented PPU support in GRAW.

    CellFactor is a MUCH better test of what the PhysX card is capable of. The physics in CellFactor are MUCH more intense. When blowing up a load of crap, my fps drops by 2fps at the most, and that is mainly b/c my 9800Pro is struggling to render the actual effects of a grenade explosion.
