Final Words

Ideally, we would have a few more games to test in order to get a better understanding of what developers are doing with the hardware. We'd also love a little more flexibility in how the software we test handles hardware usage and physics detail. For example, what sort of performance can be had using multithreaded physics calculations on dual-core or multi-core systems? Can a high-end CPU even handle the same level of physics detail as the PhysX card, or has GRAW downgraded the complexity of the software calculations for a reason? It would also be very helpful if we could dig up some low-level technical detail on the hardware. Unfortunately, you can't always get what you want.
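
For a sense of what the multithreaded route involves, here is a minimal sketch of our own (the Body layout and the clean two-way split are assumptions, not GRAW's engine code) that farms the per-object integration step out to two threads. The integration step parallelizes easily; the collision phase, where objects interact, is where the dependencies pile up:

    #include <cstddef>
    #include <thread>
    #include <vector>

    struct Body { float pos[3], vel[3], acc[3]; };

    // Integrate a contiguous range of bodies. Ranges don't overlap, so the
    // two threads never touch the same Body during this step.
    static void integrate(std::vector<Body>& bodies, std::size_t begin,
                          std::size_t end, float dt)
    {
        for (std::size_t i = begin; i < end; ++i)
            for (int k = 0; k < 3; ++k) {
                bodies[i].vel[k] += bodies[i].acc[k] * dt;
                bodies[i].pos[k] += bodies[i].vel[k] * dt;
            }
    }

    void step(std::vector<Body>& bodies, float dt)
    {
        const std::size_t mid = bodies.size() / 2;
        // One worker per core on a dual-core CPU: half the bodies each.
        std::thread worker(integrate, std::ref(bodies), std::size_t(0), mid, dt);
        integrate(bodies, mid, bodies.size(), dt);  // second half on this core
        worker.join();
        // Collision detection and response would follow here; that phase is
        // full of cross-object dependencies and parallelizes far less cleanly.
    }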

For now, the tests we've run here are quite impressive in terms of visuals, but we can't say for certain whether or not the PPU contributes substantially to the quality. From what GRAW has shown us, and from the list of titles on the horizon, it is clear that developers are taking an interest in this new PPU phenomenon. We are quite happy to see more interactivity and higher levels of realism make their way into games, and we commend AGEIA for their role in speeding up this process.

The added realism and immersion of playing Ghost Recon Advanced Warfighter with hardware physics is a huge success in this gamer's opinion. Granted, the improved visuals aren't the holy grail of game physics, but this is an excellent first step. In a fast firefight with bullets streaming by, helicopters raining destruction from the heavens, and grenades tearing up the streets, the experience is just that much more hair-raising with a PPU plugged in.

If every game out right now supported some type of physics enhancement with a PPU under the hood, it would be easy to recommend it to anyone who wants higher image quality than the most expensive CPU and GPU can currently offer. For now, one or two games aren't going to get a recommendation for spending the requisite $300, especially when we don't know the extent of what other developers are doing. For those with money to burn, it's certainly a great part to play with. Whether it actually becomes worth the price of admission remains to be seen. We are cautiously optimistic having seen these first fruits, especially considering how much more can be done.

Obviously, there's going to be some question of whether or not the PPU will catch on and stay around for the long haul. Luckily, software developers need not worry. AGEIA has worked very hard to do everything right, and we think they're on the right track. Their PhysX SDK is an excellent software physics solution in its own right - Sony is shipping it with every PS3 development console, and there are Xbox 360 games around with the PhysX SDK powering them as well. Even if the hardware totally fails to gain acceptance, games can still fall back to a software solution. Unfortunately, it's still up to developers to provide the option for modifying physics quality under software as well as hardware, as GRAW demonstrates.
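
For reference, such a fallback is only a few lines with the PhysX SDK: the scene descriptor selects hardware or software simulation, and a title can retry in software when hardware scene creation fails. A minimal sketch, assuming the PhysX 2.x API (identifier names taken from the public SDK documentation; treat the exact calls and failure behavior as our assumptions):

    // A sketch of hardware-first scene creation, assuming PhysX 2.x:
    // NxSceneDesc::simType selects PPU or CPU simulation, and createScene()
    // is assumed to return NULL when no PPU is available.
    #include <NxPhysics.h>

    NxScene* createSceneWithFallback(NxPhysicsSDK* sdk)
    {
        NxSceneDesc desc;
        desc.gravity = NxVec3(0.0f, -9.81f, 0.0f);

        desc.simType = NX_SIMULATION_HW;        // ask for the PPU first
        NxScene* scene = sdk->createScene(desc);

        if (!scene) {                           // no PPU: fall back to the CPU
            desc.simType = NX_SIMULATION_SW;
            scene = sdk->createScene(desc);
            // This is also where a title could scale back object counts or
            // effect detail; the very option GRAW doesn't expose.
        }
        return scene;
    }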

As of now, the PhysX SDK has been adopted by engines such as UnrealEngine3 (Unreal Tournament 2007), Reality Engine (Cell Factor), and Gamebryo (recently used for Elder Scrolls IV: Oblivion, though Havok is implemented there in lieu of PhysX support). This type of developer penetration is good to see, and it will hopefully provide a compelling upgrade argument to consumers in the next 6-12 months.

We are still an incredibly long way off from seeing games that require the PhysX PPU, but it's not outside the realm of possibility. With the PhysX SDK so easily accessible to developers, there is bound to be pressure on titles one to two years out to pack in as many beyond-the-cutting-edge features as possible. Personally, I'm hoping AGEIA PhysX hardware support will make it onto the list. If AGEIA is able to prove their worth on the console middleware side, we may end up seeing a PPU in XBox3 and PS4 down the line as well. There were plenty of skeptics who doubted the PhysX PPU would ever make it out the door, but having passed that milestone, who knows how far they'll go?

We're still a little skeptical about how much the PhysX card is actually doing that couldn't be done on a CPU - especially a dual core CPU. Hopefully this isn't the first "physics decelerator", rather like the first S3 ViRGE 3D chip was more of a step sideways for 3D than a true enhancement. The promise of high quality physics acceleration is still there, but we can't say for certain at this point how much faster a PhysX card really makes things - after all, we've only seen one shipping title, and it may simply be a matter of making better optimizations to the PhysX code. With E3 on the horizon and more games coming out "real soon now", rest assured that we will have continuing coverage of AGEIA and the PhysX PPU.

Comments

  • Ickus - Saturday, May 6, 2006

    Hmmm - seems like the modern equivalent of the old-school maths co-processors.
    Yes, these are a good idea, and correct me if I'm wrong, but isn't that $250 (Aus $) CPU I forked out for supposedly quite good at doing these sorts of calculations, what with its massive FPU capabilities and all? I KNOW that current CPUs have more than enough ability to perform the calculations for the physics engines used in today's video games. I can see why companies are interested in pushing physics add-on cards though...
    "Are your games running crap due to inefficient programming and resource hungry operating systems? Then buy a physics processing unit add-in card! Guaranteed to give you an unimpressive performance benefit for about 2 months!" If these PPUs are to become mainstream and we take another backwards step in programming, please oh please let it be NVidia who takes the reins... They've done more for the multimedia industry in the last 7 years than any other company...
  • DerekWilson - Saturday, May 6, 2006

    CPUs are quite well suited for handling physics calculations for a single object, or even a handful of objects ... physics (especially game physics) calculations are quite simple.

    When you have a few hundred thousand objects all bumping into each other every scene, there is no way on earth a current CPU will be able to keep up with PhysX. There are just too many data dependencies and too much of a bandwidth and parallel processing advantage on AGEIA's side. (See the sketch after this comment.)

    As for where we will end up if another add-in card business takes off ... well that's a little too much to speculate on for the moment :-)
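
To put rough numbers on that scaling point: a brute-force broad phase tests every pair of objects, so the work grows quadratically with the object count. A toy calculation of our own (AGEIA's actual pipeline certainly prunes pairs far more cleverly):

    #include <cstdio>

    int main()
    {
        // Pairwise collision tests for n objects: n * (n - 1) / 2.
        const long counts[] = { 100, 10000, 300000 };
        for (long n : counts) {
            long long pairs = (long long)n * (n - 1) / 2;
            std::printf("%7ld objects -> %15lld pair tests per frame\n", n, pairs);
        }
        // At 300,000 objects that is roughly 45 billion tests per frame;
        // even with smart pruning, the leftover work and the bandwidth to
        // feed it are what swamp a general purpose CPU.
        return 0;
    }
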
  • thestain - Saturday, May 6, 2006

    Just my opinion, but this product is too slow.

    Maybe there need to be minimum ratios to CPU and GPU speeds that Ageia and others can use to make sure they hit the upper performance market.

    Software looks OK; looks like I might be going with software PhysX if available, along with software RAID, even though I would prefer to go with the hardware... if the bridged PCI bus did not screw up my sound card with noise and wasn't so slow... maybe... but my thinking is this product needs to be faster and wider... PCIe x4 or something like it, like I read in earlier articles it was supposed to be.

    PCI... forget it... 773 MHz... forget it... for me... 1.2 GHz and PCIe x4 would have this product rocking.

    Any way to short this company?

    They really screwed the pooch on speed for the upper end... should rename their product a graphics decelerator for faster CPUs, and a poor man's accelerator... but what person who owns a CPU worth $50 and a video card worth $50 will be willing to spend the $200 or more Ageia wants for this...

    Great idea, but like the blockhead who gave us RAID hardware... not fast enough.
  • DerekWilson - Saturday, May 6, 2006

    AFAIK, RAID hardware becomes useful for heavy error checking configurations like RAID 5. With RAID 0 and RAID 1 (or 10 or 0+1) there is no error correction processing overhead. In the days of slow CPUs, this overhead could suck the life out of a system with RAID 5. Today it's not as big an impact in most situations (especially consumer level).

    RAID hardware was quite a good thing, but only in situations where it is necessary.
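
The RAID 5 overhead described above is parity math: every stripe written means XORing the data blocks together to produce a parity block, and with software RAID that XOR runs on the host CPU. A minimal sketch of the per-stripe work (our illustration; real implementations add read-modify-write bookkeeping on top):

    #include <cstddef>
    #include <cstdint>

    // XOR all data blocks in a stripe into the parity block. For software
    // RAID 5 this runs on the host CPU on every write; a hardware RAID card
    // offloads exactly this loop.
    void computeParity(const std::uint8_t* const* dataBlocks, std::size_t numBlocks,
                       std::uint8_t* parity, std::size_t blockSize)
    {
        for (std::size_t i = 0; i < blockSize; ++i) {
            std::uint8_t p = 0;
            for (std::size_t b = 0; b < numBlocks; ++b)
                p ^= dataBlocks[b][i];
            parity[i] = p;
        }
    }
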
  • Cybercat - Saturday, May 6, 2006

    I was reading the article on FiringSquad (http://www.firingsquad.com/features/ageia_physx_re...) where Ageia responded to Havok's claims about where the credit is due, performance hits, etc., and on performance hits they said:

    quote:

    We appreciate feedback from the gamer community and based partly on comments like the one above, we have identified an area in our driver where fine tuning positively impacts frame rate.


    So they essentially responded immediately with a driver update that supposedly improves performance.

    http://ageia.com/physx/drivers.html

    Driver support is certainly a concern with any new hardware, but if Ageia keeps up this kind of timely response to issues and performance with frequent driver updates, in my mind they'll have taken care of one of the major factors in determining their success, swinging the balance of advantages and obstacles in their favor for making it in the market.
  • toyota - Friday, May 5, 2006

    I don't get it. The Ghost Recon videos WITHOUT PhysX look much more natural. The videos I have seen with it look pretty stupid. Everything that blows up or gets shot has the same little black pieces flying around. I have shot up quite a few things in my life and seen plenty of videos of real explosions, and that's not what it looks like.
  • DeathBooger - Friday, May 5, 2006

    The PC version of the new Ghost Recon game was supposed to be released alongside the Xbox 360 version but was delayed at the last minute for a couple of months. My guess is that PhysX implementation was an afterthought while developing the game, and the delay came from the developers trying to figure out what to do with it.
  • shecknoscopy - Friday, May 5, 2006

    Think *you're* part of a niche market?

    I gotta' tell you, as a scientist, this whole topic of putting 'physics' into games makes for an intensely amusing read. Of course I understand what's meant here, but when I first look at people's text in these articles/discussions, I'm always taken aback: "Wait? We need to *add* physics to stuff? Man, THAT's why my experiments have been failing!"

    Anyway...

    I wonder if the types of computations employed by our controversial little PhysX accelerator could be harvested *outside* of the gaming environment. As someone who both loves to game, but also would love to telecommute to my lab, I'd ideally like to be able to handle both tasks using one machine (I'm talking about in-house molecular modeling, crystallographic analysis, etc...). Right now I have to rely on a more appropriate 'gaming' GPU at home, but hustle on in to work to use an essentially identical computer which has been outfitted with a Quadro graphics card to do my crazy experiments. I guess I'm curious if it's plausible to make, say, a 7900GT + PhysX perform comparable calculations to a Quadro/Fire-style workstation graphics setup. 'Cause seriously, trying to play BF2 on your $1500 Quadro card is seriously disappointing. But then, so is trying to perform realtime molecular electron density rendering on your $320 7900GT.

    SO - anybody got any ideas? Some intimate knowledge of the difference between these types of calculations? Or some intimate knowledge of where I can get free pizza and beer? Ah, Grad School.

  • Walter Williams - Friday, May 5, 2006

    quote:

    I wonder if the types of computations employed by our controversial little PhysX accelerator could be harvested *outside* of the gaming environment.

    It will be great for military use as well as the automobile industry.
  • escapedturkey - Friday, May 5, 2006

    Why don't developers use the second core of many dual core systems to do a lot of physics calculations? Is there a drawback to this?
