AGEIA PhysX Technology and GPU Hardware

First off, here is the lowdown on the hardware as we know it. AGEIA, being the first and only consumer-oriented physics processor designer right now, has not given us as much in-depth technical detail as other hardware designers. We certainly understand the need to protect intellectual property, especially at this stage in the game, but this is what we know.

PhysX Hardware:
125 Million transistors
130nm manufacturing process
128MB 733MHz Data Rate GDDR3 RAM
128-bit memory bus interface
20 giga-instructions per second
2 Tb/sec internal memory bandwidth
"Dozens" of fully independent cores


There are quite a few things to note about this architecture. Even without knowing all the ins and outs, it is quite obvious that this chip will be a force to be reckoned with in the physics realm. A graphics card, even with a 512-bit internal bus running at core speed, has less than 350 Gb/sec internal bandwidth. There are also lots of restrictions on the way data moves around in a GPU. For instance, there is no way for a pixel shader to read a value, change it, and write it back to the same spot in local RAM. There are ways to deal with this when tackling physics, but making highly efficient use of nearly 6 times the internal bandwidth for the task at hand is a huge plus. CPUs aren't able to touch this type of internal bandwidth either. (Of course, we're talking about internal theoretical bandwidth, but the best we can do for now is relay what AGEIA has told us.)
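The bandwidth comparison above reduces to simple arithmetic. Here is a minimal sanity-check sketch in Python; the 512-bit bus width and 650MHz core clock are our own illustrative figures for a hypothetical high-end GPU, not vendor specs:

```python
def internal_bandwidth_gbps(bus_width_bits, clock_hz):
    """Theoretical internal bandwidth in gigabits per second:
    bits moved per clock times clocks per second."""
    return bus_width_bits * clock_hz / 1e9

# Hypothetical high-end GPU: 512-bit internal bus at a 650MHz core clock.
gpu_gbps = internal_bandwidth_gbps(512, 650e6)

# AGEIA's claimed internal bandwidth for the PhysX chip.
physx_gbps = 2000.0  # 2 Tb/sec

print(f"GPU (theoretical): {gpu_gbps:.0f} Gb/sec")        # lands under 350 Gb/sec
print(f"PhysX advantage:   {physx_gbps / gpu_gbps:.1f}x") # roughly 6x
```

Even granting the GPU a generous 512-bit bus at core speed, the numbers AGEIA quotes come out to roughly a six-fold advantage, which matches the claim above.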

Physics, as we noted in last year's article, generally presents itself as sets of small, highly dependent problems. Graphics has become sets of highly independent, mathematically intense problems. It's not that GPUs can't be used to solve problems where the input to one pixel is the output of another (performing multiple passes and making use of render-to-texture functionality is one obvious solution); it's just that much of the power of a GPU is wasted when attempting to solve this type of problem. Making use of a large number of fully independent processing units makes sense as well. In a GPU's SIMD architecture, pixel pipelines execute the same instructions on many different pixels. In physics, it is much more often the case that different things need to be done to every physical object in a scene, and it makes much more sense to attack the problem with a properly suited architecture.
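The SIMD-versus-independent-cores contrast can be sketched in a few lines. This is our own toy illustration (the update rules and object types are made up, not AGEIA or GPU vendor code): the shader-style loop applies one operation uniformly to all data, while the physics-style loop must pick a different update rule per object.

```python
# Graphics-style SIMD: the same instruction stream runs on every pixel.
pixels = [0.1, 0.5, 0.9, 0.3]
shaded = [min(1.0, p * 1.5) for p in pixels]  # one operation, many data elements

# Physics-style: different object types need different update rules,
# so a single uniform instruction stream fits the problem poorly.
def update_rigid_body(obj):
    obj["v"] += -9.8 * 0.016  # apply gravity over one ~16ms timestep
    return obj

def update_cloth_node(obj):
    obj["v"] *= 0.99  # simple velocity damping
    return obj

dispatch = {"rigid": update_rigid_body, "cloth": update_cloth_node}
objects = [{"kind": "rigid", "v": 0.0}, {"kind": "cloth", "v": 2.0}]
updated = [dispatch[o["kind"]](o) for o in objects]
```

A chip with dozens of independent cores can run the rigid-body and cloth updates side by side; a SIMD pipeline would have to serialize the two paths or mask one of them out on each pass.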

To be fair, NVIDIA and ATI are not arguing that they can compete with the physics processing power AGEIA is able to offer in the PhysX chip. The main selling point of physics on the GPU is that everyone who plays games (and would want a physics card) already has a graphics card. Solutions like Havok FX, which use SM3.0 to implement physics calculations on the GPU, are good ways to augment existing physics engines. These types of solutions will add a little more punch to what developers can do. This won't create a revolution, but it will get game developers to look harder at physics in the future, and that is a good thing. We have yet to see Havok FX or a competing solution in action, so we can't go into any detail on what to expect. However, it is obvious that a multi-GPU platform will be able to benefit from physics engines that make use of GPUs: there are plenty of cases where games are not able to take 100% advantage of both GPUs. In single-GPU cases, there could still be a benefit, but the more graphically intensive a scene, the less room there is for the GPU to worry about anything else. We are certainly seeing titles like Oblivion that can bring everything we throw at them to a crawl, so balance will certainly be an issue for Havok FX and similar solutions.

DirectX 10 will absolutely benefit AGEIA, NVIDIA, and ATI. For physics-on-GPU implementations, DX10 will decrease overhead significantly. State changes will be more efficient, and many more objects will be able to be sent to the GPU for processing every frame. This will obviously make it easier for GPUs to handle tasks other than graphics more efficiently. A little less obviously, PhysX hardware-accelerated games will also benefit from a graphics standpoint. With the possibility for games to support orders of magnitude more rigid body objects under PhysX, overhead can become an issue when batching these objects to the GPU for rendering. This is a hard thing for us to test explicitly, but it is easy to understand why it will be a problem when we have developers already complaining about the overhead issue.
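The batching concern comes down to simple arithmetic: per-object submission overhead scales linearly with object count. A rough sketch follows, with made-up per-call overhead figures chosen only to illustrate the shape of the problem (they are not DirectX measurements):

```python
def submission_cost_ms(object_count, per_call_overhead_ms):
    """CPU time per frame spent just submitting objects for rendering."""
    return object_count * per_call_overhead_ms

rigid_bodies = 10_000  # "orders of magnitude more" PhysX debris to render

# Illustrative per-draw-call overheads; DX10 aims to cut state-change cost.
high_overhead = submission_cost_ms(rigid_bodies, 0.01)
low_overhead = submission_cost_ms(rigid_bodies, 0.002)

print(f"high per-call overhead: {high_overhead:.0f} ms/frame")
print(f"low per-call overhead:  {low_overhead:.0f} ms/frame")
```

At 60fps the entire frame budget is under 17ms, so even a modest per-call cost can swamp a frame once object counts explode. That is exactly the overhead issue developers are already complaining about.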

While we know the PhysX part can handle 20 GIPS, this figure likely counts simple independent instructions. We would really like to get a better idea of how much actual "work" this part can handle, but for now we'll have to settle for this ambiguous number and some real-world performance. Let's take a look at the ASUS card and then examine the numbers.

101 Comments

  • bob661 - Friday, May 5, 2006 - link

    Joe SixPacks aren't gamers. They're email and Word users. People that game know what hardware is required.
  • nullpointerus - Friday, May 5, 2006 - link

    That's not how it works. New types of hardware are initially luxury items, both in the sense that they are affordable only by a few and way overpriced. When the rich adopt these things, the middle class ends up wanting them, and manufacturers find ways to bring the prices down by scaling down the hardware or using technological improvements. So in other words, pipe down, let those who can afford them buy them, and in a few years we may see $50-75 versions for ordinary gamers.
  • Mr Perfect - Saturday, May 6, 2006 - link

    I wish I could find the article now, but a while back there was an interview with AGEIA where it was said that prices would span a range similar to video cards. So yes, there will probably be low-end, mid-range, and high-end cards.

    What I'm concerned about is the people who are already fighting a budget to game. These are the high school kids with little to no income, the 40-year-old with two kids and a mortgage, and the casual gamer who's probably just as interested in a $170 PS2. What happens when they have to buy not only an entry-level $150 video card, but also a $150 physics card? I can only imagine that if gaming were currently limited to those people with a $300 budget for a 7900GT or X1800 XL, we'd see PC gaming become a very elite selection for "enthusiasts" only.

    Hopefully we can get some snazzy physics without increasing the cost of admission so much, either by taking advantage of the dual-core CPUs that are even now worming their way into mainstream PCs, or through some sort of new video card technology.
  • nullpointerus - Sunday, May 7, 2006 - link

    Game developers have to eat, too. They won't produce games requiring extravagant hardware. Your fear is irrational. When you go into a doctor's office to get a shot, do you insist that the needle be sterilized right in front of your eyes before it comes anywhere near your skin? No. The doctor wants to eat, so he's not going to blow his education and license by reusing needles...
  • Mr Perfect - Monday, May 8, 2006 - link

    Well, obviously it won't be a problem if it's not required. If it becomes an unnecessary enhancement card, like an X-Fi, then all is well. All I've been saying is that if it DOES become a required card, there is the possibility of monetary problems for the bread-and-butter casual gamers who fill the servers.
  • Googer - Friday, May 5, 2006 - link

    I for one will not be an early adopter of one of these. First-generation hardware is always cool to look at, but it's almost always something you do not want to own. DX9 video cards are a great example: the ATI 9700 Pro was a great card if you played DX8 games, but by the time software rolled around to take advantage of DX9 hardware, the 9700 Pro just wasn't truly cut out to handle it. The 9700 Pro also lacked a ton of features that second-generation DX9 cards had. My point is that you should wait for revision/version 2.0 of this card and you won't regret it. By then, programs should be on store shelves that take full advantage of PhysX hardware.
  • Jedi2155 - Monday, May 8, 2006 - link

    I think it handled the first-gen DX9 games relatively well. Far Cry was a great example, as it played quite well on my 9700 Pro (which lasted me until Sept. '05, when I upgraded to a refurb X800 Pro). It was also able to run most games at max details (albeit at crappy framerates, but it was able to do it!). I think the 9700 Pro offered a lot for its time and was able to play the first-gen DX9 games well enough.
  • munky - Friday, May 5, 2006 - link

    Sure, the 9x00 series could handle DX9, just not at maxed out settings. I played Farcry on a 9800xt, and it ran smoothly at medium-high settings. But the physx card is just plain disappointing, since it causes such a performance hit in GRAW, even at cpu-limited resolutions. Either the developers did not code the physics properly, or the physx card is not all that it's hyped up to be. We'll need more games using the ppu to know for sure.
  • rqle - Friday, May 5, 2006 - link

    I bought a 9700 Pro; I saw it as far ahead of the Ti4600, with 4x the performance when AA was applied. It was the first card to play games with AA and AF, even if they weren't DirectX 9 games. BUT this AGEIA thing seems a little pointless to me. I'd actually rather have 2 ATI or 2 NVIDIA cards; at least that gives you an option: less physics or a better graphics experience. It comes in handy for those 98% of games that aren't AGEIA-compatible yet.
  • PeteRoy - Friday, May 5, 2006 - link

    I hope this thing will be integrated into video cards, the motherboard, or the CPU instead of being a separate card.
