By now, many have heard of AGEIA and its startling announcement: the company will produce a processor dedicated exclusively to physics computations. Called the PPU, or Physics Processing Unit, its role will be to offload computationally intensive work - realistic water movement, believable character reactions to objects and the world - from the CPU to dedicated hardware. This seems like a natural progression, since dedicated sound, network and other processors are already commonplace.

Today, however, most processors spend much of their time idling - you're rarely pushing your hardware to its limits consistently. Havok, a company well known to game developers, has announced plans to deliver what AGEIA promises while saving you money in the process. Indeed, Havok has confirmed to us that it is competing directly with AGEIA.

The Havok FX engine is what Havok claims will provide the functionality of a PPU, but its approach is entirely different from AGEIA's. Havok FX is a software engine built on Havok's widely used physics technology, and it is designed to offload many intensive physics functions from the CPU to the GPU. Using the programmability introduced in Shader Model 3.0 and beyond, the Havok FX engine will be able to take the unused resources of today's powerful GPUs and put them to work.
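The kind of workload being described - thousands of independent bodies updated with identical arithmetic - is naturally data-parallel, which is why it maps onto programmable pixel shaders at all. A rough sketch of the idea, using NumPy's array-wide operations as a stand-in for per-fragment shader execution (the function and constants here are our own illustration, not anything from Havok's API):

```python
import numpy as np

# Hypothetical debris-particle state. On a GPU, positions and velocities
# would live in floating-point textures, and the update below would run
# as a pixel shader, once per particle, all in parallel.
def step_particles(pos, vel, dt=0.016, gravity=np.array([0.0, -9.81, 0.0])):
    vel = vel + gravity * dt        # identical arithmetic for every particle
    pos = pos + vel * dt
    below = pos[:, 1] < 0.0         # crude ground plane at y = 0
    pos[below, 1] = -pos[below, 1]  # reflect anything that fell through
    vel[below, 1] = -0.5 * vel[below, 1]  # lossy bounce
    return pos, vel

pos = np.zeros((1024, 3))
vel = np.random.default_rng(0).normal(size=(1024, 3))
pos, vel = step_particles(pos, vel)
```

The appeal for a GPU is that no particle's update depends on any other's, so the hardware's many pipelines can all stay busy.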

Many of today's games are demanding, and even when they don't exercise everything a graphics card offers feature-wise, they can push GPUs to their very limits. Crucially, though, that load varies: one scene might consume 80% of a GPU's rendering resources (bus, memory bandwidth, etc.) while another uses only 20%. Exactly how Havok aims to tap those spare resources to accelerate physics calculations remains to be seen. Havok did mention to us, however, that it believes two GPUs would be better suited to such duties, since load balancing will be the key issue. Full programmability is required for Shader Model 3.0 compliance, so with today's compliant GPUs, it's entirely possible for Havok FX to program a Radeon X1800 or GeForce 7800 GT (and beyond) on the fly with physics-specific instructions.
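A system like this ultimately has to decide, frame by frame, how much spare GPU capacity exists. A toy heuristic makes the load-balancing problem concrete (this is entirely our own hypothetical sketch, not Havok's design):

```python
def physics_budget(render_load, frame_budget=0.9):
    """Return the fraction of a GPU frame available for physics, given an
    estimated render utilization in [0, 1]. Hypothetical heuristic, meant
    only to illustrate why load balancing is the hard part: the answer
    changes every frame as scene complexity changes."""
    return max(0.0, frame_budget - min(render_load, 1.0))

# A scene using 20% of the GPU leaves plenty of headroom for physics;
# one running near its limits leaves almost nothing.
light = physics_budget(0.2)
heavy = physics_budget(0.8)
```

With two GPUs, one card could simply take whichever side of this trade-off the other can't afford - which is presumably why Havok sees dual-GPU systems as the better fit.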

Havok also pointed out to us that its Havok FX engine will allow a Shader Model 3.0-compliant GPU to accelerate "game-play" physics, and not only the resulting visual effects of such physics, which Havok says is all AGEIA's product does. Havok explains that its engine is able to offload operations such as collision detection, which today's general-purpose GPUs are otherwise very slow to compute. We spoke to Havok and they said:
"It is definitely the case that load-balancing is a key challenge for both effects physics and graphics. Enabling effects physics via the GPU offers much greater flexibility for addressing that type of problem versus a proprietary physics hardware device that will inevitably sit idle while the GPU may be overtaxed. We believe that two GPU's stand a far better chance of collaborating more effectively."
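Collision detection is worth singling out because it is exactly the kind of branchy, data-dependent work that maps poorly onto a GPU's lockstep pipelines. A minimal sweep-and-prune broad phase shows why (a CPU-style sketch of the general technique, not Havok's implementation):

```python
def broad_phase(intervals):
    """Sweep-and-prune on one axis: return index pairs whose [lo, hi]
    extents overlap. The sort plus a shrinking/growing active list is the
    kind of data-dependent control flow early shader hardware handled
    poorly, while a CPU takes it in stride."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    active, pairs = [], []
    for i in order:
        lo_i, hi_i = intervals[i]
        active = [j for j in active if intervals[j][1] >= lo_i]  # drop expired
        pairs.extend((min(i, j), max(i, j)) for j in active)
        active.append(i)
    return pairs

# Boxes 0 and 1 overlap on this axis; box 2 sits off on its own.
print(broad_phase([(0.0, 2.0), (1.5, 3.0), (5.0, 6.0)]))  # → [(0, 1)]
```

The dynamic branching added in Shader Model 3.0 is what begins to make this class of algorithm feasible on a GPU at all.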
One important fact to keep in mind is that Havok is playing a significant role in the development of console technology, and has not specifically stated that the PC desktop is the ideal platform for Havok FX. Consoles have the advantage of being a closed system, in which dedicated solutions such as Havok FX will have long term applications. It will be interesting to see if Havok can achieve this and how well.

Havok's physics engines are featured in many of today's games, including F.E.A.R., Age of Empires 3, Brothers in Arms: Road to Hill 30, Halo 2 and many others. The full list can be found here on Havok's website.



Comments

  • linuxOwnzIfUrLeet - Wednesday, November 02, 2005 - link

    What if creative labs used all the supercomputing power of their new sound card to do physics?
  • JarredWalton - Wednesday, November 02, 2005 - link

    Maybe Sony should sell a PS2-powered super-computer expansion card as well? Just think, Sony AND Creative working together on proprietary technology - who could stand against them!? :)
  • ncage - Wednesday, November 02, 2005 - link

    If I had to guess, nvidia/ati might be putting physics functions in their GPUs as we get die shrinks. I think this is the best place to put it, too. I can imagine some of the newer video cards will tout PPU functions. I've heard some state that as amd/intel add more cores, they would add specialized cores to take care of the physics part of games. This makes no sense at all. How many people don't game? They would be adding features to a CPU that everyone would have to pay for, yet only gamers could use. Oh wow, I just got a new 10-core Opteron with 2 cores used as a PPU... for my database server. They could add specialized cores that would help in more pipelined parallel operations (pretty much how a GPU is designed now), but I doubt they'd add specialized functions like this to the CPU. It makes much more sense to put it in the GPU, and I don't think AGEIA can really compete with nvidia/ati.
  • OrSin - Wednesday, November 02, 2005 - link

    The ATI video card in the new Xbox already offloads some physics work to a daughter GPU.
    Sorry, but I doubt the CPU is the cause of slowdowns in most games. What they really need to do is offload the physics to the second core. All new consoles and most CPUs are going dual core, and most games just aren't set up for it. If they write for the PPU, then maybe some software can translate that PPU code to the other CPU in the system.
    Makes the most sense when trying to program.
  • adamfarden - Wednesday, November 02, 2005 - link

    Hmmm if it does work out on PC's then maybe we could see in systems a high end GPU used for graphics, and a low end GPU used for physics... a great way to recycle your old graphics card!
  • huges84 - Wednesday, November 02, 2005 - link

    That is exactly what I was thinking. However, that would only work in the future, when people's old cards are SM 3.0 capable. It wouldn't work for most people right now.

    But then again, in the near future more people will have dual-core CPUs. That would be the best way to go in my opinion, because you can use that second core for everyday multitasking, not just when playing games.

    Also, if physics is such a big deal and has high demand, why don't AMD and Intel introduce new instructions into their next-generation CPUs to help accelerate physics calculations? They did something similar with SSE to accelerate multimedia calculations. I think that would be better, because it means everyone will eventually have a CPU that can do it, while not everyone will have a PPU or two GPUs.
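The SSE analogy in the comment above is easy to picture: SIMD instructions apply one operation across many data lanes at once. NumPy's whole-array arithmetic plays the same role in this sketch (illustrative only; real SSE works on 4-wide float registers, and the spring-damper formula here is just a stand-in workload):

```python
import numpy as np

# Computing spring-damper forces F = -k*x - c*v for many objects.
def forces_scalar(x, v, k=10.0, c=0.5):
    # One object at a time, like plain x87 scalar code.
    return [-k * xi - c * vi for xi, vi in zip(x, v)]

def forces_simd(x, v, k=10.0, c=0.5):
    # Whole arrays at once - the SIMD/SSE style of execution.
    return -k * x - c * v

x = np.linspace(-1.0, 1.0, 8)
v = np.zeros(8)
same = np.allclose(forces_scalar(x, v), forces_simd(x, v))
```

Both forms compute identical results; the SIMD form simply exposes the independence of each element to the hardware, which is the property physics workloads tend to have in abundance.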
  • bersl2 - Wednesday, November 02, 2005 - link

    A CPU can't perform operations in parallel the way a GPU can.

    If graphics cards had open ISAs like CPUs do, there's no telling what kind of inventive uses people might find for them. The Wikipedia article lists various uses of the GPU, with links.
  • bersl2 - Wednesday, November 02, 2005 - link

    To add to this, from an ExtremeTech article (emphasis mine):

    The third future project at ATI is dramatically improved support for the GPGPU scene. These are researchers, mostly academic, that are tapping into the massive parallel computing power of graphics processors for general computing tasks, like fluid dynamics calculations, protein folding, or audio and signal processing. ATI's new GPU architecture should be better at GPGPU tasks than any that has come before, as it provides more registers per pipeline than either ATI's old architecture or Nvidia's new one. This is a sore spot for GPGPU developers but not really a limitation for game makers. The improved performance of dynamic branching in the new architecture should be a huge win for GPGPU applications as well. Developers working to enable general purpose non-graphics applications on GPUs have lamented the lack of more direct access to the hardware, but ATI plans to remedy that by publishing a detailed spec and even a thin "close to the metal" abstraction layer for these coders, so it can get away from using DirectX and OpenGL as an interface to the cards. Those are fine graphics APIs, but they're less than optimal for general purpose computing.

    If this means what I think it means, it's a real gift not only for the GPGPU guys, but also for the DRI and Mesa projects.
  • Shinei - Wednesday, November 02, 2005 - link

    Cg has been doing fluid dynamics simulations for at least a year by now (the demo of which you can download from nVidia's developer site), so an abstraction layer that runs specifically on GPUs isn't new. However, I'd be curious to see if nVidia intends to release a physics-oriented version of their Cg instruction set that allows them to handle more general-purpose calculations, to run as competition against AGEIA and Havok.
    Which, of course, brings me to my next point--even if Havok offers this GPU-enabled physics engine, it's my understanding that AGEIA's NovodeX physics engine is about ten times as fast as Havok's current engine is, even when executed solely on CPU resources. Havok would have to significantly streamline their physics calculations if they ever intend to compete with NovodeX for speed, and especially if they want to keep the GPU usage levels low enough that the GPU can actually spend some time rendering the scene...
  • Lord Zado - Wednesday, November 02, 2005 - link

    Personally, I don't really care whether Ageia's method or Havok's method is better. I'm just glad to see another company showing interest in this physics idea. I was really excited about the PPU when I first heard about it months ago, but I figured it was dead in the water since only one company was really pushing it and it didn't look too consumer friendly.

    I think independent physics processing is the logical next evolution of gaming, and I'm glad to see another company in the mix.
