By now, many have heard of AGEIA and its startling announcement: it will produce a processor dedicated exclusively to physics-related computation. Called the PPU, or Physics Processing Unit, its role will be to offload highly intensive mathematics, such as realistic water movement and realistic character reactions to objects and the world, from the CPU to a dedicated processor. This seems like a natural progression, since dedicated sound, network and other processors are already commonplace.

Today, however, processors spend much of their time idling - you're rarely pushing your hardware to its limits consistently. Thus Havok, a company well known to game developers, has announced plans to do what AGEIA promises, while saving you money and maximizing the value of the hardware you already own. Indeed, Havok has confirmed with us that it is competing with AGEIA.

The Havok FX engine is what Havok claims will provide the functionality of a PPU, but its approach is entirely different from AGEIA's. What's special about Havok FX is that it's a software engine, currently based on Havok's widely used physics engines, designed to offload many intensive physics functions from the CPU to the GPU. Using technology available in Shader Model 3.0 and beyond, the Havok FX engine will be able to take advantage of unused resources in today's powerful GPUs and put them to use.
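To see why this kind of offloading is plausible, consider the shape of a typical effects-physics workload. The sketch below (our illustration, not Havok FX code) updates a set of particles with a semi-implicit Euler step; every particle's update is independent of the others, which is exactly the data-parallel pattern a Shader Model 3.0 pixel shader can execute across thousands of fragments at once.

```python
# Illustrative sketch only: each (position, velocity) pair is updated
# independently, so the loop body maps naturally onto a per-fragment
# shader program running in parallel on the GPU.

def step_particles(positions, velocities, dt, gravity=-9.8):
    """One semi-implicit Euler step over all particles."""
    new_vel = [(vx, vy + gravity * dt) for (vx, vy) in velocities]
    new_pos = [(px + vx * dt, py + vy * dt)
               for (px, py), (vx, vy) in zip(positions, new_vel)]
    return new_pos, new_vel

positions = [(0.0, 10.0), (1.0, 5.0)]
velocities = [(1.0, 0.0), (0.0, 2.0)]
positions, velocities = step_particles(positions, velocities, dt=0.1)
```

Because no particle reads another particle's result, the GPU needs no synchronization between them - the main cost is shuttling the data into and out of textures, which is why spare GPU bandwidth matters.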

Many games today are demanding, and while they don't use everything a graphics card is capable of (every feature, etc.), they do stress the majority of GPUs out there - often to their very limits. Utilization also varies from moment to moment: a given scene can use 80% of GPU rendering resources (bus, memory bandwidth, etc.) while another scene uses only 20%. How Havok aims to tap those spare resources to accelerate physics calculations remains to be seen. Havok did mention to us, however, that it believes two GPUs would be better suited to such duties, as load balancing would be the key issue. Shader Model 3.0 compliance requires full programmability, so with today's SM 3.0-capable DX9 GPUs, it's entirely possible for Havok FX to program a Radeon X1800 or GeForce 7800 GT (and beyond) on the fly with specific physics processing instructions.

Havok also pointed out to us that its Havok FX engine will allow a Shader Model 3.0 compliant GPU to accelerate "game-play" physics, and not only the resulting visual effects of such physics, which Havok says is all AGEIA's product does. Havok explains that its engine is able to offload such physics operations as collision detection, which today's general-purpose GPUs are otherwise very slow to compute. We spoke to Havok and they said:
"It is definitely the case that load-balancing is a key challenge for both effects physics and graphics. Enabling effects physics via the GPU offers much greater flexibility for addressing that type of problem versus a proprietary physics hardware device that will inevitably sit idle while the GPU may be overtaxed. We believe that two GPU's stand a far better chance of collaborating more effectively."
One important fact to keep in mind is that Havok is playing a significant role in the development of console technology, and has not specifically stated that the PC desktop is the ideal platform for Havok FX. Consoles have the advantage of being a closed system, in which dedicated solutions such as Havok FX will have long term applications. It will be interesting to see if Havok can achieve this and how well.
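Collision detection is worth singling out because of why it is hard for a GPU. The sketch below (our hypothetical example, not Havok's code) shows a naive broad-phase pass over axis-aligned bounding boxes: the pairwise tests, data-dependent branches, and variable-length output list are precisely the irregular, scatter-heavy pattern that the fixed fragment pipelines of 2005-era GPUs handled poorly without an engine managing the work.

```python
# Hypothetical broad-phase collision sketch: names and structure are ours.
# The nested pairwise tests and the irregular "hits" output illustrate why
# this stage is awkward for a streaming GPU pipeline.

def broad_phase(boxes):
    """Return index pairs of overlapping AABBs.

    Each box is (min_x, min_y, max_x, max_y).
    """
    hits = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            a, b = boxes[i], boxes[j]
            # Two AABBs overlap iff they overlap on both axes.
            overlap = (a[0] <= b[2] and b[0] <= a[2] and
                       a[1] <= b[3] and b[1] <= a[3])
            if overlap:
                hits.append((i, j))
    return hits

boxes = [(0, 0, 2, 2), (1, 1, 3, 3), (5, 5, 6, 6)]
```

A dedicated engine (or a PPU) can restructure this into more regular work, which is presumably where Havok FX claims its advantage over raw shader code.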

Havok's physics engines are featured in many of today's games, including F.E.A.R., Age of Empires 3, Brothers in Arms: Road to Hill 30, Halo 2 and many others. The full list can be found on Havok's website.



Comments

  • rqle - Wednesday, November 02, 2005 - link

    Honestly, I'd RATHER have developers use their time to offload physics to the GPU or dual-core CPU when/if possible, even IF it comes at the loss of a few physics effects. To me, a little less effect is better if more people can benefit. A little more physics (assuming this AGEIA PPU is GREATER than what developers can offload to the CPU/GPU) isn't going to make a bad game good.
  • Houdani - Wednesday, November 02, 2005 - link

    Very valid point about developer resources. Consoles are the ideal market for Havok FX since they are a closed system; developers know exactly what hardware is available and can tailor their software accordingly. On PCs (an open system), the decision isn't nearly as cut and dried.
  • Houdani - Wednesday, November 02, 2005 - link

    Why would anyone want to put MORE burden on the graphics card when the graphics card is already the bottleneck (at higher resolutions or AA/AF)?

    In Havok's own quote they state the GPU may be overtaxed as it is. And giving the overtaxed GPU the additional responsibility of handling physics processing is good why?

    AGEIA may be a lame duck (mostly due to cost), but frankly I'm not really keen on Havok's idea.

    I had bad thoughts about overburdened GPUs when ATI was tooting its own horn over GPU physics processing. Havok has just rekindled those thoughts. Oye!
  • Griswold - Wednesday, November 02, 2005 - link


    Why would anyone want to put MORE burden on the graphics card when the graphics card is already the bottleneck (at higher resolutions or AA/AF)?

    F.E.A.R. is about the only game that makes a vid card the bottleneck at a reasonable resolution right now. In most cases, the CPU is the bottleneck. If you take the physics load off the CPU and put it on the GPU, there shouldn't be much of a penalty - and the GPU can apparently process physics much faster than a CPU anyway.

    Alternatively, Crytek will support dual-core CPUs with the next version of the Far Cry engine. They will let the physics engine run on a second core, if it's available.

    At any rate, I don't see many people buying a PPU card if there are other viable options that use existing hardware.
  • JarredWalton - Wednesday, November 02, 2005 - link

    There are other games that tax the GPU quite a lot, like Doom 3, Quake 4, AOE3, etc. The question is how they tax the GPU. HL2 and FEAR do a lot of pixel shader stuff. Doom 3 and Quake 4 are more constrained by GPU memory bandwidth (for all the shadows among other things). So while D3/Q4 are GPU limited, there may still be unused GPU resources available.

    How this all plays out remains to be seen, naturally. I'm skeptical but at the same time hopeful.
  • Lord Zado - Wednesday, November 02, 2005 - link

    I still think the route to go is some sort of all-in-one "Gamer Card" with the GPU and PPU on the same card. Even Havok admits that its method would work best with two GPUs. Well, what's better: two GPUs, or one GPU and one PPU?
    The latter is probably cheaper, unless you are buying really low-end SLI cards.
  • Griswold - Wednesday, November 02, 2005 - link

    AGEIA was dead before it was even born.

    Nobody in their right mind would shell out 200-300 bucks for some physics engine when the same can be done by your 200-300 dollar vid card.
  • Crassus - Wednesday, November 02, 2005 - link

    Now why would spending $200-300 for a PPU be so different from spending the same amount on a second graphics card?
  • xtknight - Wednesday, November 02, 2005 - link

    Because there's no reasonable certainty that game devs will even use this thing we spent $300 on.
  • dwalton - Wednesday, November 02, 2005 - link

    I think this will force AGEIA to add this kind of GPU offloading to their engine as well. I doubt many developers will want to use a physics engine that can only be taken advantage of with a discrete PPU. That would be akin to offering SLI-only games.

    However, offering an engine that is flexible enough to be used by a CPU, GPU or a discrete PPU (depending on the setup) gives developers the opportunity to create games based on the AGEIA engine for all gamers. This would allow games to offer different physics feature sets depending on the setup. I'm a fan of this scenario because it gives more flexibility to the gamer, especially if a PPU turns out to be generally cheaper than a high-end GPU. It also offers a gradual transition to a GPU/PPU setup while not requiring a PPU.

    It seems that offloading physics calculations to the GPU will come at the cost of reducing the resources available for actual graphics. Under normal scenarios, the most graphically intensive scenes will also be the most physics intensive. It's hard to believe graphics and physics load are inversely related, so developers will likely be forced to make graphical or physical sacrifices to enhance the other.

    Using an old GPU as a PPU is a great idea. However, it seems that SLI and CrossFire are moving towards allowing mixed-generation setups to enhance graphics performance as well. A PPU would let extreme gamers who don't want to sacrifice graphics performance for physics performance simply add a PPU to their SLI setup. A PPU and a flexible physics engine (if available now) would give gamers a choice of:

    Extreme High End

    7800GTX-GT/6800Ultra-vanilla-6600GT-5XXX generation/PPU
    7800GTX-GT/6800Ultra-vanilla-6600GT-5XXX generation
    1800XT-XL/850XT-Pro-800XL-GTO-700PRO-9XXX generation/PPU
    1800XT-XL/850XT-Pro-800XL-GTO-700PRO-9XXX generation

    and so on and so on
