When Things Go Wrong & The Test

It's worth noting that at one point this article had a very different tone, based on the benchmark results we had initially gotten. We recently replaced the test rig we run the PhysX articles on with a newer machine, and for the video card we threw in an ATI Radeon X1900XTX, as it's generally the fastest single-slot card we have that isn't an SLI card (i.e. the GeForce 7950 GX2). City of Heroes/Villains is a game we long ago established was CPU limited, so the choice of video card is largely academic, or so we thought.

City of Villains Performance


It turns out that ATI's latest drivers have a problem with City of Heroes/Villains where the performance of the game chokes when using some of the advanced rendering features. Fortunately we caught this issue, but for a while we were wondering why the PhysX card wasn't helping as much as expected. It's always interesting to discover where the bottlenecks are in different benchmarks. City of Heroes is a great testing component since it's an OpenGL title that isn't built on the Doom3 engine. Unfortunately, this is bad timing for ATI, given their recent OpenGL improvements in games that do use the Doom3 engine.

Due to the issues with the ATI card, we switched to testing with a 7950 GX2 instead. Here are the details of our test setup.

PhysX Testbed Configuration
CPU: Intel Core 2 Extreme X6800 (2.93GHz/4MB)
Motherboard: Intel D975XBX (LGA-775)
Chipset: Intel 975X
Chipset Drivers: Intel 7.2.2.1007 (Intel)
Hard Disk: Seagate 7200.7 160GB SATA
Memory: Corsair XMS2 DDR2-800 4-4-4-12 (1GB x 2)
Video Card: NVIDIA GeForce 7950 GX2
Video Drivers: NVIDIA ForceWare 91.33
OS: Windows XP Professional SP2



31 Comments


  • stepz - Thursday, September 07, 2006 - link

    I don't really care for the PPU, but it would be really interesting to see what quad-cores or AMD's 4x4 would do with City of Villains. Can the PPU hold its ground, and does the game scale to 4 cores? You can emulate the 4x4 platform with 2xx Opterons. Reply
  • Ryan Smith - Thursday, September 07, 2006 - link

    CoV only has 2 worker threads (basically broken up into a renderer thread and a physics thread), so more cores wouldn't directly help. Reply
  • PrinceGaz - Friday, September 08, 2006 - link

    That's a shame, because I was wondering the same thing about quad-core processors and whether they can match or even exceed the throughput of the PhysX card. After all, quad-core processors should be available this time next year and will be commonplace by 2008.

    The application should ideally branch off as many physics-threads as there are cores available, so on a dual-core system I would like to see two physics threads (rather than just one) in addition to the main game thread, thus ensuring all spare CPU power can be used for physics work. Having only two threads in total each performing different types of work will usually result in one of the cores being partially idle.

    I personally see the PhysX card as a short-lived product because CPU power is set to rise dramatically in coming years now the focus is on ever more cores (doubling every couple of years or so); there'll be so much CPU power available to easily multi-threaded tasks like physics that there will be no need for a dedicated physics processor chip in any form.
    Reply
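
    The threading model PrinceGaz describes above - one main game thread plus a pool of physics workers sized to the spare cores - can be sketched roughly as follows. This is a hypothetical illustration, not code from any actual game engine: the object format, the trivial Euler integration step, and the interleaved partitioning are all invented for the example (and note that in CPython, threads illustrate the structure but won't give true parallel speedup on CPU-bound work because of the GIL).

    ```python
    import os
    import threading

    def physics_step(objects, dt):
        # Hypothetical integration: advance each object's position
        # by its velocity over one timestep.
        for obj in objects:
            obj["pos"] = [p + v * dt for p, v in zip(obj["pos"], obj["vel"])]

    def run_physics_threads(objects, dt, n_workers):
        # Partition the object list across workers, one interleaved
        # slice per thread, so each spare core gets a share of the work.
        chunks = [objects[i::n_workers] for i in range(n_workers)]
        threads = [threading.Thread(target=physics_step, args=(chunk, dt))
                   for chunk in chunks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    if __name__ == "__main__":
        # One physics worker per core beyond the main (game/render) thread,
        # as suggested in the comment: two workers on a quad-core-minus-one, etc.
        workers = max(1, (os.cpu_count() or 2) - 1)
        objs = [{"pos": [0.0, 0.0], "vel": [1.0, 2.0]} for _ in range(1000)]
        run_physics_threads(objs, dt=0.016, n_workers=workers)
        print(objs[0]["pos"])
    ```

    The point of scaling the worker count to the core count, rather than hard-coding two threads, is exactly the one made above: with only one renderer thread and one physics thread, any cores beyond the second sit partially idle.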
  • Gilhooley - Thursday, September 07, 2006 - link

    It would be nice to have more of a "real world" test. Today when people are playing games online, they usually have: the game, TeamSpeak, a game scanner, browsers for forum game/clan info, and perhaps a torrent client running.

    Myself, I noticed a huge difference in min fps with a dual-core vs. single-core just with the game and TeamSpeak - just as you tested in an earlier article. So, the question is, which HW takes the biggest hit in a "real world" situation?
    Reply
  • bespoke - Thursday, September 07, 2006 - link

    Maybe if DirectX 10 has a physics API that AGEIA's PhysX card can hook into, we'll see games that can use the card well. Otherwise no one is going to buy a card that only works for a few games that go out of their way to support a 3rd party API that results in a small frame rate increase. Reply
  • poohbear - Thursday, September 07, 2006 - link

    hhhmmhmmmmmm should i upgrade to a new x1900xt w/ 256mb ram for $280 or buy ageia's ppu for $280? really tough decision. wtf does ageia think they are selling @ that price point? Reply
  • DigitalFreak - Thursday, September 07, 2006 - link

    These companies always target people with more money than sense. Reply
  • Kwincy - Thursday, September 07, 2006 - link

    I don't think they're targeting people with more money than sense; it's just like all new products introduced: they're recouping all the R&D costs of bringing the product to market. Once they either do that, or people stop buying or don't buy this PPU at all, you'll see prices go down. It always happens, unless a competing product comes out offering better value.
    Reply
  • yyrkoon - Thursday, September 07, 2006 - link

    Going from no PPU to using one seems to make about as much difference as switching from onboard audio to a dedicated audio card (not very much of a difference). Less than 6 FPS min on a low end (Conroe?) CPU just doesn't seem to be worth the additional $250.

    Now, since there are no standards for PPUs, I think this makes it even worse. I bet GPU manufacturers will end up winning the race in this arena.
    Reply
  • Christobevii3 - Thursday, September 07, 2006 - link

    If they could make the card accelerate other things too, it would be cool. Imagine if batch conversions in Photoshop could use some of that processing power... Reply
