Let's Talk about PhysX, Baby

When AMD and NVIDIA both started talking about GPU-accelerated physics, the holy grail was the following scenario:

You upgrade your graphics card, but instead of throwing your old one away, you simply use it for GPU-accelerated physics. Back then NVIDIA didn't yet own AGEIA, and this was its way of competing with AGEIA's PPU: why buy another card when you could simply upgrade your GPU and use your old one for pretty physics effects?

It was a great promise, but neither company enabled it in a usable way until today. Previous NVIDIA drivers with PhysX support required that you hook a monitor up to the card that was to accelerate PhysX and extend the Windows desktop onto that monitor. With NVIDIA's 180.48 drivers, you can now easily choose which NVIDIA graphics card you'd like to use for PhysX. Disabling PhysX, enabling it on the same GPU as the display, or enabling it on a different GPU is now as easy as picking the right radio button and selecting the card from a drop-down menu.

When we tested this, it worked. Which was nice. While it's not a fundamental change in what can be done, the driver has been refined to the point where it should have been in the first place. It is good to have an easy interface for enabling and adjusting the way PhysX runs on the system, and for picking whether PhysX runs on the display hardware (be it a single card or an SLI setup) or on a secondary card. But this really should have been done long ago.

There is another interesting side effect. When we enabled PhysX on our secondary card, we noticed that the desktop had been extended onto a non-existent monitor.

Windows will not allow a GPU to be used unless it is enabled as a display device, which is what made enabling PhysX on a secondary card so cumbersome in the first place. Microsoft hasn't fixed anything on their end; rather, NVIDIA has made all the mucking around with Windows transparent. It seems they simply tell Windows a display is connected when one actually is not. It's a cool trick, but hopefully future versions of Windows will not require such things.
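
Curious readers can see the trick from Windows' side using the standard Win32 display enumeration call. Below is a minimal sketch (our illustration, not anything from NVIDIA's driver) that lists which display devices Windows believes are attached:

    #include <windows.h>
    #include <cstdio>

    int main() {
        DISPLAY_DEVICEA dd;
        dd.cb = sizeof(dd);
        // Walk the display devices the OS knows about; a PhysX-only GPU
        // still shows up as attached to the desktop, courtesy of the
        // phantom monitor the driver reports.
        for (DWORD i = 0; EnumDisplayDevicesA(NULL, i, &dd, 0); ++i) {
            printf("%s [%s]%s%s\n", dd.DeviceName, dd.DeviceString,
                   (dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP) ? " attached" : "",
                   (dd.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE) ? " primary" : "");
            dd.cb = sizeof(dd);  // reset the struct size before the next call
        }
        return 0;
    }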

Mirror's Edge: The First Title to Impress Us with GPU PhysX?

Around every single GPU release, whether from AMD or NVIDIA, we get a call from NVIDIA reminding us that only on NVIDIA hardware can you get PhysX acceleration (not physics in general, but PhysX specifically). We've always responded by saying that none of the effects enabled by PhysX in the games that support it are compelling enough for us to recommend an underperforming NVIDIA GPU over a more competitive AMD one. Well, NVIDIA promises that Mirror's Edge, upon its PC release in January, will satisfy our needs.

We don't have the game, nor do we have a demo for anything other than consoles, but NVIDIA promises it'll be good and has given us a video that we can share. To underline the differences between the PhysX and non-PhysX versions, here's what to look for: glass fragments are a particle system without PhysX and persistent objects with it (meaning they stick around and can be interacted with). Glass fragments are also smaller and more abundant with PhysX. Cloth is non-interactive and can't be ripped, torn, or shot through without PhysX (it will either not be there at all or it won't respond to interaction). Among the things not shown very clearly: smoke responds to and interacts with characters, and leaves and trash blow around both to help portray wind and in response to helicopters.
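
To make the particle-versus-persistent-object distinction concrete, here is a rough sketch of the two approaches (our own illustration, not code from the game or the PhysX SDK):

    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };

    // Non-PhysX path: a glass fragment is a fire-and-forget particle.
    // It has no collision shape and is culled when its timer expires.
    struct GlassParticle {
        Vec3  pos, vel;
        float life;  // seconds remaining before it vanishes
    };

    // GPU PhysX path: each fragment is a small rigid body that stays
    // resident in the scene, so it can be shot or kicked again later.
    struct GlassShard {
        Vec3 pos, vel;
        void ApplyImpulse(const Vec3& j) { vel.x += j.x; vel.y += j.y; vel.z += j.z; }
    };

    void StepParticles(std::vector<GlassParticle>& ps, float dt) {
        for (size_t i = 0; i < ps.size();) {
            ps[i].life -= dt;
            if (ps[i].life <= 0) {                 // the particle ages out...
                ps[i] = ps.back(); ps.pop_back();  // ...and is gone for good
            } else {
                ++i;
            }
        }
    }
    // Shards are never aged out: they persist until the level unloads,
    // which is why the PhysX fragments can still be interacted with.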

Another aspect to consider is that PhysX effects can be run without GPU acceleration, at greatly reduced performance. This means AMD users will be able to see what they're missing. Or maybe an overclocked 8-core (16-thread) Nehalem will do the trick? Who knows... we really can't wait to get our hands on this one to find out.
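
For reference, the reason core count matters for the CPU fallback is that this kind of effects work parallelizes almost perfectly. Here is a minimal sketch (ours, with toy physics, not the actual PhysX solver) of splitting a particle update across however many hardware threads the chip offers:

    #include <algorithm>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Particle { float px, py, pz, vx, vy, vz; };

    // Integrate one slice of the particle array: simple gravity plus motion.
    void StepRange(std::vector<Particle>& p, size_t lo, size_t hi, float dt) {
        for (size_t i = lo; i < hi; ++i) {
            p[i].vy -= 9.8f * dt;
            p[i].px += p[i].vx * dt;
            p[i].py += p[i].vy * dt;
            p[i].pz += p[i].vz * dt;
        }
    }

    void StepAll(std::vector<Particle>& p, float dt) {
        unsigned n = std::thread::hardware_concurrency();  // 16 on an 8-core Nehalem with Hyper-Threading
        if (n == 0) n = 1;
        size_t chunk = (p.size() + n - 1) / n;
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < n; ++t) {
            size_t lo = t * chunk, hi = std::min(p.size(), lo + chunk);
            if (lo < hi) pool.emplace_back(StepRange, std::ref(p), lo, hi, dt);
        }
        for (auto& th : pool) th.join();  // wait for every slice to finish
    }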

We'll let you be the judge: is this enough to buy an NVIDIA GPU over an AMD one? What if the AMD card were a little cheaper or a little faster; would it still be enough?

We really want to see what the same sequence would have looked like with PhysX disabled. Unfortunately, we don't have a side-by-side video, and that comparison could significantly impact our recommendation. We are very interested in what Mirror's Edge has to offer, but it looks like getting the full picture means waiting for the game to hit the streets.

Comments

  • Finally - Friday, November 21, 2008

    I could have taken you seriously if it weren't for your child-like pronunciation of that green firm's name.

    Do you also write about "Micro$oft"?
  • Paratus - Thursday, November 20, 2008

    I always see both camps complaining about the state of each company's drivers.

    IMHO I'll take AMD's bad drivers every month instead of NV's bad drivers every whenever they decide to release them.

    Sorry
  • ggathagan - Thursday, November 20, 2008

    "We would like to have seen the performance gains NVIDIA talked about. While we don't doubt that they are in there, it is likely we just didn't look at the right settings or hardware."

    If NVIDIA claims "Up to 38% performance increase in Far Cry 2", they should be able to tell you the exact circumstances where that 38% increase can be seen. If it's reproducible, great. If not, they're lying and should be called on it.

    As for PhysX: I'm all for realizing its potential, but Mirror's Edge strikes me as having PhysX simply for the sake of having PhysX.
    Granted, it's just a trailer, but I wasn't that impressed with the look of the game. It looked as if they spent their time on the PhysX and ignored the character modeling. The arm/body movement looks rather bizarre.
  • Kode - Thursday, November 20, 2008

    Although I agree that some ATI/AMD driver updates aren't that good, the good thing about a monthly release is that when you have a small bug/glitch in a certain game, it can be fixed within a month. If you have the same thing on an NVIDIA card, you don't know when to expect a new driver, so you are stuck with it until the next driver release unless they put out a hotfix or perhaps a beta. But installing hotfixes/betas isn't done often by regular people.
  • Casper42 - Thursday, November 20, 2008

    Title says it all. Driver enhancements and TESLA are great and all, but where are the darn die shrinks?

    I was really hoping nVidia would have their stuff together and have released the GTX 279/290 or whatever they decide to call the 55nm parts when Intel released the i7 processors. When gamers are blowing $1000+ on a new board/chip/RAM, what's another $600 for that top-of-the-line nVidia card?

    After all, wasn't the point of allowing SLI on X58 to sell more cards?
  • Casper42 - Thursday, November 20, 2008

    The HPC market seems to be going more and more toward blade servers these days, as you can cram an awful lot of computing power into a 10U space with hardware from 2 or 3 different vendors.

    I am curious if nVidia is working with HP or Dell or IBM on making a special blade version of their TESLA cards. The expansion cards in the HP c series are very small, which may prohibit TESLA from physically even fitting into the blade server. BUT, they also have a way of channelling PCI Express lanes into an adjacent blade slot (for instance, to support their "Storage Blade"), so if TESLA won't fit inside the blade itself, why not put together a TESLA blade that contains 2/3/4 cards and connects to the adjacent blade server?

    This would allow you (for instance) to take an HP c7000 chassis and put 8 BL460c Blades with up to 2 Xeon 54xx chips, 64GB of RAM (assuming 8GB DIMMs), and then have 2-4 TESLA cards attached to each, and cram all that into a 10U space. At a minimum that would be 16 Processors, 256GB of RAM (32GB/node) and 16 TESLA Cards.

    You even get your choice of 10Gb Ethernet or InfiniBand to connect all the nodes.
  • Spoelie - Thursday, November 20, 2008

    This is the first time I've seen someone complain about AMD's driver mantra.

    AMD provides a constant evolution in their drivers, and it's the user's choice to update or not. You cannot fault them for providing lots of updates. Their readme is also very clear and concise about what is fixed and what is not.

    The possible sacrifices do not outweigh the advantages, IMO. That comment was a bit of a potshot.
  • kilkennycat - Thursday, November 20, 2008

    For at least the last 5 years, ATi's drivers have periodically had a spotty reputation: the next update fixes a bunch of problems with the latest games, but then introduces brand-new problems with earlier "legacy" games. It seemed as if they rushed QC, testing only a handful of the latest titles. And for an obvious reason... the burden of a monthly release cycle is no help in enabling thorough QC at all! Much better if the official releases were at least 3 months apart, with beta updates for the "brave" to try out. The 'next driver breaks something not previously broken' problem was particularly bad when ATi transitioned their architecture with the introduction of the X1800 series. This legacy-game problem had gotten much, much better, but they seem to have slid backwards recently.
  • DerekWilson - Thursday, November 20, 2008

    We have complained about AMD's driver development issues in the past. But we always try to keep it as fair and neutral as possible.

    If all things were equal, I would agree that "you can not fault them for providing lots of updates" ... but that is not what they do.

    NVIDIA regression tests with hundreds of games for every driver release. In fact, comprehensive regression testing was one of the major reasons NVIDIA acquired 3dfx back in the day.

    AMD only regression tests with 25 games. These 25 games change with driver versions so that over time they'll cover many games. The problem is that this doesn't work well. For example...

    Let's say some x.y driver is regression tested with... let's pick BioShock. The next month, BioShock falls off the list and x.(y+1) breaks CrossFire with BioShock. CrossFire isn't as popular as single-card setups, so there aren't as many users to complain, and it will either take AMD adding BioShock back to their regression test list (which could be never, or in 6 months, or a year), or a large hardware review site will need to go test it and publish an article on how broken it is, only to get a hotfix driver 2 days later that fixes the issue.
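
    To make that coverage gap concrete, here's a toy sketch (two hypothetical titles standing in for the 25-game list):

        #include <iostream>
        #include <set>
        #include <string>

        int main() {
            // Hypothetical rotating test lists (25 titles in reality).
            std::set<std::string> tested_8_10 = {"BioShock", "Crysis"};
            std::set<std::string> tested_8_11 = {"Far Cry 2", "Crysis"};

            // Anything that rotated out is a blind spot: a regression the
            // new driver introduces there ships completely untested.
            for (const auto& game : tested_8_10)
                if (!tested_8_11.count(game))
                    std::cout << game << " is untested in 8.11\n";  // prints BioShock
            return 0;
        }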

    That happened, by the way. And not only with BioShock; it has happened with other games as well, and most of the time it is an issue that affects CrossFire. Sometimes it's other bugs, but multi-GPU support seems to be at the highest risk in our experience.

    This is not an infrequent problem.

    And let's say you find a bug in the recently released 8.11 -- no, let's say AMD finds a bug in 8.11... It will not be fixed until at least 9.1, as they can't push 8.12 back to include more fixes. Until then, if it's a big-name title that has a fix, AMD will put out a hotfix. But then you've got to use a non-WHQL version of 8.11 for upwards of two months, even if there are features in 8.12 you want/need.

    We are currently in a situation where we have to stick with an 8.10 + hotfix until 8.12 comes out.

    I am very conservative in my articles about mentioning problems with driver teams. Driver work is tough, and reviewers tend to hit many more problems than the average gamer: we test much more software on a wider variety of hardware and are more prone to running into issues. While the problems do exist for end users, it's always just a subset of users at a time. It has to be that way to some extent no matter what (there will always be tradeoffs made), but AMD's tradeoffs do impact us quite a bit. And I also feel they cut too many corners and make too many tradeoffs, to the point where it negatively impacts too many end users. If we hit more problems with one vendor than another, that is a very relevant bit of information for every consumer. Even if the issue isn't of the same magnitude for them as it is for us, it's still an issue.

    Thus, I am aware that my view of AMD driver development is more negative than that of most users out there. But their approach still negatively impacts end users in a bigger way than NVIDIA's does in general (though NVIDIA's execution isn't always spot-on either).

    Here's the best way I can put it.

    If you find an AMD driver that works, stick with it. Don't change drivers unless something you need was broken and has since been fixed. Upgrading when not necessary will likely break something else that you might find you needed.

    By contrast, I would never recommend against upgrading to an NVIDIA WHQL driver. They are much better about not breaking things that have previously been fixed, and their drivers are much more hardened by extensive regression testing. All the fixes that go into one driver (beta or WHQL) will be included in the next beta or WHQL driver, unlike with AMD and their multiple-trunk or overlapping-branch system, or whatever you want to call it.

    There are simply few to no real advantages (other than for marketing purposes) to AMD's driver development approach, so if there are any negatives at all, they've already outweighed everything else.
  • JonnyDough - Friday, November 21, 2008

    Care to explain to me what happened to Neverwinter Nights 2 and Nvidia then? It doesn't work.
