Applications of GF100’s Compute Hardware

Last but certainly not least are the changes to gaming afforded by the improved compute/shader hardware. NVIDIA believes that by announcing GF100's compute abilities so far ahead of its gaming abilities, potential customers have gotten the wrong idea about NVIDIA's direction. Certainly they're increasing their focus on the GPGPU market, but as they're trying their hardest to point out, most of that compute hardware has a use in gaming too.

Much of this is straightforward: the compute hardware is the same hardware that processes pixel and vertex shader commands, so the additional CUDA cores in GF100 give it much more shader power than GT200. We also have DirectCompute, which can use the compute hardware to do things that couldn't be done efficiently via traditional shader code, such as Self Shadowing Ambient Occlusion in games like Battleforge, or to take an NVIDIA example, the depth of field effect in Metro 2033.
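For illustration, here is a minimal CUDA sketch of what such a compute-driven post-process pass can look like, in the spirit of the depth of field effect mentioned above. The appeal of compute here is that each pixel can gather a data-dependent neighborhood from the depth and color buffers, something that is awkward in a traditional pixel shader pass. The real effects run as DirectCompute (HLSL) shaders inside the games; this standalone kernel, its name, and its parameters are our own assumptions, not code from NVIDIA or the game developers.

```cuda
// Hypothetical compute-style depth of field pass: pixels far from the focal
// plane gather a wider neighborhood and are therefore blurrier. Illustrative
// sketch only; not NVIDIA's or 4A Games' implementation.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void depthOfFieldBlur(const float3* color, const float* depth,
                                 float3* out, int width, int height,
                                 float focalDepth, float blurScale)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    // Circle of confusion grows with distance from the focal plane.
    float coc = fminf(fabsf(depth[idx] - focalDepth) * blurScale, 4.0f);
    int radius = (int)coc;

    float3 sum = make_float3(0.f, 0.f, 0.f);
    int count = 0;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int sx = min(max(x + dx, 0), width - 1);
            int sy = min(max(y + dy, 0), height - 1);
            float3 c = color[sy * width + sx];
            sum.x += c.x; sum.y += c.y; sum.z += c.z;
            ++count;
        }
    }
    out[idx] = make_float3(sum.x / count, sum.y / count, sum.z / count);
}

int main()
{
    const int W = 1920, H = 1080;
    float3 *d_color, *d_out;
    float *d_depth;
    cudaMalloc(&d_color, W * H * sizeof(float3));
    cudaMalloc(&d_out,   W * H * sizeof(float3));
    cudaMalloc(&d_depth, W * H * sizeof(float));

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    depthOfFieldBlur<<<grid, block>>>(d_color, d_depth, d_out, W, H, 10.0f, 0.5f);
    cudaDeviceSynchronize();
    printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_color); cudaFree(d_out); cudaFree(d_depth);
    return 0;
}
```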

Perhaps the single biggest improvement for gaming that comes from NVIDIA's changes to the compute hardware is the benefit afforded to compute-like tasks in games. PhysX plays a big part here, as along with DirectCompute it's going to be one of the biggest uses of compute abilities when it comes to gaming.

NVIDIA is heavily promoting the idea that GF100’s concurrent kernels and fast context switching abilities are going to be of significant benefit here. With concurrent kernels, different PhysX simulations can start without waiting for other SMs to complete the previous simulation. With fast context switching, the GPU can switch from rendering to PhysX and back again while wasting less time on the context switch itself. The result is that there’s going to be less overhead in using the compute abilities of GF100 during gaming, be it for PhysX, Bullet Physics, or DirectCompute.
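To make the concurrency point concrete, below is a minimal CUDA sketch of two independent kernels launched into separate streams. On hardware limited to one kernel at a time they would run back to back; on GF100-class hardware they can overlap. The kernel names and workloads are illustrative assumptions only, not NVIDIA's scheduling code or the PhysX runtime.

```cuda
// Two unrelated bits of per-frame work (a physics integration step and a
// post-process tweak) placed in separate CUDA streams so the GPU is free to
// run them concurrently. Hypothetical workloads for illustration only.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void physicsStep(float* positions, const float* velocities, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) positions[i] += velocities[i] * dt;   // trivial integration step
}

__global__ void postProcess(float* pixels, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pixels[i] = pixels[i] * 0.9f + 0.1f;  // trivial tone tweak
}

int main()
{
    const int N = 1 << 20;
    float *d_pos, *d_vel, *d_pix;
    cudaMalloc(&d_pos, N * sizeof(float));
    cudaMalloc(&d_vel, N * sizeof(float));
    cudaMalloc(&d_pix, N * sizeof(float));

    cudaStream_t physicsStream, renderStream;
    cudaStreamCreate(&physicsStream);
    cudaStreamCreate(&renderStream);

    // No dependency between the two launches, so a GPU that supports
    // concurrent kernels may execute them at the same time.
    physicsStep<<<N / 256, 256, 0, physicsStream>>>(d_pos, d_vel, N, 0.016f);
    postProcess<<<N / 256, 256, 0, renderStream>>>(d_pix, N);

    cudaDeviceSynchronize();
    printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaStreamDestroy(physicsStream);
    cudaStreamDestroy(renderStream);
    cudaFree(d_pos); cudaFree(d_vel); cudaFree(d_pix);
    return 0;
}
```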

NVIDIA is big on pushing specific examples here in order to entice developers into using these abilities, and a number of demo programs will be released along with GF100 cards to showcase them. Most interesting among these is a ray tracing demo that NVIDIA is showing off. Ray tracing is something even G80 could do (albeit slowly), but we find this an interesting direction for NVIDIA to take, since promoting ray tracing puts them in direct competition with Intel, who has been showing off ray tracing demos running on CPUs for years. Ray tracing nullifies NVIDIA's experience in rasterization, so promoting its use is one of the riskier things they can do in the long term.


NVIDIA's car ray tracing demo

At any rate, the demo program they are showing off is a hybrid renderer that uses both rasterization and ray tracing to render a car. As we already know from the original Fermi introduction, GF100 is supposed to be much faster than GT200 at ray tracing, thanks in large part to GF100's L1 cache architecture. In the demo we saw, a GF100 card ran roughly 3x faster than the GT200 card next to it. This specific demo still runs at less than a frame per second (0.63 fps on the GF100 card), so it's by no means true real-time ray tracing, but it's getting faster all the time. For lower-quality ray tracing, real-time performance would certainly be doable.
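To give a feel for the kind of work such a demo puts on the shader cores, here is a toy CUDA kernel that traces one primary ray per pixel against a single hard-coded sphere and shades the hit by distance. This is strictly our own illustrative sketch with assumed names and scene, not NVIDIA's hybrid renderer, which marries this sort of tracing to a rasterized base pass.

```cuda
// Toy primary-ray tracer: one ray per pixel, one analytic sphere, shading by
// hit distance. Hypothetical example of a ray tracing workload; not demo code.
#include <cuda_runtime.h>
#include <cstdio>

struct Ray { float3 origin, dir; };

__device__ float intersectSphere(const Ray& r, float3 center, float radius)
{
    // Solve |o + t*d - c|^2 = radius^2 for the nearest positive t; -1 on miss.
    float3 oc = make_float3(r.origin.x - center.x, r.origin.y - center.y, r.origin.z - center.z);
    float b = oc.x * r.dir.x + oc.y * r.dir.y + oc.z * r.dir.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    float t = -b - sqrtf(disc);
    return (t > 0.0f) ? t : -1.0f;
}

__global__ void tracePrimaryRays(float* image, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Simple pinhole camera at (0,0,3) looking down -z at a unit sphere.
    Ray r;
    r.origin = make_float3(0.f, 0.f, 3.f);
    float u = (x + 0.5f) / width * 2.f - 1.f;
    float v = (y + 0.5f) / height * 2.f - 1.f;
    float len = sqrtf(u * u + v * v + 1.f);
    r.dir = make_float3(u / len, v / len, -1.f / len);

    float t = intersectSphere(r, make_float3(0.f, 0.f, 0.f), 1.0f);
    image[y * width + x] = (t > 0.0f) ? 1.0f / t : 0.0f;  // miss -> background
}

int main()
{
    const int W = 640, H = 480;
    float* d_image;
    cudaMalloc(&d_image, W * H * sizeof(float));

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    tracePrimaryRays<<<grid, block>>>(d_image, W, H);
    cudaDeviceSynchronize();
    printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_image);
    return 0;
}
```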


Dark Void's turbulence in action

NVIDIA is also showing off several other demos of compute for gaming, including a PhysX fluid simulation, the new PhysX APEX turbulence effect in Dark Void, and an AI pathfinding simulation that we did not have a chance to see. Ultimately PhysX is still NVIDIA's bigger carrot for consumers, while the rest of this is meant to entice developers to make use of the compute hardware through whatever means they'd like (PhysX, OpenCL, DirectCompute). Outside of PhysX, heavy use of the GPU's compute abilities is still going to be some time off.

Comments

  • DanNeely - Monday, January 18, 2010 - link

    For the benefit of myself and everyone else who doesn't follow gaming politics closely, what is "the infamous Batman: Arkham Asylum anti-aliasing situation"?
  • sc3252 - Monday, January 18, 2010 - link

    Nvidia helped get AA working in Batman, which also works on ATI cards. If the game detects anything besides an Nvidia card, it disables AA. The reason some people are angry is that when ATI helps out with games it doesn't limit who can use the feature, at least that's what they (AMD) claim.
  • san1s - Monday, January 18, 2010 - link

    the problem was that nvidia did not do qa testing on ati hardware
  • Meghan54 - Monday, January 18, 2010 - link

    And nvidia shouldn't have since nvidia didn't develop the game.

    On the other hand, you can be quite certain that the devs did run the game on Ati hardware but only locked out the "preferred" AA path because of the money nvidia invested in the game.

    And that can be plainly seen by the fact that when the game is "hacked" to trick it into seeing an nvidia card installed despite the fact an Ati card is being used, AA works flawlessly....and the ATi cards end up faster than current nvidia cards....the game is exposed for what it is: purposely crippling a game to favor one brand of video card over another.

    But the nvidiots seem to not mind this at all. Yet, this is akin to Intel writing their compiler to make AMD cpus run slower or worse on programs compiled with the Intel compiler.

    Read about the debacle Intel's now suffering from; the outrage there is fairly universal. Now, you'd think nvidia would suffer the same nearly universal outrage for intentionally crippling a game's function to favor one brand of card over another, yet nvidiots make apologies and say "Ati cards weren't tested." I'd like to see that as a fact instead of conjecture.

    So, one company cripples the function of another company's product and the world's up in arms, screaming "Monopolistic tactics!!!" and "Fine them to hell and back!"; another company does essentially the same thing and it gets a pass.

    Talk about bias.
  • Stas - Tuesday, January 19, 2010 - link

    If nV continues like this, it will turn around on them. It took MANY years for the market watchdogs to finally say, "Intel, quit your sh*t!" and actually do something about it. Don't expect immediate retaliation in a multibillion dollar world-wide industry.
  • san1s - Monday, January 18, 2010 - link

    "yet nvidiots make apologies and say "Ati cards weren't tested." I'd like to see that as a fact instead of conjecture. "
    here you go
    http://www.legitreviews.com/news/6570/
    "On the other hand, you can be quite certain that the devs. did run the game on Ati hardware but only lock out the "preferred" AA design because of nvidia's money nvidia invested in the game. "
    proof? that looks like conjecture to me. Nvidia says otherwise.
    Amd doesn't deny it either.
    http://www.bit-tech.net/bits/interviews/2010/01/06...
    they just don't like it
    And please refrain from calling people names such as "nvidiot," it doesn't help portray your image as unbiased.
  • MadMan007 - Monday, January 18, 2010 - link

    Oh for gosh sakes, this is the 'launch' and we can't even have a paper launch where at least reviewers get hardware? This is just more details for the same crap that was 'announced' when the 5800s came out. Poor show NV, poor show.
  • bigboxes - Monday, January 18, 2010 - link

    This is as close to a paper launch as I've seen in a while, except that there is not even an unattainable card. Gawd, they are gonna drag this out a lonnnnngg time. Better start saving up for that 1500W psu!
  • Adul - Monday, January 18, 2010 - link

    I suppose this is a vaporlaunch then.
