NVIDIA Works: ANSEL & VRWorks Audio

Along with the various hardware aspects of Pascal, NVIDIA’s software teams have also been working on new projects to coincide with the Pascal launch: a new screenshot tool, and a new audio simulation package built around path tracing.

We’ll start with NVIDIA’s new screenshot utility. Dubbed ANSEL, after famed American landscape photographer Ansel Adams, it is a very different take on screenshots. Rather than capturing the player’s view at the game’s rendering resolution, ANSEL allows an entire scene to be captured at a far higher resolution than standard screenshots allow. NVIDIA is pitching this as an art tool rather than a gaming tool, and I get the impression that this is one of those pie-in-the-sky ideas that NVIDIA’s software group decided to run with in order to best show off Pascal’s various capabilities.

At its core, ANSEL is a means of decoupling screenshots from the limitations of the player’s view. In an ANSEL-enabled application, the tool can freeze the state of the game, move the camera around, and then generate a copious number of viewports from which to take screenshots. The end result is that ANSEL makes it possible to generate an ultra-high resolution, 360 degree stereo 3D image of a game scene. The analogy NVIDIA is working towards is dropping a high quality 360 degree camera into a game and letting users play with it as they see fit.
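To put that in more concrete terms, the following is a minimal sketch of the underlying idea; it is not NVIDIA’s code, and every name in it is our own. It simply shows how a full stereo 360 degree capture decomposes into a long list of narrow camera viewports, each of which becomes one render pass before the results are stitched into a pair of equirectangular panoramas.

```cpp
// A minimal sketch (not NVIDIA's code; every name here is hypothetical) of the
// basic idea: cover a full sphere with many narrow-FOV camera viewports, one
// render pass per viewport and per eye, then stitch the results into a pair
// of equirectangular panoramas for stereo 360 degree viewing.
#include <cmath>
#include <cstdio>
#include <vector>

struct Viewport {
    float yaw;        // rotation around the vertical axis, in radians
    float pitch;      // rotation up/down, in radians
    float eyeOffset;  // lateral camera shift for stereo, in world units
};

std::vector<Viewport> buildPanoramaViewports(int yawSteps, int pitchSteps, float ipd)
{
    const float kPi = 3.14159265358979f;
    std::vector<Viewport> views;
    for (int eye = 0; eye < 2; ++eye) {                      // left eye, then right eye
        const float offset = (eye == 0 ? -0.5f : 0.5f) * ipd;
        for (int p = 0; p < pitchSteps; ++p) {
            const float pitch = -0.5f * kPi + kPi * (p + 0.5f) / pitchSteps;
            for (int y = 0; y < yawSteps; ++y) {
                const float yaw = 2.0f * kPi * y / yawSteps;
                views.push_back({yaw, pitch, offset});       // one render pass each
            }
        }
    }
    return views;
}

int main()
{
    // e.g. a 32 x 16 grid per eye at a 0.064m eye separation -> 1024 passes
    const auto views = buildPanoramaViewports(32, 16, 0.064f);
    std::printf("render passes required: %zu\n", views.size());
    return 0;
}
```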

But even this isn’t really a great description of ANSEL, as there isn’t anything else like it to compare it to. Some games have offered 360 degree capture, but they haven’t done so at any kind of resolution approaching what ANSEL can do. And this still doesn’t touch features such as HDR (FP16) scene capture or the free camera.

Under the hood, ANSEL is at times a checklist of Pascal technologies (though it does work with Maxwell 2 as well). In order to capture scenes at super high resolutions, it forces a scene to its maximum LOD and breaks it down into a number of viewports, implemented efficiently using SMP. To demonstrate the technology, NVIDIA put together a 4.5Gpix image rendered out of The Witcher 3, composed of 3,600 such viewport tiles. Meanwhile, stitching the individual tiles together is a CUDA-based rendering process, which uses overlapping tiles to resolve any tone mapping conflicts. Finally, ANSEL captures images before they’re actually sent to a display, grabbing HDR images (in EXR format) in games that support HDR.
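The tiling step itself is conceptually straightforward: render the same scene many times through off-center sub-frusta that each cover one tile of the final image. The sketch below is our own illustration of that math (column-major matrices and OpenGL-style clip space are assumed), not ANSEL’s actual implementation.

```cpp
// A hypothetical illustration (not ANSEL's code) of tiled super-resolution
// capture: each tile of an N x M grid gets its own off-center projection
// matrix, so the tiles can be rendered separately and stitched afterwards.
// Matrices are column-major; OpenGL-style clip space is assumed.
#include <array>
#include <cmath>
#include <cstdio>

using Mat4 = std::array<float, 16>;  // column-major, m[col * 4 + row]

// Off-center perspective frustum, equivalent to the classic glFrustum matrix.
Mat4 frustum(float l, float r, float b, float t, float n, float f)
{
    Mat4 m{};
    m[0]  = 2.0f * n / (r - l);
    m[5]  = 2.0f * n / (t - b);
    m[8]  = (r + l) / (r - l);
    m[9]  = (t + b) / (t - b);
    m[10] = -(f + n) / (f - n);
    m[11] = -1.0f;
    m[14] = -2.0f * f * n / (f - n);
    return m;
}

// Projection for tile (ti, tj) of a tilesX x tilesY grid; together the tiles
// cover exactly the same field of view as one symmetric projection.
Mat4 tileProjection(float fovY, float aspect, float zNear, float zFar,
                    int tilesX, int tilesY, int ti, int tj)
{
    const float top   = zNear * std::tan(0.5f * fovY);
    const float right = top * aspect;
    const float tileW = 2.0f * right / tilesX;  // tile width on the near plane
    const float tileH = 2.0f * top   / tilesY;  // tile height on the near plane
    const float l = -right + ti * tileW;
    const float b = -top   + tj * tileH;
    return frustum(l, l + tileW, b, b + tileH, zNear, zFar);
}

int main()
{
    // A 60 x 60 grid of 1280x720 tiles would stitch into a 76800 x 43200
    // image, roughly 3.3 gigapixels before any overlap is trimmed.
    const Mat4 p = tileProjection(1.1f, 16.0f / 9.0f, 0.1f, 1000.0f, 60, 60, 0, 0);
    std::printf("m[0] = %f\n", p[0]);
    return 0;
}
```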

Meanwhile, given its deep level of interaction with games, ANSEL does require individual game support to work. This comes in the form of a library provided by NVIDIA, which helps ANSEL and NVIDIA’s driver make sense of a scene and pause the simulation when necessary. Unsurprisingly, NVIDIA is eager to get ANSEL into more games – it just launched in Mirror’s Edge Catalyst – and as a result is touting to developers that ANSEL is easy to implement, having taken only 150 lines of code in The Witcher 3.
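To give a sense of why the per-title work is small, the sketch below shows the kind of hooks such a library needs from a game. To be clear, these names are purely illustrative and are not the actual Ansel SDK API; the point is simply that the integration boils down to pausing the simulation, handing the capture tool the camera, and resuming.

```cpp
// Purely illustrative sketch of the hooks a driver-side capture tool needs
// from a game; these names are NOT the real Ansel SDK API. They only show why
// the per-title integration is small: pause, hand over the camera, resume.
#include <cstdio>
#include <functional>
#include <utility>

struct CaptureCamera {
    float position[3];
    float rotation[4];  // orientation quaternion supplied by the capture tool
    float fovY;         // vertical field of view, in radians
};

struct CaptureHooks {
    std::function<void()> onSessionStart;                // game pauses its simulation
    std::function<void(const CaptureCamera&)> onCamera;  // game applies the tool's camera
    std::function<void()> onSessionStop;                 // game resumes its simulation
};

static CaptureHooks g_hooks;  // stand-in for state the real library would own

void registerCaptureHooks(CaptureHooks hooks) { g_hooks = std::move(hooks); }

int main()
{
    // What the game-side integration might amount to: a handful of callbacks.
    registerCaptureHooks({
        [] { std::puts("simulation paused"); },
        [](const CaptureCamera& cam) { std::printf("camera fov %.2f\n", cam.fovY); },
        [] { std::puts("simulation resumed"); },
    });

    // The tool (driver side) would then drive the capture session:
    g_hooks.onSessionStart();
    g_hooks.onCamera({{0, 1.7f, 0}, {0, 0, 0, 1}, 1.2f});
    g_hooks.onSessionStop();
    return 0;
}
```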

Ultimately, NVIDIA seems to be throwing ANSEL at the wall here to see what sticks. But it should be neat to see what users end up doing with the technology.

VRWorks Audio

Not to be outdone by the ANSEL team, other parts of NVIDIA’s software group have been working on a slightly different kind of project for NVIDIA: audio. As a GPU company, NVIDIA has never been deeply involved with audio (not since getting out of the chipset business, at least), but with the current focus on VR, they are taking a crack at it in a new way.

VRWorks Audio is the latest library in NVIDIA’s larger VRWorks suite. As the name gives away, this library is focused on audio, specifically for VR. In a nutshell, VRWorks Audio is a full audio simulation library, using path tracing to power the simulation. The goal of VRWorks Audio is to provide realistic sound simulation for VR, to further increase the apparent realism.

Under the hood, VRWorks Audio leverages NVIDIA’s existing OptiX path tracing technology, only rather than tracing light, it’s used to trace sound waves. Along with simulating audio propagation itself – including occlusion and reverb – VRWorks Audio is also able to apply the necessary Head Related Transfer Functions (HRTFs) to reduce the simulation down to binaural audio for headphones.
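As a rough illustration of that last step – and this is our own sketch, not VRWorks Audio code – once the path tracer has produced a per-ear impulse response for a sound source, turning a dry mono signal into binaural output amounts to a pair of convolutions:

```cpp
// A rough sketch (our own, not VRWorks Audio code) of the final step: once
// path tracing has produced a per-ear impulse response for a source (folding
// in propagation, occlusion, reverb, and the listener's HRTF), binaural output
// is a pair of convolutions of the dry signal with those responses.
#include <cstddef>
#include <cstdio>
#include <vector>

// Naive time-domain convolution; a real implementation would use FFT
// overlap-add (or overlap-save) for impulse responses of any useful length.
std::vector<float> convolve(const std::vector<float>& dry,
                            const std::vector<float>& impulse)
{
    std::vector<float> out(dry.size() + impulse.size() - 1, 0.0f);
    for (std::size_t n = 0; n < dry.size(); ++n)
        for (std::size_t k = 0; k < impulse.size(); ++k)
            out[n + k] += dry[n] * impulse[k];
    return out;
}

struct BinauralFrame {
    std::vector<float> left;
    std::vector<float> right;
};

BinauralFrame renderBinaural(const std::vector<float>& drySignal,
                             const std::vector<float>& irLeft,
                             const std::vector<float>& irRight)
{
    // The two impulse responses differ per ear, which is what provides the
    // directional cues over ordinary stereo headphones.
    return {convolve(drySignal, irLeft), convolve(drySignal, irRight)};
}

int main()
{
    // Toy data: an impulse as the dry signal and two tiny "room" responses.
    const std::vector<float> dry   = {1.0f, 0.0f, 0.0f, 0.0f};
    const std::vector<float> left  = {0.8f, 0.2f, 0.05f};
    const std::vector<float> right = {0.6f, 0.3f, 0.10f};
    const BinauralFrame out = renderBinaural(dry, left, right);
    std::printf("left[0]=%.2f right[0]=%.2f\n", out.left[0], out.right[0]);
    return 0;
}
```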

All of this is, of course, executed on Pascal’s CUDA cores in a manner similar to path tracing or PhysX, running alongside the main graphics rendering workload. The amount of processing power required for VRWorks Audio can vary considerably depending on the detail desired (particularly the number of reflections); for NVIDIA’s VR Funhouse demo, VRWorks Audio can occupy most of a GPU on its own.

Ultimately, unlike some of the other technologies presented by NVIDIA, VRWorks Audio is at a relatively early stage. As a result, while NVIDIA is shipping the SDK, no games have been announced as using it at this time, and even if it gets traction it will be further into the future before we see the first games using it. That said, NVIDIA is already reaching out to the all-important middleware vendors on the subject; to that end, their own VR Funhouse demo uses FMOD with a VRWorks Audio plugin to handle the sound, demonstrating that they already have VRWorks Audio working with the popular audio middleware.

Comments

  • Robalov - Tuesday, July 26, 2016 - link

    Feels like it took 2 years longer than normal for this review :D
  • extide - Wednesday, July 27, 2016 - link

    The venn diagram is wrong -- for GP104 it says 1:64 speed for FP16 -- it is actually 1:1 for FP16 (ie same speed as FP32) (NOTE: GP100 has 2:1 FP16 -- meaning FP16 is twice as fast as FP32)
  • extide - Wednesday, July 27, 2016 - link

    EDIT: I might be incorrect about this actually as I have seen information claiming both .. weird.
  • mxthunder - Friday, July 29, 2016 - link

    It's really driving me nuts that a 780 was used instead of a 780 Ti.
  • yhselp - Monday, August 8, 2016 - link

    Have I understood correctly that Pascal offers a 20% increase in memory bandwidth from delta color compression over Maxwell? As in a total average of 45% over Kepler just from color compression?
  • flexy - Sunday, September 4, 2016 - link

    Sorry, late comment. I just read about GPU Boost 3.0 and this is AWESOME. What they did is expose what previously was only doable with BIOS modding - eg assigning the CLK bins different voltages. The problem with overclocking Kepler/Maxwell was NOT so much that you got stuck with the "lowest" overclock as the article says, but that you simply added a FIXED amount of clocks across the entire range of clocks, as you would do with Afterburner etc., where you simply add, say, +120 to the core. What happened here is that you may be "stable" at the max overclock (CLK bin), but since you added more CLKs to EVERY clock bin, the assigned voltages (in the BIOS) for each bin might not be sufficient. Say you have CLK bin 63 which is set to 1304MHz in a stock BIOS. Now you use Afterburner and add 150MHz; all of a sudden this bin amounts to 1454MHz BUT STILL at the same voltage as before, which is too low for 1454MHz. You had to manually edit the table in the BIOS to shift clocks around, especially since not all Maxwell cards allowed adding voltage via software.
  • Ether.86 - Tuesday, November 1, 2016 - link

    Astonishing review. That's the way AnandTech should be, not like the mobile section, which sucks...
  • Warsun - Tuesday, January 17, 2017 - link

    Yeah, looking at the bottom here. The GTX 1070 is on the same level as a single 480 4GB card. So that graph is wrong.
    http://www.hwcompare.com/30889/geforce-gtx-1070-vs...
    Remember this is from GPU-Z, based on hardware specs. No amount of configuration in the drivers changes this. They either screwed up, or I am calling shenanigans.
  • marceloamaral - Thursday, April 13, 2017 - link

    Nice Ryan Smith! But, my question is, is it truly possible to share the GPU with different workloads in the P100? I've read in the NVIDIA manual that "The GPU has a time sliced scheduler to schedule work from work queues belonging to different CUDA contexts. Work launched to the compute engine from work queues belonging to different CUDA contexts cannot execute concurrently."
