Bounding Volume Hierarchy - How Computers Test the World

Perhaps the biggest aspect of NVIDIA's gamble on ray tracing is that traditional GPUs just aren't very good at the task. They're fast at rasterization, and they're even fast at general parallel computing, but ray tracing does not map well to either of those paradigms. Instead, NVIDIA has had to add hardware dedicated to ray tracing, which means devoting die space and power to hardware that cannot help with traditional rasterization.

A big part of that hardware, in turn, goes into solving the most basic problem of ray tracing: how do you figure out what a ray intersects? The most common solution is to store the scene's triangles in a data structure that is well-suited to ray intersection queries, and that data structure is called a Bounding Volume Hierarchy (BVH).
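
To see why a dedicated data structure is needed at all, consider the brute-force alternative: testing a ray against every triangle in the scene. Below is a minimal C++ sketch of that naive approach; the Vec3/Ray/Triangle types and the per-triangle test are hypothetical placeholders rather than any particular API.

    // Naive approach: test the ray against every triangle and keep the closest hit.
    // The cost grows linearly with triangle count, which quickly becomes untenable
    // once scenes contain millions of polygons and frames require millions of rays.
    #include <optional>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Ray { Vec3 origin, dir; };
    struct Triangle { Vec3 v0, v1, v2; };
    struct Hit { float t; const Triangle* triangle; };

    // Assumed per-triangle test (e.g. Moller-Trumbore) returning the hit distance
    // along the ray, if any; its body is omitted from this sketch.
    std::optional<float> rayIntersectsTriangle(const Ray& ray, const Triangle& tri);

    std::optional<Hit> closestHitNaive(const Ray& ray, const std::vector<Triangle>& scene) {
        std::optional<Hit> closest;
        for (const Triangle& tri : scene)
            if (auto t = rayIntersectsTriangle(ray, tri))
                if (!closest || *t < closest->t)
                    closest = Hit{ *t, &tri };
        return closest;
    }

A BVH exists precisely to avoid running that loop over the entire scene for every ray.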

Conceptually, a BVH is relatively simple – at least for the purposes of this article. Rather than testing every polygon to see if a ray interacts with it, the idea is to test a portion of a scene to see if it interacts with a ray, and then keep drilling down. If there is an intersection with that portion of the scene, then it gets subdivided into smaller portions and tested again. And again. And again. All the way until you reach the individual polygons, at which point the ray testing is resolved.
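
As a rough illustration of that drill-down, here is a hedged C++ sketch of a BVH node and its traversal. The BVHNode layout, the rayIntersectsBox "slab" test, and the rayIntersectsTriangle helper are hypothetical stand-ins, not NVIDIA's (or any engine's) actual implementation.

    #include <algorithm>
    #include <optional>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Ray { Vec3 origin, dir; };
    struct Triangle { Vec3 v0, v1, v2; };
    struct AABB { Vec3 min, max; };              // axis-aligned bounding box

    // Each node bounds one portion of the scene; leaves hold a handful of triangles.
    struct BVHNode {
        AABB box;
        const BVHNode* left  = nullptr;          // child nodes (nullptr in leaves)
        const BVHNode* right = nullptr;
        std::vector<Triangle> triangles;         // populated only in leaves
    };

    // Assumed helpers: a ray/box "slab" test and a ray/triangle test (e.g.
    // Moller-Trumbore); their bodies are omitted from this sketch.
    bool rayIntersectsBox(const Ray& ray, const AABB& box);
    std::optional<float> rayIntersectsTriangle(const Ray& ray, const Triangle& tri);

    // Walk the tree: if the ray misses a node's box, the whole subtree is skipped,
    // and only at the leaves do we pay for per-triangle tests.
    void traverse(const BVHNode* node, const Ray& ray, std::optional<float>& nearest) {
        if (node == nullptr || !rayIntersectsBox(ray, node->box))
            return;                              // prune this entire branch
        if (node->left == nullptr && node->right == nullptr) {
            for (const Triangle& tri : node->triangles)
                if (auto t = rayIntersectsTriangle(ray, tri))
                    nearest = nearest ? std::min(*nearest, *t) : *t;
            return;
        }
        traverse(node->left, ray, nearest);      // both children may need visiting,
        traverse(node->right, ray, nearest);     // since sibling boxes can overlap
    }

The key property is the early return: when a ray misses a node's bounding box, every polygon underneath that node is discarded without ever being tested individually.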

For the computer scientists in the crowd, this might sound a lot like an application of a binary search, and it is. Each test allows a significant number of options (in this case polygons) to be discarded as possible answers, which gets to the right polygon in just a fraction of the time a brute-force search would take. A BVH, in turn, is stored in what's essentially a tree data structure, with each subdivision – called a bounding box – stored as a child of its parent bounding box.
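
The test performed at each node is a ray-versus-axis-aligned-box check, commonly done with the so-called slab method: intersect the ray with the pair of parallel planes bounding the box on each axis and check whether the resulting intervals overlap. A self-contained sketch, again with hypothetical Vec3/Ray/AABB types, might look like this.

    #include <algorithm>
    #include <limits>
    #include <utility>

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 origin, dir; };           // dir assumed non-zero on each axis
    struct AABB { Vec3 min, max; };

    // Ray vs. axis-aligned bounding box via the "slab" method.
    bool rayIntersectsBox(const Ray& ray, const AABB& box) {
        float tMin = 0.0f;                       // nearest allowed hit distance
        float tMax = std::numeric_limits<float>::max();

        const float ro[3] = { ray.origin.x, ray.origin.y, ray.origin.z };
        const float rd[3] = { ray.dir.x,    ray.dir.y,    ray.dir.z    };
        const float lo[3] = { box.min.x,    box.min.y,    box.min.z    };
        const float hi[3] = { box.max.x,    box.max.y,    box.max.z    };

        for (int axis = 0; axis < 3; ++axis) {
            float invD = 1.0f / rd[axis];
            float t0 = (lo[axis] - ro[axis]) * invD;
            float t1 = (hi[axis] - ro[axis]) * invD;
            if (invD < 0.0f) std::swap(t0, t1);  // keep t0 as the near plane
            tMin = std::max(tMin, t0);
            tMax = std::min(tMax, t1);
            if (tMax < tMin) return false;       // intervals don't overlap: miss
        }
        return true;                             // overlap exists: the box may be hit
    }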

Now the catch with a BVH is that while it radically cuts down on the number of ray intersection tests needed compared to a naïve implementation, it's still not cheap. A number of tests are still required for each ray, with both successful and failed tests adding to the total number of tests taken. And all of this is for a single ray, when a significant number of rays are going to be needed for each pixel. Which is why hardware acceleration of the process is so important (and not at all easy).
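
As a rough back-of-the-envelope illustration (our arithmetic, not a figure from NVIDIA): a 1920 × 1080 frame traced with just one ray per pixel already requires about 2.07 million rays, which works out to roughly 124 million rays per second at 60 frames per second, before counting any secondary rays for shadows or reflections, any additional samples per pixel, or the multiple bounding box and triangle tests that each of those rays incurs.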

The other major computational cost here is that BVHs themselves aren't free. One needs to be created for a scene from the polygons in it, so there is an additional step before ray casting can even begin. This is more of a developer concern – when can they modify and reuse a BVH versus building a new one – but it's another step in the process. Furthermore, it's an example of why developer training and efficient engine implementations are so crucial, as a poor implementation can make ray tracing much too slow to be viable.
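
To make the construction step a bit more concrete, here is a deliberately simplified top-down build in C++: bound the triangles, split them at the median of their centroids along the longest axis of that bound, and recurse. This is only a sketch of the general idea with hypothetical types; production builders (and whatever NVIDIA's drivers do internally) use far more sophisticated strategies, such as surface area heuristics and partial refits for geometry that merely moves.

    #include <algorithm>
    #include <initializer_list>
    #include <memory>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 v0, v1, v2; };
    struct AABB { Vec3 min, max; };

    struct BVHNode {
        AABB box;
        std::unique_ptr<BVHNode> left, right;    // empty for leaves
        std::vector<Triangle> triangles;         // populated only in leaves
    };

    static float axisOf(const Vec3& v, int a) { return a == 0 ? v.x : (a == 1 ? v.y : v.z); }

    // Grow a box around every vertex of every triangle.
    static AABB boundsOf(const std::vector<Triangle>& tris) {
        AABB b{ { 1e30f, 1e30f, 1e30f }, { -1e30f, -1e30f, -1e30f } };
        for (const Triangle& t : tris)
            for (const Vec3& v : { t.v0, t.v1, t.v2 }) {
                b.min = { std::min(b.min.x, v.x), std::min(b.min.y, v.y), std::min(b.min.z, v.z) };
                b.max = { std::max(b.max.x, v.x), std::max(b.max.y, v.y), std::max(b.max.z, v.z) };
            }
        return b;
    }

    static float centroid(const Triangle& t, int axis) {
        return (axisOf(t.v0, axis) + axisOf(t.v1, axis) + axisOf(t.v2, axis)) / 3.0f;
    }

    std::unique_ptr<BVHNode> build(std::vector<Triangle> tris) {
        auto node = std::make_unique<BVHNode>();
        node->box = boundsOf(tris);
        if (tris.size() <= 4) {                  // few enough triangles: make a leaf
            node->triangles = std::move(tris);
            return node;
        }

        // Split along the longest axis of the node's bounding box.
        const Vec3 extent{ node->box.max.x - node->box.min.x,
                           node->box.max.y - node->box.min.y,
                           node->box.max.z - node->box.min.z };
        const int axis = (extent.x > extent.y && extent.x > extent.z) ? 0
                       : (extent.y > extent.z ? 1 : 2);

        // Median split: half the triangles go to each child.
        std::nth_element(tris.begin(), tris.begin() + tris.size() / 2, tris.end(),
                         [axis](const Triangle& a, const Triangle& b) {
                             return centroid(a, axis) < centroid(b, axis);
                         });
        std::vector<Triangle> rightTris(tris.begin() + tris.size() / 2, tris.end());
        tris.resize(tris.size() / 2);
        node->left  = build(std::move(tris));
        node->right = build(std::move(rightTris));
        return node;
    }

Even this simplified version shows where the cost comes from: building touches every triangle, so rebuilding a large BVH from scratch every frame is a significant expense in its own right.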

111 Comments

  • Yojimbo - Saturday, September 15, 2018 - link

    Of course he meant giga as in billion. Of course it's simple. But where did you see "gigarays/sec" before? You didn't. So as far as you know, it is an NVIDIA-made term.

    He said it that way because it's easier to talk about it that way, and because NVIDIA is making a marketing term out of it. And he introduced it with a joke.
  • edzieba - Saturday, September 15, 2018 - link

    'Giga' is an SI prefix. You put it in front of any unit to indicate 10^9 of that unit.
  • eddman - Saturday, September 15, 2018 - link

    Why does it even matter?! As I pointed out, I meant to write "rays/sec". I forgot to take out the giga part after copy/paste.

    I'm pretty sure a lot of ray tracing developers have uttered the words "giga rays" before Jensen.
  • markiz - Monday, September 17, 2018 - link

    Well car makers are also using kilowatts for engines. Bastards.
  • Yojimbo - Saturday, September 15, 2018 - link

    Where do you get these die area estimates from?
  • edzieba - Saturday, September 15, 2018 - link

    If you compare scaled die shots of Turing to Pascal (GP102), isolate an SM, and assume each CUDA core occupies the same die area, then the combination of Tensor and RT cores takes up around 24% of the die area. This is more of an upper bound, as the CUDA cores are likely to be larger due to the concurrent execution capability, there is more space in the SM taken by memory, and more uncore area is taken by the NVLink interfaces.
  • Yojimbo - Saturday, September 15, 2018 - link

    Where are these die shots? Paste them? I think that people are looking at schematics, not die shots.
  • edzieba - Saturday, September 15, 2018 - link

    Nvidia posted them (surprisingly correctly scaled) as part of the Turing unveil presentation.
  • Yojimbo - Saturday, September 15, 2018 - link

    Can you link me to them and to an analysis of them?
  • 808Hilo - Sunday, September 16, 2018 - link

    So far 10% better than a 1080 Ti and 15% over a 1080. Really not worth it. Better to spend the money on a motherboard, new chip, RAM, and a fast SSD if coming from an older PC.
