What is Raytracing?

In all modern forms of 3D rendering for display on a computer, the goal is to determine the color of every pixel on the screen as fast as possible. Raytracing is simply one method of doing so. Currently, the most common method for rendering real-time 3D graphics is rasterization. There are fundamental differences in how rasterization and raytracing go about determining pixel color.

With both rasterization and raytracing, we start with geometry: triangles, to be specific. We have a scene made up of triangles, and shader programs are used to determine the color at any given point on every triangle. With a rasterizer, we loop through every triangle and use math to project it onto the screen. This is like taking a 3D scene and flattening it out. We find out which pixels every triangle overlaps and save the depth values for later, when we shade those pixels. We use lighting algorithms, texture maps, and the location of the pixel on the triangle itself to do the shading.
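
Below is a minimal sketch of the rasterizer's inner loop just described: one triangle that has already been projected into screen space is tested pixel by pixel inside its bounding box, and a depth buffer keeps only the closest surface at each pixel. The buffer sizes, the edge() helper, and the flat color standing in for a real shader are illustrative assumptions, not any particular API.

    WIDTH, HEIGHT = 8, 8
    depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def edge(a, b, p):
        """Signed area test: which side of the edge from a to b the point p falls on."""
        return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

    def rasterize_triangle(v0, v1, v2, color):
        """Each vertex is (x, y, depth), already projected to screen space."""
        min_x = max(int(min(v[0] for v in (v0, v1, v2))), 0)
        max_x = min(int(max(v[0] for v in (v0, v1, v2))), WIDTH - 1)
        min_y = max(int(min(v[1] for v in (v0, v1, v2))), 0)
        max_y = min(int(max(v[1] for v in (v0, v1, v2))), HEIGHT - 1)
        area = edge(v0, v1, v2)
        if area == 0:
            return                                     # degenerate triangle, nothing to draw
        for y in range(min_y, max_y + 1):
            for x in range(min_x, max_x + 1):
                p = (x + 0.5, y + 0.5)
                w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
                # The pixel is covered when all three edge tests agree in sign.
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    z = (w0 * v0[2] + w1 * v1[2] + w2 * v2[2]) / area  # interpolate depth
                    if z < depth_buffer[y][x]:                         # keep the closest surface
                        depth_buffer[y][x] = z
                        color_buffer[y][x] = color                     # a real shader would run here

    rasterize_triangle((1, 1, 0.5), (6, 2, 0.5), (3, 6, 0.5), (255, 0, 0))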


Unlike rasterization, raytracing starts with the pixels. If we draw a line from a central eye (or camera) position through each pixel, we can use math to determine which triangles this line (called a primary ray) intersects. For every triangle our primary ray intersects, we save the position of the intersection. After all our geometry has been checked for intersection, we keep the intersection closest to the viewer (ignoring transparency for a minute). This process means lots of conditionals and branching on top of the compute power required by whatever shader programs are used.
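
A short sketch of that primary-ray loop follows, assuming a pinhole camera at the origin looking down +z and a scene of one hard-coded triangle. The camera setup and scene are illustrative; the triangle test itself is the standard Moller-Trumbore intersection algorithm.

    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    def intersect(origin, direction, tri):
        """Moller-Trumbore: distance t along the ray to triangle tri, or None on a miss."""
        v0, v1, v2 = tri
        e1, e2 = sub(v1, v0), sub(v2, v0)
        p = cross(direction, e2)
        det = dot(e1, p)
        if abs(det) < 1e-8:                   # ray is parallel to the triangle's plane
            return None
        inv = 1.0 / det
        s = sub(origin, v0)
        u = dot(s, p) * inv
        if u < 0.0 or u > 1.0:
            return None
        q = cross(s, e1)
        v = dot(direction, q) * inv
        if v < 0.0 or u + v > 1.0:
            return None
        t = dot(e2, q) * inv
        return t if t > 1e-8 else None        # ignore hits behind (or exactly at) the origin

    WIDTH, HEIGHT = 4, 4
    eye = (0.0, 0.0, 0.0)
    scene = [((-1.0, -1.0, 3.0), (1.0, -1.0, 3.0), (0.0, 1.0, 3.0))]  # one triangle facing the camera

    for row in range(HEIGHT):
        for col in range(WIDTH):
            # Primary ray from the eye through the center of this pixel on a z = 1 image plane.
            px = (col + 0.5) / WIDTH * 2.0 - 1.0
            py = 1.0 - (row + 0.5) / HEIGHT * 2.0
            direction = (px, py, 1.0)
            closest, hit = float("inf"), None
            for tri in scene:                 # test every triangle, keep only the nearest hit
                t = intersect(eye, direction, tri)
                if t is not None and t < closest:
                    closest, hit = t, tri
            print("X" if hit else ".", end=" ")
        print()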


From here, as with rasterization, we can use shaders to determine the color of our pixel, but the inputs the shaders use can be other rays (called secondary rays) that are cast (or shot) from our saved closest point of intersection. These secondary rays can be used to do lots of things, like look for shadows (shoot a ray at every light source and see if that light is blocked by something) and reflections (shoot a ray in the direction the primary ray would reflect and start the process over again). Rays used for reflection, refraction, radiosity, and other effects can end up generating a good number of additional secondary rays. The key advantages raytracing offers in rendering quality lie in secondary rays, but these are also what add incredible complexity to raytracing renderers.
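
Continuing the earlier sketch (and reusing its sub(), cross(), dot(), and intersect() helpers), the fragment below shows how secondary rays might feed the shading step: one shadow ray per light, and one reflection ray that restarts the whole process. The brightness values, the three-bounce limit, and the per-light contribution are arbitrary illustrative choices, not how any particular renderer weighs them.

    def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
    def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)

    def reflect(direction, normal):
        """Mirror the incoming direction about the surface normal: r = d - 2(d.n)n."""
        return sub(direction, scale(normal, 2.0 * dot(direction, normal)))

    def in_shadow(point, light, scene):
        """Shadow ray: shoot from the hit point toward the light and see if anything blocks it."""
        to_light = sub(light, point)
        for tri in scene:
            t = intersect(point, to_light, tri)
            if t is not None and t < 1.0:     # a blocker sits between the point and the light
                return True
        return False

    def trace(origin, direction, scene, lights, depth=0):
        """Find the closest hit, then shade it with shadow rays and one reflection bounce."""
        closest, hit = float("inf"), None
        for tri in scene:
            t = intersect(origin, direction, tri)
            if t is not None and t < closest:
                closest, hit = t, tri
        if hit is None:
            return 0.0                        # ray escaped the scene: background brightness
        point = add(origin, scale(direction, closest))
        v0, v1, v2 = hit
        n = cross(sub(v1, v0), sub(v2, v0))
        n = scale(n, 1.0 / dot(n, n) ** 0.5)  # normalized triangle normal
        brightness = 0.0
        for light in lights:                  # shadow rays: blocked lights contribute nothing
            if not in_shadow(point, light, scene):
                brightness += 0.5
        if depth < 3:                         # reflection ray: start the whole process over again
            brightness += 0.25 * trace(point, reflect(direction, n), scene, lights, depth + 1)
        return brightness                     # a real renderer also offsets ray origins slightly to avoid self-hits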


Calculating secondary rays is particularly time consuming: not only do we have the same branching issues, but we are less likely to see a speedup from grouping rays together into packets. It's easy to see that when we shoot a lot of primary rays (say four per pixel for antialiasing), include a lot of bounces for reflective surfaces (lots of secondary rays becoming increasingly incoherent), have a lot of geometry (and thus lots of things to check for intersection), have lots of light sources (which means lots of shadow rays), have translucent materials with refractive indices, or treat other lit objects as light sources (radiosity), computing our scene involves a ton of branching and a ton of computation.
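
To get a feel for the scale involved, here is a back-of-the-envelope ray count for the kind of scene described above; every number is an illustrative assumption rather than a measurement of any real renderer.

    pixels  = 1920 * 1080   # one 1080p frame
    samples = 4             # four primary rays per pixel for antialiasing
    lights  = 4             # one shadow ray per light at every hit point
    bounces = 2             # reflection bounces, each spawning its own shadow rays

    primary_rays    = pixels * samples
    hits            = primary_rays                   # assume every primary ray hits something
    shadow_rays     = hits * lights * (1 + bounces)  # shadows at the first hit and at each bounce
    reflection_rays = hits * bounces
    total = primary_rays + shadow_rays + reflection_rays
    print(f"{total:,} rays for a single frame")      # roughly 124 million, before refraction or radiosity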


Typically, CPUs are really good at branching and sequential operations. On the flip side, GPUs are great in situations with tons of parallel, independent operations and little branching. Combining lots of branching with lots of parallel, independent operations produces an algorithm that can't benefit from the full potential of either CPUs or GPUs. Caustic Graphics has put together hardware that attempts to create a perfect environment for raytracing, approaching the problem differently than either a CPU or a GPU. Unfortunately, the company didn't go into much detail about the architecture of its hardware, but it did tell us some things, and we can speculate on others.

Comments

  • HelToupee - Tuesday, April 21, 2009 - link

    Go outside. Look around. Real-time raytracing is here today! The future is now!! :)
  • MrPickins - Monday, April 20, 2009 - link

    The FPGA implementation surprised me as well. It's impressive that they can get such performance out of a pair of them.
  • SonicIce - Monday, April 20, 2009 - link

    I'll give them 12 months...
  • Harbinger - Monday, April 20, 2009 - link

    I'm pretty sure they will succeed. Just make a working prototype and prove to Pixar/Dreamworks/Disney/whatever that this thing will hugely accelerate their rendering.

    They don't have to appeal to the masses that expect a wide variety of features on a wide variety of platforms and software. They target a very, very specific segment, and if they can convince that segment, they're gonna be fine.
  • DerekWilson - Tuesday, April 21, 2009 - link

    You are right, except if Larrabee competes with this in terms of speeding up raytracing ... but we'll have to wait and see on that one. If they focus on a niche market, they could succeed.
  • RamarC - Monday, April 20, 2009 - link

    agreed, unless they get a mainstream rendering app to sign on and can get some royalties out of the software end. if not, nvidia will just implement a similar api and they'll promote using quadros as render accelerators.
  • ssj4Gogeta - Monday, April 20, 2009 - link

    Unlike Ageia PhysX, this is not about the API, but the hardware.
  • smartalco - Monday, April 20, 2009 - link

    Except, given that this is /custom hardware/, nvidia can't just roll out a CUDA update.
