What is Raytracing?

In all modern forms of 3D rendering for display on a computer, the goal is to determine the color of every pixel on the screen as fast as possible. Raytracing is simply a method that can be used to do so. Currently, the most common method for rendering realtime 3D graphics is rasterization. There are fundamental differences between the way rasterization and raytracing go about determining pixel color.

With both rasterization and raytracing, we start with geometry, triangles to be specific. We have a scene made up of triangles, and shader programs are used to determine the color at any given point on every triangle. With a rasterizer, we loop through every triangle and mathematically project it onto the screen, effectively taking a 3D scene and flattening it out. We determine which pixels every triangle overlaps and save the depth values for later, when we shade those pixels using lighting algorithms, texture maps, and the location of the pixel on the triangle itself.
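The loop described above can be sketched in a few lines. This is a minimal illustration, not any real hardware pipeline: triangle setup, clipping, and shading are omitted, each triangle gets a single flat depth and color, and the tiny 8x8 "screen" is an assumption made purely for brevity.

```python
# A minimal sketch of the rasterization loop: project each triangle to
# screen space, find the pixels it covers, and keep the closest surface
# at every pixel via a depth (z) buffer.

WIDTH, HEIGHT = 8, 8

def edge(ax, ay, bx, by, px, py):
    # Signed-area test: positive if (px, py) lies on the inner side
    # of the directed edge a -> b.
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize(triangles):
    depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    color = [[None] * WIDTH for _ in range(HEIGHT)]
    for (v0, v1, v2, z, c) in triangles:  # screen-space verts, depth, color
        for y in range(HEIGHT):
            for x in range(WIDTH):
                px, py = x + 0.5, y + 0.5  # sample at the pixel center
                inside = (edge(*v0, *v1, px, py) >= 0 and
                          edge(*v1, *v2, px, py) >= 0 and
                          edge(*v2, *v0, px, py) >= 0)
                if inside and z < depth[y][x]:  # keep the nearest surface
                    depth[y][x] = z
                    color[y][x] = c
    return color
```

Note the structure: the outer loop is over triangles, and pixel colors fall out as a side effect, which is exactly the opposite of how raytracing is organized below.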



Unlike rasterization, raytracing starts with the pixels. If we draw a line from a central eye (or camera) position through each pixel, we can use math to determine which triangles this line (called a primary ray) intersects. For every triangle our primary ray intersects, we save the position of the intersection. After all our geometry has been checked for intersection, we keep the intersection closest to the viewer (ignoring transparency for a moment). This process involves lots of conditionals and branching on top of the compute power required by whatever shader programs are used.
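A hedged sketch of that primary-ray step follows: shoot a ray from the camera, test it against every triangle, and keep the closest hit. The intersection routine here is the standard Möller-Trumbore ray/triangle test; the brute-force loop over all triangles is the naive version of what acceleration structures exist to avoid.

```python
# Primary-ray casting: test a ray against every triangle and keep the
# intersection closest to the viewer.

EPS = 1e-9

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def intersect(origin, direction, tri):
    # Moller-Trumbore: returns distance t along the ray, or None on a miss.
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < EPS:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv
    if u < 0 or u > 1:
        return None                      # outside the triangle (branch!)
    q = cross(s, e1)
    v = dot(direction, q) * inv
    if v < 0 or u + v > 1:
        return None                      # outside the triangle (branch!)
    t = dot(e2, q) * inv
    return t if t > EPS else None        # only hits in front of the camera

def closest_hit(origin, direction, triangles):
    # Check every triangle; keep the intersection nearest the viewer.
    best = None
    for tri in triangles:
        t = intersect(origin, direction, tri)
        if t is not None and (best is None or t < best[0]):
            best = (t, tri)
    return best
```

Even in this toy version, the conditionals the article mentions are visible: every ray/triangle test can bail out at several different points, which is what makes the workload so branchy.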



From here, like with rasterization, we can use shaders to determine the color of our pixel, but the input the shaders use can be other rays (these are secondary rays) that have been cast (or shot) from our saved closest point of intersection. These secondary rays can be used to do lots of things, like look for shadows (shoot a ray at every light source and see if that light is blocked by something) and reflections (shoot a ray in the direction the primary ray would reflect and start the process over again). Rays used for reflection, refraction, radiosity, and other effects can end up generating a good number of secondary rays. The key advantages to the rendering quality of raytracing lie in secondary rays, but these are also what add the incredible complexity to raytracing renderers.
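The two secondary-ray ideas above can be sketched as follows. This is an illustration of the technique, not Caustic's implementation; the occluders are spheres rather than triangles purely to keep the intersection test short.

```python
# Secondary rays: a shadow ray toward each light, and a mirror-reflection
# direction for spawning the next bounce.
import math

def sphere_blocks(origin, direction, center, radius, max_t):
    # Does the ray origin + t*direction hit the sphere before reaching max_t?
    # direction is assumed normalized, so the quadratic's 'a' term is 1.
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2
    return 1e-6 < t < max_t              # blocker strictly between us and the light

def in_shadow(point, light, occluders):
    # Shoot a ray from the hit point toward the light; any blocker = shadow.
    d = [light[i] - point[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in d))
    d = [x / dist for x in d]
    return any(sphere_blocks(point, d, c, r, dist) for (c, r) in occluders)

def reflect(direction, normal):
    # Mirror the incoming direction about the surface normal: d - 2(d.n)n.
    dn = sum(direction[i] * normal[i] for i in range(3))
    return [direction[i] - 2 * dn * normal[i] for i in range(3)]
```

The reflected direction from `reflect` would be fed straight back into the closest-hit search, which is the "start the process over again" step the article describes.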



Calculating secondary rays is particularly time consuming: not only do we face the same branching issues, but we are less likely to see a speedup from grouping rays together into packets. It's easy to see that when we shoot a lot of primary rays (say four per pixel for antialiasing), include a lot of bounces for reflective surfaces (lots of secondary rays becoming increasingly incoherent), have a lot of geometry (and thus lots of things to check for intersection), have lots of light sources (which means lots of shadow rays), have translucent materials with refractive indices, or treat other lit objects as light sources (radiosity), computing our scene involves a ton of branching and a ton of computation.
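Some back-of-the-envelope arithmetic shows how quickly the counts grow. The model and numbers below are illustrative assumptions, not figures from the article: every hit spawns one shadow ray per light, and `branching` models how many continuation rays each hit spawns (1 for a pure mirror, 2 for reflection plus refraction).

```python
# Rough ray-count estimator: primary rays, plus shadow and continuation
# rays at every bounce. branching=2 (reflection + refraction) makes the
# count grow exponentially with bounce depth.

def total_rays(pixels, samples, lights, bounces, branching=1):
    wave = pixels * samples             # primary rays: samples per pixel
    total = 0
    for _ in range(bounces + 1):        # the primary hit plus each bounce
        total += wave * (1 + lights)    # the rays themselves + shadow rays
        wave *= branching               # continuation rays for the next depth
    return total
```

At 1920x1080 with 4 samples per pixel, 4 lights, and 3 mirror bounces, this already works out to roughly 166 million rays per frame, and allowing refraction to double the wave at each bounce pushes it several times higher, which is why packets of coherent rays stop helping.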



Typically, CPUs are really good at branching and sequential operations. On the flip side, GPUs are great in situations with tons of parallel, independent operations and little branching. Combining lots of branching with lots of parallel independent operations can lead to an algorithm that can't exploit the full potential of either CPUs or GPUs. Caustic Graphics has put together hardware that attempts to create an ideal environment for raytracing, approaching the problem differently than either a CPU or a GPU. Unfortunately, they didn't go into much detail about the architecture of their hardware, but they did tell us some things, and we can speculate on others.


  • tdenton1138 - Tuesday, April 21, 2009 - link

    Check out their videos here:

    http://vimeo.com/4240520
  • poohbear - Tuesday, April 21, 2009 - link

    im so replacing their ugly blue heatsink w/ my own aftermarket cooling..... gonna overclock the hell outta this thing!!!! WOOOOOOOOOOOOOOT! Raytracing here i come!
  • Flunk - Tuesday, April 21, 2009 - link

    The use of FPGAs for Caustic One makes it sound more like a prototype than an actual product. It's nice to see people trying to sell new ideas but it might be a bit premature today. Then again a proof of concept is always nice.
  • DerekWilson - Tuesday, April 21, 2009 - link

    FPGAs are good enough if the benefit is real. lots of people use FPGAs in shipping devices. an ASIC would be better (faster), but it requires a lot of start up money.
  • DeathBooger - Monday, April 20, 2009 - link

    I'm a professional 3D artist. I don't really see this taking off. Right now I have a core i7 and it does the job for me just fine. I create photorealistic images and animations for a living and I don't really see the point of this in this day and age. And if I can't see the point, I doubt production companies with access to large render farms will. Especially if it makes more fiscal sense to pop in a new processor instead of changing out all of the mainboards to fit a new card that might be faster.

    The software used for ray tracing these days is a lot faster than it used to be. 3rd party apps like Vray pretty much solved the issue of slow ray tracing years ago.

    I could see this taking off for games to get real-time global illumination to become a standard, but only if Microsoft and Sony decide to add Caustic hardware to their next consoles. Keeping it PC-exclusive wouldn't go anywhere long term.

    Actually another prospect would be for Nvidia to buy them out since they own Mental Ray. Mental Ray is the renderer that ships with most 3D software these days. It still won't change the fact that people in the know use Vray instead since it's a lot faster than Mental Ray and more user friendly. Mental Ray is more powerful in the right hands and I could see the film industry gobbling these cards up if the SDK was implemented into Mental Ray, but freelance guys like me probably will never touch one.
  • Griswold - Tuesday, April 21, 2009 - link

    You don't sound like you actually know what you're talking about with respect to this hardware, let alone like a professional 3D artist who does what this thing was designed for...
  • ssj4Gogeta - Tuesday, April 21, 2009 - link

    I'd like to disagree too. They say they provide a 20x improvement in rendering times. Surely this card will be cheaper than buying 20 processors. And who said you need to replace the mainboard? It clearly uses a PCIe slot. Look at the pic.

    Now if Intel can deliver similar or better performance with Larrabee, at a price point that many consumers can afford, then things would be different.
  • DeathBooger - Wednesday, April 22, 2009 - link

    PCIe is a rare commodity in servers still to this day. Render farms use servers, not typical workstations. This company is essentially trying to add another component where one never existed before. That requires a total reconfiguration for server farms. It's not like each server has a video card that can just be swapped out for this Caustic card easily.

    Tell you what, if Pixar adopts it, then I'll eat my words. Pixar has the resources to do anything they want. If they find value in this card then I was wrong.
  • RagingDragon - Friday, April 24, 2009 - link

    Uh, you might want to check the HP, IBM and Dell server lineups...

    New Intel/AMD servers do have PCIe (mostly 8x and 4x)
    New RAID controllers are mostly PCIe 8x or 4x
    10Gb Ethernet, Fibre Channel cards, etc. are mostly available in PCIe too.
  • Tuvok86 - Tuesday, April 21, 2009 - link

    Monsters vs Aliens movie required 40 million hours of rendering time
