What is Raytracing?

In all modern forms of 3D rendering for display on a computer, the goal is to determine the color of every pixel on the screen as fast as possible. Raytracing is simply one method of doing so. Currently, the most common method for rendering real-time 3D graphics is rasterization, and there are fundamental differences in how rasterization and raytracing go about determining pixel color.

With both rasterization and raytracing, we start with geometry: triangles, to be specific. We have a scene made up of triangles, and shader programs are used to determine the color at any given point on every triangle. With a rasterizer, we loop through every triangle and mathematically project it onto the screen, which is like taking a 3D scene and flattening it out. We determine which pixels every triangle overlaps and save depth values for later, when we shade those pixels. The shading itself uses lighting algorithms, texture maps, and the location of the pixel on the triangle.
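
As a rough sketch of that loop (not any particular API), the Python below projects each triangle, walks the pixels in its screen-space bounding box, tests coverage with edge functions, and keeps the nearest depth. Here project() and shade() are placeholder stand-ins for the real transform pipeline and shader programs, and triangles are assumed to be wound counterclockwise.

```python
# A minimal sketch of the rasterization loop described above.
# project() maps a 3D vertex to screen space (x, y, depth); shade()
# stands in for the shader programs. Both are assumed helpers.

WIDTH, HEIGHT = 640, 480

def edge(a, b, p):
    """Signed edge function: positive when p lies to the left of a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(triangles, project, shade):
    depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    frame = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
    for tri in triangles:                        # loop over every triangle
        v0, v1, v2 = [project(v) for v in tri]   # "flatten" verts onto screen
        area = edge(v0, v1, v2)                  # twice the screen-space area
        if area <= 0:
            continue                             # degenerate or back-facing
        xs = [v[0] for v in (v0, v1, v2)]
        ys = [v[1] for v in (v0, v1, v2)]
        # Only pixels inside the screen-space bounding box can be covered.
        for y in range(max(int(min(ys)), 0), min(int(max(ys)) + 1, HEIGHT)):
            for x in range(max(int(min(xs)), 0), min(int(max(xs)) + 1, WIDTH)):
                p = (x + 0.5, y + 0.5)
                w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
                if w0 >= 0 and w1 >= 0 and w2 >= 0:   # pixel overlaps triangle
                    z = (w0 * v0[2] + w1 * v1[2] + w2 * v2[2]) / area
                    if z < depth[y][x]:          # saved depth: keep the nearest
                        depth[y][x] = z
                        # Shading gets the position on the triangle itself
                        # via the barycentric weights.
                        frame[y][x] = shade(tri, (w0 / area, w1 / area, w2 / area))
    return frame
```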


Unlike rasterization, raytracing starts with the pixels. If we draw a line from a central eye (or camera) position through each pixel, we can use math to determine what triangles this line (called a primary ray) intersects. For every triangle our primary ray intersects, we save the position of the intersection. After all the geometry has been checked, we keep only the intersection closest to the viewer (ignoring transparency for a minute). This process involves lots of conditionals and branching, in addition to the compute power required by whatever shader programs are used.
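
Here is a minimal sketch of that process. Triangles are plain (v0, v1, v2) vertex tuples, intersect() is the standard Moller-Trumbore ray/triangle test, and the camera geometry is simplified to an image plane one unit in front of the eye. None of this is specific to Caustic's hardware.

```python
# One primary ray per pixel, tested against every triangle; only the
# closest hit survives.

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def primary_ray(px, py, width=640, height=480):
    """Ray from a central eye position through image-plane point (px, py)."""
    eye = (0.0, 0.0, 0.0)
    direction = (px / width - 0.5, 0.5 - py / height, 1.0)  # plane at z = 1
    return eye, direction

def intersect(origin, direction, tri):
    """Moller-Trumbore test: distance t along the ray to tri, or None."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < 1e-9:
        return None                      # ray is parallel to the triangle
    inv_det = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None                      # hits the plane outside the triangle
    q = cross(s, e1)
    v = dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det
    return t if t > 1e-9 else None       # only count hits in front of the eye

def closest_hit(origin, direction, triangles):
    """Check all the geometry; keep the intersection nearest the viewer."""
    best_t, best_tri = None, None
    for tri in triangles:                # note the branching on every test
        t = intersect(origin, direction, tri)
        if t is not None and (best_t is None or t < best_t):
            best_t, best_tri = t, tri
    return best_t, best_tri
```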


From here, as with rasterization, we can use shaders to determine the color of our pixel, but the shaders' input can include other rays (secondary rays) that are cast (or shot) from our saved closest point of intersection. These secondary rays can be used to do lots of things, such as looking for shadows (shoot a ray at every light source and see if that light is blocked by something) and computing reflections (shoot a ray at the angle the primary ray would reflect to, and start the process over again). Rays used for reflection, refraction, radiosity, and other effects can end up generating a good number of further secondary rays. The key advantages of raytracing's rendering quality lie in secondary rays, but these are also what add incredible complexity to raytracing renderers.
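
To make that concrete, here is a sketch of secondary rays built on the closest_hit() helper from the previous snippet (sub(), dot(), and cross() come from there too). The shade() function and the 50% reflectivity are placeholder assumptions, and a real tracer would nudge each secondary ray's origin off the surface to avoid re-hitting the triangle it started on.

```python
import math

def add(a, b):   return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def scale(v, s): return (v[0] * s, v[1] * s, v[2] * s)

def normalize(v):
    return scale(v, 1.0 / math.sqrt(dot(v, v)))

def tri_normal(tri):
    """Unit face normal from the cross product of two triangle edges."""
    v0, v1, v2 = tri
    return normalize(cross(sub(v1, v0), sub(v2, v0)))

def in_shadow(hit_point, light_pos, triangles):
    """Shadow ray: shoot at the light and see if anything blocks it."""
    to_light = sub(light_pos, hit_point)
    t, _ = closest_hit(hit_point, to_light, triangles)
    return t is not None and t < 1.0     # a hit before the light means shadow

def reflect(direction, normal):
    """Mirror the incoming direction about the (unit) surface normal."""
    return sub(direction, scale(normal, 2.0 * dot(direction, normal)))

def trace(origin, direction, triangles, lights, shade, depth=0, max_depth=3):
    """Each reflective bounce starts the whole process over, recursively."""
    t, tri = closest_hit(origin, direction, triangles)
    if tri is None:
        return (0.0, 0.0, 0.0)           # ray escaped the scene: background
    hit = add(origin, scale(direction, t))
    lit = [lp for lp in lights if not in_shadow(hit, lp, triangles)]
    color = shade(tri, hit, lit)         # light only from unblocked sources
    if depth < max_depth:                # cap the bounces; rays multiply fast
        bounced = trace(hit, reflect(direction, tri_normal(tri)),
                        triangles, lights, shade, depth + 1, max_depth)
        color = add(color, scale(bounced, 0.5))   # assumed 50% reflectivity
    return color
```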


Calculating secondary rays is particularly time consuming: not only do we have the same branching issues, but we are less likely to see a speedup from grouping rays together into packets. It's easy to see that when we shoot a lot of primary rays (say, four per pixel for some antialiasing, as sketched below), include a lot of bounces for reflective surfaces (lots of secondary rays becoming increasingly incoherent), have a lot of geometry (and thus lots of things to check for intersection), have lots of light sources (which means lots of shadow rays), have translucent materials with refractive indices, or treat other lit objects as light sources (radiosity), computing our scene involves a ton of branches and a ton of computation.
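
As a rough illustration of how quickly the work multiplies, here is the four-rays-per-pixel antialiasing case, sketched with the trace() and primary_ray() helpers from the snippets above.

```python
def sample_pixel(x, y, triangles, lights, shade):
    """Average 4 jittered primary rays for one antialiased pixel."""
    offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
    color = (0.0, 0.0, 0.0)
    for dx, dy in offsets:               # 4 primary rays instead of 1
        origin, direction = primary_ray(x + dx, y + dy)
        color = add(color, trace(origin, direction, triangles, lights, shade))
    return scale(color, 1.0 / len(offsets))
```

Every one of those four rays then fans out into its own shadow and reflection rays inside trace(), which is where the branching and computation pile up.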


Typically, CPUs are really good at branching and sequential operations. On the flip side, GPUs are great in situations with tons of parallel, independent operations and little branching. An algorithm that combines lots of branching with lots of parallel, independent operations can't benefit from the full potential of either CPUs or GPUs. Caustic Graphics has put together hardware that attempts to create an ideal environment for raytracing, approaching the problem differently than either a CPU or a GPU. Unfortunately, the company didn't go into much detail about the architecture of its hardware, but it did tell us some things, and we can speculate on others.

Comments

  • DeathBooger - Wednesday, April 22, 2009 - link

    They're speaking in terms of workstation hours, not actual hours. HP is hyping its products, so the number is misleading.
  • Roland00 - Tuesday, April 21, 2009 - link

    So the movie works out to about 94 * 60 * 30 = 169,200 frames.

    Thus each final frame took 236.40 hours of render time.
  • Verdant - Monday, April 20, 2009 - link

    I respectfully disagree; a fully raytraced scene with anything more than basic lighting can easily take well over a minute per frame. Even with a huge render farm, it takes a long time to render an animation of any significant length and detail. Most larger animation houses would jump on something like this if it really can render their frames 20x faster without using 20x the power.
  • jabber - Monday, April 20, 2009 - link

    ....that still can't show anything but a rendered image of its product several months after it's been announced.
  • Tuvok86 - Tuesday, April 21, 2009 - link

    the card pictured
    http://www.pcper.com/images/reviews/694/card-angle...">http://www.pcper.com/images/reviews/694/card-angle...
  • monomer - Monday, April 20, 2009 - link

    Is it just me, or does the Caustic logo look similar to a slightly rotated Quake III logo?
  • SonicIce - Monday, April 20, 2009 - link

    lol yea. except its like quake 6 or something
  • ssj4Gogeta - Monday, April 20, 2009 - link

    If raytracing catches on in games, how long will it take Intel to make similar units and put a couple of them on the Larrabee die? I'm sure if they could do it, Intel's scientists can too.

    Besides, from what I've seen and read, it seems Larrabee will be good enough for raytracing. In a Larrabee research paper from Intel, I read that Larrabee is 4.6 times more efficient than Intel's Xeon (Core-based) processors at raytracing on a per-clock, per-core basis. Also, Intel ray traced Quake Wars at around 25 fps @1280x720 using 4 Intel Dunnington hexa-core processors (24 cores in total).

    So if Larrabee has 32 cores, and even if we take it to be 4x more efficient instead of 4.6x (scaling etc.), then it will be (32*4)/24 = around 5.3 times faster than the setup they used. That's enormous! Around 130 fps at 1280x720 for a fully ray traced game, or you could increase the resolution and keep the fps at 60. Besides, Larrabee will most likely have MUCH more bandwidth available than that FSB-based Dunnington system had.

    I can't wait, Intel. Hurry up!
  • lopri - Monday, April 20, 2009 - link

    Interesting article. Thank you for the explanation on Ray Tracing v. Rasterization. The difference is still confusing to me, but hopefully I'll eventually understand. I don't expect a simple answer to my questions but maybe someone can enlighten me.

    1. Doesn't Ray-Tracing still require triangles anyway? I understand Rasterization as Derek explained: draw triangles and 'flatten' them. Ray-tracing shoots (?) rays at triangles, so it still needs triangles anyway. It sounds more like shooting rays at 'flattened' triangles... Oh, but what do I know.

    2. Is there any 'fundamental' reason why Ray-traced images look better than rasterized images? It seems to me they're just different routes to the same result. Yes, I'm a noob.

    Anyway, I agree with others regarding this specific company. It's probably applying for some patents and then looking to be bought by bigger fish. Do they even have working hardware?
  • DerekWilson - Tuesday, April 21, 2009 - link

    1) You can do raytracing without triangles -- you can just use math to describe objects like spheres and such, since all that's really needed is an intersection point (see the short ray/sphere sketch after the comments). But you can also use triangles, and this is often done because it still makes some things easier: you just do an intersection between a line and a plane and see whether the intersection point falls inside your triangle. So, for rasterization triangles are required, while for raytracing they are perfectly fine to use, but you aren't as locked into them as with rasterizers.

    2) because each pixel can take input from more of the rest of the scene with little programmatic complexity and a high degree of accuracy. it is possible for raytracing to produce a more accurate image /faster/ than rasterization could achieve an equally accurate image. however, it is possible for rasterization to produce an image that is "close enough" MUCH faster than raytracing (especially with modern hardware acceleration).

    ...

    there are some raytraced images that look very bad but accurately portray reflection and refraction. accuracy in rendering isn't all that's required for a good looking image. The thing that is being rendered also needs to be handled well by artists -- accurate textures and materials need to be developed and used correctly or the rendered image will still look very bad. I think this is why a lot of raytracing proof of concepts use solid colored glass even when they don't have to. I honestly don't think the sample images Caustic provided are very "good" looking, but they do show off good effects (reflection, refraction, caustics, ambient occlusion, soft shadows ...) ...

    so ... I could take a diamond and try cutting it myself. I could put this diamond on a table next to a really well cut cubic zirconium. people might think the imitation looks much better and more "diamond" like in spite of the fact that my horribly cut diamond is a diamond ... which one is "better" is different than which one is more "accurate" ... both are good though :-)

    hope that helps ...
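
A short addendum to DerekWilson's point (1) above: any shape with a ray-intersection formula works, so a sphere needs nothing but the quadratic below. It plugs into the same closest-hit loop as the triangle sketches earlier in the article, reusing the sub() and dot() helpers defined there.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Solve |origin + t*direction - center|^2 = radius^2 for the nearest t."""
    oc = sub(origin, center)
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                      # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 1e-9 else None       # nearest hit in front of the ray
```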
