What is Raytracing?
In all modern forms of 3D rendering for display on a computer, the goal is to determine the color of every pixel on the screen as fast as possible. Raytracing is simply a method that can be used to do so. Currently, the most common method for rendering realtime 3D graphics is rasterization. There are fundamental differences between the way rasterization and raytracing go about determining pixel color.
With both rasterization and raytracing, we start with geometry: triangles, to be specific. We have a scene made up of triangles, and shader programs are used to determine the color at any given point on every triangle. With a rasterizer, we loop through every triangle and use math to project it onto the screen. This is like taking a 3D scene and flattening it out. We determine which pixels each triangle overlaps and save the depth values for later, when we shade those pixels. We use lighting algorithms, texture maps, and the location of the pixel on the triangle itself to do the shading.
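To make the rasterization loop concrete, here is a minimal sketch in Python. All the names are illustrative rather than taken from any real renderer, the triangles are assumed to be already projected into screen space as (x, y, z) vertices, and coverage is decided with a barycentric inside test while a depth buffer keeps the closest surface per pixel:

```python
WIDTH, HEIGHT = 8, 8

def edge(ax, ay, bx, by, px, py):
    # Twice the signed area of triangle (a, b, p); the sign tells
    # which side of edge a->b the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(triangles):
    # Depth buffer initialized to "infinitely far away".
    depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    color = [[None] * WIDTH for _ in range(HEIGHT)]
    for (v0, v1, v2, tri_color) in triangles:
        area = edge(v0[0], v0[1], v1[0], v1[1], v2[0], v2[1])
        if area == 0:           # degenerate triangle, covers nothing
            continue
        for y in range(HEIGHT):
            for x in range(WIDTH):
                px, py = x + 0.5, y + 0.5   # sample at the pixel center
                # Barycentric weights: the pixel is inside the triangle
                # only if all three weights are non-negative.
                w0 = edge(v1[0], v1[1], v2[0], v2[1], px, py) / area
                w1 = edge(v2[0], v2[1], v0[0], v0[1], px, py) / area
                w2 = 1.0 - w0 - w1
                if w0 < 0 or w1 < 0 or w2 < 0:
                    continue
                # Interpolate depth and keep only the closest surface.
                z = w0 * v0[2] + w1 * v1[2] + w2 * v2[2]
                if z < depth[y][x]:
                    depth[y][x] = z
                    color[y][x] = tri_color
    return color, depth
```

A real rasterizer would clip triangles, restrict the loop to each triangle's bounding box, and interpolate texture coordinates the same way it interpolates depth here, but the structure (loop over triangles, then over covered pixels) is the point.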
Unlike rasterization, raytracing starts with the pixels. If we draw a line from a central eye (or camera) position through each pixel, we can use math to determine what triangles this line (called a primary ray) intersects. For every triangle that our primary ray intersects, we save the position of the intersection. After all our geometry has been checked for intersection, we keep the intersection closest to the viewer (ignoring transparency for a minute). This process means lots of conditionals and branching in addition to the compute power required by whatever shader programs are used.
From here, like with rasterization, we can use shaders to determine the color of our pixel, but the inputs the shaders use can include other rays (these are secondary rays) that have been cast (or shot) from our saved closest point of intersection. These secondary rays can be used to do lots of things, like look for shadows (shoot a ray at every light source and see if that light is blocked by something) and reflections (shoot a ray in the direction the primary ray would reflect and start the process over again). Rays used to do reflection, refraction, radiosity, and other effects can end up generating a good number of secondary rays. The key advantages to the rendering quality of raytracing lie in secondary rays, but these are also what add the incredible complexity to raytracing renderers.
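The two secondary-ray ideas mentioned above, shadow rays and reflection rays, can be sketched briefly. This is an illustration under simplifying assumptions (occluders are spheres, because the ray/sphere test is short), not any particular renderer's code:

```python
import math

def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def ray_sphere_t(origin, direction, center, radius):
    # Smallest positive t where a (normalized) ray hits the sphere, or None.
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def in_shadow(hit_point, light_pos, occluders):
    # Shadow ray: shoot from the hit point toward the light and see
    # whether anything sits between the two.
    to_light = sub(light_pos, hit_point)
    dist = math.sqrt(dot(to_light, to_light))
    d = scale(to_light, 1.0 / dist)
    for (center, radius) in occluders:
        t = ray_sphere_t(hit_point, d, center, radius)
        if t is not None and t < dist:
            return True
    return False

def reflect(direction, normal):
    # Mirror the incoming direction about the surface normal:
    # r = d - 2 (d . n) n.  The result seeds a new closest-hit search.
    return sub(direction, scale(normal, 2.0 * dot(direction, normal)))
```

Each light source costs one shadow ray per hit point, and each reflective bounce restarts the whole closest-hit search with the reflected direction, which is where the ray counts start to multiply.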
Calculating secondary rays is particularly time consuming, as not only do we have the same branching issues, but we are less likely to see speedup from grouping rays together into packets. It's easy to see that when we shoot a lot of primary rays (say four per pixel for some antialiasing), include a lot of bounces for reflective surfaces (lots of secondary rays becoming increasingly incoherent), have a lot of geometry (and thus lots of things to check for intersection), have lots of light sources (which means lots of shadows), have translucent materials with refractive indices, or treat other lit objects as light sources (radiosity), computing our scene involves a ton of branches and a ton of computation.
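Even the first item on that list, four primary rays per pixel for antialiasing, multiplies the workload. A hypothetical sketch, where `trace` stands in for the entire intersect-and-shade pipeline and simply returns a shade value for a ray through screen position (sx, sy):

```python
import random

def average_samples(trace, x, y, samples=4, rng=None):
    # Shoot several jittered primary rays inside one pixel and average
    # the results -- simple supersampled antialiasing. Every extra
    # sample repeats the full intersection-and-shading work.
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        sx = x + rng.random()   # jitter within the pixel so the
        sy = y + rng.random()   # samples land at different points
        total += trace(sx, sy)
    return total / samples
```

With reflective bounces, shadow rays per light, and refraction stacked on top of this 4x factor, the total ray count grows multiplicatively, which is the scaling problem the paragraph above describes.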
Typically, CPUs are very good at branching and sequential operations. On the flip side, GPUs are great for situations with tons of parallel independent operations and little branching. Putting lots of branching and lots of parallel independent operations together can lead to an algorithm that can't benefit from the full potential of either CPUs or GPUs. Caustic Graphics has put together hardware that attempts to create a perfect environment for raytracing, approaching the problem differently than either a CPU or a GPU. Unfortunately, they didn't go into much detail about the architecture of their hardware. But they did tell us some things, and we can speculate on others.