CausticTwo, the Long Term, and Preliminary Thoughts

Looking toward the future, Caustic Graphics will bring out the CausticTwo next year. The major difference with this hardware will be the replacement of the FPGAs with ASICs (application specific integrated circuits: custom silicon chips, like a CPU or a GPU). This should enable an estimated additional 14x performance improvement, as ASICs can run much, much faster than FPGAs, and we could see more RAM on board as well. That additional 14x, stacked on top of the roughly 20x advantage claimed for the CausticOne, would bring the projected performance to over 200x the speed of current CPU based raytracing.

Of course, next year CPUs will be faster too, but based on that kind of projection we are still looking at about two orders of magnitude more performance than CPU based implementations. This means that instead of seconds per frame, we can start talking about frames per second -- unless we want even more photorealistic images, which will still take a very long time.

The CausticTwo will also be available to end users. Hopefully by this time, raytracing plugins for 3D Studio Max, Maya, and all the other content creation tools that some prosumers and students dabble in will be hardware accelerated on Caustic Graphics hardware. And maybe at this point we'll start to see some realtime raytracing engine demos. Maybe.

Planting their flag firmly in the film, video and advanced visualization markets makes the most sense and holds the most potential for long term viability. Jumping completely into games won't be the best way to go at this point -- it needs to be either a gradual adoption, or they need to get their hardware into future game consoles. Pushing PC gaming before console adoption will likely prove just as difficult for Caustic as it did for Ageia, and might not be the best use of resources, especially if they can carve out a niche in the higher end space.

But they do have their eye on games at some point and are already talking about game consoles. While hardware, service and support for render farms, large scale visualization, and those who need the hyper-realism that raytracing can offer has the potential to create a sustainable business, creating a piece of hardware that becomes nearly required for gaming (like the GPU) would be the holy grail here. It's not likely, but you can bet it's at the back of their minds. Staying focused on more modest goals is definitely a better way to stay in business, though.

But they could go another direction. They could try to get themselves acquired by a third party, like Ageia did. Of course, NVIDIA killed Ageia's hardware business, and it would be nice if Caustic's hardware technology survived any acquisition. But that is often how these things go. We'll simply have to wait and see.

There is another factor looming on the horizon as well. As we mentioned earlier, raytracing is branch heavy, memory dependent and compute heavy: a beast of an algorithm that seems to always have a bottleneck no matter what it is running on. Though it will still be a while before we have hardware, Larrabee may well prove to be another solution to the raytracing problem. The Larrabee architecture tries to blend the CPU and GPU approaches to processing, and that hybrid may enable a platform that competes with Caustic when it hits the scene. Memory organization and size will probably still favor Caustic, but we've continually heard rumblings that raytracing on Larrabee will be where it's at. It will certainly be interesting to compare the two approaches when they both arrive.

Beyond Larrabee, the long term plan for many core CPUs could include application specific processors. We will see combined CPUs and GPUs in the near future, and maybe we'll see dedicated raytracing units integrated as one or more of the many cores on a CPU down the road. The really long term picture is a bit fuzzier, but Caustic has short term potential in the markets that need all the power they can get.

For now, we don't have hardware, and we don't have developer feedback either. Caustic is going to get us a copy of their SDK so we can play around with it a bit and evaluate it. But as for knowing how applicable or useful Caustic Graphics hardware will be in the real world, we just don't have the information we need yet.

Here's to hoping for the best.

Comments

  • ssj4Gogeta - Monday, April 20, 2009 - link

    I'm no pro, but from what I know the main difference is that things like shadows, refractions and reflections render MUCH better. This is because in raytracing you also use secondary rays, so the rays reflected off or refracted through a surface can affect the color of other nearby surfaces, producing shadows, reflections, etc. In ray tracing you do it just like nature does it in the real world (in reverse, but that doesn't affect the outcome). In rasterization, you need to manually program shadows, reflections, etc., so they are mostly just approximations.

    Another advantage of ray tracing is that programmers don't need to work as hard -- things which may take hundreds of lines of code in rasterization take only ten lines in ray tracing.
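
    Roughly, the heart of a ray tracer is one recursive function. Here's a toy, self-contained Python sketch (the sphere, colors, and constants are all made up purely for illustration):

        import math

        def normalize(v):
            length = math.sqrt(sum(x * x for x in v))
            return tuple(x / length for x in v)

        def intersect_sphere(origin, direction, center, radius):
            # Distance along the ray to the sphere surface, or None for a miss.
            oc = tuple(o - c for o, c in zip(origin, center))
            b = 2 * sum(d * o for d, o in zip(direction, oc))
            c = sum(o * o for o in oc) - radius * radius
            disc = b * b - 4 * c
            if disc < 0:
                return None
            t = (-b - math.sqrt(disc)) / 2
            return t if t > 1e-6 else None

        def trace(origin, direction, depth):
            # Primary ray: find the nearest surface (here, one hardcoded sphere).
            t = intersect_sphere(origin, direction, (0.0, 0.0, -3.0), 1.0)
            if t is None or depth == 0:
                return (0.2, 0.2, 0.3)  # background color
            hit = tuple(o + t * d for o, d in zip(origin, direction))
            normal = normalize(tuple(h - c for h, c in zip(hit, (0.0, 0.0, -3.0))))
            # Secondary ray: the recursion that gives reflections "for free".
            d_dot_n = sum(d * n for d, n in zip(direction, normal))
            reflected = tuple(d - 2 * d_dot_n * n for d, n in zip(direction, normal))
            bounce = trace(hit, normalize(reflected), depth - 1)
            return tuple(0.5 * b + 0.1 for b in bounce)

        print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), depth=3))

    Shadow rays to light sources would slot in the same way: just more secondary rays cast from the hit point.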

    Compare this to rendering a 3D cube versus faking it in 2D. Suppose you need to render the cube as the camera circles around it. If you're working in 3D, you just render the cube, flatten the image, and display it. If you're working in 2D but want to create the effect of 3D, you can't render a cube at all -- you have to directly draw how the flattened image would look. That means accounting for perspective distortion yourself: making the edges of the cube oblique, and drawing the face that is farther from the camera smaller than the face that is closer. That's a lot of work for the programmer, and it won't generalize well. For a simple cube with straight edges you can calculate everything accurately, but not for a complex object.
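
    To make the "render it in 3D and flatten it" step concrete, here's a tiny illustrative Python sketch (the focal length and coordinates are arbitrary numbers I made up):

        # Toy perspective projection: the "flattening" step of 3D rendering.
        def project(point, focal_length=2.0):
            x, y, z = point
            # Perspective divide: larger z (farther away) shrinks the result,
            # so the far face of the cube automatically draws smaller.
            return (focal_length * x / z, focal_length * y / z)

        # Corners of a cube sitting in front of the camera.
        cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (4, 6)]
        for corner in cube:
            print(corner, "->", project(corner))

    The near face (z = 4) projects to coordinates of +/-0.5 while the far face (z = 6) projects to about +/-0.33 -- exactly the "farther looks smaller" effect you'd otherwise have to fake by hand in 2D.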

    Same way, rasterization at best can offer approximations of phenomena like refraction, and how close they come depends on the programmer.

    Check this rendered pic to see what ray tracing can deliver:
    http://upload.wikimedia.org/wikipedia/commons/e/ec...

    Note: I'm not a graphics programmer but this is how I understand it. Please correct me where I'm wrong.
  • AmbroseAthan - Wednesday, April 22, 2009 - link

    That picture is a ray-traced rendering?! It looks like a photograph! Someone put in a lot of time and crunching power on that one.
  • JimmiG - Monday, April 20, 2009 - link

    I'm also wondering about #2 - is Raytracing really "better", or just "different"?

    Back in the '90s I used to be really impressed with the quality of raytraced animations and pictures, with their shiny, reflective objects, realistic water, lighting, soft shadows, etc. Back then, however, the capabilities of "3D accelerators" were very limited -- 3D games used simple models, "flat" lighting with no shadows and only one light source, and blurry textures without any shader effects like bumps, parallax maps, or reflections. Today it seems the latest 3D engines already do in realtime everything that used to require raytracing and many minutes or hours per scene.
  • jimhsu - Monday, April 20, 2009 - link

    Someone correct me, but I think it is not that raytracing "looks" better, but that because it is closer to a physical description of light (only in inverse), effects such as ambient occlusion, caustics, and other shiny things can be implemented in a relatively straightforward, physically correct manner, while rasterization requires the use of shaders to "emulate" reality. Those approximations are often complicated to program and implement, even though they achieve nearly the same effect.
  • DerekWilson - Tuesday, April 21, 2009 - link

    this is sort of true ... it's possible to write shaders for a rasterizer that do everything a raytracer does. But on top of the code complexity, in a z-buffer based rasterizer you end up with performance disadvantages.

    At the point where you properly and accurately emulate raytracing with a rasterizer, you need to start generating all kinds of dynamic environment maps every frame for every object. Treating objects as light sources and doing real time radiosity for rasterization (which can be done as well) is also difficult. To get an image as physically accurate as raytracing, rasterization (with today's technology) would be slower even on a GPU designed for rasterization.

    Honestly, there are some things that rasterization can approximate well enough that we don't notice the difference, and I think for a long time to come we'll still see rasterization as the main vehicle for realtime graphics. I think we'll start to see raytracing come in as a secondary tool to augment rasterization and add effects that are tough to achieve otherwise.
  • lopri - Tuesday, April 21, 2009 - link

    I think I am learning more about raytracing from this article and comments than anywhere else to date. Thank you for the excellent explanations and analogies!
  • jimhsu - Monday, April 20, 2009 - link

    An analogy for math majors would probably be trying to solve a problem analytically (i.e. with the rules of calculus) vs. numerically (e.g. Newton's method, Euler's method, etc.). The numerical result is often close to the analytical result, but intuitively the analytical result is the "right" way to do the problem, except when it is infeasible (we have something we can't integrate).
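
    A trivial made-up example in Python, finding the root of f(x) = x^2 - 2:

        import math

        def newton(f, df, x, steps=8):
            # Newton-Raphson: repeatedly follow the tangent line toward a root.
            for _ in range(steps):
                x = x - f(x) / df(x)
            return x

        analytic = math.sqrt(2)  # the closed-form, "exact" answer
        numeric = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
        print(analytic, numeric)  # agree to many decimal places

    Both land on (nearly) the same number, just as a good rasterizer can land on nearly the same image as a raytracer.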
  • Einy0 - Monday, April 20, 2009 - link

    This may actually help Larrabee gain ground, as an alternative way to do realtime raytracing. Then again, Larrabee may help this gain ground, since Larrabee will do both rasterizing and raytracing.
    I'm surprised they can make anything worth buying with FPGAs. I guess FPGAs have come a long way. I'd love to get some specifics on the underlying architecture.
    Bad timing considering the global economy, etc...
  • ifkopifko - Tuesday, April 21, 2009 - link

    lol... real-time raytracing? Keep dreaming. :-D Something like that is far far in the future.
  • Sivar - Tuesday, April 21, 2009 - link

    There have been assembly language demos from the 4k scene which have done realtime raytracing since the late '90s. In software. On a Pentium MMX.
    They aren't quite what I'd call Pixar-quality, but it's far from impossible.
