CausticTwo, the Long Term, and Preliminary Thoughts

Looking toward the future, Caustic Graphics will bring out the CausticTwo next year. The major difference in this hardware will be the replacement of the FPGAs with ASICs (application specific integrated circuits - custom silicon chips like a CPU or a GPU). This should enable an estimated additional 14x performance improvement, as ASICs can run at much higher clock speeds than FPGAs, and we could see more RAM on board as well. That would bring projected performance to over 200x the speed of current CPU based raytracing.
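To make the arithmetic behind that projection explicit, here is a quick back-of-the-envelope sketch in Python. It assumes the roughly 20x-over-CPU figure cited for the CausticOne elsewhere in this article; the numbers are illustrative estimates, not measurements.

```python
# Rough projection for CausticTwo performance based on the figures in
# this article. Assumption: CausticOne is ~20x a CPU at raytracing.

caustic_one_vs_cpu = 20   # assumed CausticOne speedup over CPU raytracing
asic_vs_fpga = 14         # estimated gain from replacing the FPGAs with ASICs

caustic_two_vs_cpu = caustic_one_vs_cpu * asic_vs_fpga
print(f"Projected CausticTwo speedup over CPU: ~{caustic_two_vs_cpu}x")
# ~280x today, which is how "over 200x" survives even once next year's
# faster CPUs are taken into account.
```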

Of course, next year's CPUs will be faster too, but based on that kind of projection we are still looking at roughly two orders of magnitude more performance than CPU based algorithms. This means that instead of seconds per frame, we can start talking about frames per second -- unless we want even more photorealistic images, which will still take a very long time.

The CausticTwo will also be available to end users. Hopefully, by this time, raytracing plugins for 3D Studio Max, Maya, and the other content creation tools that prosumers and students dabble in will be hardware accelerated on Caustic Graphics hardware. And maybe at this point we'll start to see some realtime raytracing engine demos. Maybe.

Planting their flag firmly in the film, video and advanced visualization markets makes the most sense and holds the most potential for long term viability. Jumping completely into games wouldn't be the best way to go at this point -- it needs to be either a gradual adoption, or they need to get their hardware into future game consoles. Pushing PC gaming before console adoption will likely prove just as difficult for Caustic as it did for Ageia, and might not be the best use of resources, especially if they can carve out a niche in the higher end space.

But they do have their eye on games at some point and are already talking about game consoles. While hardware, service and support for render farms, large scale visualization, and those who need the hyper-realism that raytracing can offer have the potential to create a sustainable business, building a piece of hardware that becomes nearly required for gaming (as the GPU did) would be the holy grail here. It's not likely, but you can bet it's in the back of their minds. Staying focused on more modest goals is definitely a better way to stay in business, though.

But they could go in another direction and try to get themselves acquired by a third party, as Ageia did. Of course, NVIDIA killed off Ageia's hardware business, and it would be nice if Caustic's hardware technology survived any acquisition -- but that is oftentimes how these things go. We'll simply have to wait and see.

There is another factor looming on the horizon as well. As we mentioned earlier, raytracing is very branch heavy, memory dependent, and compute heavy -- a beast of an algorithm that always seems to have a bottleneck no matter what it is running on. Though it will still be a while before we have hardware, Larrabee may well offer another solution to the raytracing problem. The Larrabee architecture tries to blend the CPU and GPU approaches to processing, and that hybrid may enable a platform that competes with Caustic when it hits the scene. Memory organization and capacity will probably still favor Caustic, but we've continually heard rumblings that raytracing on Larrabee will be where it's at. It will certainly be interesting to compare the two approaches when they both arrive.

Beyond Larrabee, the long term plan for many-core CPUs could include application specific processors. We will see combined CPUs and GPUs in the near future, and maybe we'll see dedicated raytracing units integrated as one or more of the many cores on a CPU further down the road. The really long term picture is a bit fuzzier, but Caustic has short term potential in the markets that need all the power they can get.

For now, we don't have hardware, and we don't have developer feedback either. Caustic is going to get us a copy of their SDK so we can play around with it a bit and evaluate it. But as for knowing how applicable or useful Caustic Graphics hardware will be in the real world, we just don't have the information we need yet.

Here's to hoping for the best.

Comments

  • DeathBooger - Wednesday, April 22, 2009 - link

    They're speaking in terms of workstation hours, not actual hours. HP is hyping their products so it is misleading.
  • Roland00 - Tuesday, April 21, 2009 - link

    So the movie works out to about 94*60*30 = 169,200 frames.

    Thus each final frame took 236.40 hours of render time.
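(A quick sanity check of the arithmetic in the comment above, using only the figures it quotes; the implied total is derived from them, not quoted from HP.)

```python
# Sanity check of the render-time figures in the comment above.
minutes = 94              # approximate running time of the film
fps = 30                  # frames per second
frames = minutes * 60 * fps
print(frames)             # 169200 frames

hours_per_frame = 236.40  # figure quoted in the comment
total_hours = frames * hours_per_frame
print(f"{total_hours:,.0f} render hours implied in total")  # ~40 million
```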
  • Verdant - Monday, April 20, 2009 - link

    I respectfully disagree. A fully raytraced scene with anything more than basic lighting can easily take well over a minute per frame; even if you have a huge render farm, it takes a long time to render an animation of any significant length and detail. Most larger animation houses would jump on something like this if it really can render their frames 20x faster without using 20x the power.
  • jabber - Monday, April 20, 2009 - link

    ....that still can't show anything but a rendered image of its product several months after it's been announced.
  • Tuvok86 - Tuesday, April 21, 2009 - link

    the card pictured:
    http://www.pcper.com/images/reviews/694/card-angle...
  • monomer - Monday, April 20, 2009 - link

    Is it just me, or does the Caustic logo look similar to a slightly rotated Quake III logo?
  • SonicIce - Monday, April 20, 2009 - link

    lol yea. except its like quake 6 or something
  • ssj4Gogeta - Monday, April 20, 2009 - link

    If raytracing catches on in games, how long will it take Intel to make similar units and put a couple of them on the Larrabee die? I'm sure that if Caustic's engineers could do it, Intel's scientists can too.

    Besides, from what I've seen and read, it seems Larrabee will be good enough for raytracing. In a Larrabee research paper from Intel, I read that Larrabee is 4.6 times more efficient than Intel Xeon (Core based) processors at raytracing on a per clock, per core basis. Also, Intel ray traced Quake Wars at around 25 fps @ 1280x720 using 4 Intel Dunnington hexa-core processors (24 cores in total).

    So if Larrabee has 32 cores, and even if we take it to be 4x more efficient instead of 4.6x (scaling etc.), then it will be (32*4)/24 = around 5.3 times faster than the setup they used. That's enormous! Around 130 fps at 1280x720 for a fully ray traced game! Or you could increase the resolution and keep the fps at 60. Besides, Larrabee will most likely have MUCH more bandwidth available than that FSB based Dunnington system had.

    I can't wait, Intel. Hurry up!
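(For reference, the back-of-the-envelope scaling in the comment above can be reproduced as follows; all inputs come from the comment itself and are not measured Larrabee results.)

```python
# Reproduce the rough Larrabee raytracing estimate from the comment above.
dunnington_cores = 24      # 4 hexa-core Dunnington CPUs
dunnington_fps = 25        # reported Quake Wars raytracing demo framerate
larrabee_cores = 32        # assumed Larrabee core count
per_core_advantage = 4.0   # conservative take on the quoted 4.6x per-core figure

speedup = (larrabee_cores * per_core_advantage) / dunnington_cores
print(f"~{speedup:.1f}x the 24-core setup")                # ~5.3x
print(f"~{dunnington_fps * speedup:.0f} fps at 1280x720")  # ~133 fps
```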
  • lopri - Monday, April 20, 2009 - link

    Interesting article. Thank you for the explanation on Ray Tracing v. Rasterization. The difference is still confusing to me, but hopefully I'll eventually understand. I don't expect a simple answer to my questions but maybe someone can enlighten me.

    1. Doesn't Ray-Tracing still require triangles anyway? I understand rasterization as Derek explained it: draw triangles and 'flatten' them. Ray-tracing shoots (?) rays at triangles, so it still needs triangles anyway. It sounds more like shooting rays at 'flattened' triangles... Oh, but what do I know.

    2. Is there any 'fundamental' reason why ray-traced images look better than rasterized images? It seems to me they're just different ways of getting to the same result. Yes, I'm a noob.

    Anyway, I agree with others regarding this specific company. It's probably filing some patents and then looking to be bought by bigger fish. Do they even have working hardware?
  • DerekWilson - Tuesday, April 21, 2009 - link

    1) You can do raytracing without triangles -- you can just use math to describe your objects, like spheres and such, since all that's really needed is an intersection point. But you can also use triangles, and this is often what is done because it does still make some things easier. You just do an intersection between a line and a plane and see if the intersection point falls inside your triangle. So -- for rasterization, triangles are required, while for raytracing they are perfectly fine to use, but you aren't as locked into using them as you are with rasterizers.

    2) because each pixel can contain input from more of the rest of the scene with little programmatic complexity and a high degree of accuracy. it is possible for raytracing to produce a more accurate image /faster/ than rasterization could achieve an equally accurate image. however, it is possible for rasterization to produce an image that is "close enough" MUCH faster than raytracing (especially with modern hardware acceleration).

    ...

    there are some raytraced images that look very bad but accurately portray reflection and refraction. accuracy in rendering isn't all that's required for a good looking image. The thing that is being rendered also needs to be handled well by artists -- accurate textures and materials need to be developed and used correctly or the rendered image will still look very bad. I think this is why a lot of raytracing proof of concepts use solid colored glass even when they don't have to. I honestly don't think the sample images Caustic provided are very "good" looking, but they do show off good effects (reflection, refraction, caustics, ambient occlusion, soft shadows ...) ...

    so ... I could take a diamond and try cutting it myself. I could put this diamond on a table next to a really well cut cubic zirconium. people might think the imitation looks much better and more "diamond" like in spite of the fact that my horribly cut diamond is a diamond ... which one is "better" is different than which one is more "accurate" ... both are good though :-)

    hope that helps ...
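(Derek's point 1 above describes intersecting the ray with the triangle's plane and then checking whether the hit point falls inside the triangle. Below is a minimal sketch of that idea in Python; it is purely illustrative, not Caustic's implementation, and the names are invented for the example.)

```python
# Minimal ray/triangle intersection as described in point 1 above:
# intersect the ray with the triangle's plane, then check that the hit
# point lies on the inner side of all three edges.

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the distance t along the ray to the hit point, or None."""
    normal = cross(sub(v1, v0), sub(v2, v0))   # triangle's plane normal
    denom = dot(normal, direction)
    if abs(denom) < eps:                       # ray is parallel to the plane
        return None
    t = dot(normal, sub(v0, origin)) / denom   # ray/plane intersection
    if t < 0:                                  # plane is behind the ray origin
        return None
    p = tuple(origin[i] + t * direction[i] for i in range(3))
    # Inside test: the hit point must sit on the inner side of every edge.
    for a, b in ((v0, v1), (v1, v2), (v2, v0)):
        if dot(normal, cross(sub(b, a), sub(p, a))) < 0:
            return None
    return t

# A ray pointing down -z hits a triangle lying in the z = 0 plane at t = 1.0.
print(ray_triangle((0.2, 0.2, 1.0), (0.0, 0.0, -1.0),
                   (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```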
