Why Optimize?

All real-time 3D is based on approximation and optimization. If we used mathematical models that faithfully represented the real world, each frame would take minutes to render; that is exactly what happens in offline rendering applications like 3DStudio, Maya, and the like. The question isn't whether we should "optimize", but how many corners we can reasonably cut. We are always evaluating where to draw the line between performance and accuracy (or the "goodness" of the illusion of reality).

So, who decides what should and shouldn't be done in modern real-time 3D? The answer is complex and involves many different parties; there really is no single bottom line, so we'll start with academia and research. Papers on bleeding-edge 3D graphics concepts and mathematics are published all the time, and increasingly it's game developers who pick up those papers and look for fast, efficient ways to implement the ideas in them. Game developers are always trying to find an edge over the competition that will draw people to their title, so they have a strong interest in making a beautiful, playable experience accessible to everyone. The hardware vendors have to take every aspect of the industry into account, but each has its own specific areas of focus.

NVIDIA has looked hard at what a few prominent game developers want and has tried hard to provide that functionality to them. Of course, game developers don't have the time to write code for every bit of hardware their game will run on, so they use general APIs to help lighten the load. ATI, for its part, has tried very hard to make Microsoft's DirectX API code run as fast as possible. Both perspectives have had positive and negative effects. ATI hardware doesn't perform quite as well as NVIDIA's in most OpenGL games, but it generally runs DirectX code faster. NVIDIA may lag in straight DirectX performance, but it offers developers broader DirectX feature support and more vendor-specific OpenGL extensions to take advantage of.

Also on the API side, the OpenGL ARB and Microsoft decide what tools to provide to game developers, and ATI and NVIDIA decide how to implement those tools in their hardware. The API architects have some power over how tightly implementations are constrained, but game developer and consumer feedback are the deciding factors in how hardware vendors implement functionality. One of the best examples of how this ends up affecting the industry is the DX9 precision specification. The original drafts called for "at least" 24-bit precision in the pixel shader. ATI implemented exactly that, a 24-bit floating point format (which had not existed before), while NVIDIA decided to go with something closer to IEEE single-precision floating point (though we would like to see both companies become IEEE 754 compliant eventually).
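To make the difference concrete, here is a minimal sketch (Python is our choice here; the article itself contains no code) that derives the usual figures of merit for the three formats in question from their commonly cited bit layouts: FP16 (s10e5), ATI's 24-bit format (s16e7), and NVIDIA's full precision (s23e8, essentially IEEE single precision). The layouts and labels are assumptions based on public descriptions of the hardware, not vendor documentation.

```python
# Rough comparison of the pixel shader precision formats discussed above.
# Assumed bit layouts (sign + exponent + mantissa):
#   FP16 = 1+5+10, FP24 = 1+7+16, FP32 = 1+8+23.
formats = {
    "FP16 (partial precision)": (5, 10),
    "FP24 (ATI pixel shader)":  (7, 16),
    "FP32 (full precision)":    (8, 23),
}

for name, (exp_bits, man_bits) in formats.items():
    eps = 2.0 ** -man_bits                      # relative spacing of values near 1.0
    approx_max = 2.0 ** (2 ** (exp_bits - 1))   # rough largest representable value
    print(f"{name}: ~{man_bits + 1} significant bits, "
          f"eps ~ {eps:.1e}, max ~ {approx_max:.1e}")
```

Running it shows FP16 resolving roughly three decimal digits, FP24 about five, and FP32 about seven, which is the gap the rest of this page is really arguing about.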

To expand on the significance of this example: both approaches are valid, and both have advantages and disadvantages. Full precision is faster on ATI hardware because it doesn't need to move around or manipulate as many bits as NVIDIA hardware. At the same time, 24 bits of precision aren't always needed for acceptable accuracy, and NVIDIA is able to fall back to 16-bit precision where its full 32 bits are not required. The downside of NVIDIA's implementation is that it requires more transistors, is more complex, and has shown performance characteristics that were hard to predict in the absence of a mature compiler. The cost of that extra flexibility is an implementation that is more complicated and not necessarily faster.
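As a small illustration of that tradeoff (a sketch only, using NumPy's float16 as a stand-in for shader FP16; the values and the scenario are made up for the example), partial precision is essentially free for simple color arithmetic but falls apart when large coordinates or long chains of math are involved:

```python
import numpy as np

# Blending two color values in [0, 1]: the FP16 rounding error is a few
# parts in ten thousand, well below one step of an 8-bit-per-channel
# display (1/255, about 0.004).
a, b = 0.3718, 0.8249
full = 0.5 * (a + b)
half = np.float16(0.5) * (np.float16(a) + np.float16(b))
print("color blend error:", abs(full - float(half)))

# Addressing a large texture: near 2048 the spacing between representable
# FP16 values is 2.0, so the fractional part of the coordinate is lost and
# dependent texture reads can visibly snap.
coord = 2048.37
print("FP16 texcoord:", float(np.float16(coord)))
```

This is the sense in which 16 bits are "enough" in some places and not in others, and why a compiler that knows where each applies matters so much for NVIDIA's approach.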

So, the parties involved in determining what actually happens in real-time 3D are: hardware architects, API architects, 3D graphics academics, game developers, and consumers. It's a far more convoluted version of the "chicken or the egg" problem. The ultimate judge of what gets used is the game developer trying to realize a vision; developers are constrained by hardware capabilities and must work within those limits. At the same time, all a hardware vendor can do is strive to deliver the quality and performance that game developers want to see. And even when this relationship works out, the final authority is the consumer of the hardware and software, whose demands push both parties. It is somewhere in this tangled mess of expectations and interpretations that optimization can be used, abused, or misunderstood (or even all three at the same time).

8 Comments

  • 861 - Tuesday, September 28, 2004 - link

    a great tool!
  • dvinnen - Tuesday, September 28, 2004 - link

    That's good. The only difference I saw was a little lighting difference on the floor, and even then it was very slight. Good for an extra 10 frames.
  • gordon151 - Monday, September 27, 2004 - link

    They did make some general OpenGL improvements in recent drivers that helped the 8500 in Doom 3 and some other OpenGL games, but that's pretty much it for that class.
  • ksherman - Sunday, September 26, 2004 - link

    It's too bad all the new driver enhancements have abandoned the Radeon 8500... I could really use some of the improvements in games... :(
  • KillaKilla - Sunday, September 26, 2004 - link

    Here it is: it's the third image down. Don't know if the whole .NET Framework could make this impossible, but I don't see how that could have an effect...

    http://www.anandtech.com/video/showdoc.aspx?i=1931...
  • KillaKilla - Sunday, September 26, 2004 - link

    Would it be possible to implement the old, but very, very effective trick to see the difference between AI settings? I'm talking about the thing where you hover your mouse over an image and it changes to the other image. I'll look for a link in a minute, if you don't see what I'm talking about.
  • acemcmac - Sunday, September 26, 2004 - link

    My question remains... can I finally leave Cat 3.8 and MMC 8.7 if I want MMC and multi-monitor support simultaneously?
  • coldpower27 - Sunday, September 26, 2004 - link

    I am glad ATI is shipping this suite, letting users choose whether or not to use the optimizations and giving them the ability to disable them as well. Good for ATI; now they are on par with NVIDIA in this respect.
