The question of image quality is much more complicated than determining which video card renders a scene the fastest. Years ago, we could say that the image produced by two different computer systems should be exactly the same, because developers controlled every aspect of how their program ran in software rather than leaving some decisions to the hardware on which the program was running. With the advent of hardware acceleration, developers could get impressive speed gains from their software. As a side effect, the implementation of very basic functionality came to be defined entirely by the designers of the hardware (e.g. ATI and NVIDIA). For example, a developer no longer needs to worry about the mathematics and computer science behind mapping a perspective-correct texture onto a surface; now, all one needs to do is turn on the desired hardware texturing features and assign textures to surfaces. In addition to saving the developer from having to code these kinds of algorithms, this took away some control and allowed different hardware to produce different output (there is more than one correct way to implement every feature).

Obviously, hardware acceleration has many more pros than cons. The speed gains it makes possible in real-time 3D rendering alone excuse any problems it causes. Since the developer doesn't need to worry about writing code worthy of a Ph.D. in mathematics (that is left to the GPU designers), games can be developed faster, or more time can be spent on content. The only real con is the loss of control over how everything is done.

Different types of hardware do things differently, and there is more room for choice in how things are done in 3D hardware than in something like an x86 processor. For one thing, independent hardware vendors (IHVs) have to support APIs (DirectX and OpenGL) rather than an instruction set architecture. There is much more ambiguity in asking a GPU to apply a perspective-correct, lit, mipmapped texture to a surface with anisotropic filtering than in asking a CPU to multiply two numbers. Of course, we see this as a very good thing: the IHVs will be in constant competition to provide the best image quality at the fastest speed and the lowest price.
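To illustrate how high-level these requests are, here is a minimal OpenGL sketch (one of the two APIs mentioned above) in which the application simply asks for trilinear mipmapping and anisotropic filtering on a texture; exactly how those samples are taken is left to the IHV's driver and silicon. The 8x anisotropy level and the helper name here are our own choices for the example.

```c
/* Minimal OpenGL sketch: the application only requests filtering features;
 * how the GPU actually implements them is up to the hardware vendor.
 * Assumes a valid OpenGL context and the GL_EXT_texture_filter_anisotropic
 * extension (the constant below comes from that extension). */
#include <GL/gl.h>

#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#endif

void setup_texture_filtering(GLuint texture)
{
    glBindTexture(GL_TEXTURE_2D, texture);

    /* Ask for trilinear filtering (linear blending between mipmap levels). */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Ask for up to 8x anisotropic filtering.  The precise sample pattern,
     * and any shortcuts taken, are decided entirely by the driver and GPU. */
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 8.0f);
}
```

A few API calls are the developer's entire involvement; everything downstream of them is vendor-defined, which is precisely where the ambiguity comes from.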

Unfortunately, defining image quality is a more difficult task than it seems. Neither ATI nor NVIDIA produces images that match the DX9 reference rasterizer (Microsoft's tool for estimating what image a program should produce). There is, in fact, no “correct” image for any given frame of a game. This makes it very hard to draw a line in the sand and say that one GPU does something the right way and the other does not.

There is the added problem that taking screenshots in a game isn't really the best place to start when looking for a quantitative comparison. Only a handful of tests allow us to grab exactly the same frame of a game for a direct comparison. We are always asking developers to include benchmarks in their games, and the ability to capture an identical frame on demand is a feature that we would love to see in every benchmark.
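To give an idea of what a quantitative comparison could look like once identical frames are available, here is a hypothetical sketch that reduces two same-sized screenshots (raw 8-bit RGB buffers) to a single average per-channel difference. It is only meant to show the kind of number one could compute, not a methodology we are endorsing.

```c
#include <stdlib.h>

/* Hypothetical helper: mean absolute per-channel difference between two
 * screenshots of the same frame, stored as raw 8-bit RGB buffers of
 * identical dimensions.  A result of 0.0 means the images are identical. */
double mean_abs_difference(const unsigned char *a, const unsigned char *b,
                           int width, int height)
{
    long long total = 0;
    long long samples = (long long)width * height * 3; /* 3 channels per pixel */

    for (long long i = 0; i < samples; i++)
        total += labs((long)a[i] - (long)b[i]);

    return (double)total / (double)samples;
}
```

Even a simple measure like this is only meaningful if both cards rendered exactly the same frame at the same resolution, which is why repeatable, frame-exact benchmarks matter so much.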

The other issue with screenshots is being sure that the image we grab from the framebuffer (the part of the GPU's memory that holds the rendered screen image) is the same as the image we see on the screen. For instance, NVIDIA defers some filtering and post-processing (work done on the 2D image produced from the 3D scene) until the data is sent out from the framebuffer to the display device. This means that the data in the framebuffer is never exactly what we see on our monitors. To let people take accurate screenshots of their games, NVIDIA applies the same post-processing effects to the framebuffer data when a screenshot is taken. While this screenshot post-processing is necessary at the moment, it introduces another undesirable variable into the equation. To this end, we are working very hard on finding alternate means of comparing image quality (such as capturing images from the DVI port).
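For context, a screenshot is normally just a copy of the framebuffer's contents. Below is a minimal OpenGL sketch of such a readback (assuming a current rendering context and a window of the given size); anything a card applies after this point, on the way out to the display, would not show up in this buffer unless the driver re-applies it, which is exactly the complication described above.

```c
#include <stdlib.h>
#include <GL/gl.h>

/* Minimal sketch of a framebuffer grab: this copies what is stored in the
 * front buffer, not necessarily what the display hardware puts on screen
 * after any scan-out processing.  Assumes a current OpenGL context.
 * The caller is responsible for freeing the returned buffer. */
unsigned char *grab_framebuffer(int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * height * 3);
    if (!pixels)
        return NULL;

    glReadBuffer(GL_FRONT);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    return pixels;   /* bottom-up RGB data */
}
```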

When rendering a scene, it is very important to minimize the amount of useless work a GPU does. This has led to a great number of hardware optimizations that attempt to do less work whenever possible. Implementing such optimizations is absolutely necessary for games to run smoothly. The problem is that some optimizations make a slight difference in how a scene is rendered (such as approximating functions like sine and inverse square root with numerical methods rather than calculating the exact answer). The perceptibility (or lack thereof) of an optimization should be an important factor in deciding which optimizations are used and which are not. Much leeway is allowed in how things are done. In order to understand what's going on, we will attempt to explain some of the basics of real-time 3D rendering.
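As a concrete software illustration of the approximation trade-off mentioned above (and only an illustration; it says nothing about how any particular GPU implements the operation), here is the well-known fast inverse square root trick: a close, but not bit-exact, answer produced with far fewer operations than computing 1/sqrt(x) exactly.

```c
#include <string.h>

/* Classic fast inverse-square-root approximation (popularized by Quake III
 * Arena): roughly 1/sqrt(x), accurate to within a fraction of a percent,
 * using an integer bit trick plus one Newton-Raphson refinement step.
 * This illustrates trading exactness for speed; it is not a claim about
 * how any specific GPU performs the operation. */
float fast_rsqrt(float x)
{
    float half = 0.5f * x;
    unsigned int i;

    memcpy(&i, &x, sizeof i);          /* reinterpret the float's bits */
    i = 0x5f3759df - (i >> 1);         /* initial guess via magic constant */
    memcpy(&x, &i, sizeof x);

    x = x * (1.5f - half * x * x);     /* one Newton-Raphson iteration */
    return x;
}
```

The result is visually indistinguishable in most lighting calculations, which is exactly why perceptibility, rather than bit-exactness, is the sensible yardstick for such optimizations.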


  • retrospooty - Thursday, December 11, 2003 - link

    I have been visiting Anandtech for well over 4 years, and I have often exclaimed how thorough, fair, and unbiased this site is to others...

    This is the first article I have ever read here that I think is complete poop. I cannot believe that in any fair IQ test Nvidia came anywhere close to ATI. Either the author is not being honest, or is color blind. Anyone with eyeballs can compare the two and see that ATI is much sharper and vibrant, especially with AA... Nvidia is WAY blurry.

    I am very VERY disappointed in this. :(
  • TheGoldenMenkey - Thursday, December 11, 2003 - link

    Excellent article. I would much rather be taught why things are different than be shown some differences in rendering and then have someone declare which one is cheating. Thanks for teaching us enough to let us come to our own conclusions. Keep up the good work, AT.
  • tazdevl - Thursday, December 11, 2003 - link

    Better look @ that... then we might have something to discuss

    http://www.anandtech.com/video/showdoc.html?i=1931...
  • dvinnen - Thursday, December 11, 2003 - link

    Article seemed fair and unbiased to me. Your AA and AF question is obvious. Look at the URL of the png file. It clearly states what is on.

    It seems they have cleaned up their DX9 performance, but they still treat synthetic benchmarks badly. Most recently, the 3DMark03 patch a month ago and how they handled the media (PR on one side of the pond said one thing; on the other side, they said another).
  • tazdevl - Thursday, December 11, 2003 - link

    So Derek, do you own stock in nVIDIA? Did Brian Burke write this for you?

    Were AA and Aniso used in all tests or a few? Which ones? What modes are we comparing against which benchmarks?

    Ever thought that BOTH nVIDIA and ATI could fix the outstanding issues, instead of just nVIDIA?

    I swear, ever since Anand got caught up in the whole NV30 fiasco, the site's credibility is worth absolutely squat when it comes to nVIDIA.

    I'm not saying ATI is without faults, but let's try to appear unbiased at a minimum in the article.
