The question of image quality is much more complicated than determining which video card renders a scene the fastest. Years ago, we could say that the image produced by two different computer systems should be exactly the same, because developers controlled every aspect of how their program ran in software rather than leaving some decisions to the hardware on which it was running. With the advent of hardware acceleration, developers could get impressive speed gains from their software. As a side effect, the implementation of very basic functionality came to be defined completely by the designers of the hardware (e.g. ATI and NVIDIA). For example, a developer no longer needs to worry about the mathematics and computer science behind mapping a perspective-correct texture onto a surface; now, all one needs to do is turn on the desired hardware texturing features and assign textures to surfaces. In addition to saving the developer from having to code these kinds of algorithms, this took away some control and made it possible for different hardware to produce different output (there is more than one correct way to implement every feature).
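
As a rough illustration of how little of this work is left to the developer, here is a minimal sketch in fixed-function OpenGL 1.x of "turning on texturing and assigning a texture to a surface." It assumes a rendering context already exists (platform headers and context setup are omitted) and that `pixels` points to a 256x256 RGBA image; the perspective-correct interpolation itself is handled entirely by the hardware.

```cpp
// Minimal fixed-function OpenGL 1.x sketch: the application only enables
// texturing and assigns an image; perspective-correct texture mapping is
// performed by the GPU. Assumes a GL context already exists (GLUT/SDL/etc.)
// and that `pixels` points to a 256x256 RGBA image.
#include <GL/gl.h>

void drawTexturedQuad(const unsigned char* pixels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glEnable(GL_TEXTURE_2D);          // "turn on" hardware texturing

    glBegin(GL_QUADS);                // assign the texture to a surface
    glTexCoord2f(0, 0); glVertex3f(-1, -1, -5);
    glTexCoord2f(1, 0); glVertex3f( 1, -1, -5);
    glTexCoord2f(1, 1); glVertex3f( 1,  1, -8);   // the far edge recedes in depth,
    glTexCoord2f(0, 1); glVertex3f(-1,  1, -8);   // so the hardware must correct for perspective
    glEnd();
}
```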

Obviously, there are many more pros to hardware acceleration than cons. The speed gains in real-time 3D rendering alone outweigh any problems caused. Since the developer doesn't need to worry about writing code worthy of a Ph.D. in mathematics (that is left to the GPU designers), games can be developed faster, or more time can be spent on content. The only real con is the loss of control over how everything is done.

Different types of hardware do things differently. There is more room for choice in how things are done in 3D hardware than in something like an x86 processor. For one thing, IHVs (independent hardware vendors) have to support APIs (DirectX and OpenGL) rather than an instruction set architecture. There is much more ambiguity in asking a GPU to apply a perspective-correct, lit, mipmapped texture to a surface with anisotropic filtering than in asking a CPU to multiply two numbers. Of course, we see this as a very good thing: the IHVs will be in constant competition to provide the best image quality at the fastest speed for the lowest price.
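
To make the contrast concrete, the sketch below shows roughly everything an application says in fixed-function OpenGL when it asks for trilinear mipmapping with 8x anisotropic filtering (using the GL_EXT_texture_filter_anisotropic extension, and assuming a texture is already bound). Exactly which texels get sampled, and how, is left to each IHV's hardware, which is why two GPUs can legitimately produce slightly different pixels from the same calls.

```cpp
// The entire "request" an application makes for high-quality filtering. The
// API does not dictate the exact sample pattern, so ATI and NVIDIA hardware
// can produce slightly different (yet equally valid) results from these calls.
// Assumes GL_EXT_texture_filter_anisotropic is supported and a mipmapped
// texture is currently bound to GL_TEXTURE_2D.
#include <GL/gl.h>
#include <GL/glext.h>   // for GL_TEXTURE_MAX_ANISOTROPY_EXT

void requestHighQualityFiltering()
{
    // Trilinear filtering: blend between mipmap levels when minifying.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Allow up to 8 samples along the axis of anisotropy; the hardware
    // decides how many to actually take and where.
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 8.0f);
}
```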

Unfortunately, defining image quality is a more difficult task than it seems. Neither ATI nor NVIDIA produces images that match the DX9 reference rasterizer (Microsoft's tool for estimating what image a program should produce). There is, in fact, no “correct” image for any given frame of a game. This makes it very hard to draw a line in the sand and say that one GPU does something the right way and the other does not.

There is the added problem that in-game screenshots aren't really the best starting point for a quantitative comparison. Only a handful of tests allow us to grab the exact same frame of a game for use in a direct comparison. We are always asking developers to include benchmarks in their games, and repeatable frame capture is a feature that we would love to see in every benchmark.

The other issue with screenshots is making sure that the image we grab from the framebuffer (the part of the GPU's memory that holds the rendered screen) is the same as the image we see on the screen. For instance, NVIDIA defers some filtering and post-processing (work done on the 2D image produced from the 3D scene) until the data is sent out from the framebuffer to the display device. This means that the data in the framebuffer is never what we see on our monitors. In order to let people take accurate screenshots of their games, NVIDIA applies the same post-processing effects to the framebuffer data when a screenshot is taken. While screenshot post-processing is necessary at the moment, this method introduces another undesirable variable into the equation. For this reason, we are working very hard on finding alternate means of comparing image quality (such as capturing images from the DVI port).
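
For reference, a typical screenshot grab is nothing more than a framebuffer readback along the lines of the hedged OpenGL sketch below (assuming a current rendering context of the given size); any filtering a driver applies later, on the way out to the display, never shows up in the data returned here.

```cpp
// Minimal sketch of a screenshot grab: read the framebuffer back into system
// memory. Any post-processing a driver applies after this point (on scan-out
// to the display) is invisible to this readback, which is exactly the problem
// described above. Assumes a current GL context of the given size.
#include <GL/gl.h>
#include <vector>

std::vector<unsigned char> grabFramebuffer(int width, int height)
{
    std::vector<unsigned char> image(static_cast<size_t>(width) * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, image.data());
    return image;                          // bottom-up RGB pixel data
}
```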

When rendering scenes, it is very important to minimize the amount of useless work a GPU does. This has led to a great number of hardware optimizations that attempt to do less work whenever possible. Implementing such optimizations is absolutely necessary for games to run smoothly. The problem is that some optimizations make a slight difference in how a scene is rendered (such as approximating functions like sine and inverse square root using numerical methods rather than calculating the exact answer). The perceptibility (or lack thereof) of an optimization should be an important factor in deciding which optimizations are used and which are not. Much leeway is allowed in how things are done. In order to understand what's going on, we will attempt to explain some of the basics of real-time 3D rendering.
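
GPUs obviously do not run C code, but the flavor of these approximations is easy to show on the CPU. The sketch below compares the exact library 1/sqrt(x) against the classic bit-trick approximation refined with one Newton-Raphson step; the result is off by only a fraction of a percent, the sort of error that can be invisible on screen while being much cheaper to compute.

```cpp
// Illustration of trading exactness for speed: the classic bit-trick
// approximation of 1/sqrt(x), refined with one Newton-Raphson step, versus
// the exact library result. The small error is the kind of difference an
// optimized math unit can introduce without being visually perceptible.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

float fastInvSqrt(float x)
{
    float half = 0.5f * x;
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof(bits));  // reinterpret the float's bits
    bits = 0x5f3759df - (bits >> 1);       // cheap initial guess
    float y;
    std::memcpy(&y, &bits, sizeof(y));
    return y * (1.5f - half * y * y);      // one refinement step
}

int main()
{
    const float inputs[] = { 2.0f, 10.0f, 123.456f };
    for (int i = 0; i < 3; ++i) {
        float x = inputs[i];
        std::printf("x = %8.3f   exact = %.6f   approx = %.6f\n",
                    x, 1.0f / std::sqrt(x), fastInvSqrt(x));
    }
    return 0;
}
```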

Comments

  • nourdmrolNMT1 - Thursday, December 11, 2003 - link

    i hate flame wars but, blackshrike....

    there is hardly any difference between the images, and nvidia used to be way behind in that area. so they have caught up, and are actually in some instances doing more work to get the image to look a little different, or maybe they render everything that should be there while ati doesn't (Halo 2).

    MIKE
  • Icewind - Thursday, December 11, 2003 - link

    I have no idea what you're bitching about, Blackshrike, the UT2k3 pics look exactly the same to me.

    Perhaps you should go work for AT and run benchmarks how you want them done instead of whining like a damn 5-year-old... sheesh.
  • Shinei - Thursday, December 11, 2003 - link

    Maybe I'm going blind at only 17, but I couldn't tell a difference between nVidia's and ATI's AF, and I even had a hard time seeing the effects of the AA. I agree with AT's conclusion, it's very hard to tell which one is better, and it's especially hard to tell the difference when you're in the middle of a firefight; yes, it's nice to have 16xAF and 6xAA, but is it NECESSARY if it looks pretty at 3 frames per second? I'm thinking "No"; performance > quality, that's why quality is called a LUXURY and not a requirement.
    Now I imagine that since I didn't hop up and down and screech "omg nvidia is cheeting ATI owns nvidia" like a howler monkey on LSD, I'll be called an nVidia fanboy and/or told that A) I'm blind, B) my monitor sucks, C) I'm color blind, and D) my head is up my biased ass. Did I meet all the basic insults from ATI fanboys, or are there some creative souls out there who can top that comprehensive list? ;)
  • nastyemu25 - Thursday, December 11, 2003 - link

    cheer up emo kid
  • BlackShrike - Thursday, December 11, 2003 - link

    In my first line I forgot to say "blurry textures on the NVIDIA card". Sorry, I was frustrated with the article.
  • BlackShrike - Thursday, December 11, 2003 - link

    Argh, this article ended suddenly and without concluding anything. Not to mention, I saw definite blurry textures in UT 2003 and TRAOD. And the use of the D3D AF Tester seemed to imply a major problem with one card or the other, but they didn't use it at different levels of AF. I mean, I only use 4-8x AF; I'd like to see the difference.

    AND ANOTHER THING. OKAY, THE TWO ARE ABOUT THE SAME IN AF, BUT IN AA ATI USUALLY WINS. SO, IF YOU CAN'T CONCLUDE ANYTHING, GIVE US A PERFORMANCE CONCLUSION, like which runs better with AA or AF? Which creates the best image with both settings enabled?

    Oh, and AT: remember back to the very first ATI 9700 Pro, when you did tests with 6x AA and 16x AF? DO IT AGAIN. I want to see which is faster and better quality when their settings are absolutely maxed out, because I prefer playing at 1024*768 with 6x AA and 16x AF rather than 1600*1200 at 4x AA and 8x AF, since I have a small monitor.

    I am VERY disappointed in this article. You say Nvidia has cleaned up their act, but you don't prove anything conclusive as to why. You say they are similar but don't say why. The D3D AF Tester was totally different for the different levels. WHAT DOES THIS MEAN? Come on Anand clean up this article, it's very poorly designed and concluded and is not at all like your other GPU articles.
  • retrospooty - Thursday, December 11, 2003 - link

    Well a Sony G500 is pretty good in my book =)

  • Hanners - Thursday, December 11, 2003 - link

    Not a bad article per se - shame about the mistakes in the filtering section and the massive jump to conclusions regarding Halo.
  • gordon151 - Thursday, December 11, 2003 - link

    I'm with #6 and the sucky monitor theory :P.
  • Icewind - Thursday, December 11, 2003 - link

    Or your monitor sucks #5

    ATI wins either way
