Texture Mapping and Filtering

In the beginning was the wireframe, and it was good. But texture mapping completely changed the face of computer-generated 3D graphics. Everything from coloring, to lighting and shadows, to bump and displacement mapping can be done with texture maps. For this discussion, we will be talking about mipmapping and about bilinear, trilinear, and anisotropic filtering.

To set the stage, our 3D scene has an object and a texture map (an image to be applied to the object). When we stare “through the looking glass,” we see that each screen pixel maps to a particular area of our object. Likewise, each area on the object maps to an area in the texture map. Unfortunately, a texture map has a fixed number of data points (pixels in the image), while the surface of the object is continuous. This means that it is possible for an area on the object to map to a position that lies between pixels in the texture image. In order to fill in the gaps, developers need to choose a method to interpolate existing data. The first method developers used to solve this problem was simply to make the color of an area equal to that of the nearest pixel in the texture map.
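
To make that first approach concrete, here is a minimal sketch of nearest-neighbor ("point") sampling in Python. The texture layout and the function name `sample_nearest` are our own assumptions for illustration, not any particular API.

```python
def sample_nearest(texture, u, v):
    """Point sampling: return the single texel nearest to (u, v).

    `texture` is assumed to be a list of rows of texel values (e.g. RGB
    tuples), and (u, v) are normalized coordinates in the range [0, 1].
    """
    height, width = len(texture), len(texture[0])
    # Scale the continuous coordinate into texel space, round to the
    # closest texel, and clamp so we never step off the edge of the map.
    x = min(width - 1, max(0, int(round(u * (width - 1)))))
    y = min(height - 1, max(0, int(round(v * (height - 1)))))
    return texture[y][x]
```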

The image-quality artifact that results from using the nearest texel (a pixel in a texture map) is commonly referred to as pixelization: large blocks of color become visible when objects with low-resolution textures fill the screen. Anyone who's played early first-person shooters (like Doom, Duke3D, etc.) will know this from how the screen looks when pressed up against a wall.

To solve this, rather than using the nearest texel in the texture map, we can do linear interpolation. We take the four surrounding texels and linearly interpolate between the two pairs of texels in one direction. We then do another linear interpolation between the two resulting values to get something closer to what the color should be. This is called bilinear filtering.

[Figure: The light blue color is interpolated linearly from the other two.]
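
A rough sketch of the idea in Python, assuming a single-channel texture stored as a list of rows of floats (the helper names are ours):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by fraction t in [0, 1]."""
    return a + (b - a) * t


def sample_bilinear(texture, u, v):
    """Bilinear filtering: blend the four texels surrounding (u, v).

    Two lerps run across the top and bottom pairs of texels, then one
    more lerp blends those results in the other direction.
    """
    height, width = len(texture), len(texture[0])
    # Clamp to [0, 1] and convert to texel-space coordinates.
    x = min(1.0, max(0.0, u)) * (width - 1)
    y = min(1.0, max(0.0, v)) * (height - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)
    fx, fy = x - x0, y - y0          # fractional position inside the cell

    top = lerp(texture[y0][x0], texture[y0][x1], fx)
    bottom = lerp(texture[y1][x0], texture[y1][x1], fx)
    return lerp(top, bottom, fy)
```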

Another problem with what we have so far is that when high-resolution textures are used on a distant surface, one screen pixel can map to one area of the texture map while neighboring screen pixels map to entirely different areas of the texture. Essentially, the footprint of a screen pixel is much larger than a texel. The visual result of this is a shimmering or sparkling in distant textures.

We can fix this by storing multiple versions of each texture map at progressively lower resolutions. These multiple-resolution texture maps are mipmaps, and using them is mipmapping. A Level of Detail (LOD) calculation, based on the distance from the viewer to the area of the object being textured, is used to choose the mipmap level whose texels are closest in size to (but not smaller than) screen pixels.
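
As a hedged illustration of the idea (real hardware derives the ratio from how texture coordinates change across the screen, and the exact rounding policy varies), a mip level can be chosen roughly like this:

```python
import math

def select_mip_level(texels_per_pixel, mip_count):
    """Choose a mipmap level from an estimate of how many base-level
    texels span one screen pixel (a linear ratio, not an area ratio).

    Each successive mip level halves the resolution, so the level is
    roughly log2 of that ratio; rounding up keeps texels no smaller
    than the pixel, as described above.
    """
    lod = math.log2(max(texels_per_pixel, 1.0))
    return min(mip_count - 1, max(0, math.ceil(lod)))
```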

One of the things that is completely different between ATI and NVIDIA is the way LOD is calculated for mipmapped textures. NVIDIA uses the Euclidean distance, sqrt(x^2 + y^2 + z^2), in its calculations, while ATI uses a weighted Manhattan distance, 0.28*x + 0.53*y + 0.19*z. This causes the way textures look to vary a great deal between the two GPUs on anything but very near or very far surfaces.
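
In code, the two formulas quoted above look like this (the function names are ours; the weights are the ones reported here):

```python
import math

def euclidean_distance(x, y, z):
    """True straight-line distance: sqrt(x^2 + y^2 + z^2)."""
    return math.sqrt(x * x + y * y + z * z)


def weighted_manhattan_distance(x, y, z):
    """Weighted sum of the absolute components (0.28, 0.53, 0.19),
    which avoids the square root and is cheaper to compute in hardware."""
    return 0.28 * abs(x) + 0.53 * abs(y) + 0.19 * abs(z)
```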

Of course, we still have problems that we need to solve. When the LOD calculation dictates a change in the mipmap being used on a surface, we can see a discontinuity in the texturing of that surface. To combat this, trilinear filtering was devised. First, we do bilinear filtering on the two nearest mipmap levels, and then we interpolate between the two resulting values.

[Figure: This diagram shows one way to do trilinear filtering.]

The result is a smooth transition between mipmap levels. Of course, there are plenty of different algorithms for doing all this interpolation, and there is not a GPU on the market that does full trilinear filtering all the time. This operation is very expensive to do for every pixel on the screen. There are plenty of optimizations to make trilinear faster, such as only doing bilinear filtering where banding isn't an issue. This is acceptable as long as there is no perceptible loss in visual quality.
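
A minimal sketch of a full trilinear sample, reusing the `sample_bilinear` and `lerp` helpers from the bilinear sketch above (again, an illustration rather than how any particular GPU implements it):

```python
def sample_trilinear(mip_chain, u, v, lod):
    """Trilinear filtering: bilinearly sample the two mip levels that
    bracket the fractional LOD, then blend the two results.

    `mip_chain` is assumed to be a list of single-channel textures,
    finest first; `lod` is the fractional mipmap level to sample at.
    """
    lod = max(0.0, lod)
    level0 = min(int(lod), len(mip_chain) - 1)
    level1 = min(level0 + 1, len(mip_chain) - 1)
    frac = lod - int(lod)            # how far we sit between the levels

    fine = sample_bilinear(mip_chain[level0], u, v)
    coarse = sample_bilinear(mip_chain[level1], u, v)
    return lerp(fine, coarse, frac)
```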

But the madness doesn't stop there. As it happens, trilinear filtering is isotropic, which means it produces blurry images when a surface is not viewed straight on. The reason this happens is that the texels in each mipmap level are laid out on a square grid, and an isotropic filter always samples a roughly square region of them. Anisotropic filtering allows us to change the shape of the region we sample and interpolate within each mipmap level based on the viewing angle. (Think of this as needing to be done for the same reason that tilting a square object makes the far edge look narrower than the near edge.) The way this is done is also up to the implementation, adding yet another level of complexity in determining the color of one pixel due to one texture on one surface.
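
One simple way to picture it, as a sketch only: take several trilinear samples spread along the direction in which the texture is most compressed on screen and average them. Here the per-sample step and the LOD are passed in directly; real hardware derives both from how the texture coordinates change across the pixel.

```python
def sample_anisotropic(mip_chain, u, v, du, dv, lod, max_samples=8):
    """A rough sketch of anisotropic filtering: instead of one filtered
    lookup, take `max_samples` trilinear samples spread along the axis
    of anisotropy, given here as a per-sample step (du, dv), and average
    them.  Reuses `sample_trilinear` from the sketch above.
    """
    total = 0.0
    for i in range(max_samples):
        t = (i + 0.5) / max_samples - 0.5   # offsets centered on (u, v)
        total += sample_trilinear(mip_chain, u + t * du, v + t * dv, lod)
    return total / max_samples
```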

Comments

  • DerekWilson - Friday, December 12, 2003 - link

    #23

    The FX cards support full DX9. In fact, they support more than the minimum to be DX9 cards and support some features that people speculate will be in DX10 (fp32, longer shader programs, etc...).

    Unfortunately, this doesn't make the FX cards worth any more.

    Part of the reason ATI is able to lead so well in performance is that they don't support many of these features. And when games come out that can take advantage of them, it isn't likely that FX cards will run those features at very high framerates.

    The only thing FX feature support does for NVIDIA is give them one more generation of experience in supporting those features. Of course, it remains to be seen what they will do with that.

    ATI has proven that it can do a good job of hitting the nail on the head with actually leading the release of DX9 with full DX9 support. If they can do the same for DX10, then they will be doing very well.

    When DX10 emerges, things will get very interesting...
  • Pumpkinierre - Friday, December 12, 2003 - link

    So does this mean NVidia FX series cards are now true DX9 cards, or DX9 compatible (whatever that means), or partly DX9 but better at DX8.1 and hopefully DX10?
  • MOwings - Friday, December 12, 2003 - link

    This article was excellent. The explanations of all the technologies used in these cards were very clear. I thought the NVidia screenshots were brighter (the flashlight pic in Halo, the first pic in UT on the front right of the scene). It seems NVidia is being more accurate in their methodologies. To me, correct lighting is more important than the antialiasing, so I would tend to prefer the NVidia. However, I doubt I would really notice any difference at full game speed, and so maybe it is better to get the card that is fastest (ATI), although both seem to be plenty fast enough with their current drivers. Tough call. If there are stability issues with ATI drivers, that might swing me, although this is the first I have heard of problems with ATI's latest cards and drivers.
  • virtualgames0 - Thursday, December 11, 2003 - link

    #17...
    I'm in the exact same shoes as you are. I went from an ATI 9700pro to an nvidia geforcefx 5900 because of game incompatibility issues. Half my OpenGL games would crash. Tried every driver... only to find that other games crash while my old game is fixed. Switched to nvidia, not one problem.
    However, if you are lucky and do not have problems, I would agree the 9700pro had far superior AA quality, and takes less performance hit doing AA, but nvidia is good for me since 2xAA is all i need when I use 1600x1200 resolution
  • DerekWilson - Thursday, December 11, 2003 - link

    In response to the article, Scali over at Beyond3d put up a thread about the alpha blending issues we observed on the ATI cards:

    http://www.beyond3d.com/forum/viewtopic.php?t=9421

    He's written a program to test the accuracy of your card's alpha blending, which is kinda cool (and also confirms that ATI is a little off in the calculation).

    The theory Scali has is that the problem is due to ATI substituting in a couple of shifts to save a division.

    We are definitely going to continue looking into the issue, and thanks, everyone, for your feedback.
  • Shinei - Thursday, December 11, 2003 - link

    Exactly, Araczynski. I mean, I love my UT2k3 with 4xAA/8xAF (64-tap), I really do, but I don't NEED it to enjoy the game. As long as it plays fast and looks better than the games before it, it doesn't matter if I have to dial back my AA/AF; hell, Halo won't even run above 10fps if I don't knock my filtering back to bilinear, let alone whining about AF quality at 8x!! Sometimes people just need to realize that IQ is a secondary concern to getting the game to run in the first place; suffer in vain with a Ti4200 in Halo at 1024x768 and then tell me that the "lower" IQ of an FX makes it less worthwhile than a Ti or an ATI.
  • araczynski - Thursday, December 11, 2003 - link

    aside from the rest of the points, as far as i'm personally concerned, AA is a waste of time anyway. The only time i bother to use it is when i'm forced to run something in 800x600 or less (assuming it even supports D3D).

    Other than that I will always choose a higher resolution over an AA'd lower one. Personally I prefer the sharpness/clarity of higher resolutions to the look of AA, but that's my opinion and my taste. I know others think the opposite, which is fine by me.

    Point is, I wish reviewers would stop focusing on AA quality/speed as if EVERYBODY gives a rat's hiny about it.
  • Ineptitude - Thursday, December 11, 2003 - link

    Having owned a few ATI and Nvidia cards I can say that I agree with the article. Finding a difference in image quality between the current top models is mostly subjective. It is amazing to see the amount of bickering over the subject.

    What I don't understand is the failure to mention all of the driver problems people have experienced with ATI cards. At this point I will probably never buy an ATI product again due to the poor drivers. Nvidia drivers don't crash my machine.

    I've got a 9800 going cheap if anybody wants it.
  • tyski - Thursday, December 11, 2003 - link

    Everybody here who says the article 'concluded suddenly and without concluding anything' could not have read the article. The whole point of the article is that the author provided enough information for the reader to come to his/her own conclusions.

    The only conclusion that Derek made was that nVidia does more work to get approximately the same thing done. If you understand anything about real time hardware, this is not a good thing.

    Having read every article Derek has written so far, I think this is probably the best one. Unbiased throughout. And if you want a screenshot for every possible combination of AA, AF, and resolutions, then go buy the cards and see how long it really takes to perform this many benchmarks. There is such a thing as article deadlines.

    Tyrel
