Texture Mapping and Filtering

In the beginning was the wireframe, and it was good. But texture mapping completely changed the face of computer-generated 3D graphics. Everything from coloring, to lighting and shadows, to bump and displacement mapping can be done with texture mapping. For this discussion, we will be looking at mipmapping and at bilinear, trilinear, and anisotropic filtering.

To set the stage, our 3D scene has an object and a texture map (an image to be applied to the object). When we stare “through the looking glass,” we see that each screen pixel maps to a particular area of our object. Likewise, each area on the object maps to an area in the texture map. Unfortunately, a texture map has a fixed number of data points (pixels in the image), while the surface of the object is continuous. This means that it is possible for an area on the object to map to a position that lies between pixels in the texture image. In order to fill in the gaps, developers need to choose a method to interpolate existing data. The first method developers used to solve this problem was simply to make the color of an area equal to that of the nearest pixel in the texture map.
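To make the idea concrete, here is a minimal sketch in C of a nearest-texel lookup. The Texture and Texel structures and the sample_nearest name are hypothetical, and clamping at the edges is just one possible addressing convention, not how any particular GPU does it.

```c
#include <math.h>

/* Hypothetical texture representation: a row-major array of RGB texels. */
typedef struct { unsigned char r, g, b; } Texel;

typedef struct {
    int width, height;
    const Texel *texels;      /* width * height entries, row-major */
} Texture;

/* Nearest-texel lookup: u and v are texture coordinates in [0, 1]. */
Texel sample_nearest(const Texture *tex, float u, float v)
{
    /* Pick the texel whose cell contains the sample point. */
    int x = (int)floorf(u * (float)tex->width);
    int y = (int)floorf(v * (float)tex->height);

    /* Clamp to the image so u = 1.0 or v = 1.0 stays in range. */
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x > tex->width  - 1) x = tex->width  - 1;
    if (y > tex->height - 1) y = tex->height - 1;

    return tex->texels[y * tex->width + x];
}
```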

The artifact that results from using the nearest texel (a texel is a pixel in a texture map) is commonly referred to as pixelization: large blocks of color become visible when objects with low-resolution textures fill the screen. Anyone who has played early first-person shooters (like Doom or Duke Nukem 3D) will recognize it from how the screen looks when pressed up against a wall.

To solve this, rather than using only the nearest texel in the texture map, we can do linear interpolation. We take the four surrounding texels and interpolate between the two pairs of texels in one direction. With the two resulting values, we then do another linear interpolation in the other direction to get something closer to what the color should be. This is called bilinear filtering.

The light blue color is interpolated linearly from the other two.
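Here is a minimal sketch of the same idea in C, reusing the hypothetical Texture and Texel types from the nearest-texel example above. The half-texel offset and edge clamping are assumptions about the addressing convention; actual hardware varies.

```c
/* Float color used for the intermediate interpolation results. */
typedef struct { float r, g, b; } Color;

static Color texel_to_color(Texel t)
{
    Color c = { t.r / 255.0f, t.g / 255.0f, t.b / 255.0f };
    return c;
}

/* One linear interpolation: t = 0 gives a, t = 1 gives b. */
static Color lerp_color(Color a, Color b, float t)
{
    Color c = { a.r + (b.r - a.r) * t,
                a.g + (b.g - a.g) * t,
                a.b + (b.b - a.b) * t };
    return c;
}

static Texel fetch_clamped(const Texture *tex, int x, int y)
{
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x > tex->width  - 1) x = tex->width  - 1;
    if (y > tex->height - 1) y = tex->height - 1;
    return tex->texels[y * tex->width + x];
}

Color sample_bilinear(const Texture *tex, float u, float v)
{
    /* Sample position in texel space, shifted so texel centers land
     * on integer coordinates. */
    float x = u * (float)tex->width  - 0.5f;
    float y = v * (float)tex->height - 0.5f;
    int   x0 = (int)floorf(x);
    int   y0 = (int)floorf(y);
    float fx = x - (float)x0;        /* fraction toward the next column */
    float fy = y - (float)y0;        /* fraction toward the next row    */

    /* Fetch the four surrounding texels. */
    Color c00 = texel_to_color(fetch_clamped(tex, x0,     y0));
    Color c10 = texel_to_color(fetch_clamped(tex, x0 + 1, y0));
    Color c01 = texel_to_color(fetch_clamped(tex, x0,     y0 + 1));
    Color c11 = texel_to_color(fetch_clamped(tex, x0 + 1, y0 + 1));

    /* Interpolate the two horizontal pairs, then interpolate once more
     * between those results in the vertical direction. */
    Color top    = lerp_color(c00, c10, fx);
    Color bottom = lerp_color(c01, c11, fx);
    return lerp_color(top, bottom, fy);
}
```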

Another problem with what we have talked about so far is that when high-resolution textures are used on a surface that is far away, one screen pixel can map to one area in the texture map while the neighboring screen pixels map to entirely different areas of the texture. Essentially, the footprint of the screen pixel in the texture is much larger than a texel, so texels are skipped from one pixel to the next. The visual result of this is a shimmering or sparkling in distant textures as the camera or surface moves.

We can fix this by making multiple versions of the texture map at progressively lower resolutions. These multiple-resolution texture maps are mipmaps, and using them is mipmapping. A Level of Detail (LOD) calculation, based on the distance from the viewer to the area of the object that we are texturing, is used to choose a mipmap level with texels close to the size of (but not smaller than) screen pixels.

One of the things that is completely different between ATI and NVIDIA is the way LOD is calculated for mipmapped textures. NVIDIA uses the Euclidean distance (sqrt(x^2 + y^2 + z^2)) in their calculations, while ATI uses a weighted Manhattan distance calculation (0.28*x + 0.53*y + 0.19*z). This causes the way textures look to vary a great deal between the two GPUs when looking at anything but very near or very far surfaces.
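As a rough illustration only, the two distance metrics quoted above might look like the following in C, taking (x, y, z) as the view-space vector from the viewer to the point being textured. Real hardware derives LOD from texture-coordinate derivatives rather than a single distance, and the use of absolute values in the Manhattan version is an assumption on my part; the article text only gives the weights.

```c
#include <math.h>

/* Euclidean distance from the viewer to the point being textured. */
float lod_distance_euclidean(float x, float y, float z)
{
    return sqrtf(x * x + y * y + z * z);
}

/* Weighted Manhattan approximation using the weights quoted above.
 * Taking absolute values of the components is an assumption. */
float lod_distance_weighted_manhattan(float x, float y, float z)
{
    return 0.28f * fabsf(x) + 0.53f * fabsf(y) + 0.19f * fabsf(z);
}
```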

Of course, we still have problems that we need to solve. When the LOD calculation dictates a change in the mipmap level being used on a surface, we can see a visible discontinuity (banding) in the texturing of that surface. To combat this, trilinear filtering was devised. First, we do bilinear filtering on each of the two surrounding mipmap levels, then we interpolate between the two resulting values.

This diagram shows one way to do trilinear filtering.
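A minimal sketch of that idea in C, building on the hypothetical sample_bilinear and lerp_color routines above. The MipChain structure and the way a fractional LOD is split into a level and a blend factor are assumptions for illustration, not any vendor's actual implementation.

```c
/* Hypothetical mipmap chain: levels[0] is the full-resolution texture,
 * and each following level is half the size in each dimension. */
typedef struct {
    int     num_levels;
    Texture levels[16];
} MipChain;

/* 'lod' is the fractional level of detail from the calculation above:
 * its integer part selects the two neighboring mipmap levels and its
 * fractional part says how far we are between them. */
Color sample_trilinear(const MipChain *chain, float u, float v, float lod)
{
    if (lod <= 0.0f)
        return sample_bilinear(&chain->levels[0], u, v);
    if (lod >= (float)(chain->num_levels - 1))
        return sample_bilinear(&chain->levels[chain->num_levels - 1], u, v);

    int   level = (int)floorf(lod);
    float frac  = lod - (float)level;

    /* Bilinear filter in each of the two surrounding mipmap levels... */
    Color fine   = sample_bilinear(&chain->levels[level],     u, v);
    Color coarse = sample_bilinear(&chain->levels[level + 1], u, v);

    /* ...then one more linear interpolation between the two results. */
    return lerp_color(fine, coarse, frac);
}
```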

The result is a smooth transition between mipmap levels. Of course, there are plenty of different algorithms for doing all this interpolation, and there is not a GPU on the market that does full trilinear filtering all the time; the operation is simply too expensive to perform for every pixel on the screen. There are plenty of optimizations that make trilinear faster, such as falling back to bilinear filtering where mipmap banding isn't an issue. This is acceptable as long as there is no perceptible loss in visual quality.

But the madness doesn't stop there. As it happens, trilinear filtering is isotropic: the region of the texture it samples is effectively square, because the texels in each mipmap level are laid out in a square grid. When a surface is viewed at an angle, the footprint of a screen pixel in the texture is stretched in one direction, so trilinear filtering produces blurry images on surfaces not viewed straight on. Anisotropic filtering allows us to change the shape of the region we sample and interpolate in each mipmap level based on the viewing angle. (Think of this as needing to be done for the same reason that tilting a square object makes the far edge look narrower than the near edge.) The way this is done is also up to the implementation, adding yet another level of complexity in determining the color of one pixel due to one texture on one surface.
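One common conceptual approach (not any specific GPU's method) is to take several trilinear samples spaced along the long axis of the pixel's footprint in texture space and average them. The sketch below assumes (du, dv) is that axis expressed in texture coordinates and builds on the hypothetical routines above; real hardware differs in how it picks the axis, the sample count, and the per-sample LOD.

```c
/* Conceptual anisotropic filtering: average several trilinear samples
 * spread along the long axis of the pixel's footprint. num_samples
 * caps the degree of anisotropy (e.g. 8 or 16). */
Color sample_anisotropic(const MipChain *chain, float u, float v,
                         float du, float dv, float lod, int num_samples)
{
    int   n   = (num_samples < 1) ? 1 : num_samples;
    Color sum = { 0.0f, 0.0f, 0.0f };
    int   i;

    for (i = 0; i < n; i++) {
        /* Space the samples evenly from one end of the footprint's
         * long axis to the other. */
        float t = (n == 1) ? 0.0f : (float)i / (float)(n - 1) - 0.5f;
        Color c = sample_trilinear(chain, u + t * du, v + t * dv, lod);
        sum.r += c.r;
        sum.g += c.g;
        sum.b += c.b;
    }

    sum.r /= (float)n;
    sum.g /= (float)n;
    sum.b /= (float)n;
    return sum;
}
```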

Comments

  • Nate420 - Monday, December 22, 2003 - link

    I think the people bitching about the article being biased to one side or another are biased themselves. Let go of your hangups with ATI and Nvidia and read the article again. IMO the article was well written and about as unbiased as it could be. The fine folks at Anandtech are human after all.
  • titus - Saturday, December 20, 2003 - link

    Here's something no one pointed out:
    Check the Jedi Knight: Jedi Academy screenshots, and notice how NVIDIA applies the light sabre alpha-blending almost in *post* processing, so that even the head and body (which obstruct our view of the light sabre) glow as much as the areas where it isn't obstructed.

    ATI's alpha blending works in that only visible areas 'glow', however, where it does glow, it is as irregular as heck.

    Point is, they both have faults with alpha blending.
  • valnar - Wednesday, December 17, 2003 - link

    Derek #30,

    I bought a Leadtek GeForce 4 Ti4400 some time ago on the review that it was a speedy, stable and "cool" card for DirectX games. It was all those, but a piece of visual junk otherwise. DVDs looked horrible because of the lack of a gamma adjustment. I had to settle for brightness/contrast alone, which made it washed out. The overall contrast between 0 black and 255 white in any photo program was nowhere near my current Radeon 9600, or even an older Matrox G400. It makes it hard to do any kind of calibration for photo work, let alone enjoy the occasional DVD.

    My 2 cents...
  • DaveBaumann - Sunday, December 14, 2003 - link

    “WRT dx 10, it still seems to me that nvidia is closer to ps3 support than ati. I will definitely give the Meltdown presentations some more in depth study time though.”

    Derek – As I said, PS3.0 is already part of *DX9*, DX10 will be PS/VS4.0. Please read our DirectX Next article to gain an understanding of the directions DX10 is taking

    http://www.beyond3d.com/articles/directxnext/

    We need to fill in the gap between DX9 PS/VS2.0 and DX10 PS/VS4.0, and that will be the update to DX9.0 allowing hardware targets for PS/VS3.0. Regardless of whether NVIDIA are closer to PS3.0 or not, that doesn't mean that ATI will have issues supporting it – ATI hadn't supported true multisampling FSAA before R300 and yet leapfrogged NVIDIA in this respect, and NVIDIA hadn't supported PS1.4 before the FX series and yet they went to PS2.0 Extended – what they currently support doesn't necessarily have much bearing on what they are going to support.

    “I know (as do many who were present when David Kirk was speaking about the issue) that the FX hardware supports Multiple Render Targets.”

    FX natively supports MET’s, not MRT’s. The FX series would probably need the driver to support MRT’s by packing and unpacking instructions.

    I asked a developer about the difference between MRT’s and MET’s and this was his reply:

    “MRTs are a lot more flexible; effectively up to 4 render targets can be rendered to simultaneously, and the only real restrictions are same size and, for some chipsets (ATI but not, AFAIK, PowerVR), same format.
    METs are special textures that are wider than usual (usual being up to 4 channels, i.e. RGBA); METs aren't really anything special, they're just >4 channel textures (RGBAXYZW). It may be possible in METs to have different formats for each 'logical' texture, i.e. 8-bit per channel RGBA and float16 XY.
    MRTs are much better as you can mix and match standard render targets as you like. METs have to be special cases where MRTs can be used much more like any render target.

    Both are handy but MRT is the preferred route (AFAIK METs were added to DX9 to try and cope with the GFFX-style pack/unpack render targets). MRTs are much more the theoretical idea: you can choose to render to multiple places at the same time.”

    [Derek] “I'm not sure if flexible float buffers fall into the same category, but I will look into it.”

    NVIDIA has some odd addressing restrictions, outside of DX's standard requirements (for any revision), with their float texture support, which restricts its use for output buffers; a conditional flag needs to be added to DX9 to support these restrictions on NV's hardware. This flag is likely to be added to the update to DX9 that will also allow hardware PS/VS3.0 support next year.
  • DerekWilson - Sunday, December 14, 2003 - link

    Dave (27+29),

    Thanks for responding so thoroughly.

    WRT dx 10, it still seems to me that nvidia is closer to ps3 support than ati. I will definitely give the Meltdown presentations some more in depth study time though.

    I wasn't quite clear on a couple things with NV supporting DX9... Not all of the features are exposed with current drivers even though the hardware is capable of them. I know (as do many who were present when David Kirk was speaking about the issue) that the FX hardware supports Multiple Render Targets. We could not get any confirmation on a reason this feature is not yet enabled in software (and I will leave the speculation to the reader). I'm not sure if flexible float buffers fall into the same category, but I will look into it.

    Of course, my original assessment of these facts not mattering, except from the standpoint of the experience on NVIDIA hardware, still stands.

    I really appreciate your comments Dave, please keep them coming.

    And Valnar (#25):

    I am really looking forward to doing just what you are asking for in future reviews. I don't feel comfortable doing such analysis on any real current display technology, and am looking for ways to capture the output of cards over DVI. I can use Photoshop for as much analysis as necessary to compare images at that point (including spectrum/brightness and crispness or blurriness of text).

    Of course, this stuff won't test the card's RAMDAC, which is still very important. I'm hoping to also come up with some solid 2D benches in the coming year that should include such analysis.

    Please keep the comments and suggestions coming as I am very open to including the kinds of information you all want.

    Thanks,
    Derek Wilson
  • DaveBaumann - Saturday, December 13, 2003 - link

    "So what about the missing 24bit fpu in the FX series which have 16 and 32bit the latter being very slow. Doesnt this mean that the FX series is not even compliant?"

    No. The DirectX9 specification says that the *minimum* precision requirement for "full precision" pixel shaders is FP24 - this being the minimum requirement means that anything above it is also deemed full precision. FP16 is the minimum requirement for partial precision shaders.

    "And given the scarcity of DX9 games at present, inevitable that FXs will run into trouble in the future"

    It's not an inevitability, but it may make developers' lives a little more difficult in the interim since, as NVIDIA themselves point out, they will have to work a little harder to make them optimal for FX hardware. Whether or not that means that some development houses will put off implementing serious use of DX9 for the time being is a real question.

    However, I'd say that the DX9 talk doesn't tell the whole story, as ATI's shaders appear more effective in many cases even with DX8 shaders. For instance, look at these performance numbers with a test application:

    http://www.beyond3d.com/forum/viewtopic.php?p=1991...

    GF FX 5950, 52.16:
    PS 1.1 - Simple - 938.785889M pixels/sec
    PS 1.4 - Simple - 885.801453M pixels/sec

    9800 PRO
    PS 1.1 - Simple - 1489.901123M pixels/sec
    PS 1.4 - Simple - 1489.822754M pixels/sec
  • Pumpkinierre - Saturday, December 13, 2003 - link

    Thank you 27 for explaining compliant versus full DX9. So what about the missing 24bit fpu in the FX series which have 16 and 32bit the latter being very slow. Doesn't this mean that the FX series is not even compliant? And given the scarcity of DX9 games at present, isn't it inevitable that FXs will run into trouble in the future?
  • DaveBaumann - Friday, December 12, 2003 - link

    “The FX cards support full DX9. In fact, they support more than the minimum to be DX9 cards and support some features that people speculate will be in DX10 (fp32, longer shader programs, etc...).”

    Derek, the FX series supports the full features required for DX9 "compliancy"; they do not support "full" DX9 – there are numerous optional features within the DirectX specifications that IHV's can choose to support or not. Two such features within DX9 are the support of flexible float buffers and MRT's, neither of which the FX series supports.

    The lack of float buffer support is causing some head scratching among developers as it would generally make their lives easier (check with some to see if they would find it useful if all vendors supported it) – float buffers are already used in some games, which means that the FX series can't support some options (check Tomb Raider). MRT's also have their uses, and we've published some research work from one developer on lighting which is made easier by the support of MRT's:

    http://www.beyond3d.com/articles/deflight/

    As for the "speculation of the FX's support for what will be in DX10", please check the Microsoft Meltdown presentations – they list the specification requirements for Pixel Shader 3.0 compliance, which is already spec'ed within DX9, and the FX series falls short of this requirement.

    “Part of the reason ATI is able to lead so well in performance is that they don't support many of these features.”

    And NVIDIA don’t support some features that are also within DX9 – I’m not sure you can claim that ATI has better DX9 performance as they both have elements that each other support and don’t support.
  • BlackShrike - Friday, December 12, 2003 - link

    I don't get it. You guys are content with sub-perfect products? When I buy something, I want it to be the best for my money. But you guys don't seem to care about getting the best image quality possible along with high frame rates. Isn't that why you buy these high-end cards?

    Another thing: how come AnandTech doesn't do an article on minimum frame rates? Remember how we found out ATI and Nvidia sometimes have high average frame rates, but very low minimum frame rates? Now this would be the deciding factor for me. Image quality plus consistent frame rates.

    Really guys, I expected more from Anandtech readers.
  • valnar - Friday, December 12, 2003 - link

    I read the headline of this article and actually thought it was going to be about image quality. No shocker that it wasn't.

    This article talked about rendering quality, not image quality. I'd really like somebody to break out a color spectrum analyzer. I'd like to hear about the crispness and/or fuzziness of text displayed at 6 point. I'd like to hear about color accuracy and brightness/contrast/hue/sat/gamma capabilities.

    Oh well.
