A New Compression Scheme: 3Dc

3Dc isn't something that's going to make current games run better or faster. We aren't talking about a glamorous technology; 3Dc is a lossy compression scheme for use in 3D applications (as its name is supposed to imply). Bandwidth is a highly prized commodity inside a GPU, and compression schemes exist to try to help alleviate pressure on the developer to limit the amount of data pushed through a graphics card.

There are already a few compression schemes out there, but in their highest compression modes, they introduce some discontinuity into the texture. This is acceptable in some applications, but not all. The specific application ATI is initially targeting for use with 3Dc is normal mapping.

Normal mapping makes the lighting of a surface more detailed than its underlying geometry. Usually, the normal vector at any given point is interpolated from the normal data stored at the vertex level. To increase the detail of lighting and texturing effects on a surface, however, a normal map can specify how normal vectors should be oriented across an entire surface at a high level of detail. With very large normal maps, enormous amounts of lighting detail can produce the illusion of geometry that isn't actually there.
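The idea can be shown with a minimal sketch of diffuse (Lambertian) lighting. The function and vector names below are our own illustration, not from any particular engine: lighting intensity is the clamped dot product of the surface normal and the light direction, so substituting a per-texel normal from a normal map in place of the flat interpolated normal changes how lit each point appears, with no extra geometry.

```python
def lambert(normal, light_dir):
    # Diffuse intensity is the clamped dot product of the surface
    # normal and the unit direction toward the light.
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    return max(0.0, nx * lx + ny * ly + nz * lz)

# A flat surface normal vs. a normal-map texel "tilted" toward the light:
flat = (0.0, 0.0, 1.0)
tilted = (0.6, 0.0, 0.8)  # still a unit vector
light = (0.6, 0.0, 0.8)

# The tilted texel catches the light fully even though the underlying
# surface is flat -- that difference is the illusion normal mapping sells.
```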


Here's an example of how normal mapping can add the appearance of more detailed geometry

In order to work with these large data sets, we would want to use a compression scheme. But since we don't want discontinuities in our lighting (which could appear as flashy or jumpy lighting on a surface), we would like a compression scheme that maintains the smoothness of the original normal map. Enter 3Dc.


This is an example of how 3Dc can help alleviate continuity problems in normal map compression

In order to maintain a high level of continuity, 3Dc divides textures into four by four blocks of vector4 data with 8 bits per component (512-bit blocks). For normal map compression, the z component is thrown out, since it can be recalculated from the x and y components (all normal vectors in a normal map are unit vectors and fit the form x^2 + y^2 + z^2 = 1). After discarding the unused 16 bits from each normal vector, the minimum and maximum x and the minimum and maximum y are calculated for the entire 4x4 block. These four values are stored, and each x or y value is then stored as a 3-bit index selecting one of 8 equally spaced steps between the minimum and maximum x or y values (inclusive).
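The per-block encoding described above can be sketched in a few lines. This is our own simplified illustration of the scheme, not ATI's hardware implementation: it takes the 16 (x, y) pairs of one 4x4 block (8-bit components, z already dropped), finds the block's min/max endpoints, and quantizes each component to a 3-bit step index between them.

```python
def compress_block(xy_pairs):
    """Quantize one 4x4 block of (x, y) components to 3-bit indices.

    xy_pairs: 16 (x, y) tuples with 8-bit components (0..255).
    Returns the four stored endpoints plus a 3-bit index pair per texel.
    """
    xs = [p[0] for p in xy_pairs]
    ys = [p[1] for p in xy_pairs]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)

    def quantize(v, lo, hi):
        # Pick the nearest of 8 equally spaced steps between lo and hi
        # (inclusive); a flat block degenerates to index 0.
        if hi == lo:
            return 0
        return round((v - lo) / (hi - lo) * 7)

    indices = [(quantize(x, min_x, max_x), quantize(y, min_y, max_y))
               for x, y in xy_pairs]
    # Stored data: 4 endpoint bytes (32 bits) + 16 * 2 * 3 index bits.
    return (min_x, max_x, min_y, max_y), indices
```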


The storage space required for a 4x4 block of normal map data using 3Dc

The resulting compressed data is 4 vectors * 4 vectors * 2 components * 3 bits + 32 bits of endpoint data (128 bits total), giving a 4:1 compression ratio for normal maps with no discontinuities. Any two-channel or scalar data can be compressed fairly well via this scheme. When compressing data that is very noisy (or otherwise inherently discontinuous, though this is not often seen), accuracy may suffer, and the compression ratio falls off for data with more than two components (other compression schemes may be more useful in these cases).
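Decompression runs the encoding in reverse, and this is also where the dropped z component comes back: after reconstructing x and y from the endpoints and step indices, z falls out of the unit-vector constraint x^2 + y^2 + z^2 = 1. Again, the names and the 0..255-to-signed mapping below are our own assumptions for illustration:

```python
import math

def decompress_block(endpoints, indices):
    """Rebuild (x, y, z) unit normals from one compressed block."""
    min_x, max_x, min_y, max_y = endpoints
    normals = []
    for ix, iy in indices:
        # Step index 0..7 selects an equally spaced value between min and max.
        x = min_x + (max_x - min_x) * ix / 7
        y = min_y + (max_y - min_y) * iy / 7
        # Map 8-bit 0..255 back to the signed -1..1 range (assumed mapping),
        # then recover z from the unit-length constraint; the max() guards
        # against quantization pushing x^2 + y^2 slightly past 1.
        nx = x / 127.5 - 1.0
        ny = y / 127.5 - 1.0
        nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
        normals.append((nx, ny, nz))
    return normals
```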

ATI would really like this compression scheme to catch on much as S3TC and DXTC have. Of course, the fact that compression and decompression of 3Dc is built into R420 (and not NV40) will play no small part in ATI's evangelism of the technology. After all is said and done, future hardware support by other vendors will be based on the software adoption rate of the technology, and software adoption will likely also be influenced by hardware vendors' plans for future support.

As far as we are concerned, all methods of increasing apparent usable bandwidth inside a GPU in order to deliver higher quality games to end users are welcome. Until memory bandwidth surpasses the needs of graphics processors (which will never happen), innovative and effective compression schemes will be very helpful in applying all the computational power available in modern GPUs to very large sets of data.

95 Comments

  • l3ored - Tuesday, May 4, 2004 - link

    only the 800xt was winning, the pro usually came after the 6800's
  • Keeksy - Tuesday, May 4, 2004 - link

    Yeah, it is funny how ATi excels in DirectX, yet loses in the OpenGL benchmarks. Looks like I'm going to have both an NVIDIA and an ATi card. The first to play Doom3, the other to play HL2.
  • peroni - Tuesday, May 4, 2004 - link

    I wish there was some testing done with overclocking.

    There are quite a few spelling errors in there Derek.

    Did I miss something or I did not see any mention of prices for these 2 cards?
  • Glitchny - Tuesday, May 4, 2004 - link

    #11 thats what everyone thought when Nvidia bought all the people from 3dFX and look what happened with that.
  • araczynski - Tuesday, May 4, 2004 - link

    i agree with 5 and 10, still the same old stalemate as before, one is good at one thing, the other is good at another. i guess i'll let price dictate my next purchase.

    but ati sure did take the wind out of nvidia's sails with these numbers.

    i wish one of the two would buy the other one out and combine the technologies, one would think they would have a nice product in the end.
  • eBauer - Tuesday, May 4, 2004 - link

    #8 - OpenGL still kicks butt on the nVidia boards. Think of all the Doom3 fans that will buy the 6800's....

    As for myself, I will wait and see how the prices pan out. For now leaning on the X800.
  • ViRGE - Tuesday, May 4, 2004 - link

    ...On the virge of ATI's R420 GPU launch...

    Derek, I'm so touched that you thought of me. ;)
  • Tallon - Tuesday, May 4, 2004 - link

    Ok, so let's review. With the x800XT having better image quality, better framerates, only taking up one slot for cooling and STILL being cooler, and only needing one molex connector (uses less power than the 9800 XT, actually), who in their right mind would choose a 6800u over this x800XT? I mean, seriously, NVIDIA is scrambling to release a 6850u now which is exactly identical to a 6800u, it's just overclocked (which means more power and higher temperatures). This is ridiculous. ATI is king.
  • noxipoo - Tuesday, May 4, 2004 - link

    ATi wins again.
  • Akaz1976 - Tuesday, May 4, 2004 - link

    Dang! On one hand, I am saddened by the review. My recently purchased (last month) Radeon9800PRO would be at the bottom of the chart in most of the tests carried out in this review :(

    On the other hand this sure bode well for my next vid card upgrade. Even if it is a few months off! :)

    Akaz
