The 3D Pipeline

Before we can talk about tile rendering, it is critical to first understand how a 3D accelerator works in general.

All 3D accelerators seek to take a 3D world, modeled mathematically on your computer, and render it as a 2D image on your monitor. This 3D "world" is typically modeled by objects made up of adjoining polygons, which are defined by their vertices; each vertex in turn is represented by its x, y, and z coordinates as well as a color value. These polygons are later shaded and/or textured to add color and surface properties. Rendering such a scene to a 2D image for display involves three steps: transform and lighting, hidden surface removal, and texturing and shading.
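As a concrete illustration, the vertex and polygon representation described above could be sketched as follows. The `Vertex` class and its field names are hypothetical, for illustration only, and do not reflect any real hardware or API:

```python
from dataclasses import dataclass

# A vertex as described above: x, y, z coordinates plus a color value.
@dataclass
class Vertex:
    x: float
    y: float
    z: float
    color: tuple  # (r, g, b)

# A polygon (here a triangle) is simply a set of adjoining vertices.
triangle = (
    Vertex(0.0, 0.0, 1.0, (255, 0, 0)),
    Vertex(1.0, 0.0, 2.0, (0, 255, 0)),
    Vertex(0.0, 1.0, 3.0, (0, 0, 255)),
)
```

Objects in the scene are then just collections of such polygons sharing edges and vertices.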

NVIDIA has chosen to accelerate the transform and lighting (T&L) part of the pipeline by implementing it in hardware on its GeForce series of cards, and ATI has done the same with its upcoming Radeon. Other cards leave these functions to the CPU. A hardware implementation allows many more polygons to be drawn in a scene while freeing the CPU for other tasks. Although no game currently on the market uses hardware T&L to any great extent, future games are expected to feature polygon counts that only a hardware T&L unit can process. Most games today actually use light maps instead of the lighting provided by current T&L units: without hardware lighting, performance would degrade to unacceptable levels, and developers must design their games so that most people can play them.
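The "transform" half of T&L boils down to matrix math: each vertex position is multiplied by a 4x4 matrix to move it from world space toward screen space, which is exactly the work a hardware T&L unit offloads from the CPU. The sketch below is a hypothetical illustration of that operation; the function name and the simple translation matrix are assumptions, not any real driver API:

```python
def transform(matrix, position):
    """Multiply a 4x4 row-major matrix by an (x, y, z, 1) position vector."""
    x, y, z = position
    v = (x, y, z, 1.0)
    # Keep only the x, y, z rows of the result for this simple sketch.
    return tuple(sum(matrix[row][col] * v[col] for col in range(4))
                 for row in range(3))

# A translation by (2, 0, -5), e.g. moving an object away from the viewer.
translate = [
    [1.0, 0.0, 0.0,  2.0],
    [0.0, 1.0, 0.0,  0.0],
    [0.0, 0.0, 1.0, -5.0],
    [0.0, 0.0, 0.0,  1.0],
]

print(transform(translate, (1.0, 1.0, 1.0)))  # (3.0, 1.0, -4.0)
```

A hardware T&L unit performs this multiplication (plus lighting calculations) for every vertex in the scene, which is why higher polygon counts demand a dedicated unit.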

Imagination Technologies has decided to focus on improving the other two parts of the rendering pipeline by using a completely new way of performing those functions. Before we get into how they've improved on these parts of the pipeline, let's take a look at a traditional 3D accelerator.

Traditional Hidden Surface Removal and Texturing / Shading

A traditional 3D accelerator processes each polygon as it is sent to the hardware, with no knowledge of the rest of the scene. Because of this, every forward-facing polygon must be shaded and textured. A z-buffer stores the depth of each pixel currently in the back buffer, and every pixel of every polygon rendered must be checked against the z-buffer to determine whether it is closer to the viewer than the pixel currently stored there.

This z-buffer check can only be performed after the pixel has already been shaded and textured. If the new pixel turns out to be in front of the current pixel, it replaces the current pixel in the back buffer (or is blended with it in the case of transparency) and the z-buffer depth is updated. If the new pixel ends up behind the current pixel, it is simply thrown out and the back buffer is left unchanged. All the work spent shading and texturing pixels that never reach the screen is known as overdraw. Drawing the same pixel three times is equivalent to an overdraw of 3, which Imagination Technologies and STMicro claim is typical.
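The depth test described above can be sketched as a few lines of Python. This is a hypothetical illustration, not real hardware logic: the names (`z_buffer`, `back_buffer`, `draw_pixel`) are invented, smaller z means closer to the viewer, and transparency blending is omitted:

```python
W, H = 4, 4
z_buffer = [[float("inf")] * W for _ in range(H)]   # depth of each back-buffer pixel
back_buffer = [[(0, 0, 0)] * W for _ in range(H)]   # color of each back-buffer pixel
pixels_shaded = 0                                   # counts every pixel shaded, kept or not

def draw_pixel(x, y, z, color):
    global pixels_shaded
    pixels_shaded += 1          # shading/texturing happens BEFORE the z test
    if z < z_buffer[y][x]:      # new pixel is closer: replace and update depth
        back_buffer[y][x] = color
        z_buffer[y][x] = z
    # else: the new pixel is behind the stored one and is thrown away

# Draw the same pixel three times, far to near: an "overdraw of 3"
draw_pixel(1, 1, 3.0, (255, 0, 0))
draw_pixel(1, 1, 2.0, (0, 255, 0))
draw_pixel(1, 1, 1.0, (0, 0, 255))
# back_buffer[1][1] is now (0, 0, 255): only the nearest pixel survives,
# yet all three pixels paid the full shading and texturing cost.
```

The key inefficiency is visible in `draw_pixel`: the shading work is counted before the depth comparison, so two of the three writes here are wasted effort. This is precisely the cost that tile-based renderers set out to eliminate.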

Once the scene is complete, the back buffer is flipped to the front buffer for display on the monitor.

What we've just described is known as "immediate mode rendering" and has been used since the 1960s for still-frame CAD rendering, architectural engineering, and film special effects, and is now found in most 3D accelerators inside your PC.
