The R420 Vertex Pipeline

The point of the vertex pipeline in any GPU is to take geometry data, manipulate it if needed (with either fixed-function processing or a vertex shader program), and project all of the 3D data in a scene down to two dimensions for display. The vertex engine can also eliminate unnecessary data from the rendering pipeline to avoid useless work (via view volume clipping and backface culling). After the vertex engine is done processing the geometry, all of the 2D projected data is sent to the pixel engine for further processing (such as texturing and fragment shading).
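To make the projection step concrete, here is a rough sketch in Python of the core job of a vertex pipeline: taking a 3D point and projecting it to 2D. This assumes a simple pinhole projection with focal distance d; real hardware performs the equivalent work with full 4x4 matrix transforms and a homogeneous divide.

```python
# Minimal sketch of 3D-to-2D projection, the final job of any
# vertex pipeline. Assumes a simple perspective divide; actual GPUs
# use a 4x4 projection matrix and homogeneous coordinates.

def project_vertex(v, d=1.0):
    """Project a 3D point (x, y, z) onto the z = d image plane."""
    x, y, z = v
    # Perspective divide: points farther from the viewer (larger z)
    # land closer to the center of the screen.
    return (d * x / z, d * y / z)

# A vertex twice as far away projects at half the screen offset.
print(project_vertex((2.0, 2.0, 2.0)))  # (1.0, 1.0)
print(project_vertex((2.0, 2.0, 4.0)))  # (0.5, 0.5)
```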

The vertex engine of R420 includes six total vertex pipelines (R3xx has four). This gives R420 a 50% increase in peak vertex shader throughput per clock cycle.

Looking inside an individual vertex pipeline, not much has changed from R3xx. The vertex pipeline is laid out exactly the same, including a 128-bit vector math unit and a 32-bit scalar math unit. The major upgrade over R3xx is that R420 can now compute a SINCOS instruction in one clock cycle. Previously, if a developer requested the sine or cosine of a number in a vertex shader program, R3xx would actually compute a Taylor series approximation of the answer (which takes longer to complete). The adoption of a single cycle SINCOS instruction by ATI is a very smart move, as trigonometric computations are useful in implementing functionality and effects attractive to developers. As an example, developers could manipulate the vertices of a surface with SINCOS in order to add ripples and waves (such as those seen in bodies of water). Sine and cosine computations are also useful in more basic geometric manipulation. Overall, single cycle SINCOS computation is a welcome addition to R420.
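The difference is easy to illustrate in Python. The `taylor_sin` function below is a rough stand-in for the multi-instruction series expansion R3xx had to evaluate, and `ripple_height` sketches the water-ripple vertex displacement mentioned above; both function names and parameters are ours, purely for illustration.

```python
import math

def taylor_sin(x, terms=5):
    """Approximate sin(x) with the first `terms` terms of its Taylor
    series -- roughly the extra work R3xx did in place of a native
    single-cycle SINCOS."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def ripple_height(x, time, amplitude=0.1, frequency=4.0):
    """Displace a water-surface vertex vertically with a travelling
    sine wave -- the kind of effect single-cycle SINCOS makes cheap."""
    return amplitude * math.sin(frequency * x - time)

# The series converges quickly for small angles.
print(abs(taylor_sin(0.5) - math.sin(0.5)) < 1e-9)  # True
```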

So how does ATI's new vertex pipeline layout compare to NV40? On a major hardware "black box" level, ATI lacks the vertex texture unit featured in NV40, which is required for Shader Model 3.0's vertex texturing support. Vertex texturing allows developers to easily implement any effect that benefits from letting texture data manipulate geometry (such as displacement mapping). The other major difference between R420 and NV40 is feature set support. As has been widely talked about, NV40 supports Shader Model 3.0 and all the bells and whistles that come along with it. R420's feature set can be described as an extended version of Shader Model 2.0, offering a few features above and beyond the R3xx line (including support for longer shader programs and more registers).
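A hedged sketch of what vertex texturing enables may help: in displacement mapping, each vertex fetches a height value from a texture and is pushed along its normal. The names below (`displace`, `height_map`) are illustrative only and do not correspond to any real shader API.

```python
# Illustrative sketch of displacement mapping, the canonical use of
# vertex texturing (available on NV40, absent on R420). A vertex reads
# a height from a texture and is displaced along its normal.

def displace(vertex, normal, height_map, u, v):
    """Push a vertex along its normal by the height sampled at (u, v)."""
    h = height_map[v][u]  # the texture fetch, performed in the vertex stage
    return tuple(p + h * n for p, n in zip(vertex, normal))

height_map = [[0.0, 0.5],
              [1.0, 0.25]]

# A flat vertex at the origin with an up-facing normal, sampling (u=1, v=0),
# is raised by the stored height of 0.5.
print(displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), height_map, 1, 0))
# (0.0, 0.5, 0.0)
```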

What all this boils down to is that we are only seeing something that looks like a slight massaging of the hardware from R300 to R420. We would probably see many more changes if we were able to peer deeper under the hood. From a functionality standpoint, it is sometimes hard to see where performance comes from, but (as we will see even more from the pixel pipeline) as graphics hardware evolves into multiple tiny CPUs all laid out in parallel, performance will be affected by factors traditionally only spoken of in CPU analysis and reviews. The total number of internal pipeline stages (rather than our high-level, functionality-driven pipeline), cache latencies, the size of the internal register file, the number of instructions in flight, the number of cycles an instruction takes to complete, and branch prediction will all come heavily into play in the future. In fact, this review marks the true beginning of where we will see these factors (rather than general functionality and "computing power") determine the performance of a generation of graphics products. But, more on this later.

After leaving the vertex engine portion of R420, data moves into the setup engine. This section of the hardware takes the 2D projected data from the vertex engine, generates triangles and point sprites (particles), and partitions the output for use in the pixel engine. The triangle output is divided up into tiles, each of which is sent to a block of four pixel pipelines (called a quad pipeline by ATI). These tiles are simply square blocks of projected pixel data, and have nothing to do with "tile based rendering" (front to back rendering of small portions of the screen at a time) as seen in PowerVR's Kyro series of GPUs.
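The partitioning step above can be sketched briefly. ATI does not document the exact tile size or assignment policy, so the 16-pixel tiles and round-robin distribution below are assumptions purely for illustration.

```python
# Rough sketch of a setup engine partitioning the screen into square
# tiles and distributing them across quad pipelines. Tile size and
# round-robin assignment are assumptions, not ATI's documented scheme.

def assign_tiles(width, height, tile=16, num_quads=4):
    """Map each tile's top-left corner to the quad pipeline that owns it."""
    tiles = {}
    i = 0
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            tiles[(x, y)] = i % num_quads  # which quad gets this tile
            i += 1
    return tiles

tiles = assign_tiles(64, 32)
# Neighboring tiles land on different quads, spreading the work.
print(tiles[(0, 0)], tiles[(16, 0)], tiles[(0, 16)])  # 0 1 0
```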

Now we're ready to see what happens on the per-pixel level.


95 Comments


  • l3ored - Tuesday, May 4, 2004 - link

    only the 800xt was winning, the pro usually came after the 6800's
  • Keeksy - Tuesday, May 4, 2004 - link

    Yeah, it is funny how ATi excels in DirectX, yet loses in the OpenGL benchmarks. Looks like I'm going to have both an NVIDIA and an ATi card. The first to play Doom3, the other to play HL2.
  • peroni - Tuesday, May 4, 2004 - link

    I wish there was some testing done with overclocking.

    There are quite a few spelling errors in there Derek.

    Did I miss something, or was there no mention of prices for these 2 cards?
  • Glitchny - Tuesday, May 4, 2004 - link

    #11 thats what everyone thought when Nvidia bought all the people from 3dFX and look what happened with that.
  • araczynski - Tuesday, May 4, 2004 - link

    i agree with 5 and 10, still the same old stalemate as before, one is good at one thing, the other is good at another. i guess i'll let price dictate my next purchase.

    but ati sure did take the wind out of nvidia's sails with these numbers.

    i wish one of the two would buy the other one out and combine the technologies, one would think they would have a nice product in the end.
  • eBauer - Tuesday, May 4, 2004 - link

    #8 - OpenGL still kicks butt on the nVidia boards. Think of all the Doom3 fans that will buy the 6800's....

    As for myself, I will wait and see how the prices pan out. For now leaning on the X800.
  • ViRGE - Tuesday, May 4, 2004 - link

    ...On the virge of ATI's R420 GPU launch...

    Derek, I'm so touched that you thought of me. ;)
  • Tallon - Tuesday, May 4, 2004 - link

    Ok, so let's review. With the x800XT having better image quality, better framerates, only taking up one slot for cooling and STILL being cooler, and only needing one molex connector (uses less power than the 9800 XT, actually), who in their right mind would choose a 6800u over this x800XT? I mean, seriously, NVIDIA is scrambling to release a 6850u now which is exactly identical to a 6800u, it's just overclocked (which means more power and higher temperatures). This is ridiculous. ATI is king.
  • noxipoo - Tuesday, May 4, 2004 - link

    ATi wins again.
  • Akaz1976 - Tuesday, May 4, 2004 - link

    Dang! On one hand, I am saddened by the review. My recently purchased (last month) Radeon9800PRO would be at the bottom of the chart in most of the tests carried out in this review :(

    On the other hand, this sure bodes well for my next vid card upgrade. Even if it is a few months off! :)

    Akaz
