Inside The Pipes

Each pixel pipe is made up of two vector units and a texture unit that operate together to execute shader programs efficiently. Each shader pipeline also contains a couple of mini-ALUs that provide operations such as a free fp16 normalize, along with other specialized features that assist the two main ALUs.
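
To appreciate why a free normalize is worth dedicated hardware, consider what a normalize costs when it runs on the main ALUs instead. Here is a minimal sketch in C of the usual decomposition (our own illustration, not NVIDIA's microcode; we use fp32 for simplicity, and the op annotations assume a typical MUL/MADD/RSQ breakdown):

```c
#include <math.h>
#include <stdio.h>

/* A 3-component vector, as a fragment shader would see it. */
typedef struct { float x, y, z; } vec3;

/* Normalization decomposes into a squared length (one MUL plus two
 * MADDs), a reciprocal square root, and three MULs to scale the
 * vector. A free fp16 normalize in a mini-ALU hands all of these
 * instruction slots back to the main ALUs. */
static vec3 normalize3(vec3 v)
{
    float d = v.x * v.x;          /* MUL  */
    d = v.y * v.y + d;            /* MADD */
    d = v.z * v.z + d;            /* MADD */
    float inv = 1.0f / sqrtf(d);  /* RSQ  */
    vec3 r = { v.x * inv, v.y * inv, v.z * inv }; /* 3x MUL */
    return r;
}

int main(void)
{
    vec3 n = normalize3((vec3){ 3.0f, 0.0f, 4.0f });
    printf("%.2f %.2f %.2f\n", n.x, n.y, n.z); /* 0.60 0.00 0.80 */
    return 0;
}
```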

[Block diagram: the G70 pixel shader pipeline]
Even though this block diagram looks slightly different from the ones shown during the 6800 launch, NVIDIA has informed us that these mini-ALUs were also present in NV4x hardware. There was much talk at the 6800 launch about the distinct functionality of each of the main shader ALUs. In NV4x, only one ALU could perform a single-cycle MADD (multiply-add), and likewise only one ALU assisted the texture unit with texture address operations. Simply having two distinct ALUs per pipeline (regardless of their functional differences) is a large part of what pushed NV4x so far past the NV3x architecture.

In their ongoing research into commonly used shaders (and likely much of their work with shader replacement), NVIDIA discovered that a very high percentage of shader instructions are MADDs. Multiply-add is extremely common in 3D mathematics, as linear algebra, matrix manipulation, and vector calculus are a huge part of graphics. G70 implements MADD on both main shader ALUs. Combining the 50% increase in shader pipelines (16 up to 24) with each pipe's ability to issue twice as many MADDs per clock, G70 can theoretically triple MADD throughput over NV4x on a clock-for-clock basis: 1.5 × 2 = 3.
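
That tripling claim is simple enough to check with back-of-the-envelope arithmetic. The sketch below uses the pipe counts and ALU layout described above; treating each MADD-capable ALU as issuing one 4-wide MADD per clock is our simplifying assumption:

```c
#include <stdio.h>

/* Peak per-clock MADD issue, NV4x vs. G70. Pipe counts and the
 * one- vs. two-MADD-ALU layout come from the article; one 4-wide
 * MADD per capable ALU per clock is an assumption. */
int main(void)
{
    const int nv4x_pipes = 16, nv4x_madd_alus = 1; /* one MADD ALU per pipe  */
    const int g70_pipes  = 24, g70_madd_alus  = 2; /* both main ALUs do MADD */

    int nv4x = nv4x_pipes * nv4x_madd_alus; /* vec4 MADDs per clock */
    int g70  = g70_pipes  * g70_madd_alus;

    printf("NV4x: %2d vec4 MADDs/clock\n", nv4x);
    printf("G70:  %2d vec4 MADDs/clock (%.1fx NV4x)\n",
           g70, (double)g70 / nv4x);
    return 0;
}
```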

Of course, we pressed the development team to tell us whether both shader ALUs feature identical functionality. The answer is that they do not. Beyond knowing that only one ALU is responsible for assisting the texture hardware, we were unable to extract a detailed answer about how similar the ALUs are. Suffice it to say that they still don't share all features, but NVIDIA certainly feels that the current setup will allow G70 to extract up to twice the shader performance per fragment of NV4x (depending on the shader, of course). We have also learned that the penalty for branching in the pixel shaders is much lower than in previous hardware. This may or may not mean that the pipelines are less dependent on following the exact same instruction path, but we don't have the ability to determine what is going on at that level.
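
We can at least sketch what a branching penalty means in a SIMD design. Fragments are shaded in batches, and when fragments within a batch disagree on a branch, such hardware generally executes both paths and masks off the inactive lanes. The toy model below is entirely our own illustration, not NVIDIA's scheme; the batch sizes and path costs are made-up numbers chosen to show why finer-grained batches waste less work:

```c
#include <stdio.h>

/* Toy model of SIMD branch divergence. When the fragments in one
 * batch split across a branch, every fragment in that batch pays
 * for both paths (the inactive path is executed but masked out).
 * All numbers are illustrative, not measured. */

#define THEN_COST 20 /* instruction slots on the "then" path */
#define ELSE_COST 12 /* instruction slots on the "else" path */

/* Total lane-slots one batch consumes, given how many lanes branch. */
static long batch_work(int taken, int batch_size)
{
    if (taken == 0)          return (long)batch_size * ELSE_COST;
    if (taken == batch_size) return (long)batch_size * THEN_COST;
    return (long)batch_size * (THEN_COST + ELSE_COST); /* divergent */
}

int main(void)
{
    /* 64 fragments; only fragments 24..39 take the branch, e.g. a
     * lit strip cutting through the middle of a shadowed tile. */
    const int total = 64, first = 24, last = 39;

    for (int batch = 64; batch >= 4; batch /= 4) {
        long work = 0;
        for (int base = 0; base < total; base += batch) {
            int taken = 0;
            for (int i = base; i < base + batch; i++)
                taken += (i >= first && i <= last);
            work += batch_work(taken, batch);
        }
        printf("batch size %2d: %4ld lane-slots of work\n", batch, work);
    }
    return 0;
}
```

In this model the single 64-wide batch pays for both paths on all 64 fragments, while 4-wide batches diverge far less often and approach the ideal cost; whether G70's improvement comes from smaller batches, better masking, or something else entirely is exactly the level of detail NVIDIA wouldn't discuss.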
