Inside The Pipes

The pixel pipe is made up of two vector units and a texture unit that operate together to execute shader programs efficiently. Each shader pipeline also contains a couple of mini-ALUs that handle operations such as a free fp16 normalize and other specialized functions that assist the two main ALUs.
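
To put that "free fp16 normalize" in perspective, here is a quick Python sketch (our own illustration, not anything NVIDIA provided) of what a vector normalize actually costs in arithmetic. Pixel shaders renormalize interpolated vectors constantly, so offloading this work to a mini-ALU is a real savings.

    import math

    def normalize(v):
        """Scale a 3-component vector to unit length.  Even this small helper
        costs three multiplies, two adds, a square root, and three divides,
        which is why getting it "for free" from a mini-ALU is worth having."""
        length = math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])
        return (v[0] / length, v[1] / length, v[2] / length)

    # Typical use: renormalizing an interpolated surface normal before lighting.
    print(normalize((0.0, 0.9, 0.1)))   # roughly (0.0, 0.994, 0.110)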

[Block diagram of the G70 pixel shader pipeline]
Even though this block diagram looks slightly different from the ones shown during the 6800 launch, NVIDIA has informed us that these mini-ALUs were also present in NV4x hardware. There was much talk when the 6800 launched about the distinct functionality of each of the main shader ALUs. In NV4x, only one ALU had the ability to perform a single-clock MADD (multiply-add). Similarly, only one ALU assisted in texture address operations for the texture unit. Simply having these two distinct ALUs (regardless of their differences in functionality) is a large part of what pushed NV4x so far ahead of the NV3x architecture.

In their ongoing research into commonly used shaders (and likely much of their work with shader replacement), NVIDIA discovered that a very high percentage of shader instructions were MADDs. Multiply-add is extremely common in 3D mathematics, as linear algebra, matrix manipulation, and vector calculus make up a huge part of graphics work. G70 implements MADD on both main shader ALUs. Taking into account the 50% increase in shader pipelines and each pipe's ability to compute twice as many MADD operations per clock, G70 has the theoretical ability to triple MADD performance over the NV4x architecture (on a clock-for-clock basis).
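
Both halves of that claim are easy to sanity check. The sketch below (Python, our own back-of-the-envelope illustration; the hardware obviously executes nothing like this) first shows why MADDs dominate shader workloads: a matrix transform, the bread and butter of 3D math, is nothing but multiply-adds. It then runs the clock-for-clock arithmetic using the publicly quoted pipeline counts of 16 for NV4x and 24 for G70.

    def transform(m, v):
        """Multiply a 4x4 matrix by a 4-component vector: every term is a
        multiply followed by an add, i.e. one MADD."""
        out = [0.0, 0.0, 0.0, 0.0]
        for row in range(4):
            acc = 0.0
            for col in range(4):
                acc = m[row][col] * v[col] + acc   # one MADD per term
            out[row] = acc
        return out

    # Clock-for-clock MADD throughput, counting only the main shader ALUs.
    nv4x_madds_per_clock = 16 * 1   # 16 pipes, one full-rate MADD ALU each
    g70_madds_per_clock = 24 * 2    # 24 pipes, MADD on both main ALUs
    print(g70_madds_per_clock / nv4x_madds_per_clock)   # 3.0, the "triple" figure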

Of course, we pressed the development team to tell us whether both shader ALUs feature identical functionality. The answer is that they do not. Other than knowing that only one ALU is responsible for assisting the texture hardware, we were unable to extract a detailed answer about how similar the ALUs are. Suffice it to say that they still don't share all features, but NVIDIA certainly feels that the current setup will allow G70 to extract twice the shader performance for a single fragment over NV4x (depending on the shader, of course). We have also learned that the penalty for branching in the pixel shaders is much lower than in previous hardware. This may or may not mean that the pipelines are less dependent on following the exact same instruction path, but we really don't have the ability to determine what is going on at that level.
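
NVIDIA didn't elaborate on why branching got cheaper, so the following is only a generic cost model of why pixel shader branches hurt in the first place, not a description of G70's scheduler: fragments are shaded in groups, and a group that diverges at a branch effectively pays for both sides.

    def branch_cost(conditions, then_cost, else_cost):
        """Toy model of a fragment group sharing instruction issue: a coherent
        group pays only for the path it takes, a divergent group pays for both."""
        if all(conditions):
            return then_cost
        if not any(conditions):
            return else_cost
        return then_cost + else_cost        # divergence: execute both sides

    # Four fragments that agree pay 20 cycles; a single dissenter makes it 50.
    print(branch_cost([True, True, True, True], then_cost=20, else_cost=30))   # 20
    print(branch_cost([True, False, True, True], then_cost=20, else_cost=30))  # 50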

Comments

  • Regs - Wednesday, June 22, 2005 - link

    Yikes @ the graphs lol.

    I just came close to pushing the button to order one of these, but then I said... what games can't I play on a 6800GT at 16x12 res? There are none. Far Cry is the only game that comes close.

    Bravo to Nvidia, hiss and boo @ lagging game developers.
  • bob661 - Wednesday, June 22, 2005 - link

    #19
    Are you new to this market or do you have a short memory? Don't you remember that the initial 6800 Ultras cost around $700-800? I sure as hell do. Why is everyone complaining about pricing? These are premium video cards and you will pay a premium price to buy them.
  • Barneyk - Wednesday, June 22, 2005 - link

    Yeah, not a single comment on any of the benchmarks, what is up with that?

    There were a lot of weird scenarios there. Why is there NO performance increase in SLI some of the time?
    And why is 6800 Ultra SLI faster than 7800GTX SLI??

    A lot of weird stuff, and not a single comment or analysis about it. I always read new tests here on AT first because it's usually the best, but this review was a double bogey to say the least...
  • Dukemaster - Wednesday, June 22, 2005 - link

    @21: The score of the X850XT PE in Wolfenstein still looks messed up to me...
  • shabby - Wednesday, June 22, 2005 - link

    Ya, some of the scores don't make much sense, 7800 SLI losing to a single 7800?
  • yacoub - Wednesday, June 22, 2005 - link

    Hey, looks great! $350 and you've got a buyer here!
  • Lifted - Wednesday, June 22, 2005 - link

    Guys, they simply reversed the 6800 Ultra SLI and 7800 GTX SLI in all of the 1600 x 1200 - 4x AA graphs.

    Now everything is kosher again.
  • Johnmcl7 - Wednesday, June 22, 2005 - link

    To 18 - I have to admit, I didn't bother looking closely at them, seeing the X850XT supposedly beating all the other cards by such a margin at those resolutions showed they were completely screwed up! I didn't notice the performance increase as you go up the resolution, maybe it's something I missed on my own X850XT? ;) I wish...that would be a neat feature, your performance increases as your resolution increases.

    I agree it needs to be pulled down and checked. Not to be harsh on AT, but this isn't the first time the bar graphs have been wrong. I would rather wait for a review that has been properly finished and checked than read a rushed one; as it stands, it's no use to me because I have no idea if any of the performance figures are genuine.

    John
  • RyDogg1 - Wednesday, June 22, 2005 - link

    Wow, who exactly is paying for these video cards to warrant the pricing?
  • Lonyo - Wednesday, June 22, 2005 - link

    To #14, the X850XT performance INCREASED by 33% from 1600x1200 to 2048x1536 according to the graphs, so to me that just screams BULLSH!T.
    I think the review needs taking down, editing, and putting back up again.
    Or fixed VERY quickly.
    AT IMO has let people down a bit this time round, not the usual standard.
