Inside The Pipes

The pixel pipe is made up of two vector units and a texture unit that operate together to execute shader programs efficiently. Each shader pipeline also contains a couple of mini-ALUs that provide operations such as a free fp16 normalize, along with other specialized features that assist the two main ALUs.
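As a quick illustration of what the "free fp16 normalize" buys (our own sketch in software, not a description of the hardware path), this is the operation the mini-ALUs can perform without consuming a main ALU slot:

```python
# Sketch: a 3-component vector normalize -- the operation the mini-ALUs
# can perform "for free" on fp16 data. Plain Python, purely illustrative.
import math

def normalize(v):
    # Scale the vector so its length becomes 1.0.
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0] / length, v[1] / length, v[2] / length)

print(normalize((3.0, 0.0, 4.0)))  # a 3-4-5 triangle -> (0.6, 0.0, 0.8)
```

Normalizes show up constantly in lighting math (every interpolated normal typically needs one), so getting them off the main ALUs is a meaningful win.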



Even though this block diagram looks slightly different from the ones shown during the 6800 launch, NVIDIA has informed us that these mini-ALUs were also present in NV4x hardware. There was much talk when the 6800 launched about the distinct functionality of each of the main shader ALUs. In NV4x, only one ALU could perform a single-clock MADD (multiply-add), and likewise only one ALU assisted in texture address operations for the texture unit. Simply having two main ALUs per pipe (regardless of their functional differences) is a large part of what made NV4x so much faster than the NV3x architecture.

In their ongoing research into commonly used shaders (and likely much of their work with shader replacement), NVIDIA discovered that a very high percentage of shader instructions were MADDs. Multiply-add is extremely common in 3D mathematics as linear algebra, matrix manipulation, and vector calculus are a huge part of graphics. G70 implements MADD on both main Shader ALUs. Taking into account the 50% increase in shader pipelines and each pipe's ability to compute twice as many MADD operations per clock, the G70 has the theoretical ability to triple MADD performance over the NV4x architecture (on a clock for clock basis).
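The arithmetic behind that "triple" claim is straightforward. As a sketch (using the widely reported pipe counts of 16 for NV40 and 24 for G70, which match the article's 50% figure):

```python
# Theoretical per-clock MADD throughput, G70 relative to NV4x.
# Pipe counts assumed from the known configurations: NV40 = 16 pixel
# pipes, G70 = 24 (the 50% increase cited above).
nv4x_pipes = 16
g70_pipes = 24           # 50% more shader pipelines
nv4x_madds_per_pipe = 1  # only one main ALU could issue a single-clock MADD
g70_madds_per_pipe = 2   # both main shader ALUs implement MADD

nv4x_rate = nv4x_pipes * nv4x_madds_per_pipe  # 16 MADDs/clock
g70_rate = g70_pipes * g70_madds_per_pipe     # 48 MADDs/clock
print(g70_rate / nv4x_rate)                   # 3.0 -> triple, clock for clock
```

Note that this is peak issue rate only; real shaders mix MADDs with texture fetches and other ops, so sustained gains will be smaller.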

Of course, we pressed the development team to tell us if both Shader ALUs featured identical functionality. The answer is that they do not. Other than knowing that only one ALU is responsible for assisting the texture hardware, we were unable to extract a detailed answer about how similar the ALUs are. Suffice it to say that they still don't share all features, but that NVIDIA certainly feels that the current setup will allow G70 to extract twice the shader performance for a single fragment over NV4x (depending on the shader of course). We have also learned that the penalty for branching in the pixel shaders is much less than in previous hardware. This may or may not mean that the pipelines are less dependent on following the exact same instruction path, but we really don't have the ability to determine what is going on at that level.

127 Comments

  • Johnmcl7 - Wednesday, June 22, 2005 - link

    If they're too busy for the article, that's fair enough, the point is they should put it up when they've had time to check it over, rather than rush an article up that isn't ready to be published.

    John
  • IronChefMoto - Wednesday, June 22, 2005 - link

    Regarding the "shame on Anandtech" comments -- y'all ever think they were too busy sh*tting themselves at the performance of this card to really pay that much attention to the article? ;-)

    IronChefMorimoto
  • Johnmcl7 - Wednesday, June 22, 2005 - link

    The prices I've seen here in the UK for the 7800s here are around 400 pounds, the 6800 Ultras are currently around 300 pounds. So quite an increase over the NV40s but not unacceptable given the performance, I'm sure they'll come down in price once the early adopters have had their fill.

    John
  • yacoub - Wednesday, June 22, 2005 - link

    #26 - You must be new to the market, relatively speaking. I remember quite well the days when high-end new videocards were at MOST $400, usually $350 or less when they debuted. It was more than a year or two ago though, so it might have been before your time as a PC gamer.
  • rimshot - Wednesday, June 22, 2005 - link

    Not sure why the price is so high in North America, here in Aus you can get a 7800GTX for the same price as a 6800GT ($850AU).

  • nitromullet - Wednesday, June 22, 2005 - link

    "What no Crossfire benchies? I guess they didn't wany Nvidia to loose on their big launch day."

    Ummm... maybe because CrossFire was paper launched at Computex, and no one (not even AT) has a CrossFire rig to benchmark? nVidia is putting ATI to shame with this launch and the availability of the cards. Don't you think if ATI had anything worth a damn to put out there they would?

    All that aside... I was as freaked out as the rest of you by these benchmarks at first (well, more so than some actually, because I just pulled the $600 trigger last night on an eVGA 7800GTX from the egg). However, these graphs are clearly messed up, and some appear to have already been fixed. I guess someone should have cut Derek off at the launch party yesterday.
  • blckgrffn - Wednesday, June 22, 2005 - link

    Very disappointed at the fit and finish of this article. Anandtech is supposed to have the best one, not a half-baked one :( I even liked HardOCP's better, even with their weird change-the-levels-of-everything approach - at least it has a very good discussion of the differences between MS and SS AA and shows some meaningful results at high res as well.

    Shame on Anandtech :(
  • fishbits - Wednesday, June 22, 2005 - link

    Good release.

    Can we get a couple of screen shots with the transparency AA?

    "Maybe this one of the factors that will lead to the Xbox360/PS3 becoming the new gaming standard as opposed to the Video Card market pushing the envelope."
    Yeah, because the graphics components in consoles don't require anything but three soybeans and a snippet of twine to make. They're ub3r and free! Wait, no, you pay for them too eventually even if not in the initial console purchase price. Actually I think the high initial price of next gen graphics cards is a sign of health for PC gaming. There are some folks not only willing to pay high dollars for bleeding edge performance, they're willing to pay even higher dollars than they were in the past for the top performers. Spurs ATI/Nvidia to keep the horsepower coming, which drives game devs to add better and better graphics, etc.

    "They only reveresed a couple of labels here and there, chill out. It's still VERY OBVIOUS which card is which just by looking at the performance!"
    Eh, I use benchmarks to learn more about a product than what my pre-conceived notions tell me it "ought" to be. I don't use my pre-conceived notions to accept and dismiss scientific benchmarks. If the benches are wrong, it is a big deal. Doesn't require ritual suicide, just fixing and maybe better quality control in the future.
  • Thresher - Wednesday, June 22, 2005 - link

    2x6800GT costs almost the same amount as this single card and gives up nothing in performance.

    The price of this thing is ridiculous.
  • rubikcube - Wednesday, June 22, 2005 - link

    Just wanted to say thanks for starting your benchmarks at 1600x1200. It really makes a difference in the usability of the benchmarks.
