The Pipeline Overview

First, let us take a second to run through NVIDIA's architecture in general. DirectX or OpenGL commands, along with HLSL and GLSL shaders, are translated and compiled for the architecture. Commands and data are then sent to the hardware, where numbers, instructions, and artwork become a rendered frame.

The first major stop along the way is the vertex engine, where geometry is processed. Vertices can be manipulated using math and texture data, and the output of the vertex pipelines is passed down the line to the fragment (or pixel) engine. Here, every pixel on the screen is processed based on input from the vertex engine. After the pixels have been processed for all the geometry, the final scene must be assembled based on the color and z (depth) data generated for each pixel. Anti-aliasing and blending into the framebuffer produce the final render output in what NVIDIA calls the render output pipeline (ROP). Now that we have a general overview, let's take a look at the G70 itself.
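The flow just described — vertex engine, fragment engine, then ROP — can be sketched in a few lines of Python. This is purely our own minimal illustration of the three stages, not NVIDIA's implementation:

```python
# Conceptual sketch of the three pipeline stages described above.
# A "fragment" here is just (x, y, depth); real hardware carries far more state.

def vertex_stage(vertices, transform):
    """Apply a per-vertex transform (the 'vertex engine')."""
    return [transform(v) for v in vertices]

def fragment_stage(fragments, shade):
    """Shade each fragment (the 'fragment/pixel engine')."""
    return [(x, y, z, shade(x, y)) for (x, y, z) in fragments]

def rop_stage(framebuffer, shaded):
    """ROP: depth-test and write color, keeping the nearest fragment per pixel."""
    for (x, y, z, color) in shaded:
        old = framebuffer.get((x, y))
        if old is None or z < old[0]:  # z-test: smaller depth wins
            framebuffer[(x, y)] = (z, color)
    return framebuffer
```

For example, feeding two fragments at the same screen position through `rop_stage` keeps only the nearer one, which is the essence of the z operations performed in the ROPs.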



The G70 GPU is quite a large IC. At 302 million transistors, we would certainly hope that NVIDIA packed enough power into the chip to match its size. TSMC's 110nm process certainly helps with die size, but that is still quite a few transistors. Even so, the actual die area is only slightly greater than that of NV4x; in fact, NVIDIA is able to fit the same number of ICs on a single wafer.



A glance at a block diagram of the hardware gives us a first look at the methods by which NVIDIA increased performance this time around.



The first thing to notice is that we now have 8 vertex pipelines, up from 6. We still aren't vertex-processing limited (except in the workstation market), but this 33% upgrade in vertex power will help keep the extra pixel pipelines fed, as well as handle any added vertex load developers throw at games in the near future. There are plenty of beautiful things that can be done with shaders that we aren't yet seeing in games, such as parallax and relief mapping, as well as extended use of geometry instancing and vertex texturing.

Moving on to pixel pipelines, we see a 50% increase in the number of pipelines packed under the hood. Each of the 24 pixel pipes is also more powerful than those of NV4x; we will cover just why that is a little later on. For now, though, it is interesting to note that we do not see an increase in the 16 ROPs. These pipelines take the output of the fragment crossbar (which aggregates all of the pixel shader output) and finalize the rendering process. It is here that MSAA is performed, as well as the color and z/stencil operations. Not matching the number of ROPs to the number of pixel pipelines indicates that NVIDIA feels its fill rate and ability to handle current and near-future resolutions is not an issue that needs to be addressed in this incarnation of the GeForce. As NVIDIA's UltraShadow II technology is driven by the hardware's ability to handle twice as many z operations per clock when a z-only pass is performed, this also means that we won't see improved performance in this area.
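As a quick sanity check on the unit counts quoted above (NV4x figures as described: 6 vertex pipes, 16 pixel pipes, 16 ROPs), the percentages work out as follows:

```python
# Unit counts from the text; simple arithmetic check of the quoted increases.
nv4x = {"vertex": 6, "pixel": 16, "rop": 16}
g70 = {"vertex": 8, "pixel": 24, "rop": 16}

for unit in ("vertex", "pixel", "rop"):
    pct = 100 * (g70[unit] - nv4x[unit]) / nv4x[unit]
    print(f"{unit}: {nv4x[unit]} -> {g70[unit]} ({pct:+.0f}%)")
# vertex: 6 -> 8 (+33%)
# pixel: 16 -> 24 (+50%)
# rop: 16 -> 16 (+0%)
```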

If NVIDIA is correct in their guess (and we see no reason to think they are wrong), we will see increasing amounts of processing being done per pixel in future titles. This means that each pixel will spend more time in the pixel pipeline. In order to keep the ROPs busy in light of a decreased output flow from a single pixel pipe, the ratio of pixel pipes to ROPs can be increased. This is in accord with the situation we've already described.

As common resolutions increase, the ROPs will need to be driven harder, though this can also be mitigated by increases in clock frequency. We will also need more ROPs if the number of pixel pipelines grows enough to saturate the fragment crossbar in spite of the increased time each pixel spends being shaded.
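The balance argument in the last few paragraphs can be made concrete with a toy throughput model. The cycles-per-pixel figures below are our own illustrative numbers, not measured shader costs:

```python
def pixels_per_clock(pixel_pipes, rops, shader_cycles_per_pixel):
    """Overall throughput is set by the slower of the two stages."""
    shader_rate = pixel_pipes / shader_cycles_per_pixel  # pixels leaving the shaders per clock
    rop_rate = rops  # assume each ROP finishes one pixel per clock
    return min(shader_rate, rop_rate)

# Short shaders: the 16 ROPs are the bottleneck.
assert pixels_per_clock(24, 16, 1) == 16
# At 1.5 cycles per pixel, the 24:16 design is exactly balanced.
assert pixels_per_clock(24, 16, 1.5) == 16
# Longer shaders: the pixel pipes limit throughput instead, and the ROPs idle.
assert pixels_per_clock(24, 16, 2) == 12
```

In other words, under this simple model the 24-pipe/16-ROP split stops leaving ROPs idle once shaders average about 1.5 cycles per pixel — exactly the "more work per pixel" future NVIDIA is betting on.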

127 Comments


  • Johnmcl7 - Wednesday, June 22, 2005 - link

    If they're too busy for the article, that's fair enough; the point is they should put it up when they've had time to check it over, rather than rush up an article that isn't ready to be published.

    John
  • IronChefMoto - Wednesday, June 22, 2005 - link

    Regarding the "shame on Anandtech" comments -- y'all ever think they were too busy sh*tting themselves at the performance of this card to really pay that much attention to the article? ;-)

    IronChefMorimoto
  • Johnmcl7 - Wednesday, June 22, 2005 - link

    The prices I've seen here in the UK for the 7800s are around 400 pounds; the 6800 Ultras are currently around 300 pounds. So quite an increase over the NV40s, but not unacceptable given the performance. I'm sure they'll come down in price once the early adopters have had their fill.

    John
  • yacoub - Wednesday, June 22, 2005 - link

    #26 - You must be new to the market, relatively speaking. I remember quite well the days when high-end new videocards were at MOST $400, usually $350 or less when they debuted. It was more than a year or two ago though, so it might have been before your time as a PC gamer.
  • rimshot - Wednesday, June 22, 2005 - link

    Not sure why the price is so high in North America; here in Aus you can get a 7800GTX for the same price as a 6800GT ($850AU).

  • nitromullet - Wednesday, June 22, 2005 - link

    "What no Crossfire benchies? I guess they didn't wany Nvidia to loose on their big launch day."

    Ummm... maybe because CrossFire was paper launched at Computex, and no one (not even AT) has a CrossFire rig to benchmark? nVidia is putting ATI to shame with this launch and the availability of the cards. Don't you think if ATI had anything worth a damn to put out there they would?

    All that aside... I was as freaked out as the rest of you by these benchmarks at first (well, more so than some, actually, because I just pulled the $600 trigger last night on an eVGA 7800GTX from the egg). However, these graphs are clearly messed up, and some appear to have already been fixed. I guess someone should have cut Derek off at the launch party yesterday.
  • blckgrffn - Wednesday, June 22, 2005 - link

    Very disappointed in the fit and finish of this article. Anandtech is supposed to have the best one, not a half-baked one :( I even liked HardOCP's better, even with their weird change-the-levels-of-everything approach - at least it has a very good discussion of the differences between MS and SS AA and shows some meaningful results at high res as well.

    Shame on Anandtech :(
  • fishbits - Wednesday, June 22, 2005 - link

    Good release.

    Can we get a couple of screen shots with the transparency AA?

    "Maybe this one of the factors that will lead to the Xbox360/PS3 becoming the new gaming standard as opposed to the Video Card market pushing the envelope."
    Yeah, because the graphics components in consoles don't require anything but three soybeans and a snippet of twine to make. They're ub3r and free! Wait, no, you pay for them too eventually even if not in the initial console purchase price. Actually I think the high initial price of next gen graphics cards is a sign of health for PC gaming. There are some folks not only willing to pay high dollars for bleeding edge performance, they're willing to pay even higher dollars than they were in the past for the top performers. Spurs ATI/Nvidia to keep the horsepower coming, which drives game devs to add better and better graphics, etc.

    "They only reveresed a couple of labels here and there, chill out. It's still VERY OBVIOUS which card is which just by looking at the performance!"
    Eh, I use benchmarks to learn more about a product than what my pre-conceived notions tell me it "ought" to be. I don't use my pre-conceived notions to accept and dismiss scientific benchmarks. If the benches are wrong, it is a big deal. Doesn't require ritual suicide, just fixing and maybe better quality control in the future.
  • Thresher - Wednesday, June 22, 2005 - link

    2x6800GT costs almost the same amount as this single card and gives up nothing in performance.

    The price of this thing is ridiculous.
  • rubikcube - Wednesday, June 22, 2005 - link

    Just wanted to say thanks for starting your benchmarks at 1600x1200. It really makes a difference in the usability of the benchmarks.
