The Pipeline Overview

First, let's take a second to run through NVIDIA's architecture in general. DirectX or OpenGL commands, along with HLSL and GLSL shaders, are translated and compiled by the driver for the target architecture. Commands and data are then sent to the hardware, where numbers, instructions, and artwork become a rendered frame.

The first major stop along the way is the vertex engine, where geometry is processed. Vertices can be manipulated using math and texture data, and the output of the vertex pipelines is passed down the line to the fragment (or pixel) engine. Here, every pixel on the screen is processed based on input from the vertex engine. After the pixels have been processed for all of the geometry, the final scene must be assembled from the color and z data generated for each pixel. Anti-aliasing and blending into the framebuffer are handled in what NVIDIA calls the render output pipeline (ROP), which produces the final rendered output. The short sketch below walks a single pixel through these three stages.
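To make the data flow concrete, here is a minimal toy model of the three stages in Python. It is purely conceptual: the function names and the simplifications (one vertex, one pixel, a dictionary standing in for the framebuffer) are our own illustration, not a description of how the hardware implements these stages.

    # Toy model of the pipeline stages described above (conceptual only).

    def vertex_stage(vertex, mvp):
        """Transform one vertex by a 4x4 model-view-projection matrix."""
        x, y, z, w = vertex
        return tuple(sum(mvp[r][c] * v for c, v in enumerate((x, y, z, w)))
                     for r in range(4))

    def fragment_stage(interpolated_color, texel):
        """Shade one pixel: modulate the interpolated color by a texture sample."""
        return tuple(c * t for c, t in zip(interpolated_color, texel))

    def rop_stage(framebuffer, xy, src_color, src_z, alpha):
        """Depth-test, then alpha-blend the shaded pixel into the framebuffer."""
        dst_color, dst_z = framebuffer.get(xy, ((0.0, 0.0, 0.0), 1.0))
        if src_z < dst_z:                        # z test against the stored depth
            blended = tuple(alpha * s + (1.0 - alpha) * d
                            for s, d in zip(src_color, dst_color))
            framebuffer[xy] = (blended, src_z)   # write color and z

    identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
    fb = {}
    v = vertex_stage((0.5, 0.5, 0.2, 1.0), identity)
    color = fragment_stage((1.0, 1.0, 1.0), (0.8, 0.2, 0.2))
    rop_stage(fb, (10, 10), color, src_z=v[2], alpha=1.0)
    print(fb[(10, 10)])    # ((0.8, 0.2, 0.2), 0.2)

Now that we have a general overview, let's take a look at the G70 itself.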



At 302 million transistors, the G70 is quite a large IC, and we would certainly hope that NVIDIA packed enough power into the chip to match its size. TSMC's 110nm process certainly helps with die size, but that is still quite a few transistors. Even so, the actual die area is only slightly greater than that of NV4x; in fact, NVIDIA is able to fit the same number of ICs on a single wafer.
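A quick back-of-the-envelope check shows why that is plausible. The transistor counts are the published figures for NV40 and G70; the ideal-scaling assumption (area proportional to the square of the process node) is a simplification, since real layouts never shrink perfectly, which is also why the actual die ends up marginally larger rather than marginally smaller.

    # Rough die-area sanity check. Transistor counts are published figures;
    # "ideal scaling" (area ~ node^2) is a simplifying assumption.

    nv40_transistors = 222e6    # NV40, built on a 130nm process
    g70_transistors = 302e6     # G70, built on TSMC's 110nm process

    transistor_growth = g70_transistors / nv40_transistors   # ~1.36x
    ideal_area_scale = (110 / 130) ** 2                      # ~0.72x area per transistor

    relative_die_area = transistor_growth * ideal_area_scale
    print(f"{transistor_growth:.2f}x the transistors")           # 1.36x
    print(f"~{relative_die_area:.2f}x the die area (ideal)")     # ~0.97x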



A glance at a block diagram of the hardware gives us a first look at the methods by which NVIDIA increased performance this time around.



The first thing to notice is that we now have 8 (up from 6) vertex pipelines. We still aren't vertex-processing limited (except in the workstation market), but this 33% upgrade in vertex power will help keep the extra pixel pipelines fed, as well as handle any added vertex load developers throw at games in the near future. There are plenty of beautiful things that shaders can do that we aren't yet seeing in games, such as parallax and relief mapping, as well as extended use of geometry instancing and vertex texturing.
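Here is the arithmetic behind that 33% figure, plus a rough peak-rate comparison. The pipe counts and reference clocks (400 MHz for the 6800 Ultra, 430 MHz for the 7800 GTX core) are published numbers; the one-vertex-per-pipe-per-clock peak is our simplifying assumption, since real throughput depends on vertex shader length.

    # Vertex throughput, old vs. new. One vertex per pipe per clock is an
    # idealized peak, not a measured rate.

    nv40_pipes, nv40_mhz = 6, 400    # GeForce 6800 Ultra
    g70_pipes, g70_mhz = 8, 430      # GeForce 7800 GTX (core clock)

    print(f"Pipes alone: {g70_pipes / nv40_pipes - 1:.0%} more")    # 33%
    nv40_peak = nv40_pipes * nv40_mhz * 1e6    # vertices/sec, idealized
    g70_peak = g70_pipes * g70_mhz * 1e6
    print(f"With clocks: {g70_peak / nv40_peak - 1:.0%} more")      # ~43%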

Moving on to pixel pipelines, we see a 50% increase in the number of pipelines packed under the hood. Each of the 24 pixel pipes is also more powerful than those of NV4x; we will cover just why that is a little later on. For now, though, it is interesting to note that the ROP count has not increased: there are still 16. These pipelines take the output of the fragment crossbar (which aggregates all of the pixel shader output) and finalize the rendering process. It is here that MSAA is performed, as well as the color and z/stencil operations. Not matching the number of ROPs to the number of pixel pipelines indicates that NVIDIA feels its fill rate, and its ability to handle current and near-future resolutions, is not an issue that needs to be addressed in this incarnation of the GeForce. As NVIDIA's UltraShadow II technology is driven by the hardware's ability to handle twice as many z operations per clock when a z-only pass is performed, this also means that we won't see improved performance in this area.
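Here are the peak per-clock rates those unit counts imply at the 7800 GTX's 430 MHz core clock. These are theoretical ceilings under the simplifying assumption of one operation per unit per clock, not measured numbers.

    # Theoretical peak rates implied by the unit counts, at 430 MHz.

    clock_hz = 430e6
    pixel_pipes, rops = 24, 16

    shading_peak = pixel_pipes * clock_hz    # pixels/sec into the crossbar (1-cycle shaders)
    fill_peak = rops * clock_hz              # pixels/sec out of the ROPs
    z_only_peak = 2 * rops * clock_hz        # z ops/sec in a z-only pass (UltraShadow II)

    print(f"Shading: {shading_peak / 1e9:.1f} Gpixels/s")    # 10.3
    print(f"Fill:    {fill_peak / 1e9:.1f} Gpixels/s")       # 6.9
    print(f"Z-only:  {z_only_peak / 1e9:.1f} Gz-ops/s")      # 13.8

The gap between the shading and fill peaks is the headroom NVIDIA is banking on longer shaders to absorb.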

If NVIDIA's guess is correct (and we see no reason it should be wrong), we will see increasing amounts of processing done per pixel in future titles. This means that each pixel will spend more time in the pixel pipeline. In order to keep the ROPs busy in light of a decreased output flow from any single pixel pipe, the ratio of pixel pipes to ROPs can be increased. This is in accord with the situation we've already described.
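That ratio argument reduces to one line of arithmetic. In a simplified peak-rate model (ours, not NVIDIA's), the 16 ROPs only become the bottleneck if the average shader is shorter than 24/16 cycles per pixel.

    # Break-even shader length for 24 pixel pipes feeding 16 ROPs.
    pixel_pipes, rops = 24, 16
    break_even = pixel_pipes / rops    # 1.5 cycles per pixel
    print(f"ROPs keep up whenever shaders average >= {break_even} cycles/pixel")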

ROPs will need to deliver more throughput as common resolutions increase, though this can also be mitigated by increases in clock frequency. We will also need more ROPs once the number of pixel pipelines grows large enough to saturate the fragment crossbar in spite of the increased time each pixel spends being shaded.
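To put the resolution pressure in perspective, here is a rough estimate of the fill rate various resolutions demand. The 60 fps target and the 3x overdraw factor are hypothetical figures chosen for illustration.

    # Fill-rate demand at various resolutions vs. the card's theoretical peak.
    clock_hz, rops = 430e6, 16
    peak_fill = rops * clock_hz    # ~6.9 Gpixels/s

    fps, overdraw = 60, 3.0        # assumed target frame rate and depth complexity
    for width, height in [(1280, 1024), (1600, 1200), (2048, 1536)]:
        needed = width * height * fps * overdraw
        print(f"{width}x{height}: {needed / 1e9:.2f} Gpixels/s "
              f"({needed / peak_fill:.0%} of peak fill)")

Even with generous assumptions, the ROPs have considerable headroom at today's resolutions, which is consistent with NVIDIA's decision to hold the count at 16 for now.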

Comments

  • CrystalBay - Wednesday, June 22, 2005 - link

    Does this card play Riddick smoothly @ shader 2++ ?????
  • fishbits - Wednesday, June 22, 2005 - link

    "In aboot 5 years i figure we'll be paying 1000 bucks for a video card. These prices are getting out of control, every generation is more expensive then the last. Dont make me switch to consoles damnit."

    Funny, I can't afford the very best TVs the minute they come out. Same for stereo components. But I don't cry about it and threaten "Don't make me switch to learning the ukulele and putting on my own puppet shows to entertain myself!" Every time a better component comes out, it means I get a price reduction and feature upgrade on the items that are affordable/justifiable for my budget.

    Seriously, where does the sense of entitlement come from? Do these people think they should be able to download top-of-the-line graphics cards through BitTorrent? Do they walk around Best Buy cursing out staff, manufacturers and customers for being so cruel as to buy and sell big-ass plasma TVs?

    On second thought, get your console and give up PC gaming. That way you can stop being miserable, and we can stop being miserable hearing about your misery.
  • tazdevl - Wednesday, June 22, 2005 - link

    Funny how the single card deltas here are higher than at any other site.

    Underwhelmed for the amount of money and lack of performance increase.

    Have to commend nVIDIA for ensuring retail availability at launch.
  • archcommus - Wednesday, June 22, 2005 - link

    Impressive, but I'm still happy with my X800 XL purchase for only $179. From what it seems, with a 1280x1024 display, I won't need the kind of power this card delivers for a very long time. And less than $200 compared to $600, with still excellent performance for now and the foreseeable future? Hmm, I'll take the former.
  • Lonyo - Wednesday, June 22, 2005 - link

    I would have liked some 1280x1024 benchmarks with 8xAA from the nVidia cards and 6xAA from ATi to see if it's worth getting something like a 7800GTX with 17/19" LCDs to run some super high quality settings in terms of AA/AF.
  • segagenesis - Wednesday, June 22, 2005 - link

    I'm not disappointed. For one thing, the price of current cards will likely drop now, and there will also be mid-range parts soon to choose from. I think the transparency AA is a good idea for, say... World of Warcraft. The game is loaded with alpha-tested textures, and too often you can see the blockiness of trees/grass/whatever.

    #44 - Actually are you new to the market? :) I remember when early "accelerated" VGA cards were nearly $1000. Or more.

    Everybody lambasted NVIDIA last year for the lack of product (6800GT/Ultra) to the market, so them actually making a presence this year instead of a paper launch should also be commended. Of course, now what is ATI gonna pull out of its hat?
  • KeDaHa - Wednesday, June 22, 2005 - link

    "The screenshot shows very clearly that SSAA provides quite a quality improvement over no AA"

    The difference is bloody minuscule; perhaps if you used an image SLIGHTLY larger than 640x480 to highlight the difference?
  • L3p3rM355i4h - Wednesday, June 22, 2005 - link

    Wowzers. Time to get rid of teh 9800...
  • shabby - Wednesday, June 22, 2005 - link

    In aboot 5 years i figure we'll be paying 1000 bucks for a video card. These prices are getting out of control, every generation is more expensive then the last.
    Dont make me switch to consoles damnit.
  • Xenoterranos - Wednesday, June 22, 2005 - link

    Hell, for the same price as an SLI setup I can go out and get a 23 inch cinema display... And since these cards can't handle the 30" native resolution anyway, it's a win-win. And yeah, what's up with the quality control on these benchmarks! I mean really, I almost decided to wait for the ATI next-gen part when I saw this (GeForce man since the GeForce2 GTS!)
