Pipeline Layout and Details

The general layout of the pipeline is very familiar. We have some number of vertex pipelines feeding through a setup engine into a number of pixel pipelines. After fragment processing, data is sent to the back end for things like fog, alpha blending and Z compares. The hardware can easily be scaled down at multiple points; vertex pipes, pixel pipes, Z compare units, texture units, and the like can all be scaled independently. Here's an overview of the high end case.



The X1000 series supports a maximum of 8 vertex pipelines; mid-range and budget parts incorporate 5 and 2 vertex units respectively. Each vertex pipeline is capable of one scalar and one vector operation per clock cycle. The hardware supports shader programs of up to 1024 instructions, but flow control for looping and branching allows much more to be done within that limit.
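
For a rough sense of scale, the sketch below tallies the peak vertex issue rate implied by that arrangement. It is only back-of-envelope arithmetic, and the 600 MHz core clock is an assumed figure for illustration rather than a specification from the article; each vector or scalar issue is counted as one operation.

    # Back-of-envelope peak vertex issue rate for the high-end configuration.
    # The 600 MHz core clock is an assumption for illustration; actual clocks vary by SKU.
    VERTEX_PIPES = 8        # maximum in the X1000 series
    ISSUES_PER_PIPE = 2     # one vector + one scalar operation per clock
    CORE_CLOCK_HZ = 600e6   # assumed core clock

    peak_ops = VERTEX_PIPES * ISSUES_PER_PIPE * CORE_CLOCK_HZ
    print(f"Peak vertex issue rate: {peak_ops / 1e9:.1f} billion ops/second")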



After leaving the vertex pipelines and geometry setup hardware, the data makes its way to the "ultra threading" dispatch processor. This block of hardware is responsible for keeping the pixel pipelines fed and for managing which threads are active and running at any given time. Since graphics architectures are inherently very parallel, quite a bit of scheduling work within a single thread can be done by the compiler. But as shader code is actually running, an instruction may need to wait on data from a texture fetch that hasn't completed, or on a branch whose outcome is yet to be determined. In these cases, rather than letting clock cycles pass without doing any work, ATI can run the next set of instructions from another "thread" of data.
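
To make the idea concrete, here is a minimal Python sketch of latency hiding by thread switching. It is not ATI's actual scheduler, just a toy model in which each thread issues one ALU instruction and then stalls on a texture fetch for an assumed 100 cycles; with enough threads in flight, the ALU almost never sits idle.

    from collections import deque

    FETCH_LATENCY = 100   # assumed cycles for a texture fetch to return

    class Thread:
        def __init__(self, tid):
            self.tid = tid
            self.ready_at = 0          # cycle at which this thread can issue again

    def alu_utilization(num_threads, total_cycles=10_000):
        threads = deque(Thread(i) for i in range(num_threads))
        busy_cycles = 0
        for cycle in range(total_cycles):
            # Look for any thread whose outstanding fetch has completed.
            for _ in range(len(threads)):
                t = threads[0]
                threads.rotate(-1)     # round-robin through the thread pool
                if t.ready_at <= cycle:
                    busy_cycles += 1   # issue one ALU instruction...
                    t.ready_at = cycle + FETCH_LATENCY  # ...then stall on a fetch
                    break
        return busy_cycles / total_cycles

    for n in (1, 16, 128):
        print(f"{n:4d} threads in flight -> ALU busy {alu_utilization(n):.0%} of cycles")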

Threads are made up of 16 pixels each, and up to 512 can be managed at one time (128 in mid-range and budget hardware). These threads aren't exactly like traditional CPU threads, as programmers do not have to create each one explicitly. With graphics data, even with only one shader program running, a screen is automatically divided into many "threads" running the same program. When managing multiple threads, rather than requiring a context switch to process a different set of instructions running on different pixels, the GPU can keep multiple contexts open at the same time. To make a viable number of registers available to each of 512 threads, the hardware needs a huge internal register file. But keeping as many threads, pixels, and instructions in flight as possible is key to managing and effectively hiding latency.
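
To see why the register file has to be large, consider a rough sizing estimate; the number of live registers per pixel and the register width below are assumptions chosen purely for illustration.

    THREADS           = 512   # maximum threads in flight (high-end parts)
    PIXELS_PER_THREAD = 16
    REGS_PER_PIXEL    = 4     # assumed live temporaries per pixel
    BYTES_PER_REG     = 16    # assumed 4 x 32-bit floating point components

    total_bytes = THREADS * PIXELS_PER_THREAD * REGS_PER_PIXEL * BYTES_PER_REG
    print(f"Register file needed: {total_bytes // 1024} KB")   # 512 KB under these assumptions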

NVIDIA doesn't explicitly talk about hardware analogous to ATI's "ultra threading dispatch processor", but they must certainly have something to manage active pixels as well. We know from our previous NVIDIA coverage that they are able to keep hundreds of pixels in flight at a time in order to hide latency. It would not be possible or practical to give the driver complete control of scheduling and dispatching pixels as too much time would be wasted deciding what to do next.

We won't be able to answer specifically the question of which hardware is better at hiding latency. The hardware is so different that instructions end up taking very different paths through NVIDIA and ATI designs. Scheduling quads, pixels, and instructions is one of the most important tasks a GPU performs. Latency can be very high for some data, and there is no excuse to let the vast parallelism of the hardware and the dataset go to waste rather than using it to hide that latency. Unfortunately, there is just no test currently available to determine which hardware's method of scheduling is more efficient. All we can really do for now is look at the final performance offered in games to see which design appears "better".

One thing that we do know is that ATI is able to keep loop granularity smaller with their 16 pixel threads. Dynamic branching is dependent on the ability to do different things on different pixels. The efficiency of an algorithm breaks down if the hardware requires too many pixels to follow the same path through a program. At the same time, the hardware gets more complicated (or performance breaks down) if every pixel is treated completely independently.

On NVIDIA hardware, programmers need to be careful to design shader programs so that roughly a thousand pixels at a time take the same path through a shader. Performance is reduced if different directions through a branch need to be taken in small blocks of pixels. With ATI, every block of 16 pixels can take a different path through a shader. On G70 based hardware, blocks of a few hundred pixels should optimally take the same path, while NV4x hardware requires larger blocks still, nearer to 900 in size. The tighter granularity possible on ATI hardware gives developers more freedom in how they design their shaders and take advantage of dynamic branching and flow control: designing shaders to handle 32x32 blocks of pixels is more difficult than only needing to worry about 4x4 blocks.
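
A simplified model shows why that granularity matters. Suppose 1% of pixels, scattered randomly across the screen, take a branch; any block containing even one such pixel has to execute the branched path. The 1% figure and the block sizes other than 16 are assumptions for illustration, not measurements of either vendor's hardware.

    TAKEN_FRACTION = 0.01   # assumed fraction of pixels that take the branch

    def blocks_hitting_branch(block_pixels, p=TAKEN_FRACTION):
        """Probability that a block contains at least one pixel taking the branch."""
        return 1 - (1 - p) ** block_pixels

    for pixels in (16, 256, 1024):   # 4x4 blocks vs coarser granularities
        share = blocks_hitting_branch(pixels)
        print(f"{pixels:5d}-pixel blocks: {share:.0%} must execute the branch path")

In this model, only around 15% of 16-pixel blocks pay for the branch, while at a granularity of roughly a thousand pixels essentially every block does.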

After the code is finally scheduled and dispatched, we come to the pixel shader pipeline. ATI tightly groups pixel pipelines into quads and calls each such block a quad pixel shader core. This naming reflects the tight grouping into quads that we already assumed existed on previous hardware.



Each pixel pipe in a quad is able to handle 6 instructions per clock. This is essentially the same as R4xx hardware, except that ATI is now able to accommodate dynamic branching on dedicated branch hardware. The 2 scalar, 2 vector, 1 texture per clock arrangement has evidently worked well enough for ATI in the past that they have stuck with it again, only adding 1 branch operation that can be issued in parallel with the other 5 instructions.
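
A quick tally of those issue slots across a full 16-pipe part gives the peak per-clock instruction rate. This is simply the arithmetic implied by the slot counts above; real throughput depends entirely on the instruction mix.

    PIXEL_PIPES = 16                       # high-end X1800 configuration
    SLOTS_PER_PIPE = {"vector ALU": 2, "scalar ALU": 2, "texture": 1, "branch": 1}

    per_pipe = sum(SLOTS_PER_PIPE.values())          # 6 issue slots per pipe per clock
    print(f"Per pipe:  {per_pipe} instructions/clock")
    print(f"Full chip: {PIXEL_PIPES * per_pipe} instructions/clock")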

Of course, branches won't happen nearly as often as math and texture operations, so this hardware will likely be idle most of the time. In any case, having separate branch hardware that can work in parallel with the rest of the pipeline does make relatively tight loops more efficient than they would be if no other work could be done while a branch was being handled.

All in all, one of the more interesting things about the hardware is its modularity. ATI has been very careful to make each block of the chip independent of the rest. With the high end hardware, as much of everything is packed in as possible, but with their mid-range solution they are much more frugal. The X1600 line will incorporate 3 quads with 12 pixel pipes alongside only 4 texture units and 8 Z compare units. Contrast this with the X1300 and its 4 pixel pipes, 4 texture units and 4 Z compare units, and with the "16 of everything" X1800, and we can see that the architecture is quite flexible at every level.
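
Collecting the unit counts from the paragraph above into one place makes the independent scaling easy to see; the snippet below just restates those numbers and computes the ratio of pixel pipes to texture units for each part.

    CONFIGS = {
        "X1800": {"pixel_pipes": 16, "texture_units": 16, "z_units": 16},
        "X1600": {"pixel_pipes": 12, "texture_units": 4,  "z_units": 8},
        "X1300": {"pixel_pipes": 4,  "texture_units": 4,  "z_units": 4},
    }

    for name, c in CONFIGS.items():
        ratio = c["pixel_pipes"] // c["texture_units"]
        print(f"{name}: {c['pixel_pipes']:2d} pixel pipes, {c['texture_units']:2d} texture units, "
              f"{c['z_units']:2d} Z compare units ({ratio}:1 pipes to texture units)")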

103 Comments

  • Gigahertz19 - Wednesday, October 5, 2005 - link

    From the last page, I'll quote:

    "With its 512MB of onboard RAM, the X1800 XT scales especially well at high resolutions, but we would be very interested in seeing what a 512MB version of the 7800 GTX would be capable of doing."

    Based on the results in the benchmarks, I would say the 512MB barely does anything. Look at the benchmarks on Page 10: the GeForce 7800 GTX either beats the X1800 XT or loses by less than 1 FPS. SCALES WELL AT HIGH RESOLUTIONS? Not really. Has the author of this article looked at their own benchmarks? At 2048 x 1536 the 7800 GTX creams the competition, except in Far Cry where it loses by 0.2 FPS to the X1800 XT and in Splinter Cell where it loses by 0.8 FPS, so basically it's a tie in those two games.

    You know why Nvidia does not have a 512MB version? Because look at the results... it does shit. 512MB is pointless right now, and if you argue that you'll use it in the future, then wait till future games use it and buy the best GPU then, not now. These new ATIs blow wookies, so much for competition.
  • NeonFlak - Wednesday, October 5, 2005 - link

    "In some cases, the X1800 XL is able to compete with the 7800 GTX, but not enough to warrant pricing on the same level."

    From the graphs in the review with all the cards present, the X1800 XL only beat the 7800 GT once, by 4 FPS... So beating the 7800 GT in one graph by 4 FPS makes that statement even viable?
  • FunkmasterT - Wednesday, October 5, 2005 - link

    EXACTLY!!

    ATI's FPS numbers are a major disappointment!
  • Questar - Wednesday, October 5, 2005 - link

    Unless you want image quality.
  • bob661 - Wednesday, October 5, 2005 - link

    And the difference is worth the extra $100 PLUS the "lower" frame rates? Not good bang for the buck.
  • Powermoloch - Wednesday, October 5, 2005 - link

    Not the cards....Just the review. Really sad :(
  • yacoub - Wednesday, October 5, 2005 - link

    So $450 for the X1800 XL versus $250 for the X800 XL, and the only difference is the new core that maybe provides a handful of additional frames per second, a new AA mode, and Shader Model 3.0?

    Sorry, that's not worth $200 to me. Not even close.
  • coldpower27 - Thursday, October 6, 2005 - link


    Perhaps up to a 20% performance improvement, looking at pixel fillrate alone.
    Shader Model 3.0 Support.
    ATI's Avivo Technology
    OpenEXR HDR Support.
    HQ Non-Angle Dependent AF User Choice

    You decide if that's worth the $200 US price difference to you. Adaptive AA I wouldn't count, as apparently through ATI's drivers all R3xx hardware and higher now has this capability, not just R5xx derivatives, sort of like Temporal AA, which launched with R4xx.
  • yacoub - Wednesday, October 5, 2005 - link

    So even if these cards were available in stores/online today, the best PCI-E card one can buy for ~$250 is still either an X800 XL or a 6800 GT. (Or an X800 GTO2 for $230, flashed and overclocked.)

    I find it disturbing that they even waste the time to develop, let alone release, low-end parts that can't even compete on price. Why bother wasting the development and production effort to create a card that costs more and performs less? What a joke those two lower-end cards are (X1300 and X1600).
  • coldpower27 - Thursday, October 6, 2005 - link

    The Radeon X1600 XT is intended to replace the older X700 Pro, not the stopgap 6600 GT competitors, the X800 GT and X800 GTO, which only came into being because ATI had leftover supplies of R423/R480 cores (and, for the X800 GTO, R430 cores), and because the X700 Pro wasn't really competitive in performance with the 6600 GT in the first place, due to ATI's reliance on low-k technology for their high clock frequencies.

    I think these are successful replacements.

    Radeon X850/X800 is replaced by Radeon X1800 Technology.
    Radeon X700 is replaced by Radeon X1600 Technology.
    Radeon X550/X300 is replaced by Radeon X1300 Technology.

    X700 is 156mm2 on 110nm, X1600 is 132mm2 on 90nm
    X550 & X1300 are roughly the same die size, sub-100mm2.

    Though the newer cards use more expensive memory types on their high end versions.

    They also finally bring ATI's entire family to the same feature set, something I believe hasn't been done by ATI before: a high end, mainstream & budget core all based on the same technology.

    Nvidia achieved this first with the GeForce FX line.
