The Pixel Shader Engine

On par with what we have seen from NVIDIA, ATI's top-of-the-line card offers a GPU with a 16x1 pixel pipeline architecture, meaning it can render up to 16 single-textured pixels in parallel per clock. As previously alluded to, R420 divides its pixel pipes into groups of four called quads. The new line of ATI GPUs will offer anywhere from one to four quad pipelines. The R3xx architecture offers an 8x1 pixel pipeline layout (grouped into two quad pipelines), delivering half of R420's pixel processing power per clock. For both R420 and R3xx, certain resources are shared among the individual pixel pipelines within each quad. Sharing local memory among quad members makes a lot of sense, as pixels near each other on the screen should have data (especially texture data) with a high locality of reference. At this level of abstraction, things are essentially the same as in NV40's architecture.
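For a rough picture of why that sharing pays off, here is a minimal Python sketch. The 2x2 quad footprint and the nearest-neighbor, roughly 1:1 pixel-to-texel mapping are illustrative assumptions for this example, not a description of ATI's actual cache design:

```python
# Toy illustration: the four pixels of a 2x2 quad tend to sample texels that
# sit next to each other, so one small shared texture cache can serve all four.
# The nearest-neighbor, 1:1 pixel-to-texel mapping here is an assumption made
# purely for illustration.

def quad_pixels(x, y):
    """Screen coordinates of the 2x2 quad whose top-left pixel is (x, y)."""
    return [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]

def texel_for(px, py, u_scale=1.0, v_scale=1.0):
    """Nearest texel a pixel samples under a simple 1:1-ish mapping."""
    return (int(px * u_scale), int(py * v_scale))

quad = quad_pixels(100, 40)
texels = {texel_for(px, py) for (px, py) in quad}
print(texels)   # four adjacent texels -> high locality of reference within the quad
```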

Of course, it isn't enough to just look at how many pixel pipelines are available: we must also discover how much work each pipeline is able to get done. As we saw in our final analysis of what went wrong with NV3x, the internals of a shader unit can have a very large impact on the ability of the GPU to schedule and execute shader code quickly and efficiently.

At our first introduction, the inside of R420's pixel pipeline was presented as a collection of 2 vector units, 2 scalar units, and one texture unit that can all work in parallel. We've already seen the two-math-plus-one-texture layout of NV40's pixel pipeline, but does this mean that R420 will be able to completely blow NV40 out of the water? In short, no: it's all about what kind of work these different units can do.

Lifting up the hood, we see that ATI has taken a different approach to presenting its architecture than NVIDIA. ATI's presentation of 2 vector units (which are 3-wide, at 72 bits), 2 scalar units (24 bits each), and a texture unit may be more reflective of their implementation than what NVIDIA has shown (but we really can't know this without many more low-level details). NVIDIA's hardware isn't quite as straightforward as it may end up looking to software. The fact is that we could look at the shader units in NV40's pixel pipeline in the same way as ATI's hardware (with the exception that the texture unit shares some functionality with one of the math units). We could also look at the NV40 architecture as being four 2-wide vector units or two 4-wide vector units (though this is still an oversimplification, as there are special cases NVIDIA's compiler can exploit that allow more work to be done in parallel). If ATI had decided to present its architecture in the same way as NVIDIA, we would have seen 2 shader math units and one completely independent texture unit.
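A quick bit of arithmetic, assuming the 24-bit fp24 component precision ATI uses internally, shows where those 72-bit and 24-bit figures come from:

```python
# Quick arithmetic behind ATI's description of the units (assuming fp24
# components, the internal pixel shader precision on R3xx/R420).
FP24_BITS = 24
vector_unit_width = 3 * FP24_BITS   # a 3-wide vector unit -> 72 bits, as stated above
scalar_unit_width = 1 * FP24_BITS   # the scalar unit handles the 4th component / scalar ops
print(vector_unit_width, scalar_unit_width)   # 72 24
# A 4-component (RGBA) operation can therefore be split 3+1 across the vector
# and scalar units -- the classic "co-issue" arrangement.
```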

In order to gain a better understanding, here are diagrams of the parallelism and functionality of the shader units within the pixel pipelines of R420 and NV40:


ATI has essentially three large blocks that can push up to 5 operations per clock cycle.


NV40 can be seen as two blocks of a more amorphous configuration (but there are special cases that allow some of these parts to work at the same time within each block).

Interestingly enough, there haven't been any changes to the block diagram of a pixel pipeline at this level of detail from R3xx to R420.

The big difference in the pixel pipe architectures that gives the R420 GPU a possible upper hand in performance over NV40 is that texture operations can be carried out entirely in parallel with the math units. When NV40 needs to execute a texture operation, it loses much of its math processing power (the texturing unit cannot operate totally independently of the first shader unit in the NV40 pixel pipeline). This is also a feature of R3xx that carried over to R420.

Understanding what this all means in terms of shader performance depends on the kind of code developers end up writing. We wanted to dedicate some time to hand-mapping some shader code to both architectures' pixel pipelines in order to explain how each GPU handles different situations. Trial and error has led us to the conclusion that video card drivers have their work cut out for them when trying to optimize code, especially for NV40. There are multiple special cases that allow NVIDIA's architecture to schedule instructions during texturing operations on the shared math/texture unit, and some of the "OR" cases from our previous diagram of parallelism can be massaged into "AND" cases when the right instructions are involved. This also indicates that performance gains from compiler optimizations could be in NV40's future.
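To make the difference concrete, here is a deliberately simplified issue model in Python. It is our own toy sketch, not either vendor's real scheduler: it simply counts how many clocks a mixed math/texture instruction stream needs per pipeline when the texture unit is fully independent, versus when a texture fetch ties up one of the two math units for that clock.

```python
# Toy per-pipeline issue model (not either vendor's actual scheduler):
# (a) texture unit fully independent of the math units, versus
# (b) a texture fetch also occupies one math unit for that clock.

def clocks_needed(stream, texture_shares_math_unit):
    clocks, i = 0, 0
    while i < len(stream):
        math_free, tex_free = 2, 1          # two math units + one texture unit per pipe
        while i < len(stream):
            op = stream[i]
            if op == "math" and math_free > 0:
                math_free -= 1
            elif op == "tex" and tex_free > 0 and (not texture_shares_math_unit or math_free > 0):
                tex_free -= 1
                if texture_shares_math_unit:
                    math_free -= 1          # the fetch also occupies a math unit
            else:
                break                       # this op waits for the next clock
            i += 1
        clocks += 1
    return clocks

# A shader with a little more math than texturing -- the common case noted below.
shader = ["math", "math", "tex", "math", "math", "math", "tex", "math"]
print("independent texture unit:", clocks_needed(shader, False))   # 3 clocks
print("shared texture/math unit:", clocks_needed(shader, True))    # 4 clocks
```

With the short stream above, the independent-texture model finishes in three clocks while the shared-unit model needs four, which is the flavor of difference we are describing.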

Generally, when running code with mixed math and texturing (with a little more math than texturing) ATI will lead in performance. This case is probably the most indicative of real code.

The real enhancements to the R420 pixel pipeline are deep within the engine. ATI hasn't disclosed to us the number of internal registers its architectures have, how many pixels each GPU can maintain in flight at any given time, or even cache hit/miss latencies. We do know that, in addition to the resources available to developers on R420 (32 constant registers, 32 temporary registers up from 12, and longer shaders with an instruction limit somewhere between 512 and 1536 depending on what's being counted), the number of internal registers has increased and the maximum number of pixels in flight has increased. These facts are really important in understanding performance. The fundamental layout of the pixel pipelines in R420 and NV40 is not that different; the underlying hardware is where the power comes from. In this case, the number of internal pipeline stages in each pixel pipeline and the ability of the hardware to hide the latency of a texture fetch are of the utmost importance.
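For reference, the developer-visible limits mentioned above can be summarized as follows. The PS 2.0 baseline figures are the standard ones; the 1536 upper bound assumes vector, scalar, and texture instructions are counted as separate 512-entry slots, which is our reading of ATI's quoted range:

```python
# Developer-visible pixel shader limits discussed above.  The PS 2.0 baseline
# numbers (12 temporaries, 64 ALU + 32 texture instructions) are the standard
# ones; the "512-1536" range is as quoted in the text above.
shader_limits = {
    "R3xx (PS 2.0 baseline)": {"temp_regs": 12, "const_regs": 32, "instructions": 64 + 32},
    "R420 (extended 2.0)":    {"temp_regs": 32, "const_regs": 32, "instructions": "512-1536"},
}
for part, caps in shader_limits.items():
    print(f"{part}: {caps}")
```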

The bottom line is that R420 has the potential to execute more PS 2.0 instructions per clock than NV40 in the pixel pipeline because of the way it handles texturing. Even though NVIDIA's scheduler can help to allow more math to be done in parallel with texturing, NV40's texture and math parallelism only approaches that of ATI. Combine that with the fact that R420 runs at a higher clock speed than NV40, and even more pixel shader work can get done in the same amount of time on R420 (which translates into the possibility of frames being rendered faster under the right conditions).
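A back-of-the-envelope calculation illustrates the clock-speed point, using the roughly 520 MHz and 400 MHz core clocks of the top parts and treating ATI's "up to 5 operations per pipe per clock" figure as an idealized peak rather than a sustained rate:

```python
# Idealized peak shader throughput; ignores scheduling limits, texture latency,
# and register pressure.  Clock speeds are those of the top SKUs (roughly
# 520 MHz for the X800 XT versus 400 MHz for the 6800 Ultra).
pipes = 16
r420_clock_hz = 520e6
nv40_clock_hz = 400e6
ops_per_pipe_per_clock = 5                  # ATI's "up to 5 operations" figure

r420_peak_ops = pipes * r420_clock_hz * ops_per_pipe_per_clock
print(f"R420 idealized peak: {r420_peak_ops / 1e9:.1f} billion shader ops per second")
print(f"Clock advantage alone: {r420_clock_hz / nv40_clock_hz - 1:.0%}")     # ~30%
```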

Of course, when working with fp32 data, NV40 is doing 25% more "work" per operation, and it's likely that the support for fp32 from the front of the shader pipeline to the back contributes greatly to the gap in transistor count (as well as to the performance numbers). When fp16 is enabled on NV40, internal register pressure is reduced, and less work is being done than in fp32 mode. This results in improved performance for NV40, but questions abound as to real-world image quality from NVIDIA's compiler and precision-optimized shaders (we are currently exploring this issue and will be following up with a full image quality analysis of now-current generation hardware).
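The register-pressure effect is easy to see with toy numbers. The actual size of NV40's register file is not public, so the 4 KB figure below is purely illustrative:

```python
# Why fp16 relieves register pressure (toy numbers -- the real per-pipe register
# file size is not public): with a fixed storage budget, halving the bits per
# component doubles how many 4-component temporaries fit, which in turn lets
# more pixels stay in flight to hide texture fetch latency.
register_file_bytes = 4096                      # assumed, illustrative only

def temps_that_fit(bits_per_component):
    bytes_per_temp = 4 * bits_per_component // 8   # 4 components per register
    return register_file_bytes // bytes_per_temp

print("fp32 temporaries:", temps_that_fit(32))  # 256
print("fp16 temporaries:", temps_that_fit(16))  # 512
```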

As an extension of the fp32 vs. fp24 vs. fp16 debate, NV40's support of Shader Model 3.0 puts it at a slight performance disadvantage. By supporting fp32 all the way through the shader pipeline, flow control, fp16 to the framebuffer, and all the other bells and whistles that have come along for the ride, NV40 adds complexity to the hardware and size to the die. The downside for R420 is that it now lags behind on the feature-set front. As we pointed out earlier, the only really new features of the R420 pixel shaders are higher instruction-count shader programs, 32 temporary registers, and a polygon facing register (which can help enable two-sided lighting).

To round out the enhancements to the R420's pixel pipeline, ATI's F-Buffer has been tweaked. The F-Buffer is what ATI calls the memory that stores pixels which have come out of the pixel shader but still require another pass (or more) through the pixel shader pipeline in order to finish being processed. Since the F-Buffer can require anywhere from no memory to enough memory to handle every pixel coming down the pipeline, ATI has built "improved" memory management hardware into the GPU rather than relegating this task to the driver.
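Conceptually, the multipass mechanism behaves something like the following Python sketch. This is a functional analogy only, not ATI's hardware design: pixels that still need another trip through the shader are written out in order and fed back in until they finish.

```python
# Conceptual sketch of multipass via an F-Buffer-style spill (a functional
# analogy, not ATI's actual hardware mechanism): unfinished pixels are written
# out in order and re-fed through the pipeline until their work is done.
from collections import deque

def run_shader_pass(pixel):
    """Stand-in for one trip through the pixel shader; returns (pixel, done?)."""
    pixel["passes_left"] -= 1
    return pixel, pixel["passes_left"] == 0

pixels = deque({"id": i, "passes_left": 1 + (i % 3)} for i in range(6))
f_buffer = deque()
finished = []
while pixels or f_buffer:
    current = pixels if pixels else f_buffer          # drain the F-Buffer between passes
    pixel, done = run_shader_pass(current.popleft())
    (finished if done else f_buffer).append(pixel)
print(len(finished), "pixels completed over multiple passes")
```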

Comments

  • TrogdorJW - Tuesday, May 4, 2004 - link

    Nice matchup we've got here! Just what we were all hoping for. Unfortunately, there are some disappointing trends I see developing....

    In ShaderMark 2.0, we see many instances where the R420 is about 25% faster than the NV40. Let's see... 520 MHz vs 400 MHz. 'Nuf said, I think. Too bad for Nvidia that they have 222 million transistors, so they're not likely to be able to reach 500 MHz any time soon. (Or if they can, then ATI can likely reach 600+ MHz.)

    How about the more moderately priced card matchup? The X800 Pro isn't looking that attractive at $400. 25% more price gets you 33% more pipelines, which will probably help out on games that process a lot of pixels. And the Pro also has 4 vertex pipelines compared to 6? The optimizations make it better than a 9800XT, but not by a huge margin. The X800 SE with 8 pipelines is likely going to be about 20% faster than a 9800XT. Hopefully, it will come in at a $200 price point, but I'm not counting on that for at least six months. (Which is why I recently purchased a $200 9800 Pro 128.)

    The Nvidia lineup is currently looking a little nicer. The 6800 Ultra, Ultra Special, and GT all come with 16 pipelines, and there's talk of a lower clocked card for the future. If we can get a 16 pipeline card (with 6 vertex pipelines) for under $250, that would be pretty awesome. That would be a lot like the 5900 XT cards. Anyone else notice how fast the 9800 Pro prices dropped when Nvidia released the 5900 XT/SE? Hopefully, we'll see more of that in the future.

    Bottom line has to be that for most people, ATI is still the choice. (OpenGL gamers, Linux users, and professional 3D types would still be better with Nvidia, of course.) After all, the primary benefit of NV40 over R420 - Shader Model 3.0 - won't likely come into play for at least six months to a year. Not in any meaningful way, at least. By then, the fall refresh and/or next spring will be here, and ATI could be looking at SM3.0 support. Of course, adding SM3 might just knock the transistor counts of ATI's chips up into the 220 million range, which would kill their clock speed advantage.

    All told, it's a nice matchup. I figure my new 9800 Pro will easily last me until the next generation cards come out, though. By then I can look at getting an X800 Pro/XT for under $200. :)
  • NullSubroutine - Tuesday, May 4, 2004 - link

    I forgot to ask if anyone else noticed a huge difference (almost double) between AnandTech's Unreal Tournament 2003 scores and those of Tom's Hardware?

    (It's not the CPU difference, because the A64 3200+ had a baseline score of ~278 and the 3.2 P4 had ~247 in a previous section.)

    So what gives?
  • NullSubroutine - Tuesday, May 4, 2004 - link

    To the guy talking about the 400MHz and the 550MHz, I have this to say.

    I agree with the other guy about the transistor count.

    Don't forget that ATi's cards used to be more powerful per clock compared to Nvidia's a generation or two ago. So don't be babbling fanboy stuff.

    I would agree with that one guy (the # escapes me) about the fanboy stuff, but I said it first! On this thread anyways.
  • wassup4u2 - Tuesday, May 4, 2004 - link

    #30 & 38, I believe that while the ATI line is fabbed at TSMC, NVidia is using IBM for their NV40. I've heard also that yields at IBM aren't so good... which might not bode well for NVidia.
  • quanta - Tuesday, May 4, 2004 - link

    > #14, I think it has more to do with the fact those OpenGL benchmarks are based on a single engine that was never fast on ATI hardware to begin with.

    Not true. ATI's FireGL X2 and Quadro FX 1100 were evenly matched in workstation OpenGL tests[1], which do not use Quake engines. Considering the FireGL X2 is based on the Radeon 9800XT and the Quadro FX 1100 is based on the GeForce FX 5700 Ultra, such a result is unacceptable. If I were an ATI boss, I would have made sure the OpenGL driver team would not make such a blunder, especially when R420 still sucks in most OpenGL games compared to GeForce 6800 Ultra cards.

    [1] http://www.tomshardware.com/graphic/20040323/index...
  • AlexWade - Tuesday, May 4, 2004 - link

    From my standpoint the message is clear: nVidia is no longer THE standard in graphics cards. Why do I say that? It's half the size, it requires less power, it has fewer transistors, and the performance is about the same. Even if the performance were slightly less, ATI would still be the winner. Anyway, whatever, it's not like these benchmarks will deter the hardcore gotta-have-it-now fanboys.

    It's not like I'm going to buy either. Maybe this will lower the prices of all the other video cards. $Dreams$
  • rsaville - Tuesday, May 4, 2004 - link

    If any 6800 users are wondering how to make their 6800 run the same shadows as the 5950 in the benchmark see this post:
    http://forums.relicnews.com/showthread.php?p=39462...

    Also if you want to make your GeForceFX run the same shadows as the rest of the PS2.0 capable cards then find a file called driverConfig.lua in the homeworld2\bin directory and remove line 101 that disables fragment programs.
  • raskren - Tuesday, May 4, 2004 - link

    I wonder if this last line of AGP cards will ever completely saturate the AGP 8X bus. It would be interesting to see a true PCI-Express card compared to the same AGP 8X counterpart.

    Remember when Nvidia introduced the MX440 (or was it 460?) with an 8X AGP connector...what a joke.
  • sisq0kidd - Tuesday, May 4, 2004 - link

    That was the cheesiest line #46, but very true...
  • sandorski - Tuesday, May 4, 2004 - link

    There is only 1 clear winner here, the Consumer!

    ATI and NVidia are running neck and neck.
