Beyond the Shader: Coloring Pixels

We can't ignore the last few steps in the rendering pipeline, as AMD has also updated its render back ends (analogous to NVIDIA's ROPs), which are responsible for determining the visibility of each fragment and the final color of each pixel on the screen. Beyond this, the render back ends handle compression and decompression, render to texture functionality, multiple render targets (MRTs), framebuffer formats, and, in most cases, AA.

Once again, one of the important things to note is that R600 only has four render back ends. This means we will see at most 16 pixels completed per clock, just like the R580. However, AMD has doubled the Z/stencil hardware, so the render back ends can now sustain up to 32 Z/stencil operations per clock, which improves stencil shadow performance among other things. Pure fill rate hasn't really mattered in a while, whereas Z/stencil capability remains important. Still, will only four render back ends be enough?
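
To put rough numbers on this, here's the back-of-the-envelope math; the clock speeds are our assumptions for the shipping HD 2900 XT and X1950 XTX, not figures from this page:

    #include <cstdio>

    int main() {
        // Assumed shipping core clocks in MHz (assumptions, not from this page):
        const double r600_mhz = 742.0;  // Radeon HD 2900 XT
        const double r580_mhz = 650.0;  // Radeon X1950 XTX

        // Both parts complete at most 16 pixels per clock, but R600
        // doubles Z/stencil throughput to 32 operations per clock.
        printf("R600 peak color fill: %.1f Gpixels/s\n", 16.0 * r600_mhz / 1000.0);
        printf("R580 peak color fill: %.1f Gpixels/s\n", 16.0 * r580_mhz / 1000.0);
        printf("R600 peak Z/stencil:  %.1f Gops/s\n",    32.0 * r600_mhz / 1000.0);
        return 0;
    }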

Efficiency has been improved in the render back ends, but with the shader hardware potentially completing 64 threads per clock, they will have to work hard to keep up. R600 supports floating point framebuffer formats from 11:11:10 all the way up to 128-bit FP. DX10 now requires support for eight MRTs, and R600 delivers them. We also get more efficient render to texture functionality, which should help more complex effects render faster.
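
For reference, here is a minimal sketch of what binding the full set of eight render targets looks like through the Direct3D 10 API; the device, textures, and depth view are assumed to already exist, and error handling is elided:

    #include <d3d10.h>

    // Bind eight simultaneous render targets -- the DX10 maximum.
    void BindEightMRTs(ID3D10Device* device,
                       ID3D10Texture2D* textures[8],
                       ID3D10DepthStencilView* depthView)
    {
        ID3D10RenderTargetView* views[D3D10_SIMULTANEOUS_RENDER_TARGET_COUNT];
        for (UINT i = 0; i < 8; ++i) {
            // A NULL description inherits the format and dimensions
            // of the underlying texture.
            device->CreateRenderTargetView(textures[i], NULL, &views[i]);
        }
        device->OMSetRenderTargets(8, views, depthView);
    }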

Z/Stencil Hardware

As far as Z/stencil hardware is concerned, compression has been boosted to 16:1, up from 8:1 on the X1K series. Depth tests can now be programmatically limited to a specific range, which can speed up stencil shadows. The Z-buffer is now 32-bit floating point rather than 24-bit. Hierarchical Z has been enhanced to handle some situations where it previously could not assist in rendering, and AMD has added a hierarchical stencil buffer as well.
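
That range-limited depth test corresponds to the depth bounds test OpenGL exposes through the EXT_depth_bounds_test extension. Here is a sketch of how a stencil shadow pass might use it (the bounds are placeholder values, and the function pointer must be fetched at runtime):

    #include <GL/gl.h>
    #include <GL/glext.h>

    // Fetched at runtime via wglGetProcAddress/glXGetProcAddress.
    extern PFNGLDEPTHBOUNDSEXTPROC glDepthBoundsEXT;

    // Restrict shadow volume rasterization to the depth range the
    // light can actually reach; pixels whose stored depth falls
    // outside [zMin, zMax] are rejected before any stencil update.
    void BeginShadowVolumePass(GLclampd zMin, GLclampd zMax)
    {
        glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
        glDepthBoundsEXT(zMin, zMax);
    }

    void EndShadowVolumePass()
    {
        glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
    }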

AMD is also introducing a feature called Re-Z, designed to address the problem Early-Z has with shaders that update Z data. R600 is able to check Z values both before a shader runs and again after the shader has changed the Z value. This allows AMD to throw out pixels whose updated depth puts them out of view without sending them to the render back ends for evaluation.
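
Conceptually, Re-Z just runs the depth test on both sides of the pixel shader whenever the shader is allowed to modify Z. A software-style sketch of the idea, with all names invented for illustration:

    // Illustrative sketch of Early-Z plus Re-Z scheduling; none of
    // these names reflect AMD's actual hardware.
    struct Fragment { float interpolatedZ; int x, y; };

    bool  DepthTest(float z, int x, int y);    // compare against the Z-buffer
    float RunPixelShader(const Fragment& f);   // may output a new Z
    void  SendToRenderBackEnd(const Fragment& f, float z);

    void ShadeFragment(const Fragment& f, bool shaderWritesZ)
    {
        // Early test: reject using interpolated depth before any
        // shader work is spent on the fragment.
        if (!DepthTest(f.interpolatedZ, f.x, f.y))
            return;

        float z = RunPixelShader(f);           // shader may replace Z

        if (shaderWritesZ) {
            // Re-Z: test again with the shader's output depth, so
            // fragments updated to be out of view never reach the
            // render back ends.
            if (!DepthTest(z, f.x, f.y))
                return;
        }
        SendToRenderBackEnd(f, z);
    }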

If we compare this setup with G80, we're not as worried about R600's render back ends as we are about its texture capability. G80 can complete 24 pixels per clock (four pixels per ROP partition across six partitions). Like R600, G80 is capable of 2x Z-only performance, with 48 Z/stencil operations per clock when AA is enabled. When AA is disabled, the hardware is capable of 192 Z-only samples per clock. The ratio of running threads to ROPs is actually worse on G80 than on R600. At the same time, G80 does offer a higher overall fill rate based on potential pixels per clock and clock speed.
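
Putting rough numbers on that last point (again, the clock speeds are our assumptions for the shipping 8800 GTX and 2900 XT, not figures from this page):

    #include <cstdio>

    int main() {
        const double g80_mhz  = 575.0;  // GeForce 8800 GTX (assumed clock)
        const double r600_mhz = 742.0;  // Radeon HD 2900 XT (assumed clock)

        printf("G80  peak color fill: %.1f Gpixels/s (24 px/clk)\n",
               24.0 * g80_mhz / 1000.0);    // ~13.8
        printf("R600 peak color fill: %.1f Gpixels/s (16 px/clk)\n",
               16.0 * r600_mhz / 1000.0);   // ~11.9
        printf("G80  Z-only, no AA:   %.1f Gsamples/s (192 per clk)\n",
               192.0 * g80_mhz / 1000.0);   // ~110.4
        return 0;
    }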

Comments

  • TA152H - Monday, May 14, 2007

    Fanboy? What a dork.

    I've had success with ATI, not with NVIDIA, and I know ATI stuff a lot better, so it's just easier for me to work with. It's not an irrational like or dislike. I bought one NVIDIA card and it was a nightmare. Plus, I'm not as sure they'll be around for very long as I am about ATI/AMD, although NVIDIA had a good quarter, and AMD surely had a dreadful one.

    Selling discrete video cards alone might get a lot more difficult with the integration of CPUs and GPUs.
  • yyrkoon - Monday, May 14, 2007

    You are a fanboy, face it. 'I tried a nVidia card once . . .' How long ago was that? Who made the card? Did you have it configured properly? Once?! Details like this are important, and seemingly/conveniently left out. Anyhow, anyone claiming that nVidia cards are 'junk' has definite issues with assembling/configuring hardware. I say this because my current system uses an nVidia based card, and is 100% rock solid. 'Person between the chair and keyboard' rings a bell.

    Ask any Linux user why they refuse to use ATI cards in their systems . . . You are also one of these people out there who claims ATI driver support is superior to nVidia's, I suppose? If you have truly been using ATI products for 20 years, then you know ATI has one of the worst reputations on the planet for driver support (and while it may have improved, it is still not as good as nVidia's).

    Yeah, anyhow, both ATI and nVidia can have problems with their hardware; it is not based 100% on their architecture, and the OEMs releasing the products have a lot of effect here also. There are bad OEMs to buy from on both sides of the fence, and knowing who to stay away from is half the work when building a PC. That probably had a lot more to do with your alleged 'bad nVidia card', assuming you actually configured the card properly.

    I also had a problem with an nVidia card once. I bought a brand new GF3 card about 7 years ago, and a few of the older games I had would not display properly with it. What did I do? I waited about a month for a new driver, and the problem was solved. I have also had issues with ATI cards, one of which drew too much power from the AGP slot and would cause the given system to crash 1-2 times a day. This was a design issue/oversight on ATI's behalf (the card was made by Sapphire, who also makes ATI's cards). What did I do? I replaced the card with an nVidia card, and the system has been stable since.

    So you see, I too can skew things to make anyone look bad, and in the end it would only serve to make me look like the dork. But if you want to pay more for less, that is perfectly fine by me.
  • Pirks - Monday, May 14, 2007

    I've gotten nothing but problems and crappy drivers (especially Linux ones) from ATI, while nVidia software has always been much better in my experience. Power hungry, noisy monsters made by whom? By ATI! As always :) Same shit as with their miserable, power guzzling X1800/X1900 series.

    Discrete video cards are not going away any time soon. Ever heard of integrated video used in games, besides ones from 2000 like old Quake 2? No? Then please continue your lovefest with ATI, but as for me, it looks like I'll pass on them this time again - since the Radeon 9800 Pro they went downhill and continue in that direction. They MAY make a decent integrated CPU/GPU budget-oriented vendor in the future, for all those office folks playing simple 2D office games, but real stuff? Nope, ATI is still out of the game for me. Let's see if they manage to come back with a reincarnation of the R300 in the future.

    Ironically, AMD CPUs on the other hand have the best price/performance ratio, so Intel won't see me as their customer. I wish ATI 3D chips were as good as AMD CPUs in that regard (and overclockers, please shut up; I'm not bothering to OC my rig because I don't enjoy benchmark numbers, I enjoy REAL stuff like games, and Intel is out of the game for me as well, at least until their budget single-core Conroes are out).
  • utube545 - Tuesday, May 22, 2007

    Get a clue, you fucking cretin.
  • dragonsqrrl - Thursday, August 25, 2011

    haha... lol, wow. facepalm.
  • dragonsqrrl - Thursday, August 25, 2011

    Damn, you're a fail noob of an ATI fanboy. Time has not been kind to the HD 2900 XT, and now you sound more ridiculous than ever... lol.
  • yzkbug - Monday, May 14, 2007

    Not a word about new AVIVO HD and digital sound features?
  • DerekWilson - Wednesday, May 16, 2007

    We mentioned this ...

    on the R600 overview page ...
  • photoguy99 - Monday, May 14, 2007

    First, to be clear, I do not condone the title of this article; there's no need to bring racism into this.

    But my point is that NVIDIA can and will react by making the performance per dollar of the 8800 GTS competitive with the R600.

    Once the prices are comparable, why buy the more power hungry part (the ATI)?

    This is one disadvantage they can't correct until the next respin.

  • DrMrLordX - Monday, May 14, 2007

    Based on the benchmark results, the only reason I can see for getting 2900 XTs is if a) you don't care about power consumption, and b) you want to run a CrossFire rig at a lower cost of entry than dual 8800 GTXs or 8800 Ultras.

    As others have said, more benchmarks in mature DX10 titles might show who the real winner is performance-wise, and that holds true for multi-GPU scenarios as well.
