Adaptive AA

Antialiasing is becoming more important as graphics cards get faster. With the 7800 GTX (and the X1800 XT when it arrives), very few monitors can really stress the fill rate potential of these cards. Not everyone owns a 30" Apple Cinema Display, and even at 1600x1200 games start to become CPU limited. With so many 1600x1200 flat panels on the market, this display bottleneck will only get worse as graphics hardware continues to improve at a significantly faster rate than display technology. In the meantime, quality-enhancing features will get more and more attention, and spare GPU power can be spent on enhancing a game rather than just rendering it. As a result, antialiasing algorithms have recently received renewed focus.

Multisample AA (MSAA) has been the predominant antialiasing method for quite a few years now, but it is not perfect. MSAA only works around polygon edges, smoothing lines where the area covered by one pixel falls across multiple triangles. Supersample AA (SSAA) oversamples everything at every pixel and is traditionally implemented by rendering the scene at a larger resolution and then downsampling the image to fit the display. SSAA wastes a lot of power in areas covered by a single color, so MSAA wins out in performance while sacrificing a bit of quality.
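To illustrate the traditional supersampling approach just described, here is a minimal sketch in plain Python. The render_scene function is a hypothetical stand-in for actual rendering; the point is simply that the scene is drawn at a multiple of the display resolution and then box-filtered back down.

```python
# Sketch of ordered-grid supersampling (SSAA): render the scene at
# factor x factor times the display resolution, then average each
# factor x factor block of samples into one display pixel.
# render_scene is a placeholder for whatever produces a color per pixel.

def render_scene(x, y, width, height):
    # Placeholder shading function: returns an (r, g, b) tuple for the
    # pixel center at (x + 0.5, y + 0.5) in an image of the given size.
    u, v = (x + 0.5) / width, (y + 0.5) / height
    return (u, v, 0.5)

def ssaa(display_w, display_h, factor=2):
    hi_w, hi_h = display_w * factor, display_h * factor
    # "Render" the high-resolution image.
    hi_res = [[render_scene(x, y, hi_w, hi_h) for x in range(hi_w)]
              for y in range(hi_h)]
    # Box-filter downsample: every display pixel averages factor*factor samples.
    image = []
    for y in range(display_h):
        row = []
        for x in range(display_w):
            block = [hi_res[y * factor + j][x * factor + i]
                     for j in range(factor) for i in range(factor)]
            row.append(tuple(sum(c) / len(block) for c in zip(*block)))
        image.append(row)
    return image

frame = ssaa(8, 6, factor=2)  # tiny example resolution
```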

One of the major downsides of MSAA is its inability to antialias the interior of polygons mapped with a texture that includes transparency. Things like wires, fences, and foliage are often rendered with a few large triangles and transparent texture maps. Since MSAA only works on polygon edges, the boundaries between transparent and opaque areas inside these large polygons can look very jagged. The only way to combat this is to take multiple texture samples per pixel within the same polygon, which can be done by performing multiple texture lookups per pixel rather than rendering the entire scene at a huge resolution.
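As a rough illustration of the idea (a sketch only, with hypothetical names, not ATI's or NVIDIA's actual algorithm), the snippet below takes several alpha-texture lookups spread across one pixel's footprint inside a single polygon and averages the alpha-test results into a coverage value, smoothing the transparent/opaque boundary that a single lookup per pixel would leave jagged.

```python
# Hedged sketch: estimate how much of a pixel is opaque by taking several
# alpha-texture lookups inside the pixel instead of one. sample_alpha() is a
# stand-in for a texture fetch of the alpha channel at texture coords (u, v).

ALPHA_TEST_THRESHOLD = 0.5   # typical alpha-test cutoff

def sample_alpha(u, v):
    # Placeholder "fence" texture: opaque vertical slats, transparent gaps.
    return 1.0 if int(u * 16) % 2 == 0 else 0.0

def pixel_coverage(u, v, du, dv, grid=2):
    # Take grid x grid alpha lookups spread over the pixel's texture
    # footprint (du, dv) and average the pass/fail alpha-test results.
    passed = 0
    for j in range(grid):
        for i in range(grid):
            su = u + (i + 0.5) / grid * du
            sv = v + (j + 0.5) / grid * dv
            if sample_alpha(su, sv) >= ALPHA_TEST_THRESHOLD:
                passed += 1
    return passed / (grid * grid)

# One lookup per pixel yields coverage of exactly 0.0 or 1.0 (jagged edges);
# four lookups can return intermediate values such as 0.5 for a smoother edge.
print(pixel_coverage(0.03, 0.5, 0.05, 0.0, grid=1))
print(pixel_coverage(0.03, 0.5, 0.05, 0.0, grid=2))
```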

ATI is including a feature called Adaptive Antialiasing in the Catalyst release that accompanies the X1000 series. Adaptive AA is functionally similar to NVIDIA's Transparency AA. Rather than just performing multisampling (MSAA), ATI is able to adaptively take multiple texture samples per pixel in areas that would benefit from them, essentially resulting in a combination of MSAA and SSAA where needed. Depending on where ATI determines multiple texture samples are necessary, poorly designed or easily aliased textures can benefit in addition to those that include transparency.
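Conceptually, the "adaptive" part is just a per-polygon (or per-pixel) choice between the cheap multisample-only path and the expensive supersampled-texture path. The sketch below is a hypothetical illustration of that decision, not ATI's actual hardware heuristic.

```python
# Hedged conceptual sketch of adaptive AA: polygons whose texture contains
# transparency (or is otherwise flagged as aliasing-prone) get the expensive
# supersampled texture path; everything else keeps the multisample-only path.

from dataclasses import dataclass

@dataclass
class Polygon:
    texture_has_alpha: bool   # does the bound texture include transparency?

def shade_pixel(poly, msaa_resolve, supersample_texture):
    if poly.texture_has_alpha:
        return supersample_texture()   # SSAA-style: several texture lookups per pixel
    return msaa_resolve()              # MSAA-style: one texture lookup, multiple coverage samples

# Example usage with stand-in shading callbacks.
fence = Polygon(texture_has_alpha=True)
wall = Polygon(texture_has_alpha=False)
print(shade_pixel(fence, lambda: "msaa only", lambda: "msaa + texture supersampling"))
print(shade_pixel(wall, lambda: "msaa only", lambda: "msaa + texture supersampling"))
```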

Unlike NVIDIA's Transparency AA, ATI's Adaptive AA will be available to all ATI hardware owners. How's that for value-add! This could be a very nice option for X800/X850 series owners stuck with 1280x1024 panels. Apparently ATI has been tweaking this technology for a few years now, but held off on introducing it until this launch. On most older hardware, however, the feature won't see widespread use, as the performance hit is simply too large; in those cases, increasing resolution is almost always more effective than increasing AA quality. Here's a look at Adaptive AA as compared to Transparency AA:

[Image comparison: NVIDIA 7800 GTX 4xAA vs. ATI Adaptive AA (mouse over to cycle images)]

ATI has also improved their hardware to the point where they can support MSAA on multiple render targets (MRT) and on fp16 render targets. No other hardware available today can perform MSAA in games that use these techniques. ATI is touting AA on fp16 targets as the ability to perform AA in HDR enabled games. While keeping input textures and render targets in fp16 from front to back is a very high quality way of doing HDR, it is also very memory bandwidth intensive and requires a lot of GPU horsepower (especially since there are no texture compression techniques that work on fp16 textures). Support for MSAA on floating point and MRT output is certainly a welcome addition to the feature set, but we don't currently have a good way to evaluate the performance or quality of this feature, as there aren't any suitable applications available to test it with.
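To give a rough sense of why fp16 render targets with MSAA are so bandwidth and memory hungry, here is a back-of-the-envelope storage calculation (illustrative arithmetic only, not measured figures): an fp16 RGBA pixel occupies 8 bytes versus 4 bytes for standard 8-bit RGBA, and each MSAA sample and each render target must be stored separately.

```python
# Rough, illustrative color-buffer-size arithmetic for fp16 HDR rendering
# with MSAA (not measured data).

def color_buffer_bytes(width, height, bytes_per_pixel, msaa_samples, num_targets=1):
    # Total bytes of color sample storage for the given configuration.
    return width * height * bytes_per_pixel * msaa_samples * num_targets

res = (1600, 1200)
standard = color_buffer_bytes(*res, bytes_per_pixel=4, msaa_samples=4)
fp16_hdr = color_buffer_bytes(*res, bytes_per_pixel=8, msaa_samples=4)
fp16_mrt = color_buffer_bytes(*res, bytes_per_pixel=8, msaa_samples=4, num_targets=2)

for name, size in [("8-bit RGBA, 4xAA", standard),
                   ("fp16 RGBA, 4xAA", fp16_hdr),
                   ("fp16 RGBA, 4xAA, 2 MRTs", fp16_mrt)]:
    print(f"{name}: {size / 2**20:.1f} MB of color samples")
```

Every one of those samples also has to be read and written during rendering and resolved at the end of the frame, which is where the bandwidth cost comes from.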

Continuing down the path to high quality AA, ATI has improved the sub-pixel accuracy of their antialiasing hardware. Rather than being limited to selecting sample positions on an 8x8 grid, ATI can now select samples from a 16x16 grid. Moving up from 64 to 256 potential sub-pixel positions per pixel improves the accuracy of the AA algorithm. This improvement may not be directly noticeable, but it will also raise the quality of dense AA methods like CrossFire's SuperAA technology. Workstation users will benefit as well, as this should translate to improved point and line antialiasing quality.
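The sub-pixel grid simply determines how finely a programmable sample position can be placed within a pixel. Below is a small sketch of what snapping a desired sample offset onto an n x n grid looks like (an illustration, not ATI's actual sample tables).

```python
# Hedged illustration of sub-pixel sample placement: a desired sample offset
# within the pixel (values in [0, 1)) is snapped to the nearest position on
# an n x n grid. A 16x16 grid (256 positions) can place samples four times
# more precisely along each axis than an 8x8 grid (64 positions).

def snap_to_grid(offset_x, offset_y, grid_size):
    # Available positions on the grid sit at (i + 0.5) / grid_size.
    snap = lambda v: (round(v * grid_size - 0.5) + 0.5) / grid_size
    return snap(offset_x), snap(offset_y)

desired = (0.47, 0.23)   # an arbitrary programmable sample position
print(snap_to_grid(*desired, grid_size=8))    # coarser placement
print(snap_to_grid(*desired, grid_size=16))   # finer placement
```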

The one thing I would ask of ATI is the ability to turn off "gamma-correct" AA. Such methods only shift inaccuracies between overly dark and overly bright pixels; consistent results would only be possible if all displays were the same. Since they are not, it's really a six of one, half a dozen of the other choice. Putting the decision of what looks better in the user's hands is always our favored suggestion.
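For reference, the difference between the two approaches boils down to whether edge samples are averaged in the display's gamma-encoded space or in linear light. A small sketch follows, assuming a simple power-law gamma of 2.2, which is itself only an approximation of any given display.

```python
# Hedged illustration of the "gamma-correct AA" question: averaging two edge
# samples (one white, one black) as stored versus in linear light.
# A simple gamma of 2.2 is assumed here for clarity.

GAMMA = 2.2

def to_linear(c):   # gamma-encoded [0, 1] -> linear light
    return c ** GAMMA

def to_gamma(c):    # linear light -> gamma-encoded [0, 1]
    return c ** (1.0 / GAMMA)

samples = [1.0, 0.0]   # a pixel straddling a white/black polygon edge

naive = sum(samples) / len(samples)                                   # average as stored
gamma_correct = to_gamma(sum(to_linear(s) for s in samples) / len(samples))

print(f"naive average:         {naive:.3f}")          # 0.500, displays darker than half coverage
print(f"gamma-correct average: {gamma_correct:.3f}")  # ~0.730, displays as half coverage in linear light
```

Which result actually looks right on a given monitor depends on how closely that monitor matches the assumed gamma, which is exactly why a user-selectable toggle would be welcome.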

As if all of these enhancements weren't enough to top off ATI's already industry-leading antialiasing (NVIDIA's grid-aligned sample patterns just can't touch ATI's fully programmable sample patterns in quality), ATI has also vastly improved antialiasing performance with the X1000 generation of hardware. Neither NVIDIA nor previous-generation ATI hardware can match the minimal performance hit the X1000 series incurs when enabling standard AA. The performance we see is likely due to a combination of improvements to the AA hardware itself alongside enhancements to the memory architecture that allow for higher bandwidth and the prioritization of data moving on the ring bus.
Comments

  • Gigahertz19 - Wednesday, October 5, 2005 - link

    On the last page I will quote

    "With its 512MB of onboard RAM, the X1800 XT scales especially well at high resolutions, but we would be very interested in seeing what a 512MB version of the 7800 GTX would be capable of doing."

    Based on the results in the benchmarks, I would say 512MB barely does anything. Look at the benchmarks on page 10: the GeForce 7800 GTX either beats the X1800 XT or loses by less than 1 FPS. SCALES WELL AT HIGH RESOLUTIONS? Not really. Has the author of this article even looked at their own benchmarks? At 2048x1536 the 7800 GTX creams the competition, except in Far Cry, where it loses by 0.2 FPS to the X1800 XT, and Splinter Cell, where it loses by 0.8 FPS, so basically it's a tie in those two games.

    You know why Nvidia does not have a 512MB version? Because look at the results... it does shit. 512MB is pointless right now, and if you argue that you'll use it in the future, then wait until future games use it and buy the best GPU then, not now. These new ATIs blow wookies, so much for competition.
  • NeonFlak - Wednesday, October 5, 2005 - link

    "In some cases, the X1800 XL is able to compete with the 7800 GTX, but not enough to warrant pricing on the same level."

    From the graphs in the review with all the cards present, the X1800 XL only beat the 7800 GT once, by 4 FPS... So how does beating the 7800 GT in one graph by 4 FPS make that statement viable?
  • FunkmasterT - Wednesday, October 5, 2005 - link

    EXACTLY!!

    ATI's FPS numbers are a major disappointment!
  • Questar - Wednesday, October 5, 2005 - link

    Unless you want image quality.
  • bob661 - Wednesday, October 5, 2005 - link

    And the difference is worth the $100 extra PLUS the "lower" frame rates? Not good bang for the buck.
  • Powermoloch - Wednesday, October 5, 2005 - link

    Not the cards....Just the review. Really sad :(
  • yacoub - Wednesday, October 5, 2005 - link

    So $450 for the X1800XL versus $250 for the X800XL and the only difference is the new core that maybe provides a handful of additional frames per second, a new AA mode, and shader model 3.0?

    Sorry, that's not worth $200 to me. Not even close.
  • coldpower27 - Thursday, October 6, 2005 - link


    Perhaps up to a 20% performance improvement, looking at pixel fillrate alone.
    Shader Model 3.0 Support.
    ATI's Avivo Technology
    OpenEXR HDR Support.
    HQ Non-Angle Dependent AF User Choice

    You decide if that's worth the $200 US price difference to you. Adaptive AA I wouldn't count, as apparently through ATI's drivers all R3xx and later hardware now has this capability, not just R5xx derivatives, sort of like Temporal AA, which launched with R4xx.
  • yacoub - Wednesday, October 5, 2005 - link

    So even if these cards were available in stores/online today, the best PCI-E card one can buy for ~$250 is still either an X800 XL or a 6800 GT. (Or an X800 GTO2 for $230 that you can flash and overclock.)

    I find it disturbing that they even waste the time to develop, let alone release, low-end parts that can't even compete on price. Why bother wasting the development and production effort to create a card that costs more and performs less? What a joke those two lower-end cards (X1300 and X1600) are.
  • coldpower27 - Thursday, October 6, 2005 - link

    The Radeon X1600 XT is intended to replace the older X700 Pro, not the stopgap 6600 GT competitors (X800 GT, X800 GTO), which only came into being because ATI had leftover supplies of R423/R480 cores (and, for the X800 GTO, R430 cores), and because the X700 Pro wasn't really competitive with the 6600 GT in the first place, due to ATI's reliance on low-k technology for its high clock frequencies.

    I think these are successful replacements.

    Radeon X850/X800 is replaced by Radeon X1800 Technology.
    Radeon X700 is replaced by Radeon X1600 Technology.
    Radeon X550/X300 is replaced by Radeon X1300 Technology.

    The X700 is 156mm2 on 110nm, while the X1600 is 132mm2 on 90nm.
    The X550 and X1300 are roughly the same die size, sub-100mm2.

    Though the newer cards use more expensive memory types on their high end versions.

    They also finally bring ATI's entire family to a single feature set, something I believe ATI has never done before: a high-end, mainstream, and budget core all based on the same technology.

    Nvidia achieved this first with the GeForce FX line.
