High Quality AF

One of the greatest things about the newest high end hardware from NVIDIA and ATI is that advanced filtering features can be enabled at any resolution while still maintaining playable framerates. It may take developers a little while to catch up to the capabilities of the X1800 and 7800 lines, but adding value through advanced quality features is definitely a welcome path for ATI and NVIDIA to take. For this launch, ATI has improved their AA and AF implementations. We also have two brand new features: Adaptive AA and Area Anisotropic filtering.

Starting with Area Anisotropic (or High Quality AF, as it is called in the driver), ATI has finally brought viewing angle independent anisotropic filtering to its hardware. NVIDIA introduced this feature back in the GeForce FX days, but everyone was so caught up in the FX series' abysmal performance that few noticed that the FX series had better quality anisotropic filtering than anything from ATI. Yes, the performance impact was larger, but NVIDIA hardware was differentiating the true Euclidean distance, sqrt(x^2 + y^2 + z^2), in its anisotropic filtering algorithm. Current methods (NVIDIA has since abandoned the higher quality approach) simply differentiate an approximated distance of the form ax + by + cz. Math buffs will notice that the partial derivatives of this approximated distance are just the constants a, b, and c, while each partial of the Euclidean distance requires a division by the square root itself. Calculating a square root is a complex task, even in hardware, which explains the lower performance of the "quality AF" equation.
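
To make the math concrete, here is a minimal sketch in C (ours, not either vendor's actual hardware logic; the weights A, B, and C are arbitrary placeholders) contrasting the exact Euclidean distance with the weighted-sum approximation and their partial derivatives with respect to x:

```c
/* Minimal illustration (not vendor code): exact Euclidean distance vs. a
   weighted-sum approximation, and the cost of differentiating each. */
#include <math.h>
#include <stdio.h>

static double dist_exact(double x, double y, double z) {
    return sqrt(x * x + y * y + z * z);
}

/* Partial derivative in x of the exact distance: x / sqrt(x^2 + y^2 + z^2).
   Evaluating it costs a square root and a divide. */
static double ddx_exact(double x, double y, double z) {
    return x / dist_exact(x, y, z);
}

/* Arbitrary illustrative weights, not values from any actual GPU. */
static const double A = 0.9, B = 0.4, C = 0.4;

static double dist_approx(double x, double y, double z) {
    return A * x + B * y + C * z;
}

/* Partial derivative in x of the approximation: just the constant A. */
static double ddx_approx(void) {
    return A;
}

int main(void) {
    double x = 1.0, y = 2.0, z = 2.0;
    printf("exact : d=%.3f  dd/dx=%.3f\n", dist_exact(x, y, z), ddx_exact(x, y, z));
    printf("approx: d=%.3f  dd/dx=%.3f\n", dist_approx(x, y, z), ddx_approx());
    return 0;
}
```

The approximation's partials fall out as constants that cost nothing to evaluate, while the exact partial needs the square root and a divide every time.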

Angle dependent anisotropic methods produce fine results in games with flat floors and walls, as those textures are aligned along axes that are correctly filtered. Games that allow broader freedom of motion (such as flying/space games, or top down titles like The Sims) see little more benefit from angle dependent anisotropic filtering than from plain trilinear filtering. Rotating a surface with angle dependent anisotropic filtering applied can cause noticeable and distracting flicker or texture aliasing (see the sketch below). Thus, angle independent techniques (such as ATI's Area Aniso) are welcome additions to the playing field. As NVIDIA has employed a high quality anisotropic algorithm before, we hope that ATI's introduction of this algorithm will prompt NVIDIA to include such a feature in future hardware as well.
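
As a toy illustration of where that angle dependence comes from (our own sketch, assuming a generic "max + half min" length approximation rather than any specific vendor's formula), the program below rotates a texture gradient of constant true length and shows the approximated length, and hence any anisotropy degree derived from it, drifting with direction:

```c
/* Toy model (not actual GPU filtering logic): rotate a unit-length 2D
   texture gradient and compare the exact length with a cheap
   "max + 0.5 * min" approximation. The 0.5 weight is an arbitrary
   illustrative choice. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

static double len_exact(double x, double y) {
    return sqrt(x * x + y * y);
}

static double len_approx(double x, double y) {
    double ax = fabs(x), ay = fabs(y);
    double hi = ax > ay ? ax : ay;
    double lo = ax > ay ? ay : ax;
    return hi + 0.5 * lo; /* no square root, but direction sensitive */
}

int main(void) {
    for (int deg = 0; deg <= 90; deg += 15) {
        double rad = deg * PI / 180.0;
        /* unit vector: the true length is exactly 1 at every angle */
        printf("%2d deg: exact=%.3f approx=%.3f\n",
               deg, len_exact(cos(rad), sin(rad)), len_approx(cos(rad), sin(rad)));
    }
    return 0;
}
```

The exact length reads 1.000 at every angle, while the approximation wanders up to roughly 12% high near 30 degrees; this kind of direction-dependent error is what shows up as the flower-shaped patterns in tools like the D3DAFTester.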

We sat down with the D3DAFTester to show the difference between NVIDIA and ATI hardware with and without the high quality mode enabled. Here's what we ended up with:

[Image comparison: NVIDIA 7800 GTX AF]

High Quality AF does come with a performance hit. We tested Far Cry at 1600x1200 on a Radeon X1800 XL and saw performance drop from 76.3 fps to 71.2 fps with it enabled: a loss of roughly 7%. This is quite acceptable on high end hardware, but may not be a viable option for everyone.
Comments

  • Gigahertz19 - Wednesday, October 5, 2005 - link

    On the last page I will quote

    "With its 512MB of onboard RAM, the X1800 XT scales especially well at high resolutions, but we would be very interested in seeing what a 512MB version of the 7800 GTX would be capable of doing."

    Based on the results in the benchmarks, I would say the 512MB barely does anything. Look at the benchmarks on Page 10: the GeForce 7800 GTX either beats the X1800 XT or loses by less than 1 FPS. SCALES WELL AT HIGH RESOLUTIONS? Not really. Did the author of this article look at their own benchmarks? At 2048x1536 the 7800 GTX creams the competition, except in Far Cry, where it loses by .2 FPS to the X1800 XT, and Splinter Cell, where it loses by .8 FPS, so basically it's a tie in those two games.

    You know why Nvidia does not have a 512MB version? Because look at the results...it does squat. 512MB is pointless right now, and if you argue you'll use it in the future, then wait till future games actually use it and buy the best GPU then, not now. These new ATIs blow wookies, so much for competition.
  • NeonFlak - Wednesday, October 5, 2005 - link

    "In some cases, the X1800 XL is able to compete with the 7800 GTX, but not enough to warrant pricing on the same level."

    From the graphs in the review with all the cards present, the x1800xl only beat the 7800gt once, by 4fps... So beating the 7800gt in one graph by 4fps makes that statement viable?
  • FunkmasterT - Wednesday, October 5, 2005 - link

    EXACTLY!!

    ATI's FPS numbers are a major disappointment!
  • Questar - Wednesday, October 5, 2005 - link

    Unless you want image quality.
  • bob661 - Wednesday, October 5, 2005 - link

    And the difference is worth the extra $100 PLUS the "lower" frame rates? Not good bang for the buck.
  • Powermoloch - Wednesday, October 5, 2005 - link

    Not the cards....Just the review. Really sad :(
  • yacoub - Wednesday, October 5, 2005 - link

    So $450 for the X1800XL versus $250 for the X800XL and the only difference is the new core that maybe provides a handful of additional frames per second, a new AA mode, and shader model 3.0?

    Sorry, that's not worth $200 to me. Not even close.
  • coldpower27 - Thursday, October 6, 2005 - link


    Perhaps up to a 20% performance improvement, looking at pixel fillrate alone.
    Shader Model 3.0 Support.
    ATI's Avivo Technology
    OpenEXR HDR Support.
    HQ Non-Angle Dependent AF User Choice

    You decide if that's worth the US$200 price difference to you. Adaptive AA I wouldn't count, as apparently through ATI's drivers all R3xx hardware and higher now has this capability, not just R5xx derivatives, much like Temporal AA, which launched with R4xx.
  • yacoub - Wednesday, October 5, 2005 - link

    So even if these cards were available in stores/online today, the best PCI-E card one can buy for ~$250 is still either an X800XL or a 6800GT. (Or an X800 GTO2 for $230, to flash and overclock.)

    I find it disturbing that they even waste the time to develop, let alone release, low-end parts that price-wise can't even compete. Why bother wasting the development and production to create a card that costs more and performs less? What a joke those two lower-end cards are (X1300 and X1600).
  • coldpower27 - Thursday, October 6, 2005 - link

    The Radeon X1600 XT is intended to replace the older X700 Pro, not the stop-gap 6600 GT competitors, the X800 GT and X800 GTO, which only came into being because ATI had leftover supplies of R423/R480 (and, for the X800 GTO, R430) cores, and because the X700 Pro wasn't really competitive with the 6600 GT in the first place, due to ATI's reliance on low-k technology for their high clock frequencies.

    I think these are successful replacements.

    Radeon X850/X800 is replaced by Radeon X1800 Technology.
    Radeon X700 is replaced by Radeon X1600 Technology.
    Radeon X550/X300 is replaced by Radeon X1300 Technology.

    The X700 is 156mm² on 110nm; the X1600 is 132mm² on 90nm.
    The X550 & X1300 are around the same die size, sub-100mm².

    Though the newer cards use more expensive memory types on their high end versions.

    They also finally give ATI's entire family the same feature set, something I believe ATI has never done before: a high end, mainstream, & budget core all based on the same technology.

    Nvidia achieved this first with the GeForce FX line.
