High IQ: AMD Fixes Texture Filtering and Adds Morphological AA

“There’s nowhere left to go for quality beyond angle-independent filtering at the moment.”

With the launch of the 5800 series last year, I had high praise for AMD’s anisotropic filtering. AMD brought truly angle-independent filtering to gaming (and is still the only game in town), putting an end to angle-dependent deficiencies and especially to AMD’s poor AF on the 4800 series. At both the 5800 series launch and the GTX 480 launch I said that I had been unable to find a meaningful difference or deficiency in AMD’s filtering quality, and that NVIDIA was only deficient in being not quite angle-independent. I held – and continued to hold until last week – the opinion that there’s no practical difference between the two.

It turns out I was wrong. Whoops.

The same week I went down to Los Angeles for AMD’s 6800 series press event, a reader sent me a link to a couple of forum topics discussing AF quality. While I still think most of the differences are superficial, there was one shot comparing AMD and NVIDIA that caught my attention: Trackmania.

Poor high-frequency filtering

The shot clearly shows a transition between mipmaps on the road, something filtering is supposed to resolve. In this case it’s not a superficial difference; it’s very noticeable and very annoying.

AMD appears to agree with everyone else. As it turns out their texture mapping units on the 5000 series really do have an issue with texture filtering, specifically when it comes to “noisy” textures with complex regular patterns. AMD’s texture filtering algorithm was stumbling here and not properly blending the transitions between the mipmaps of these textures, resulting in the kind of visible transitions that we saw in the above Trackmania screenshot.
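To illustrate the kind of failure we’re describing, here’s a toy sketch of our own (not AMD’s actual algorithm, and far simpler than what real texture units do) of trilinear mipmap blending: when the blend between two mip levels is done properly the transition is smooth, but a filter that effectively snaps to one level produces a hard step – the visible seam on the Trackmania road.

```python
# Toy sketch of trilinear mipmap blending (illustrative only -- real GPU
# texture units filter hardware samples; this simplified model just shows
# why a bad blend between mip levels produces a visible seam).

def sample_mip(mips, level):
    """Return the (pretend) color of a texel at an integer mip level."""
    return mips[min(level, len(mips) - 1)]

def trilinear(mips, lod):
    """Blend the two mip levels straddling a fractional level-of-detail."""
    lo = int(lod)          # the coarser integer level below lod
    frac = lod - lo        # blend weight between level lo and lo + 1
    a = sample_mip(mips, lo)
    b = sample_mip(mips, lo + 1)
    return a * (1.0 - frac) + b * frac

# Two mip levels with very different average brightness, as a "noisy"
# high-frequency texture produces once it's downsampled:
mips = [1.0, 0.5]

# Proper blending: no visible jump as the LOD sweeps from 0 to 1.
smooth = [trilinear(mips, lod / 10) for lod in range(11)]

# A broken filter that snaps to the nearest level instead of blending
# produces one hard 0.5 step -- the visible mip transition on the road.
snapped = [sample_mip(mips, round(lod / 10)) for lod in range(11)]
```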

Image comparison: Radeon HD 5870 | Radeon HD 6870 | GeForce GTX 480

So for the 6800 series, AMD has refined their texture filtering algorithm to better handle this case. Highly regular textures are now filtered properly, so that there’s no longer a visible transition between their mipmaps. As was the case when AMD added angle-independent filtering, we can’t test the performance impact of this since we don’t have the ability to enable/disable the new filtering algorithm, but it should be free or close to it. In any case it doesn’t compromise AMD’s existing filtering features, and goes hand-in-hand with their existing angle-independent filtering.

At this point we’re still working on recreating the Trackmania scenario for a proper comparison (which we’ll add to this article when it’s done), but so far it looks good – we aren’t seeing the clear texture transitions that we do on the 5800 series. In an attempt to not make another foolish claim I’m not going to call it perfect, but from our testing we can’t find any clear game examples of where the 6870’s texture filtering is deficient compared to NVIDIA’s – they seem to be equals once again. And even the 5870 with its regular texture problem still does well in everything we’ve tested except Trackmania. As a result I don’t believe this change will be the deciding factor for most people besides the hardcore Trackmania players, but it’s always great to see progress on the texture filtering front.

Moving on from filtering, there’s the matter of anti-aliasing. AMD’s AA advantage from the launch of the 5800 series has evaporated over the last year with the introduction of the GeForce 400 series. With the GTX 480’s first major driver update we saw NVIDIA enable their transparency supersampling mode for DX10 games, on top of their existing ability to use CSAA coverage samples for Alpha To Coverage sampling. The result was that under DX10 NVIDIA has a clear advantage in heavily aliased games such as Crysis and Bad Company 2, where TrSS could smooth out many of the jaggies for a moderate but reasonable performance hit.

For the 6800 series AMD is once again working on their AA quality. While not necessarily a response to NVIDIA’s DX10/DX11 TrSS/SSAA abilities, AMD is introducing a new AA mode, Morphological Anti-Aliasing (MLAA), which should make them competitive with NVIDIA on DX10/DX11 games.

In a nutshell, MLAA is a post-process anti-aliasing filter. Traditional AA modes operate on a scene while it’s being rendered, before all of the rendering data is thrown away; MSAA for example works on polygon edges, and even TrSS needs to know where alpha-covered textures are. MLAA on the other hand is applied to the final image after rendering, with no background knowledge of how it was rendered. Specifically, MLAA looks for certain types of high-contrast boundaries, and when it finds them it treats them as aliasing artifacts, blending the surrounding pixels to reduce the contrast and remove the aliasing.
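To make the idea concrete, here’s a minimal 1-D sketch of our own – AMD’s actual implementation is a far more sophisticated DirectCompute shader working on 2-D shapes – showing the principle: find high-contrast steps in the *finished* image, then blend across them.

```python
# Minimal 1-D sketch of a post-process AA filter in the spirit of MLAA.
# The threshold and blend weights are our own illustrative choices.

def postprocess_aa(row, threshold=0.4):
    out = list(row)
    for i in range(len(row) - 1):
        # A large luminance step between neighbors is treated as a
        # probable aliasing artifact -- the filter has no rendering
        # data to confirm it, which is MLAA's fundamental trade-off.
        if abs(row[i] - row[i + 1]) > threshold:
            mid = (row[i] + row[i + 1]) / 2.0
            out[i] = (row[i] + mid) / 2.0
            out[i + 1] = (row[i + 1] + mid) / 2.0
    return out

# A hard "jaggy" edge between dark and bright pixels...
row = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
softened = postprocess_aa(row)
# ...is blended into a more gradual ramp, with no knowledge of the geometry.
```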

MLAA is not a new AA method, but it is the first time we’re seeing it on a PC video card. It’s already in use on video game consoles, where it’s a cheap way to implement AA without requiring the kind of memory bandwidth MSAA requires. In fact it’s an all-around cheap way to perform AA, as it doesn’t require too much computational time either.

For the 6800 series, AMD is implementing MLAA as the ultimate solution to anti-aliasing. Because it’s a post-processing filter, it is API-agnostic and will work with everything. Deferred rendering? Check. Alpha textures? Done. Screwball games like Bad Company 2 that alias everywhere? Can do! And it should be fast too; AMD says it’s no worse than their Edge Detect AA mode.

So what’s the catch? The catch is that it’s a post-processing filter; it’s not genuine anti-aliasing as we know it because it’s not operating on the scene as it’s being rendered. Where traditional AA uses the rendering data to determine exactly what, where, and how to anti-alias things, MLAA is effectively a best guess at anti-aliasing the final image. Based on what we’ve seen so far we expect that it will try to anti-alias things from time to time that don’t need it, and that the resulting edges won’t be quite as well blended as with MSAA/SSAA. SSAA is still going to offer the best image quality (and this is something AMD has available under DX9), while MSAA + transparency/adaptive anti-aliasing will be the next best method.

Unfortunately AMD only delivered the drivers that enable MLAA yesterday, so we haven’t had a chance to go over the quality of MLAA in depth. As it’s a post-processing filter we can see exactly how it affects images (AMD provides a handy tool to do this), so we’ll update this article shortly with our findings.

Finally, for those of you curious how this is being handled internally, it’s actually being done by AMD’s drivers through a DirectCompute shader. Furthermore they’re taking heavy advantage of the Local Data Store of their SIMD design to keep adjacent pixels in local memory, which is the biggest reason the filter has so little overhead. Since it’s a compute shader, this also means it should be possible to back-port it to the 5000 series, although AMD has not committed to this yet. There doesn’t appear to be a technical reason why this isn’t possible, so ultimately it’s up to AMD and whether they want to use it to drive 6800 series sales over 5000 series sales.
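As a rough illustration of why the Local Data Store matters here, consider the memory traffic of a neighborhood filter. The tile and filter sizes below are hypothetical, not AMD’s actual shader parameters; the point is only that cooperatively staging a tile in local memory replaces many redundant global reads.

```python
# Back-of-the-envelope sketch of why a Local Data Store (shared memory)
# helps a filter like MLAA: every pixel needs its neighbors, so without a
# local tile each neighbor is fetched from memory again for every pixel.
# All sizes here are illustrative assumptions, not AMD's real numbers.

TILE = 8          # hypothetical work-group tile: 8x8 pixels
HALO = 1          # filter reads 1 pixel beyond the tile on each side

# Naive approach: each of the 8x8 pixels independently fetches its
# 3x3 neighborhood from global memory.
naive_fetches = TILE * TILE * 9

# LDS approach: the work group cooperatively loads the tile plus its halo
# into local memory once; all neighborhood reads then hit fast local memory.
lds_fetches = (TILE + 2 * HALO) ** 2

print(naive_fetches, lds_fetches)  # 576 global reads vs. 100
```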

197 Comments

  • Quidam67 - Friday, October 29, 2010 - link

    Well that's odd.

    After reading about the EVGA FTW, and its mind-boggling factory overclock, I went looking to see if I could pick one of these up in New Zealand.

    Seems you can, or maybe not. As per this example http://www.trademe.co.nz/Browse/Listing.aspx?id=32... the clocks are 763MHz and 3.8 on the memory?!?

    What gives? How can EVGA give the same name to a card and then have different specifications on it? Good thing I checked the fine print, or else I would have been bummed out if I'd bought it and then realised it wasn't clocked like I thought it would be.
  • Murolith - Friday, October 29, 2010 - link

    So..how about that update in the review checking out the quality/speed of MLAA?
  • CptChris - Sunday, October 31, 2010 - link

    As the cards were compared to the OC nVidia card I would be interested in seeing how the 6800 series also compares to a card like the Sapphire HD5850 2GB Toxic Edition. I know it is literally twice the price as the HD6850 but would it be enough of a performance margin to be worth the price difference?
  • gochichi - Thursday, November 4, 2010 - link

    You know, maybe I hang in the wrong circles but I by far keep up to date on GPUs more than anyone I know. Not only that, but I am eager to update my stuff if it's reasonable. I want it to be reasonable so badly because I simply love computer hardware (more than games per se, or as much as the games... it's about hardware for me in and of itself).

    Not getting to my point fast enough. I purchased a Radeon 3870 at Best Buy (Best Buy had an oddly good deal on these at the time, Best Buy doesn't tend to keep competitive prices on video cards at all for some reason). 10 days later (so I returned my 3870 at the store) I purchased a 4850, and wow, what a difference it made. The thing of it is, the 3870 played COD 4 like a champ, the 4850 was ridiculously better but I was already satisfied.

    In any case, the naming... the 3870 was no more than $200.00 I think it was $150.00. And it played COD4 on 24" 1900x1200 monitor with a few settings not maxed out, and played it so well. The 4850 allowed me to max out my settings. Crysis sucked, crysis still sucks and crysis is still a playable benchmark. Not to say I don't look at it as a benchmark. The 4850 on the week of its release was $199.99 at Best Buy.

    Then gosh oh golly there was the 4870 and the 4890, which simply took up too much power... I am simply unwilling to buy a card that uses more than one extra 6-pin connector just so I can go out of my way to find something that runs better. So far, my 4850 has left me wanting more in GTA IV, (notice again how it comes down to hardware having to overcome bad programming, the 4850 is fast enough for 1080p but it's not a very well ported game so I have to defer to better hardware). You can stop counting the ways my 4850 has left me wanting more at 1900 x 1200. I suppose maxing out Starcraft II would be nice also.

    Well, then came out the 5850, finally a card that would eclipse my 4850... but oh wait, though the moniker was the same (3850 = so awesome, so affordable, the 4850 = so awesome, so affordable, the 5850 = two 6-pin connectors, so expensive, so high end) it was completely out of line with what I had come to expect. The 4850 stood without a successor. Remember here that I was going from 3870 to 4850, same price range, way better performance. Then came the 5770, and it was marginally faster but just not enough change to merit a frivolous upgrade.

    Now, my "need" to upgrade is as frivolous as ever, but finally, a return to sanity with the *850 moniker standing for fast, and midrange. I am a *850 kind of guy through and through, I don't want crazy power consumption, I don't want to be able to buy a whole, really good computer for the price of just a video card.

    So, anyhow, that's my long story basically... that the strange and utterly upsetting name was the 5850; the 6850 is actually right in line with what the naming should have always stayed as. I wouldn't know why the heck AMD tossed a curve ball at me via the 5850, but I will tell you that it's been a really long time coming to get a true successor in the $200 and under range.

    You know, around the time of the 9800GT and the 4850, you actually heard people talk about buying video cards while out with friends. The games don't demand much more than that... so $500 cards that double their performance is just silly silly stuff and people would rather buy an awesome phone, an iPad, etc. etc. etc.

    So anyhow, enough of my rambling, I reckon I'll be silly and get the true successor to my 4850... though I am assured that my Q6600 isn't up to par for Starcraft II... oh well.
  • rag2214 - Sunday, November 7, 2010 - link

    The 6800 series may not beat the 5870 yet, but it is the start of HDMI 1.4 for 3D HD, not available in any other ATI graphics cards.
  • Philip46 - Monday, November 15, 2010 - link

    The review asked whether there was a reason to buy a 460 (not OC'ed).

    How about benchmarks of games using Physx?

    For instance Mafia 2 hits 32fps @ 1080p (i7-930 CPU) when using PhysX on high, while the 5870 manages only 16.5fps, when I tested both cards.

    How about a GTA IV benchmark? The Zotac 2GB GTX 460 runs the game more smoothly (the same avg fps, except the min fps on the 5850 are lower in the daytime) than the 5850 (2GB).

    How about even a Far Cry 2 benchmark?

    Come on, Anandtech! Let's get some real benchmarks that cover all aspects of gaming features.

    How about adding in driver stability? Etc.

    And before anyone calls me biased, I had both the Zotac GTX 460 and Sapphire 5850 2GB a couple weeks back, and overall I went with the Zotac 460, and I play Crysis/Stalker/GTA IV/Mafia 2/Far Cry 2, etc. @ 1080p, and the 460 just played them all more stably... even if Crysis/Stalker were some 10% faster on the 5850.

    BTW: Bad move by Anandtech to include the 460 FTW!
  • animekenji - Saturday, December 25, 2010 - link

    Barts is the replacement for Juniper, NOT Cypress. Cayman is the replacement for Cypress. If you're going to do a comparison to the previous generation, then at least compare it to the right card. HD6850 replaces HD5750. HD6870 replaces HD5770. HD6970 replaces HD5870. You're giving people the false impression that AMD knocked performance down with the new cards instead of up when HD6800 vastly outperforms HD5700 and HD6900 vastly outperforms HD5800. Stop drinking the green kool-aid, Anandtech.
