Further Image Quality Improvements: SSAA LOD Bias and MLAA 2.0

The Southern Islands launch has been a bit atypical in that AMD has been continuing to introduce new AA features well after the hardware itself has shipped. The first major update to the 7900 series drivers brought with it super sample anti-aliasing (SSAA) support for DX10+, and starting with the Catalyst 12.3 beta later this month AMD is turning their eye towards further improvements for both SSAA and Morphological AA (MLAA).

On the SSAA side of things, since Catalyst 9.11 AMD has implemented an automatic negative Level Of Detail (LOD) bias in their drivers that gets triggered when using SSAA. As SSAA oversamples every aspect of a scene – including textures – it can filter out high frequency details in the process. By using a negative LOD bias, you can in turn cause the renderer to use higher resolution textures closer to the viewer, which is how AMD combats this effect.
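The usual rule of thumb (our own sketch below, not AMD's actual driver logic) is that each doubling of the effective per-axis resolution from supersampling warrants shifting mipmap selection one level sharper, giving a bias of -0.5 × log2(sample count):

```python
import math

def ssaa_lod_bias(samples):
    """Rule-of-thumb negative LOD bias for N-sample SSAA.

    N samples raise the effective resolution by sqrt(N) per axis,
    and each doubling of per-axis resolution corresponds to one
    mipmap level, so bias = -log2(sqrt(N)) = -0.5 * log2(N).
    """
    return -0.5 * math.log2(samples)

# 4x SSAA doubles the resolution on each axis, so the sampler should
# pick one mip level sharper than it otherwise would
print(ssaa_lod_bias(4))   # -1.0
print(ssaa_lod_bias(8))   # -1.5
```

Under this rule, 4x SSAA with a -1.0 bias samples textures as if they were being drawn at the oversampled resolution, which is what restores the high-frequency detail SSAA would otherwise filter out.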

With their initial release of DX10+ SSAA support for the 7900 series, AMD enabled SSAA for DX10+ games, but they did not completely port over every aspect of their DX9 SSAA implementation. In this case, while there was a negative LOD bias for DX9, there was no such bias in place for DX10+. Starting with Catalyst 12.3, AMD’s drivers apply a similar negative LOD bias for DX10+ SSAA, bringing it fully on par with their DX9 SSAA implementation.

As far as performance and image quality go, the impact to both is generally minimal. The negative LOD bias slightly increases the use of higher resolution textures, and thereby increases the number of texels to be fetched, but in our tests the performance difference was non-existent. For that matter, image quality didn’t significantly change due to the LOD bias in our tests; it definitely makes textures a bit sharper, but it’s a very subtle effect.


[Screenshot comparison: 4x SSAA vs. 4x SSAA w/LOD bias (original uncropped screenshots)]

Moving on, AMD’s other AA change is to Morphological AA, their post-process pseudo-AA method. AMD first introduced MLAA back in 2010 with the 6800 series, and while they were breaking ground in the PC space with a post-process AA filter, game developers quickly took the initiative in 2011 to implement post-process AA directly into their games, which allowed it to be applied before HUD elements were drawn, avoiding the blurring of those elements.

Since then AMD has been refining their MLAA implementation; the refined version is being launched as MLAA 2.0 and will replace MLAA 1.0. In short, MLAA 2.0 is supposed to be faster and offer better image quality than MLAA 1.0, reflecting the very rapid pace of development for post-process AA over the last year and a half.

As far as performance goes, the claims are definitely true. We ran a quick selection of our benchmarks with MLAA 1.0 and MLAA 2.0, and the performance difference between the two is staggering at times. Whereas MLAA 1.0 incurred a significant (20%+) performance hit in all 3 games we tested, MLAA 2.0 has virtually no performance hit (<5%) in 2 of the 3 games, and in the 3rd game (Portal 2) the performance hit, while still present, is reduced. This largely mirrors what we’ve seen from games that implement their own post-process AA methods: post-process AA is nearly free in most games.

Radeon HD 7970 MLAA Performance (avg. fps)
                    4x MSAA    4x MSAA + MLAA 1.0    4x MSAA + MLAA 2.0
Crysis: Warhead     54.7       43.5                  53.2
DiRT 3              85.9       49.5                  78.5
Portal 2            113.1      88.3                  92.0
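The percentage performance hits we refer to are derived from the table's averages as the drop relative to the plain 4x MSAA baseline; a quick sketch of the arithmetic:

```python
# Average fps from the Radeon HD 7970 MLAA table above
# (4x MSAA baseline, then with MLAA 1.0 and MLAA 2.0 added)
results = {
    "Crysis: Warhead": (54.7, 43.5, 53.2),
    "DiRT 3":          (85.9, 49.5, 78.5),
    "Portal 2":        (113.1, 88.3, 92.0),
}

def pct_hit(baseline, with_aa):
    """Performance hit as a percentage of the no-MLAA baseline."""
    return (1 - with_aa / baseline) * 100

for game, (msaa, mlaa1, mlaa2) in results.items():
    print(f"{game}: MLAA 1.0 costs {pct_hit(msaa, mlaa1):.1f}%, "
          f"MLAA 2.0 costs {pct_hit(msaa, mlaa2):.1f}%")
```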

As for image quality, that’s not quite as straightforward. Since MLAA does not have access to any depth data and operates solely on the rendered image, it’s effectively a smart blur filter. Consequently, like any post-process AA method, there is a need to balance the blurring of aliased edges against the unintentional blurring of textures and other objects, so quality is largely a product of how much blurring you’re willing to put up with for any given amount of de-aliasing. In other words, it’s largely subjective.
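Conceptually, a post-process filter of this sort boils down to finding color discontinuities and blending across them. The sketch below illustrates that general idea in plain Python; the threshold value and the simple 4-neighbor blend are our own assumptions for illustration, not AMD's actual MLAA shader:

```python
# Minimal sketch of a post-process "smart blur" in the spirit of MLAA:
# detect luminance discontinuities, then blend only across detected edges.
THRESHOLD = 0.1  # edge-detection sensitivity (hypothetical value)

def smart_blur(img):
    """img: 2D list of luminance values in [0, 1]; returns a new image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # An edge exists where a pixel differs sharply from its
            # left or upper neighbor
            if (abs(img[y][x] - img[y][x - 1]) > THRESHOLD or
                    abs(img[y][x] - img[y - 1][x]) > THRESHOLD):
                # Blend the pixel with its 4-neighborhood to soften the edge
                out[y][x] = (img[y][x] + img[y][x - 1] + img[y][x + 1] +
                             img[y - 1][x] + img[y + 1][x]) / 5
    return out
```

Because the filter only sees final colors, a high-contrast texture detail is indistinguishable from an aliased polygon edge, which is exactly the blurring trade-off described above.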


[Screenshot comparison: MLAA 1.0 vs. MLAA 2.0 in Batman: Arkham City (two scenes), Crysis: Warhead, and Portal 2 (original uncropped screenshots)]

From our tests, the one thing that MLAA 2.0 is clearly better at is identifying HUD elements in order to avoid blurring them – Portal 2 in particular showcases this well. Otherwise it’s a tossup; overall MLAA 2.0 appears to be less overbearing, but looking at Portal 2 again, it ends up leaving aliasing that MLAA 1.0 resolved. Again this is purely subjective, but MLAA 2.0 appears to cause less image blurring at the cost of less de-aliasing of obvious aliasing artifacts. Whether that’s an improvement or not is left as an exercise to the reader.

173 Comments

  • chizow - Monday, March 5, 2012 - link

    Sorry, fact checking 101 says the 7850 is still clearly behind the 570, about 10% faster than the GTX 560Ti ($200). The 7870 may be the SKU you're referring to but it costs $350, which again, brings very little movement on pricing:performance relative to 14-16 month old parts.
  • CeriseCogburn - Thursday, March 8, 2012 - link

    Hey don't be so factual. medi01 can also look forward to blurred up MLAA, lack of LOD detail compensation, no PhysX, drivers crashing like mad inexplicably for a year or two until things get "ironed out", new games "not running", mouse cursors stuck in corners, whining about tessellation levels in games "it can't handle", and generally blaming nvidia for all its failures...
    The "$80" imaginary dollars he saves he can "reinvest" in a "solid" with a "for sure" payoff - his endless hours "fixing" his 7850 "issues".
  • SlyNine - Monday, March 5, 2012 - link

    That's a very one-dimensional opinion. Compared to the 5870 node change, which equaled 2x performance for around the same price, a fab shrink that doesn't double your performance for the money is disappointing.

    We are comparing it to how past node changes changed the price/performance model. This one is HORRIBLE because it basically slides right in to the old one, so now we have a node change that does very little for price.

    I'm running a 5870, which is basically 75% the performance of a 7970, and I paid $379 for the 5870, which is also 75% of the cost of a 7970. The price of a 7970 follows basically the exact same price structure as the 2 1/2 year old 5870, so we are stuck where we were in 2009, yay.
  • morfinx - Monday, March 5, 2012 - link

    75% performance of a 7970 would mean that it's 33% faster than a 5870, and that's just not accurate. I have a 5870 as well, so I was paying a lot of attention to how much faster the 7970 is in various reviews. Everything I've read indicates that it's anywhere from 70-110% faster at 2560x1600 resolution (I run 3600x1920, so likely even more of a difference). That's not even considering the massive overclocking headroom of the 7970 vs barely any OC headroom of the 5870. Overclocked, a 7970 is easily twice as fast as a 5870.
  • SlyNine - Thursday, March 8, 2012 - link

    I should have said 66% for 66% of the price. Point being the price/performance has not improved...

    It's around 40-60% faster according to Anandtech's benchmarks.

    Overclocked, don't make me laugh.
  • chizow - Monday, March 5, 2012 - link

    @Kiste: Agreed, don't worry about the criticism you're taking. This site has a lot of readers with very low standards or very limited perspective when it comes to the GPU industry.

    7000 series pricing and performance is a disappointment so far, there's no doubt about it. You can throw as much historical perspective and factual pricing/performance at them but you'll just be greeted with blank stares and accusations of fanboyism.

    Bottom line is this: if Nvidia follows this price and performance structure, EVERYONE would be disappointed.

    If Nvidia took 14-16 months and only improved their entire product stack 15-25% on a new architecture and new process node with Kepler while increasing prices accordingly, it'd be a colossal failure.

    It makes you wonder why the AMD fans don't see it the same way?
  • Kaboose - Monday, March 5, 2012 - link

    You're acting like Kepler has been released and Nvidia won't be doing the same exact thing. I really doubt we will see Nvidia releasing higher performing cards than AMD at much lower prices like you seem to think.
  • chizow - Monday, March 5, 2012 - link

    I didn't say we know what Kepler holds, what I'm saying is *IF* Nvidia did this, it would be a colossal failure and we could just write off 28nm entirely. That's why it makes you wonder why AMD fans are giving them a pass for such little improvement in performance and pricing.

    Honestly, with 15-25% improvement top to bottom over existing SKUs, Nvidia could have simply refreshed their entire Fermi line-up and hit those targets with just clockspeed increases from the smaller process.

    At no other point in the history of GPUs has a new process/architecture from either IHV brought so little movement in price and performance. There's no innovation here and no incentive for anyone who bought in the last few generations to bother upgrading.

    If you have a 5850/470 or better, there is VERY little reason to upgrade right now especially at the asking prices.
  • Kaboose - Monday, March 5, 2012 - link

    What % of people have 5850/470 or better GPUs though? I'm going to say not many. The 7850/7870 are good cards for people entering the market on mid-range desktops who want to add a GPU for gaming, as well as for HTPC use, and for system builders looking for a good OC and current-gen technology.
  • Ananke - Monday, March 5, 2012 - link

    See Steam statistics. The 5th series was the majority. The largest target market are owners of 5850/5870. I have 5850 and I can afford any card, but I see no reason to upgrade. At this price point gaming is not the only usage, I want quality hardware video encoding acceleration by MANY software packages, GPGPU applications. Today, in those areas, AMD has actually even less applications than when they launched the 5th series.

    For SO much money and NO applications outside gaming the ONLY reason not to use NVidia would be heat dissipation. If Kepler cards are colder, NVidia has a win.
