AMD’s Catalyst 11.1a Hotfix

If the first 2 legs of AMD’s GTX 560 Ti counter-offensive were the 6950 1GB and factory overclocked 6870s, then the final leg of that offensive is the drivers. With barely a month between the launch of the 6900 series and today, AMD has so far only delivered their original launch drivers for the 6900 series. Meanwhile the 6800 series launched with the Catalyst 10.10 hotfix drivers, which due to how AMD organizes their driver branches are little different from the 10.12 drivers currently posted. So in spite of the nearly two-month gap between the launches of these two card families, AMD is effectively providing the first real driver update for both.

Launching tomorrow will be the Catalyst 11.1a Hotfix drivers. As far as performance goes, they contain the usual mix of game-specific performance improvements, with AMD specifically targeting Call of Duty: Black Ops, BattleForge, Metro 2033, and Aliens vs. Predator performance, among other games. Having tested these drivers, overall we’re not seeing any significant performance impact in our benchmark suite, even in games that are on AMD’s list. In fact the only area where we are seeing a major change is with our SmallLuxGPU compute benchmark, which looks to be on the receiving end of some shader compiler optimizations by AMD. SLG performance on the 6900 series is up 25%-30%, lending some validity to AMD’s earlier claims that their VLIW4 shader compiler still has room to grow as AMD learns to optimize it the way they did the VLIW5 compiler in the past.

The bigger news is what AMD is doing to their control panel, and what it means to you.

Let me first introduce you to a new section of AMD’s 3D Application Settings control panel called Tessellation. With this new control panel feature AMD is implementing a tessellation factor override into their drivers, allowing AMD and/or the user to clamp down on games and applications that use high tessellation factors. The purpose of this feature is to deal with games such as HAWX 2, which uses very high tessellation factors and offers no tessellation configuration besides turning the feature on and off.
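
In practical terms, what such an override amounts to is a cap applied on top of whatever factors a game asks for. Here’s a minimal, purely illustrative sketch – the function and profile value are our own hypothetical stand-ins, not documented AMD internals:

```cpp
// Purely illustrative sketch of a driver-side tessellation factor cap.
// The profile value and function name are hypothetical; AMD has not
// documented how the 11.1a override is actually implemented.
#include <algorithm>

// Cap chosen either by an "AMD Optimized" application profile or by the
// user via the new Tessellation slider. Direct3D 11's maximum factor is 64,
// so a value of 64 effectively means "no override".
static float g_tessFactorCap = 64.0f;

// Conceptually applied to each edge/inside factor a game's hull shader requests.
float ApplyTessFactorCap(float requestedFactor)
{
    return std::min(requestedFactor, g_tessFactorCap);
}
```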

As we’ve already well established, NVIDIA has much better tessellation performance than AMD at high tessellation factors, even with the 6900 series. This position leaves AMD on the defensive much of the time (“they’re overpowered” doesn’t have the same ring as “they’re underpowered”), but more than that, games like HAWX 2 are particularly damaging to AMD; they don’t just make AMD’s hardware underperform, they leave users with only the choice of accepting poor tessellation performance or turning tessellation off altogether.

The crux of AMD’s argument – and a point that we agree with – is that tessellation is supposed to be easily scalable. This is in fact the whole basis of tessellation: a developer can use it to easily scale up a model based on the available hardware, using a combination of mip-chained displacement maps and an appropriate tessellation factor. The end game of this scenario would be that a game uses low amounts of tessellation on low-end hardware (e.g. APUs), and large amounts of tessellation on high-end hardware such as GeForce GTX 580s and Radeon HD 6970s. But for that to happen game developers need to take advantage of the flexibility of tessellation by having their games and engines use multiple tessellation factors and displacement maps.
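
To make the idea concrete, here is a rough sketch of the developer-side scaling AMD is arguing for. The tier names and specific factors are hypothetical numbers of our own choosing, not anything AMD or a particular engine prescribes:

```cpp
// Hypothetical sketch of developer-side tessellation scaling: the same
// displacement-mapped model rendered at a factor chosen to suit the GPU.
#include <algorithm>

enum class GpuTier { ApuOrIntegrated, Mainstream, HighEnd };

// Direct3D 11 caps edge tessellation factors at 64.
constexpr float kMaxTessFactor = 64.0f;

float SelectTessFactor(GpuTier tier, float distanceToCamera)
{
    // Low factors for APUs, high factors for GTX 580/HD 6970 class hardware.
    float base = (tier == GpuTier::HighEnd)    ? 32.0f
               : (tier == GpuTier::Mainstream) ?  8.0f
                                               :  2.0f;

    // Back off with distance so far-away geometry isn't needlessly dense.
    float falloff = std::clamp(1.0f - distanceToCamera / 200.0f, 0.25f, 1.0f);

    return std::min(base * falloff, kMaxTessFactor);
}
```

With a scheme like this in the engine, the driver never needs to second-guess the game; the same asset simply tessellates less on an APU and more on a GTX 580 or HD 6970.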

Ultimately games like HAWX 2 that do not implement these kinds of controls are not easily scalable. This is the choice of the developer, but in long-standing tradition both AMD and NVIDIA will override developer wishes in their drivers when they see fit. In this case AMD believes they are helping their customers by having their drivers cap the tessellation factor in some situations, so that their customers can use tessellation without very high tessellation factors bogging down performance.

And while we agree with AMD’s argument, AMD’s implementation leaves us uneasy. Having this feature available is great, just as is the ability to override v-sync, to purposely trade texture filtering quality for speed, or to clamp LOD biases. The bit that makes us uneasy is where the default will lie. AMD offers 3 different “modes”: AMD Optimized, which uses an AMD-chosen tessellation factor; user control; and Use Application Settings. AMD intends to make the default “AMD Optimized”, which is to say that in the future all games would use the tessellation factor AMD chooses.

We sincerely believe AMD is doing what they think is best for their users, even if they also stand to gain in benchmarks; however, we find ourselves in disagreement with their choice. While the actions of games like HAWX 2 are unfortunate for users, tessellation is well defined in the DirectX 11 specification. We’re more than willing to entertain creative interpretations of matters like texture filtering, where the standard doesn’t dictate a single filtering algorithm, but DX11 doesn’t leave any ambiguity here. As such there’s little room, in our opinion, for drivers to override a game’s request by default. Drivers should not automatically be substituting a lower tessellation factor on their own – this is a power that should be reserved for the user.


Tessellation in action

Admittedly this is a minefield – modern GPUs are all about taking shortcuts, as shortcuts are necessary to get reasonable performance with the kind of complexity modern games are shooting for. But it’s our opinion that there’s no better time to take a stand than before an optimization like this is implemented, as once it’s done it’s almost impossible to change course, or even to have a meaningful discourse about the issue.

At this time AMD has not defined any tessellation factors in their profiles, and as a result the AMD Optimized setting is no different from the Use Application Settings setting. At some point this will change. We would like to see AMD build this feature into their drivers and leave the choice up to the user, but only time will tell how they proceed.

On that note, tessellation factors are not the only minefield AMD is dabbling in. With the Catalyst 10.10 drivers AMD began playing with their texture filtering quality at its different levels. Previously, at High Quality (formerly known as Catalyst AI Off) AMD would disable all optimizations; at the default setting of Quality (Catalyst AI Standard) AMD would use a small set of optimizations that had little to no impact on image quality; and at Performance (Catalyst AI Advanced) they would use a number of optimizations to improve performance. Texture filtering optimizations are nothing new (having been around practically as long as the 3D accelerator itself), but in a 2-player market any change makes waves.

In the case of AMD’s optimizations, for Quality mode they picked a new set of optimizations that marginally improved performance but at the same time marginally changed the resulting image quality. Many tomes about the issue have already been written, and there’s very little I believe we can add to the subject – meaningful discourse is difficult to have when you believe there’s room for optimizations while at the same time believing there is a point where one can go too far.


AMD Radeon HD 6870, Catalyst 10.10e

In any case, while we have found very little to add to the subject, this has not been the case elsewhere on the internet. As such, after 3 months AMD is largely reverting their changes to texture filtering, and will be returning it to quality levels similar to what we saw with the Catalyst 10.9 drivers – which is to say they’re once again shooting for a level of texture filtering quality similar to NVIDIA’s.

As we have little to add beyond this, here are AMD’s full notes on the matter:

The Quality setting has now been improved to match the HQ setting in all respects except for one – it enables an optimization that limits trilinear anisotropic filtering to areas surrounding texture mipmap level transitions, while doing bilinear anisotropic filtering elsewhere.  Sometimes referred to as “brilinear” filtering, it offers a way to improve filtering performance without visibly affecting image quality.  It has no impact on texture sharpness or shimmering, and this can be verified by comparing it visually with the High Quality setting.

We continue to recommend the Quality setting as the best one to use for competitive testing for the following reasons:

  • It should be visually indistinguishable from the High Quality setting for real textures (with the exception of special test patterns using colored mip levels)
  • Visual quality should now be equal to the default setting used on HD 5800 series GPUs with Catalyst 10.9 and earlier drivers, or better when used on HD 6800/6900 series GPUs due to other hardware filtering improvements
  • It matches the default texture filtering quality setting currently implemented on our competitor’s GPUs, which make use of the same trilinear filtering optimization
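
For readers curious what the optimization AMD describes above actually does, the following is a conceptual sketch of “brilinear” filtering – our own simplified illustration, not AMD’s hardware path: the expensive two-mip trilinear blend is confined to a narrow band around each mip transition, and a single bilinear sample of the nearest mip level is used everywhere else.

```cpp
// Conceptual sketch of "brilinear" filtering (illustrative only, not AMD's
// actual implementation): blend two mip levels only near a mip transition,
// and fall back to a single cheap bilinear sample everywhere else.
#include <cmath>

struct Color { float r, g, b, a; };

// Assumed helper: a bilinear fetch from one mip level of the bound texture.
Color SampleBilinear(int mipLevel, float u, float v);

static Color Lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

Color SampleBrilinear(float u, float v, float lod, float band = 0.15f)
{
    int   mip  = static_cast<int>(std::floor(lod));
    float frac = lod - static_cast<float>(mip);   // position between two mips

    // Far from the transition: one bilinear sample of the nearest mip level.
    if (frac < 0.5f - band) return SampleBilinear(mip, u, v);
    if (frac > 0.5f + band) return SampleBilinear(mip + 1, u, v);

    // Near the transition: blend both mips, as full trilinear filtering would.
    float t = (frac - (0.5f - band)) / (2.0f * band);
    return Lerp(SampleBilinear(mip, u, v), SampleBilinear(mip + 1, u, v), t);
}
```

The narrower the band, the more pixels take the cheap single-mip path; widen it to 0.5 and the function collapses back into ordinary trilinear filtering.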

Comments

  • 7Enigma - Tuesday, January 25, 2011 - link

    Here's the point. There is no measurable difference from a framerate perspective with it on or off. So in this case it doesn't matter. That should tell you that the only possible difference in this instance would be a possible WORSENING of picture quality, since the GPU wars are #1 about framerate and #2 about everything else. I'm sure a later article will delve into what the purpose of this setting is, but right now it clearly has no benefit in the test suite that was chosen.

    I agree with you though that I would have liked a slightly more detailed description of what it is supposed to do...

    For instance is there any power consumption (and thus noise) differences with it on vs. off?
  • Ryan Smith - Tuesday, January 25, 2011 - link

    For the time being it's necessary that we use the Use Application Settings mode so that newer results are consistent with our existing body of work. As this feature did not exist prior to the 11.1a drivers, using it would impact our results by changing the test parameters - previously it wasn't possible to cap tessellation factors like this, so we didn't run our tests with such a limitation.

    As we rebuild our benchmark suite every 6 months, everything is up for reevaluation at that time. We may or may not continue to disable this feature, but for the time being it's necessary for consistent testing.
  • Dark Heretic - Wednesday, January 26, 2011 - link

    Thanks for the reply Ryan, that's a very valid point on keeping the testing parameters consistent with current benchmark results.

    Would it be possible to actually leave the drivers at default settings for both Nvidia and AMD in the next benchmark suite? I know there will be some inconsistent variations between both sets of drivers, but it would allow for a more accurate picture at both the hardware and driver level (as intended by Nvidia / AMD when setting defaults).

    I use both Nvidia and AMD cards, and do find differences in picture quality / performance from both sides of the fence. However I also tend to leave drivers at default settings to allow both Nvidia and AMD the benefit of knowing what works best with their hardware on a driver level; I think it would allow for a more "real world" set of benchmark results.

    @B3an, perhaps you should have used the phrase "lacking in cognitive function", it's much more polite. You'll have to forgive the oversight of not thinking about the current set of benchmarks overall as Ryan has politely pointed out.
  • B3an - Wednesday, January 26, 2011 - link

    Your post is simply retarded, for lack of a better word.

    Ryan is completely right in disabling this feature, even though it has no effect on the results (yet) in the current drivers. And it should always be disabled in the future.

    The WHOLE point of articles like this is to get the results as fair as possible. If you're testing a game and it looks different and uses different settings from one card to another, how is that remotely fair? What is wrong with you?? Bizarre logic.
    It would be the exact same thing as if AMD were to disable AA by default in all games even if the game settings were set to use AA, and then having the nVidia card use AA in the game tests while the AMD card did not. The results would be absolutely useless; no one would know which card is actually faster.
  • prdola0 - Thursday, January 27, 2011 - link

    Exactly. We should compare apples-to-apples. And let's not forget about the FP16 Demotion "optimization" in the AMD drivers that reduces the render target width from R16G16B16A16 to R11G11B10, effectively reducing bandwidth from 64 bits to 32 bits per pixel at the expense of quality. All this when Catalyst AI is turned on. AMD claims it doesn't have any effect on quality, but multiple sources have already confirmed that it is easily visible without much effort in some titles, while in others it isn't. However it affects performance by up to 17%. Just google "fp16 demotion" and you will see plenty of articles about it.
  • burner1980 - Tuesday, January 25, 2011 - link

    Thanks for not listening to your readers.

    Why do you have to include an apples-to-oranges comparison again?

    Is it so hard to test non-OC vs. non-OC and OC vs. OC?

    The article itself is fine, but please stop this practice.

    Proposal for another review: compare ALL current factory stock graphics card models with their highest "reasonable" overclock against each other. What value does the customer get when taking OC into (buying) consideration?
  • james.jwb - Tuesday, January 25, 2011 - link

    Quite a good idea if done correctly. Sort of 460s and above would be nice to see.
  • AnnonymousCoward - Thursday, January 27, 2011 - link

    Apparently the model number is very important to you. What if every card above 1MHz was called OC? Then you wouldn't want to consider them. But the 6970@880MHz and 6950@800MHz are fine! Maybe you should focus on price, performance, and power, instead of the model name or color of the plastic.

    I'm going to start my own comments complaint campaign: Don't review cards that contain any blue in the plastic! Apples to apples, people.
  • AmdInside - Tuesday, January 25, 2011 - link

    Can someone tell me where to find a 6950 for ~$279? Sorry, but after-rebate prices do not count.
  • Spoelie - Tuesday, January 25, 2011 - link

    If you look at the numbers, the 6870BE is more of a competitor than the article text would make you believe - in the games where the nvidia cards do not completely trounce the competition.

    Look at the 1920x1200 charts of the following games and tell me the 6870BE is outclassed:
    *crysis warhead
    *metro
    *battlefield (except waterfall? what is the point of that benchmark btw)
    *stalker
    *mass effect2
    *wolfenstein

    If you now look at the remaining games where the NVIDIA card owns:
    *hawx (rather inconsequential at these framerates)
    *civ5
    *battleforge
    *dirt2
    You'll notice in those games that the 6950 is just as outclassed. So you're better off with an NVIDIA card either way.

    It all depends on the games that you pick, but a blanket statement that the 6870BE does not compete is not correct either.
