AMD’s Catalyst 11.1a Hotfix

If the first 2 legs of AMD’s GTX 560 Ti counter-offensive were the 6950 1GB and factory overclocked 6870s, then the final leg of that offensive is the drivers. With barely a month between the launch of the 6900 series and today, AMD has so far only delivered their original launch drivers for the 6900 series. Meanwhile the 6800 series launched with the Catalyst 10.10 hotfix drivers, which due to how AMD organizes their driver branches are little different from the 10.12 drivers currently posted. So in spite of the nearly 2 month gap between the launches of these two card families, AMD is effectively providing the first real driver update for both.

Launching tomorrow will be the Catalyst 11.1a Hotfix drivers. As far as performance goes they contain the usual mix of game-specific performance improvements, with AMD specifically targeting Call of Duty: Black Ops, BattleForge, Metro 2033, and Aliens vs. Predator performance among other games. Having tested these drivers, overall we’re not seeing any significant performance impact in our benchmark suite, even with games that are on AMD’s list. In fact the only area where we are seeing a major change is with our SmallLuxGPU compute benchmark, which looks to be on the receiving end of some shader compiler optimizations by AMD. SLG performance on the 6900 series is up 25%-30%, lending some credence to AMD’s earlier claims that their VLIW4 shader compiler still has room to grow as AMD learns how to optimize it, just as they did with the VLIW5 compiler in the past.

The bigger news is what AMD is doing to their control panel, and what it means to you.

Let me first introduce you to a new section of AMD’s 3D Application Settings control panel called Tessellation. With this new control panel feature AMD is implementing a tessellation factor override into their drivers, allowing AMD and/or the user to clamp down on games and applications using high tessellation factors. The purpose of this feature is to deal with games such as HAWX 2, which uses very high tessellation factors and offers no tessellation configuration besides turning the feature on and off.

As we’ve already well established, NVIDIA has much better tessellation performance than AMD at high tessellation factors, even with the 6900 series. This position leaves AMD on the defensive much of the time (“they’re overpowered” doesn’t have the same ring as “they’re underpowered”), but beyond that, games like HAWX 2 are particularly damaging to AMD; they don’t just make AMD’s hardware underperform, they leave users with only the option of accepting poor tessellation performance or turning tessellation off altogether.

The crux of AMD’s argument – and a point that we agree with – is that tessellation is supposed to be easily scalable. That is in fact the whole point of tessellation: a developer can use it to easily scale up a model based on the available hardware, using a combination of mip-chained displacement maps and an appropriate tessellation factor. The end-game of this scenario would be that a game would use low amounts of tessellation on low-end hardware (e.g. APUs), and large amounts of tessellation on high-end hardware such as GeForce GTX 580s and Radeon HD 6970s. But for that to happen game developers need to take advantage of the flexibility of tessellation by having their games and engines use multiple tessellation factors and displacement maps.
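The scaling described above can be sketched in a few lines. This is purely illustrative code, not anything from AMD, NVIDIA, or a real engine; the tier names, factor values, and mip-step rule are all our own assumptions:

```python
import math

def pick_tess_factor(gpu_tier: str) -> int:
    """Map a coarse (hypothetical) hardware tier to a tessellation factor.
    D3D11 permits factors from 1 to 64; the values here are assumptions."""
    factors = {
        "low":  4,    # e.g. APUs / integrated graphics
        "mid":  16,   # mainstream discrete cards
        "high": 32,   # e.g. GTX 580 / HD 6970 class hardware
    }
    return factors.get(gpu_tier, 4)  # unknown hardware falls back to low

def pick_displacement_mip(tess_factor: int, base_mip: int = 0) -> int:
    """Select a coarser displacement-map mip level as the tessellation
    factor drops, so geometric detail stays matched to texture detail.
    Each halving of the factor steps one mip level coarser."""
    return base_mip + max(0, 6 - int(math.log2(max(tess_factor, 1))))
```

With a scheme like this an engine ships one asset and renders it everywhere; only the tessellation factor and the displacement mip level change with the hardware.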

Ultimately games like HAWX 2 that do not implement these kinds of controls are not easily scalable. This is the choice of the developer, but in long-standing tradition both AMD and NVIDIA will override developer wishes in their drivers when they see fit. In this case AMD believes they are helping their customers by having their drivers cap the tessellation factor in some situations, so that their customers can use tessellation without very high tessellation factors bogging down performance.

And while we agree with AMD’s argument, AMD’s implementation leaves us uneasy. Having this feature available is great, just as is the ability to override v-sync, purposely use lower texture filtering quality for speed purposes, or clamp LOD biases. The bit that makes us uneasy is where the default will lie. AMD offers 3 different modes: AMD Optimized, which uses an AMD-chosen tessellation factor; user control; and Use Application Settings. AMD intends to make the default AMD Optimized, which is to say that in the future all games would use the tessellation factor AMD chooses.
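To make the three modes concrete, here is a minimal sketch of how a per-application tessellation clamp might behave. The mode names follow AMD’s control panel, but the logic itself is our assumption of how such a clamp would work, not AMD driver code:

```python
def effective_tess_factor(app_factor, mode, user_cap=64, amd_profile_cap=None):
    """Return the tessellation factor the driver would actually honor."""
    if mode == "use_application_settings":
        return app_factor                     # game's request passes through
    if mode == "user_control":
        return min(app_factor, user_cap)      # user-chosen ceiling
    if mode == "amd_optimized":
        if amd_profile_cap is None:
            # No per-game profile defined: behaves the same as
            # "use application settings"
            return app_factor
        return min(app_factor, amd_profile_cap)
    raise ValueError(f"unknown mode: {mode}")
```

Under AMD Optimized with a profile defined, a game requesting a factor of 64 would silently receive the profile’s cap instead – which is exactly the default behavior at issue here.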

We sincerely believe AMD is doing what they think is best for their users, even if they also stand to gain in benchmarks; however, we find ourselves in disagreement with their choice. While the actions of games like HAWX 2 are unfortunate for users, tessellation is well defined in the DirectX 11 specification. We’re more than willing to entertain creative interpretations of matters like texture filtering, where the standard doesn’t dictate a single filtering algorithm, but DX11 doesn’t leave any ambiguity here. As such there’s little room in our opinion for drivers to override a game’s request by default. Drivers should not automatically be substituting a lower tessellation factor on their own – this is a power that should be reserved for the user.


Tessellation in action

Admittedly this is a minefield – modern GPUs are all about taking shortcuts, as these are necessary to get reasonable performance with the kind of complexity modern games are shooting for. But it’s our opinion that there’s no better time to take a stand than before an optimization like this is implemented, as once it’s done then it’s almost impossible to change course, or even to have a meaningful discourse about the issue.

At this time AMD has not defined any tessellation factors in their profiles, and as a result the AMD Optimized setting is no different from the Use Application Settings setting. At some point this will change. We would like to see AMD build this feature into their drivers and leave the choice up to the user, but only time will tell how they proceed.

On that note, tessellation factors are not the only minefield AMD is dabbling in. With the Catalyst 10.10 drivers AMD began changing their texture filtering behavior at the different quality levels. Previously at High Quality (previously known as Catalyst AI Off) AMD would disable all optimizations, while at the default setting of Quality (Catalyst AI Standard) AMD would use a small set of optimizations that had little-to-no impact on image quality, and at Performance (Catalyst AI Advanced) they would use a number of optimizations to improve performance. Texture filtering optimizations are nothing new (having been around practically as long as the 3D accelerator), but in a 2-player market any changes will make a wave.

In the case of AMD’s optimizations, for Quality mode they picked a new set of optimizations that marginally improved performance but at the same time marginally changed the resulting image quality. Many tomes about the issue have already been written, and there’s very little I believe we can add to the subject – meaningful discourse is difficult to have when you believe there’s room for optimizations while at the same time believing there is a point where one can go too far.


AMD Radeon HD 6870, Catalyst 10.10e

In any case while we have found very little to add to the subject, this has not been the case elsewhere on the internet. As such after 3 months AMD is largely reverting their changes to texture filtering, and will be returning it to similar quality levels as what we saw with the Catalyst 10.9 drivers – which is to say they’re now once again shooting for a level of texture filtering quality similar to NVIDIA.

As we have little to add beyond this, here are AMD’s full notes on the matter:

The Quality setting has now been improved to match the HQ setting in all respects except for one – it enables an optimization that limits trilinear anisotropic filtering to areas surrounding texture mipmap level transitions, while doing bilinear anisotropic filtering elsewhere.  Sometimes referred to as “brilinear” filtering, it offers a way to improve filtering performance without visibly affecting image quality.  It has no impact on texture sharpness or shimmering, and this can be verified by comparing it visually with the High Quality setting.

We continue to recommend the Quality setting as the best one to use for competitive testing for the following reasons:

  • It should be visually indistinguishable from the High Quality setting for real textures (with the exception of special test patterns using colored mip levels)
  • Visual quality should now be equal to the default setting used on HD 5800 series GPUs with Catalyst 10.9 and earlier drivers, or better when used on HD 6800/6900 series GPUs due to other hardware filtering improvements
  • It matches the default texture filtering quality setting currently implemented on our competitor’s GPUs, which make use of the same trilinear filtering optimization
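AMD’s description of “brilinear” filtering above can be sketched as a simple LOD blend rule: full trilinear blending only inside a narrow band around each mipmap transition, with plain bilinear sampling of a single mip level everywhere else. This is an illustrative model, not driver code, and the band width below is an assumption; real hardware tunes it internally:

```python
def brilinear_blend(lod, band=0.25):
    """Return (base mip level, blend weight toward the next mip).
    A weight of 0.0 or 1.0 means a single mip level is sampled
    (bilinear); anything in between blends two levels (trilinear)."""
    level = int(lod)
    frac = lod - level            # fractional position between mip levels
    lo, hi = band, 1.0 - band
    if frac <= lo:
        return level, 0.0         # far from a transition: base level only
    if frac >= hi:
        return level, 1.0         # likewise: next level only
    # Inside the transition band: remap to a 0..1 trilinear weight
    return level, (frac - lo) / (hi - lo)
```

Since textured surfaces spend most of their screen area away from mip transitions, most samples skip the second mip fetch entirely, which is where the performance win comes from.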
Comments

  • JPForums - Tuesday, January 25, 2011 - link

    It looks to me like the 560 Ti only has the edge over the 6950 1GB in tessellation at high factors. Even the 6870 bests the 560 Ti in the DirectX 11 Tessellation Sample test at the medium setting. See AnandTech's 560 Ti launch article.

    What I find even more interesting is that when you consider only the higher resolutions, the 6950 seems to be superior to the 560 Ti. I realize most people still use lower resolutions, but it doesn't make sense to judge between the potential of two cards at resolutions where both can produce more than playable frame rates at the settings in question. This creates a misleading conclusion in situations where the winner reverses at higher resolutions. Hawx, for instance, shows that the 560 Ti has clearly superior frame rates at lower resolutions, whereas the 6950 scales much better and edges it out at 2560x1600. Neither dips below 80 fps, so you can't really say the gameplay differs; however, it appears the 6950 is the one that has the muscle when it counts. Battlefield BC2 shows a similar reversal (reference AnandTech's 560 Ti launch article). Of course, there are situations where NVIDIA turns the tables at higher resolutions as well, they just aren't present in AnandTech's launch article (unless I missed it).
  • softdrinkviking - Wednesday, January 26, 2011 - link

    I believe that Ryan replied in the comments for the 560 Ti card to a commenter who inquired about the repeatability of FPS results with the 6950 1GB while playing Crysis at high resolutions, and it may pertain to your argument.
    He said that the results are "highly variable."
    If you are going by the average frame rate, and only at high res., the 6950 looks better than the 560 Ti but...
    Perhaps the 560 Ti produces more consistent results than the 6950?
  • JPForums - Thursday, January 27, 2011 - link

    Ryan does a really good job with articles, so I don't want to come off as bashing him. However, if that was a major concern, I really wish he would have mentioned it in the article. Taking it a step further, he could post charts with min, max, and average. Alternately, if he felt particularly generous, he could post a graph of the frame rates over the course of the benchmark for the cases where one company's cards are less consistent than the other's. Of course, that would be a lot of work to do for every benchmark and would incur unnecessary delays in getting the articles out. I would only include such charts/graphs to back myself up when I felt it changed the outcome. That said, even if these never show up, I'll still enjoy reading Ryan's articles.

    On a personal note, the idea that the GTX560 Ti may be more consistent than the HD6950 makes me feel better about my decisions to purchase a GTX460 and GTX470 given the similarities in architecture. That said, I haven't noticed abnormal inconsistencies in frame rate with the HD6870 I bought as a Home Theater/Gaming card for the living room. I hope any inconsistencies in the frame rate of the HD6950 are driver related and not architectural, or we may lose some of the wonderful competition that has characterized the graphics market as of late.
  • britjh22 - Tuesday, January 25, 2011 - link

    I see the 6950 pricing as sort of strange. Currently you pay $10 after MIRs to move up to 2GB. A small price for a sometimes useful boost. But if you take unlocking into consideration, the ability to unlock with the 2GB version, and not with the 1GB, I'd say that's quite a massive difference.

    Of course, I don't have a good feel about the success rate of the 6950 to 6970 unlock, or what % of cards it's possible with, but the pricing seems quite strange in that light.

    Oh, and let's see dropping prices on a 6870 please!
  • MeanBruce - Tuesday, January 25, 2011 - link

    Dude, 6870s are as low as $199 right now over at Newegg, at least for the Sapphire, yup!
  • buildingblock - Tuesday, January 25, 2011 - link

    My local hardware dealer has several GTX 560s in stock today, including 900MHz factory-overclocked models. The Gigabyte Super OC 1GB is listed and promised soon.... But the 1GB AMD 6950 - no sign whatever. I see elsewhere references to the fact that this card is likely to be a short run special by AMD as a GTX 560 launch spoiler, and that certainly seems to be the case. I look forward to the Anandtech review of factory overclocked GTX 560s at some point.
  • Ryan Smith - Tuesday, January 25, 2011 - link

    At this point the only place you're going to find them is at Newegg and other e-tailers. With the launch pulled in by this much this soon, they won't be on B&M store shelves yet. This isn't all that rare, in fact I would say it's much more rare to find newly launched cards available in B&M stores.
  • TonyB - Tuesday, January 25, 2011 - link

    Competition is a wonderful thing ain't it?
  • prdola0 - Tuesday, January 25, 2011 - link

    Hello Ryan,
    after such a nice review of the GTX560 Ti, I am quite disappointed that you included the overclocked HD6870 in this test. First, after you reviewed the GTX460 and included an OCed model, you got bashed by AMD fans crying foul. So you asked readers to say if it's ok to include an OCed model, and from the count of posts you drew a conclusion not to include OCed models. I was surprised then, because measuring such a thing by mere post count is quite inadequate, considering that unhappy people usually shout the loudest and the happy ones don't need to. So of course you'd have more posts against it, no surprise there. But then I kinda let it go. However, seeing now that you did include an OCed model again, but this time something that is not so common, unlike the OCed GTX460, I was very upset. Why didn't you review the OCed Gigabyte or Asus GTX560 cards? And considering reviews from other sites and the great results the OCed cards have, will you prepare a new review article dedicated to the OCed GTX560 to fix this bias?

    Here is where I remember how I thought saying things like "I am not going to visit this site anymore" was quite silly after the OCed GTX460 case. But seeing how you turned 180 for reasons unknown to me, I must say the very same thing.

    Best regards,
    Prdola
  • Aikouka - Tuesday, January 25, 2011 - link

    I really don't get the persistent whining from some of you over this topic. You're so "hurt" over Ryan spending *his* time on benchmarking a card that was *originally billed as the 560 Ti's competitor*... it's completely inane.

    If you don't think it should be considered, then simply ignore the card in the charts, and you'll get what you consider to be a pure "OC free" comparison.

    As for my stance on it, if something is purchased off the shelf **with the configuration that was tested**, then it's fine to put it on there in a normal (i.e. not overclocking specific) article and/or section.
