Seeing the Present: HDMI 1.4a, UVD3, and Display Correction

DisplayPort wasn’t the only aspect of AMD’s display controller that got an overhaul, however; AMD’s HDMI capabilities have also been brought up to modern standards. Coming from Cypress with support for HDMI 1.3, AMD now supports HDMI 1.4a on the Barts-based 6800 series, and presumably will do so on the rest of the 6000 series too. With HDMI 1.4a support AMD can now drive full resolution (1080p) 3D stereoscopy for movies, and 720p for games and other material that requires 60Hz per eye, along with 4k x 2k resolution for monitors and TVs with equivalent support. Unlike DP this has less to do with monitors and more to do with TVs, so the importance of this will be seen more on future AMD cards, when AMD refreshes the lower-end parts that we normally use with HTPCs.

Launching alongside support for displaying full resolution 3D stereoscopic video is the hardware necessary to decode such video, in the form of the latest version of AMD’s Unified Video Decoder: UVD3. The last time UVD received a major update was with UVD2, which launched alongside the Radeon HD 4000 series and added partial MPEG-2 decoding support by moving IDCT and MoComp from the shaders into the UVD fixed function hardware.



With the Radeon 6800 series AMD is releasing UVD3, which like UVD2 before it builds on the existing UVD feature set. UVD3 adds support for three more-or-less new codecs: MPEG-2, MVC, and MPEG-4 ASP (better known as DivX/XviD). Starting with MPEG-4 ASP, it’s the only one of the three that’s actually new to AMD hardware, as previously all MPEG-4 ASP decoding was done in software when it came to AMD GPUs. With UVD3 AMD can now completely offload MPEG-4 ASP decoding to the GPU, bringing the usual advantages of greatly reducing the amount of work the CPU needs to do and ideally reducing power consumption in the process.

AMD adding MPEG-4 ASP support gives us an interesting chance to compare and contrast them with NVIDIA, who added similar support a year ago in the GT21x GPUs. AMD is a good bit behind NVIDIA here, but they’re making up for it by launching with much better software support for this feature than NVIDIA did; NVIDIA still does not expose their MPEG-4 ASP decoder in most situations, and overall did a poor job of advertising it. When we talked with DivX (AMD’s launch partner for this feature) they didn’t even know that NVIDIA had MPEG-4 ASP support. Meanwhile AMD is launching with DivX and had a beta version of the DivX codec with UVD3 support ready to test; furthermore, AMD is fully exposing their MPEG-4 ASP capabilities in their drivers, as we see in this DXVA Checker screenshot.

The only downside at this time is that even with Microsoft’s greater focus on codecs in Windows 7, the OS doesn’t know what to do with DXVA acceleration of MPEG-4 ASP. So while Win7 can play MPEG-4 ASP in software, you’re still going to need a third-party codec such as the DivX codec to get hardware acceleration for MPEG-4 ASP.

The other bit worth mentioning is that while AMD is launching support for MPEG-4 ASP decoding here on the 6800 series, much like HDMI 1.4a it’s not going to be a big deal for the 6800 series market. MPEG-4 ASP is a fairly lightweight codec, so support for it is going to be a bigger deal on low-end products, particularly AMD’s APUs if Llano and Bobcat end up using UVD3, as MPEG-4 ASP decoding in software requires a much greater share of resources on those products.

Up next is MPEG-2, which has been a codec stuck in limbo for quite some time over at AMD. MPEG-2 is even older and easier to decode than MPEG-4 ASP, and while GPUs have supported MPEG-2 decode acceleration since as early as last decade, CPUs quickly became fast enough that, when combined with low levels of hardware decode acceleration (inverse discrete cosine transform), they were more than enough to play MPEG-2 content. Thus AMD hasn’t done much with MPEG-2 over the years other than moving IDCT/MoComp from the shaders to UVD for UVD2.

Because of the similarities between MPEG-4 ASP and MPEG-2, adding full MPEG-4 ASP decode acceleration allowed AMD to easily add full MPEG-2 decode acceleration as well, since they were able to reuse the MPEG-4 ASP entropy decode block for MPEG-2. Even more so than with MPEG-4 ASP however, the benefits here are going to lie with AMD’s low-end products, where getting MPEG-2 off of the CPU should be a boon for battery life.

The final addition to UVD3 is support for Multiview Video Coding, which isn’t a new codec per se, but rather is an extension to H.264 for 3D stereoscopy. H.264 needed to be amended to support the packed frame formats used to store and transmit 3D stereoscopic videos, so with UVD3 AMD is adding support for MVC so that UVD can handle Blu-Ray 3D.

Finally, coupled with the new codec and display output support is a refinement of AMD’s existing color correction capabilities in their display controller. Cypress and the rest of the 5000 series could do color correction directly on their display controllers, but they could only do so after gamma correction was applied, meaning they had to work in the non-linear gamma color space. Technically speaking this worked, but color accuracy suffered as a result. With the 6800 series’ new display controller, AMD can now perform color correction in linear space: the image is converted from gamma to linear color space, color corrected, and then converted back to gamma color space for display.
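To illustrate why the order of operations matters, here is a minimal Python sketch of the two approaches. This is not AMD’s hardware implementation: the 3x3 correction matrix is made up for the example, and the standard sRGB transfer functions stand in for whatever gamma curve the actual display pipeline uses.

```python
def srgb_to_linear(c):
    """Decode one sRGB-encoded channel value (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel value (0..1) back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def apply_matrix(rgb, m):
    """Apply a 3x3 color-correction matrix to an RGB triple."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

# A made-up wide-gamut correction matrix, for illustration only.
M = [[0.90, 0.10, 0.00],
     [0.05, 0.90, 0.05],
     [0.00, 0.10, 0.90]]

pixel = (0.2, 0.5, 0.8)  # an sRGB-encoded pixel

# Old path (Cypress/5000 series): apply the matrix directly in gamma space.
gamma_corrected = apply_matrix(pixel, M)

# New path (6800 series): decode to linear, correct, then re-encode.
linear = tuple(srgb_to_linear(c) for c in pixel)
linear_corrected = tuple(linear_to_srgb(c) for c in apply_matrix(linear, M))

# The two results disagree, because mixing channels with a matrix is only
# physically meaningful in linear light; doing it in gamma space skews colors.
print(gamma_corrected)
print(linear_corrected)
```

Running this shows the two paths producing visibly different output values for the same input pixel and the same matrix, which is exactly the accuracy gap AMD is closing.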

As color correction is used to correct for wide-gamut monitors, the importance of this change won’t be seen right away by most users. But as wide-gamut monitors become more widespread, color correction becomes increasingly important, since those monitors will otherwise misinterpret the normal sRGB colorspace that most rendering is done in.




  • Chris Peredun - Friday, October 22, 2010 - link

    Not bad, but consider that the average OC from the AT GTX 460 review was 24% on the core. (No memory OC was tried.)

    http://www.anandtech.com/show/3809/nvidias-geforce...
  • thaze - Friday, October 22, 2010 - link

    German magazine "PC Games Hardware" states the 68xx need "high quality" driver settings in order to reach 58xx image quality. Supposedly AMD confirmed changes regarding the driver's default settings.
    Therefore they've tested in "high quality" mode and got less convincing results.

    Details (german): http://www.pcgameshardware.de/aid,795021/Radeon-HD...
  • Ryan Smith - Friday, October 22, 2010 - link

    Unfortunately I don't know German well enough to read the article, and Google translations of technical articles are nearly worthless.

    What I can tell you is that the new texture quality slider is simply a replacement for the old Catalyst AI slider, which only controlled Crossfire profiles and texture quality in the first place. High quality mode disables all texture optimizations, which would be analogous to disabling CatAI on the 5800 series. So the default setting of Quality would be equivalent to the 5800 series setting of CatAI Standard.
  • thaze - Saturday, October 30, 2010 - link

    "High quality mode disables all texture optimizations, which would be analogous to disabling CatAI on the 5800 series. So the default setting of Quality would be equivalent to the 5800 series setting of CatAI Standard."

    According to computerbase.de, this is the case with Catalyst 10.10. But they argue that the 5800's image quality suffered in comparison to previous drivers and the 6800 just reaches this level of quality. Both of them now need manual tweaking (6800: high quality mode; 5800: CatAI disabled) to deliver the Catalyst 10.9's default quality.
  • tviceman - Friday, October 22, 2010 - link

    I would really like more sites (including Anandtech) to investigate this. If the benchmarks around the web using default settings with the 6800 cards are indeed NOT apples to apples comparisons vs. Nvidia's default settings, then all the reviews aren't doing fair comparisons.
  • thaze - Saturday, October 30, 2010 - link

    computerbase.de also subscribes to this view after having invested more time into image quality tests.

    Translation of a part of their summary:
    " [...] on the other hand, the textures' flickering is more intense. That's because AMD has lowered the standard anisotropic filtering settings to the level of AI Advanced in the previous generation. An incomprehensible step for us, because modern graphics cards provide enough performance to improve the image quality.

    While there are games that hardly show any difference, others suffer greatly from flickering textures. After all, it is (usually) possible to reach the previous AF quality with the "High Quality" setting. The Radeon HD 6800 can still match the quality of the previous generation after manual switching, but the standard quality is worse now!

    Since we will not support such practices, we decided to test every Radeon HD 6000 card with the about five percent slower high-quality settings in the future, so the final result is roughly comparable with the default setting from Nvidia."

    (They also state that Catalyst 10.10 changes the 5800's AF-quality to be similar to the 6800's, both in default settings, but again worse than default settings in older drivers.)
  • Computer Bottleneck - Friday, October 22, 2010 - link

    The boost in low tessellation factor really caught my eye.

    I wonder what kind of implications this will have for game designers if AMD and Nvidia decide to take different paths on this?

    I have been under the impression that boosting lower tessellation factor is good for System on a chip development because tessellating out a low quality model to a high quality model saves memory bandwidth.
  • DearSX - Friday, October 22, 2010 - link

    Unless the 6850 overclocks a good 25%, which is what reference 460s seem to overclock on average, it doesn't seem any better overall to me. Less noise, heat, price and power, but also less overclocked performance? I'll need to wait and see. Overclocking a 460 presents a pretty good deal at current prices, which will probably continue to drop too.
  • Goty - Friday, October 22, 2010 - link

    Did you miss the whole part where the stock 6870 is basically faster than (or at worst on par with) the overclocked 460 1GB? What do you think is going to happen when you overclock the 6870 AT ALL?
  • DominionSeraph - Friday, October 22, 2010 - link

    The 6870 is more expensive than the 1GB GTX 460. Apples to apples would be DearSX's point -- 6850 vs 1GB GTX 460. They are about the same performance at about the same price -- ~$185 for the 6850 w/ shipping and ~$180 for the 1GB GTX 460 after rebate.
    The 6850 has the edge in price/performance at stock clocks, but the GTX 460 overclocks well. The 6850 would need to consistently overclock ~20% to keep its advantage over the GTX 460.
