Display Matters: Virtual Super Resolution, Frame Rate Targeting, and HEVC Decoding

Wrapping up our look at the technical underpinnings of the Fiji GPU, we’ll end things with a look at the display and I/O stack for AMD’s latest GPU.

As a GCN 1.2 part, Fiji inherits most of its capabilities in place from Tonga. There is one notable exception, HEVC decoding, which we’ll get to in a bit; otherwise, from a features standpoint you’re looking at the same display feature set as Tonga.

For display I/O this means six display controllers capable of driving DVI, HDMI 1.4a, and DisplayPort 1.2a. Unfortunately, because Tonga lacked support for HDMI 2.0, the same is true for Fiji, and as a result you can only drive 4K@60Hz displays either via DisplayPort or via tandem HDMI connections. The good news is that active conversion from DisplayPort to HDMI 2.0 should be possible later this year, so Fiji is not permanently cut off from HDMI 2.0; however, those adapters aren’t here quite yet, and there are still some unresolved questions to be addressed (e.g. HDCP 2.2).
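For a sense of why HDMI 1.4 can’t carry 4K@60Hz, a rough bandwidth estimate is enough: uncompressed 24-bit 4K60 video needs roughly 12 Gbps of pixel data, beyond HDMI 1.4’s ~8.16 Gbps of video bandwidth but comfortably within DisplayPort 1.2’s 17.28 Gbps. A quick sketch (figures ignore blanking overhead, so real requirements are slightly higher):

```python
# Back-of-the-envelope link bandwidth check for 4K@60Hz with 24-bit color.
# Blanking overhead is ignored here for simplicity; reduced-blanking timings
# add roughly 2-3% on top of the raw pixel payload.

def payload_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

uhd_60 = payload_gbps(3840, 2160, 60)  # ~11.9 Gbps of pixel data
hdmi_14_limit = 8.16                   # HDMI 1.4 max video bandwidth (Gbps)
dp_12_limit = 17.28                    # DisplayPort 1.2 HBR2 effective rate (Gbps)

print(f"4K@60 payload: {uhd_60:.1f} Gbps")
print(f"Fits HDMI 1.4: {uhd_60 <= hdmi_14_limit}")  # prints False
print(f"Fits DP 1.2:   {uhd_60 <= dp_12_limit}")    # prints True
```

This is why 4K@60Hz over HDMI 1.4 requires either tandem connections or chroma subsampling tricks, neither of which is a real substitute for HDMI 2.0.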

On the multimedia front, Fiji brings with it an enhanced set of features from Tonga. While the video encode side (VCE) has not changed – AMD still supports a wide range of H.264 encode settings – the video decode side has seen a significant upgrade. Fiji is the first AMD discrete GPU to support full hardware HEVC decoding, coinciding with the launch of that feature on the GCN 1.2-based Carrizo APU as well.

A look at DXVA Checker confirms the presence of Main Profile (HEVC_VLD_Main) support, the official designation for 8-bit color support. Main profile is expected to be the most common profile level for HEVC content, so Fiji’s support of just Main profile should cover many use cases.

Unfortunately, what you won’t find here is Main10 profile support, the profile for 10-bit color, and AMD has confirmed that 10-bit color support is not available on Fiji. As our in-house video guru Ganesh T S pointed out when looking at these results, Main10 is already being used in places you wouldn’t normally expect to see it, such as Netflix streaming. So there is some question over how useful Fiji’s HEVC decoder will be with commercial content. That’s ignoring, for now, the fact that the lack of Main10 support essentially rules out good support for advanced color space features such as Rec. 2020, which needs higher bit depths to cover the larger color space without extensive banding.
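To put rough numbers on the banding argument, here is a trivial sketch of the channel-level math (simple code-value counts, not a colorimetric model):

```python
# Why wider gamuts want more bits: spreading a fixed number of code values
# over a larger color volume means coarser steps between adjacent colors,
# which shows up as visible banding in smooth gradients.

def levels(bits):
    return 2 ** bits

per_channel_8 = levels(8)     # 256 steps per channel at 8-bit
per_channel_10 = levels(10)   # 1024 steps per channel at 10-bit

# Rec. 2020 covers a substantially larger volume than Rec. 709, so each
# 8-bit step spans more perceptual distance; 10-bit gives 4x finer
# quantization per channel over the same range.
print(per_channel_8, per_channel_10, per_channel_10 // per_channel_8)
```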

Meanwhile the state of AMD’s drivers with respect to video playback is hit and miss. DXVA Checker crashed when attempting to enumerate 4K resolution support on Fiji, and 4K has been something of a thorn in AMD’s side. This is also likely why Media Player Classic Home Cinema and its built-in LAV Filters currently ignore 4K support on Fiji and fall back to software decoding. As a result, 1080p hardware decoding works great on Fiji – both H.264 and HEVC – but getting Fiji to decode 4K content is a lot harder. Windows’ built-in H.264 decoder does work for 4K H.264 content; Fiji’s HEVC capabilities at 4K, meanwhile, are harder to test, since Windows 8 lacks a built-in HEVC decoder.

Decoding 1080p HEVC In MPC-HC on Fiji

With full hardware HEVC decode support still relatively new in the PC space, I expect we’ll see teething issues for some time yet. For the moment AMD needs to resolve its crashing issues and get off of LAV’s blacklist, since the LAV Filters are, by our estimation, the most commonly used filters for generic HEVC media playback.

On a side note, given the fact that the Tonga GPU (R9 285) is the only GCN 1.2 GPU without HEVC decoding, I also took the liberty of quickly loading up a modified copy of the Catalyst 15.15 launch drivers for the R9 300/Fury series, and seeing if HEVC support may have been hidden in there the entire time. Even with these latest drivers, R9 285 does not support HEVC, and while I admittedly wasn’t expecting it to, I suspect there’s more to Tonga’s UVD block given its nature as the odd man out.

Last but not least, TrueAudio support is also included with Fiji. First introduced on AMD’s GCN 1.1 family, TrueAudio is AMD’s implementation of advanced hardware audio processing, powered by a cluster of Tensilica’s HiFi EP DSPs. Despite these DSPs being similar to what’s found on the PS4, we have not seen much in the way of support for TrueAudio in the last year outside of a few AMD-sponsored demos/titles, so thus far it remains an underutilized hardware feature.

Moving on, let’s talk software features. Back in December with their Omega Drivers, AMD introduced Virtual Super Resolution. VSR is AMD’s implementation of downsampling and is essentially the company’s answer to NVIDIA’s DSR technology.

However, while VSR and DSR are designed to solve the same problem, the two technologies go about it in very different ways. NVIDIA implemented DSR as a shader program, which gave the company a lot of resolution flexibility in exchange for a slight performance hit, and for better or worse threw in a Gaussian blur by default as well. AMD, on the other hand, opted to implement VSR directly in their display controllers, skipping the shading pipeline and its performance hit at the cost of flexibility.
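The filtering difference is easiest to see reduced to one dimension. The sketch below contrasts plain 2x ordered-grid averaging (the VSR-style resolve) with a wider Gaussian-weighted tap pattern; the weights here are illustrative, not NVIDIA’s actual DSR kernel:

```python
# Two downsampling flavors on a 1D signal rendered at 2x resolution.
# Both produce one output sample per pair of input samples.

def box_downsample_2x(samples):
    # Plain 2x ordered-grid averaging: each output is the mean of 2 inputs.
    return [(samples[i] + samples[i + 1]) / 2 for i in range(0, len(samples), 2)]

def gaussian_downsample_2x(samples):
    # A wider, Gaussian-weighted tap pattern: neighboring pairs bleed in,
    # which smooths aliased edges but also softens the image overall
    # (the "blur" DSR is known for). Weights are illustrative.
    weights = (0.15, 0.35, 0.35, 0.15)
    out = []
    for i in range(0, len(samples), 2):
        taps = [samples[max(0, min(len(samples) - 1, j))]
                for j in range(i - 1, i + 3)]
        out.append(sum(w * t for w, t in zip(weights, taps)))
    return out

signal = [0, 0, 0, 0, 10, 10, 10, 10]  # a hard edge at 2x resolution
print(box_downsample_2x(signal))       # edge stays sharp: [0.0, 0.0, 10.0, 10.0]
print(gaussian_downsample_2x(signal))  # edge spread across outputs: [0.0, 1.5, 8.5, 10.0]
```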

Due to the nature of VSR and the fact that it heavily relies on the capabilities of AMD’s display controllers, only AMD’s newest generation display controllers offer the full range of virtual resolutions. The GCN 1.1 display controller, for example, could not offer 4K virtual resolutions, so the R9 290X and other high-end Hawaii cards topped out at a virtual resolution of 3200x1800 for 1080p and 1440p displays. With GCN 1.2 however, AMD’s newer display controller supports downsampling from 4K in at least some limited scenarios, and while this wasn’t especially useful for the R9 285, this is very useful for the R9 Fury X.

Overall, the notable downsampling modes supported by the R9 Fury X are 3200x1800 (2.77x) and 3840x2160 (4.0x) for a native resolution of 1080p, 2560x1600 (1.77x) and 3840x2400 (4.0x) for a native resolution of 1200p, and unfortunately just 3200x1800 (1.56x) for a native resolution of 1440p. As a result VSR still can’t match the flexibility of DSR when it comes to resolutions, but AMD can finally offer 4K downsampling for 1080p panels, which allows for a nice (but expensive) 2x2 oversampling pattern, very similar to 4x ordered grid SSAA.
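The quoted scale factors are just pixel-count ratios, which a few lines can verify (the helper name is mine; AMD’s figures truncate rather than round, hence 2.77x for a ratio of ~2.778):

```python
# Sanity-check the VSR oversampling factors quoted above:
# factor = virtual pixel count / native pixel count.

def oversample_factor(virtual, native):
    vw, vh = virtual
    nw, nh = native
    return (vw * vh) / (nw * nh)

print(oversample_factor((3200, 1800), (1920, 1080)))  # ~2.778 ("2.77x")
print(oversample_factor((3840, 2160), (1920, 1080)))  # 4.0: a clean 2x2 grid per pixel
print(oversample_factor((3840, 2400), (1920, 1200)))  # 4.0 for 1200p panels
print(oversample_factor((3200, 1800), (2560, 1440)))  # 1.5625 ("1.56x")
```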

Finally, with AMD’s latest drivers they are also introducing a new framerate capping feature they are calling Frame Rate Target Control (FRTC). FRTC itself is not a new concept – 3rd party utilities such as MSI Afterburner and Radeon Pro have supported such functionality for a number of years now – however the change here is that AMD is finally bringing the technology into their drivers rather than requiring users to seek out 3rd party tools to do the job.

Frame Rate Target Control: From 55 fps to 95 fps

The purpose of FRTC is to allow users to cap the maximum framerate of a game without having to enable v-sync and incur the additional latency that can come with it, making for an effective solution that is not v-sync and yet still places a hard cap on framerates. Note however that this is not a dynamic technology (à la NVIDIA’s Adaptive V-Sync), so there is no ability to dynamically turn v-sync on and off here. As for why users might want to cap their framerates, this is primarily because video cards like the R9 Fury X can run circles around most older games, rendering framerates into the hundreds at a time when even the fastest displays top out at 144Hz. Capping the frame rate cuts down on that unnecessary work, keeping the GPU from rendering frames that will never be seen.
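Conceptually, this style of external frame limiter is simple: after each frame, sleep off whatever remains of the per-frame time budget. A minimal sketch of the idea, in the spirit of FRTC or the third-party tools mentioned earlier (the names and the `render_frame` stub are illustrative, not AMD’s implementation):

```python
import time

TARGET_FPS = 90
FRAME_BUDGET = 1.0 / TARGET_FPS  # per-frame time budget in seconds

def render_frame():
    pass  # stand-in for the game's actual rendering work

def run(frames=5):
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            # The GPU finished early; idle instead of rendering frames the
            # display can never show. Unlike v-sync, no buffer swap is
            # delayed, so there's no added presentation latency.
            time.sleep(FRAME_BUDGET - elapsed)
```

Note that a real driver-level limiter throttles frame submission rather than sleeping on the CPU, but the budget arithmetic is the same.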

AMD is only advertising FRTC support for the 300/Fury series at this time, so there is some question over whether we will see it brought over to AMD’s older cards. Given that AMD’s drivers are essentially split at the moment, I suspect we won’t have our final answer until the drivers get re-unified in a later release (most likely this month).



Comments

  • bennyg - Saturday, July 4, 2015

    Marketing performance. Exactly.

    Except efficiency was not good enough across the generations of 28nm GCN, in an era where efficiency plus thermal/power limits constrain performance; look at what Nvidia did over a similar era from Fermi (which was at market when GCN 1.0 was released) to Kepler to Maxwell. Plus efficiency is kind of the ultimate marketing buzzword in all areas of tech, and not having any ability to mention it (plus having generally inferior products) hamstrung their marketing all along.
  • xenol - Monday, July 6, 2015

    Efficiency is important because of three things:

    1. If your TDP is through the roof, you'll have issues with your cooling setup. Any time you introduce a bigger cooling setup because your cards run that hot, you're going to be mocked for it and people are going to be wary of it. With 22nm or 20nm nowhere in sight for GPUs, efficiency had to be a priority, otherwise you're going to ship cards that take up three slots or ship with water coolers.

    2. You also can't just play to the desktop market. Laptops are still the preferred computing platform, and even if people are going for a desktop, AIOs are looking much more appealing than a monitor/tower combo. So if you want any shot in either market, you have to build an efficient chip. And you have to convince people they "need" this chip, because Intel's iGPUs do what most people want just fine anyway.

    3. Businesses and such with "always on" computers would like it if their computers ate less power. Even if you only save a handful of watts per machine, multiply that by thousands and it adds up to an appreciable amount of savings.
  • xenol - Monday, July 6, 2015

    (Also by "computing platform" I mean the platform people choose when they want a computer)
  • medi03 - Sunday, July 5, 2015

    ATI is the reason both Microsoft and Sony use AMD's APUs to power their consoles.
    It might be the reason why APUs even exist.
  • tipoo - Thursday, July 2, 2015

    That was then, this is now. Now AMD, even together with the acquisition, has a lower market cap than Nvidia.
  • Murloc - Thursday, July 2, 2015

    yeah, no.
  • ddriver - Thursday, July 2, 2015

    ATI wasn't bigger, AMD just paid a preposterous and entirely unrealistic amount of money for it. Soon after the merger, AMD + ATI was worth less than what AMD had paid for the latter, ultimately leading to the loss of its foundries and putting it in an even worse position. Let's face it: AMD was, and historically always has been, betrayed; its sole purpose is to create the illusion of competition so that the big boys don't look bad for running unopposed, even if that is what happens in practice.

    Just when AMD got lucky with Athlon a mole was sent to make sure AMD stays down.
  • testbug00 - Sunday, July 5, 2015

    The foundries didn't go away because AMD bought ATI, though that might have accelerated it by a few years.

    Foundry issue and cost to AMD dates back to the 1990's and 2000-2001.
  • 5150Joker - Thursday, July 2, 2015

    True, AMD was in a much better position in 2006 vs NVIDIA; they just got owned.
  • 3DVagabond - Friday, July 3, 2015

    When was Intel the underdog? Because that's who knocked them down. (They aren't out yet.)
