Unified Video Decoder and Playback Pathways

A typical consumer experience revolves heavily around video, and for Carrizo AMD identified significant potential to decrease power consumption and increase performance in a couple of different ways. First up is adjusting the path by which data is moved around the system, particularly as very little video content matches the native resolution of the screen or is scaled 1:1.

When a video requires scaling, either because it is made full screen and scaled up or because it is a higher resolution video scaled down, that scaling is typically performed by the GPU. The data leaves the decoder (either hardware or software), enters system memory, moves into graphics memory, is processed by the GPU, moves back out to memory, and is then transferred to the display. This requires multiple read/write operations to memory, keeps the GPU active but underutilized, and happens for every frame. AMD's solution is to provide some simple scaling IP in the display engine itself, allowing scaled video to go straight from the decoder to the display engine and leaving the GPU in a low power state.
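As a rough illustration of why the shortcut matters, the sketch below tallies approximate DRAM traffic for the two paths. All of the numbers (frame sizes, pass counts, the 1800p panel) are assumptions for illustration, not AMD figures.

```python
# Hypothetical back-of-the-envelope: DRAM traffic for the two playback
# paths described above. All figures are illustrative assumptions.

def frame_bytes(width, height, bytes_per_pixel=1.5):
    """Approximate size of one NV12 (4:2:0) frame in bytes."""
    return width * height * bytes_per_pixel

def gpu_scaled_path(src, dst):
    """Decoder -> memory -> GPU (scale) -> memory -> display."""
    src_f, dst_f = frame_bytes(*src), frame_bytes(*dst)
    # decoder writes source frame, GPU reads it, GPU writes the scaled
    # frame, display engine reads the scaled frame
    return 2 * src_f + 2 * dst_f

def display_engine_path(src, dst):
    """Decoder -> memory -> display engine (scales inline)."""
    src_f = frame_bytes(*src)
    # decoder writes source frame, display engine reads it and scales on the fly
    return 2 * src_f

src, dst = (1920, 1080), (3200, 1800)   # 1080p video on a hypothetical 1800p panel
fps = 30
for name, path in [("GPU-scaled", gpu_scaled_path),
                   ("display-engine", display_engine_path)]:
    gb_per_s = path(src, dst) * fps / 1e9
    print(f"{name:>14} path: ~{gb_per_s:.2f} GB/s of DRAM traffic")
```

Even in this simplified model the display-engine path roughly quarters the memory traffic for an upscale, on top of letting the GPU stay power gated.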

The video playback paths at the bottom of this slide show the explanation graphically, and AMD quotes a drop from 4.8W to 1.9W in power consumption for these tasks. Note that the 4.8W value is for Kaveri, so there are other enhancements at play as well, but the overall picture is a positive one and AMD quotes 500mW of APU power savings.
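For a sense of scale, here is a quick conversion of a saving in that range into playback time. The 45 Wh battery and 6 W playback platform power are hypothetical values chosen for illustration, not AMD figures; only the 500 mW saving comes from AMD's slide.

```python
battery_wh = 45.0    # hypothetical notebook battery capacity
platform_w = 6.0     # hypothetical total platform power during video playback
savings_w = 0.5      # AMD's quoted ~500 mW APU saving

before_h = battery_wh / platform_w
after_h = battery_wh / (platform_w - savings_w)
print(f"Playback time: {before_h:.1f} h -> {after_h:.1f} h "
      f"(+{(after_h - before_h) * 60:.0f} min)")
```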

The Unified Video Decoder (UVD) has been built to support the codecs above, with both HEVC decode and native 4K H.264 decode on die. I'll come back to the 4K element in a second, but what is perhaps missing from this list is VP9, the codec Google uses for YouTube. YouTube is still the number one source for video content on the web, and with Google transitioning more to VP9, as well as AMD's competition advertising it as a perk on their latest hardware, it is perhaps surprising that AMD left it out. I did ask about this, and was told that they picked HEVC over VP9 as they believe it will be the more important codec going forward, particularly when you consider that the majority of the popular streaming services (Netflix, Hulu, Amazon) will be using HEVC for their high definition titles.

Back onto the 4K equation: this is possible because AMD has increased the decode throughput of the UVD from 1080p to 4K. This affords two opportunities: 4K video decode on the fly, or 1080p video decoded in a quarter of the time, allowing a race to sleep for both the UVD and DRAM. Because the UVD does not use that much power to begin with, the 75% reduction in active time results in only 30 minutes of extra video playback time, but it is welcome and contributes to that often marketed 'video playback' number.
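The race-to-sleep arithmetic can be sketched the same way: a 4K-rated decoder finishes a 1080p frame in roughly a quarter of the frame interval and can idle for the rest. The power figures below are illustrative assumptions, not measured values.

```python
# Hypothetical race-to-sleep comparison for the UVD block.
uvd_active_w = 0.4   # assumed UVD power while decoding
uvd_idle_w = 0.02    # assumed UVD power when clock/power gated

def avg_uvd_power(duty_cycle):
    """Average UVD power for a given fraction of time spent decoding."""
    return duty_cycle * uvd_active_w + (1 - duty_cycle) * uvd_idle_w

old = avg_uvd_power(1.00)   # 1080p-rated decoder: busy for the whole frame time
new = avg_uvd_power(0.25)   # 4K-rated decoder: finishes 1080p in a quarter of the time
print(f"Average UVD power: {old:.2f} W -> {new:.2f} W ({old - new:.2f} W saved)")
```

Even under generous assumptions the absolute saving is small next to the display and DRAM, which is consistent with the benefit being measured in tens of minutes rather than hours.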

Comments

  • FlushedBubblyJock - Tuesday, June 9, 2015 - link

    amazing how a critically correct comment turns into an angry ranting conspiracy from you
  • BillyONeal - Wednesday, June 3, 2015 - link

    This is a preview piece. They don't have empirical data because the hardware isn't in actual devices yet. Look at any of AT's IDF coverage and you'll see basically the exact same thing.
  • Refuge - Wednesday, June 3, 2015 - link

    Nothing has been released yet, but it was announced. This is a news site; do you think they are just going to ignore AMD's product announcement? That would be considered "not doing their job".

    They go through the claims, explain them, and try to see if they are plausible with what little information they have. I like these articles; they give me something to digest while I wait for an in-depth review, and when I go to read said review I know exactly what information I'm most interested in.
  • KaarlisK - Wednesday, June 3, 2015 - link

    About adaptive clocking.
    Power is not saved by reducing frequency by 5% for 1% of the time.
    Power is saved by reducing the voltage margin (increasing frequency at the same voltage) _all_ the time.
    Also, when the voltage instability occurs, only frequency is reduced. The requested voltage, IMHO, does not change.
  • ingwe - Wednesday, June 3, 2015 - link

    Interesting. That makes more sense for sure.
  • name99 - Monday, June 8, 2015 - link

    It seems like a variant of this should be widely applicable (especially if AMD have patents on exactly what they do). What I have in mind is that when you detect droop, rather than dynamically changing the frequency (which is hard and requires at least some cycles), you simply freeze the entire chip's clock at the central distribution point --- for one cycle you just hold everything at zero rather than transitioning to one and back. This will give the capacitors time to recover from the droop (and obviously the principle can be extended to freeze the clock for two cycles or even more if that's how long it takes for the capacitors to recover).

    This seems like it should allow you to run pretty damn close to the minimum necessary voltage --- basically all you now need is enough margin to ensure that you don't overdraw within a worst case single-cycle. But you don't need to provision for 3+ worst-case cycles, and you don't need the alternative of fancy check-point and recovery mechanisms.
  • KaarlisK - Wednesday, June 3, 2015 - link

    About that power plane.
    "In yet more effort to suction power out of the system, the GPU will have its own dedicated voltage plane as part of the system, rather than a separate voltage island requiring its own power delivery mechanism as before"
    As I understand it, "before" = same power plane/island as other parts of the SoC.
  • Gadgety - Wednesday, June 3, 2015 - link

    Great read and analysis given the fact that actual units are not available for testing.

    As a consumer looking for use of Carrizo beyond laptops, provided AMD releases it for consumers, it could be a nice living room HTPC/light gaming unit.
  • Laxaa - Wednesday, June 3, 2015 - link

    I would buy a Dell XPS13-esque machine with this(i.e. high quality materials, good design and a high res screen)
  • Will Robinson - Wednesday, June 3, 2015 - link

    According to ShintelDK and Chizow...the above article results are from an Intel chip and AT have been paid to lie and say its Carrizo because their lives would have no meaning if it is a good product from AMD.
