In the last few HTPC reviews, we have incorporated video decoding and rendering benchmarks. The Ivy Bridge review carried a table of CPU and GPU usage values, while the Vision 3D 252B review used HWInfo's sensor graphs to provide a better perspective. In the latter review, it was easier to visualize the extent of stress that a particular video decode + render combination placed on the system. Unfortunately, HWInfo doesn't play well with the A10-5800K / Radeon HD 7660D yet; in particular, GPU load and CPU package power readings aren't available for AMD-based systems.

The tables below present the results of running our HTPC rendering benchmark samples through various decoder and renderer combinations. Entries marked with a single star (*) indicate that the renderer reported dropped frames in the quiescent state, while double stars (**) indicate that the number of dropped frames made the video unwatchable. The recorded values include the GPU loading and the power consumed by the system at the wall. An important point to note here is that the system was set to optimized defaults in the BIOS (GPU at 800 MHz, DRAM at 1600 MHz and CPU cores at 3800 MHz).

madVR:

madVR was configured with the settings mentioned in the software setup page. All the video post processing options in the Catalyst Control Center were disabled except for deinterlacing and pulldown detection. In our first pass, we used a pure software decoder (avcodec / wmv9 dmo, through LAV Video Decoder) to supply madVR with the decoded frames.

LAV Video Decoder Software Fallback + madVR
Stream              GPU Usage (%)   Power Consumption
480i60 MPEG-2       38              77.9 W
576i50 H.264        24              68.2 W
720p60 H.264        49              106.6 W
1080i60 H.264       81              128.1 W
1080i60 MPEG-2      85              115.4 W
1080i60 VC-1        84              131.7 W
1080p60 H.264       51              116.6 W

madVR takes up more than 80% of the GPU when processing 60 fps interlaced material. The software decode penalty is reflected in the power consumed at the wall, with the 1080i60 VC-1 stream consuming more than 130 W on average. The good news is that all the streams played without any dropped frames at the optimized default settings.

The holy grail of HTPCs, in our opinion, is to obtain hardware-accelerated decode for as many formats as possible. A year or so back, it wasn't possible to use any hardware decoders with the madVR renderer. Thanks to Hendrik Leppkes's LAV Filters, we now have a DXVA2 Copy-Back (DXVA2CB) decoder which enables the use of DXVA2 acceleration with madVR. The table below presents the results using DXVA2CB and madVR.

LAV Video Decoder DXVA2 Copy-Back + madVR
Stream              GPU Usage (%)   Power Consumption
480i60 MPEG-2       44              76.8 W
576i50 H.264        24              66.2 W
720p60 H.264        54              102.4 W
1080i60 H.264 **    72              111.1 W
1080i60 MPEG-2 *    82              111.8 W
1080i60 VC-1 *      84              111.6 W
1080p60 H.264 **    64              110.4 W

There is a slight improvement in power consumption for the first few streams. There is still a power penalty compared to pure hardware decode because the decoded frames have to be copied back to system memory and then sent to the GPU again for madVR to process. Unfortunately, none of the 1080i60 / 1080p60 streams could play properly with our optimized default settings (rendering their GPU usage and power consumption values meaningless). Boosting the memory speed to DDR3-2133 reduced the number of dropped frames, but we were unable to make the four streams play perfectly even with non-default settings.

EVR-CP:

For non-madVR renderers, we set Catalyst 12.8 to the default settings. The table below presents the results obtained with LAV Video Decoder set to DXVA2 Native mode. All the streams played perfectly, but the power numbers left us puzzled.

LAV Video Decoder DXVA2 Native + EVR-CP
Stream              GPU Usage (%)   Power Consumption
480i60 MPEG-2       26              78.1 W
576i50 H.264        22              78.1 W
720p60 H.264        38              90.1 W
1080i60 H.264       69              103.9 W
1080i60 MPEG-2      69              102.2 W
1080i60 VC-1        69              104.2 W
1080p60 H.264       60              98.4 W

For SD streams, the power consumed is almost as high as madVR with software decode, though the HD streams pull the numbers back a little. A full investigation is outside the scope of this article, but we wanted to dig a bit deeper, and decided to repeat the tests with the EVR renderer.

EVR:

With Catalyst 12.8 at default settings and the LAV Video Decoder set to DXVA2 Native mode, all the streams played perfectly with low power consumption. All post-processing steps enabled in the drivers were also visible.

LAV Video Decoder DXVA2 Native + EVR
Stream              GPU Usage (%)   Power Consumption
480i60 MPEG-2       27              60.6 W
576i50 H.264        25              60.1 W
720p60 H.264        35              65.7 W
1080i60 H.264       67              80.1 W
1080i60 MPEG-2      67              80.6 W
1080i60 VC-1        67              82.5 W
1080p60 H.264       59              79.2 W

A look at the above table indicates that hardware decode with the right renderer can make for a really power efficient HTPC. In some cases, we have more than 20 W difference depending on the renderer used, and as much as 40 W difference between software and hardware decode with additional renderer steps.
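As a quick illustration, the wall-power figures for the 720p60 H.264 stream (which played cleanly in every configuration) can be pulled from the four tables and compared directly; the labels below are ours, the wattages are the measured values:

```python
# Wall power for the 720p60 H.264 stream, taken from the tables above.
watts = {
    "madVR + software decode":  106.6,
    "madVR + DXVA2 copy-back":  102.4,
    "EVR-CP + DXVA2 native":     90.1,
    "EVR + DXVA2 native":        65.7,
}

# Savings relative to the most power-hungry combination.
baseline = watts["madVR + software decode"]
for combo, w in watts.items():
    print(f"{combo:26s} {w:6.1f} W  (saves {baseline - w:5.1f} W)")
```

The roughly 40 W gap between the first and last rows is the software-to-hardware decode difference mentioned above.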


49 Comments


  • ganeshts - Thursday, September 27, 2012 - link

    Hmmm.. all vendors tag 23.976 Hz as 23 Hz in the monitor / GPU control panel settings. So, when I set the panel to 23 Hz, I am actually expecting 23.976 Hz. However, this platform gives me 23.977 Hz which is a departure from the usually accurate AMD cards that I have seen so far.
  • ChronoReverse - Thursday, September 27, 2012 - link

    23.977 and 23.976 are so close that it's basically the same (the error in measuring tools would be as large as the difference). I'd only be concerned if it were 23.970.

    In any case, from looking at the screenshots in the gallery, the only frequency looking rather off is 60Hz (although my AMD card has always given similar lower than 60Hz results anyway).
  • ganeshts - Thursday, September 27, 2012 - link

    Note that these are in Hz, not MHz. So, the margin for error is quite large. In fact, madVR statistics deliver accurate refresh rates up to 6 decimal digits (as the screenshots show).

    To read more on why the 0.001 Hz difference matters for SOME people, look this up: http://www.anandtech.com/show/4380/discrete-htpc-g...

    In short, with the 0.001 Hz difference, the renderer might need to repeat a frame every ~17 minutes. I am NOT saying that this is a serious issue for everyone, but there are some readers who do care about this (as evidenced by the range of opinions expressed in this thread: http://www.avsforum.com/t/1333324/lets-set-this-st...
  • ChronoReverse - Thursday, September 27, 2012 - link

    That thread on avsforum is talking about 24FPS playback where if you got 23.97x instead, it's a stutter about every 42 seconds which is terrible and clearly not acceptable (to my eye anyway).

    Still, I do admit that even a single stutter every 17 minutes is noticeable.

    Also, I had misread that part of the review a bit, since for some reason I had the impression it was saying AMD's performance had diminished, when it's still about the same, +/- 0.002 Hz.
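The frame-repeat arithmetic behind the ~17 minute and ~42 second figures in this thread can be sketched in a few lines (the 24000/1001 rate is the standard "24p" frame rate; the refresh rates are the ones quoted above):

```python
# "24p" content is actually mastered at 24000/1001 fps (~23.976).
content_fps = 24000 / 1001

def repeat_interval(display_hz: float, fps: float) -> float:
    """Seconds until the renderer must repeat (or drop) a frame when
    the display refresh rate and the content frame rate are mismatched."""
    return 1.0 / abs(display_hz - fps)

# Trinity's measured 23.977 Hz vs. the ideal 23.976 Hz:
# roughly one repeated frame every ~1000 s (about 17 minutes).
print(repeat_interval(23.977, content_fps))

# The same content on a true 24.000 Hz display:
# a stutter roughly every 42 seconds.
print(repeat_interval(24.000, content_fps))
```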
  • jeremyshaw - Thursday, September 27, 2012 - link

    Wasn't AMD's first APU Brazos, not Llano? Or was it too small to really count!?
  • ganeshts - Thursday, September 27, 2012 - link

    Technically correct, but it didn't compete at the same level as the Clarkdales / Arrandales / Sandy Bridge lineup :)
  • jamawass - Thursday, September 27, 2012 - link

    "The video industry is pushing 4K and it makes more sense to a lot of people compared to the 3D push. 4K will see a much faster rate of adoption compared to 3D, but Trinity seems to have missed the boat here. AMD's Southern Islands as well as NVIDIA's Kepler GPUs support 4K output over HDMI, but Trinity doesn't have 4K video decode acceleration or 4K display output over HDMI."
    Although this statement is technically correct it has no real world relevance. At this time people who can afford 4k TVs ( if there any commercially available ones at this time) won't be messing around with cheap htpcs. It's an inconsequential statement made just to detract from AMD's overall superiority with this product in the htpc market.
    If I was in AMD's shoes why would I dedicate resources to a nonexistent market ? Has anyone actually tested Nvidia or Intel's 4K output over HDMI to see whether they actually work? In the early days of HDCP all the video card manufacturers were claiming compliance but real world compatibility was a different matter.

  • ganeshts - Thursday, September 27, 2012 - link

    I had the same caveat in the Ivy Bridge HTPC review. Surprised you didn't notice that, but you notice this :) Ivy Bridge doesn't support 4K over HDMI yet.

    Anyways, yes, we have tested 4K output from both NV and AMD. When the AMD 7750 was released, we didn't have access to a 4K display, but things changed when the GT 640 was released:

    http://www.anandtech.com/show/5969/zotac-geforce-g...

    I don't have a sample image ready for the 7750, but I can assure you that it works as well as NVIDIA's; I have personally tested it. In fact, AMD was the first to offer 4K output over HDMI.
  • JNo - Saturday, September 29, 2012 - link

    More importantly do you have any 4K films to watch? No. Will you in the immediate future? No. Even then, when will *most* new films coming out be available in 4K? Probably in 5 years time when you'd build a new HTPC anyway.

    The 4K thing is absolutely irrelevant at this point (unlike 3D I'd argue because you can go into plenty of shops and buy actual 3D media).

    After Hi Def came out hardware (TVs) were available quickly but it took a *long* time before there was plenty of 1080p material anyway (note use of the word, 'plenty'). Hell, most people I know are still watching stuff in SD. Laughably, 4K isn't even close to being out yet, let alone the content.

    The whole thing's a red herring right now and for a long while.
  • Cotita - Thursday, September 27, 2012 - link

    I'm not sure I'd go for an A10.

    Even an A4 3420 would do pretty much the same.

    Heck, if I don't care about HD flash or silverlight, even an E-350 is enough.
