In the last few HTPC reviews, we have incorporated video decoding and rendering benchmarks. The Ivy Bridge review carried a table of values with the CPU and GPU usage. The Vision 3D 252B review made use of HWInfo's sensor graphs to provide a better perspective. In the latter review, it was easier to visualize the extent of stress that a particular video decode + render combination placed on the system. Unfortunately, HWInfo doesn't play well with the A10-5800K / Radeon HD 7660D yet: in particular, GPU loading and CPU package power aren't available for AMD-based systems.

The tables below present the results of running our HTPC rendering benchmark samples through various decoder and renderer combinations. Entries with a single star indicate that there were dropped frames as per the renderer status reports once playback had settled, while double stars indicate that the number of dropped frames made the video unwatchable. The recorded values are the GPU loading and the power consumed by the system at the wall. An important point to note here is that the system was set to optimized defaults in the BIOS (GPU at 800 MHz, DRAM at 1600 MHz and CPU cores at 3800 MHz).

madVR:

madVR was configured with the settings mentioned in the software setup page. All the video post processing options in the Catalyst Control Center were disabled except for deinterlacing and pulldown detection. In our first pass, we used a pure software decoder (avcodec / wmv9 dmo, through LAV Video Decoder) to supply madVR with the decoded frames.

LAV Video Decoder Software Fallback + madVR
Stream            GPU Usage (%)   Power Consumption (W)
480i60 MPEG-2     38              77.9
576i50 H.264      24              68.2
720p60 H.264      49              106.6
1080i60 H.264     81              128.1
1080i60 MPEG-2    85              115.4
1080i60 VC-1      84              131.7
1080p60 H.264     51              116.6

madVR pushes GPU utilization above 80% when processing 60 fps interlaced material. The software decode penalty shows up in the power consumed at the wall, with the 1080i60 VC-1 stream drawing more than 130 W on average. The good news is that all the streams played without any dropped frames at the optimized default settings.

The holy grail of HTPCs, in our opinion, is to obtain hardware-accelerated decode for as many formats as possible. Until about a year ago, it wasn't possible to use any hardware decoder with the madVR renderer. Thanks to Hendrik Leppkes's LAV Filters, we now have a DXVA2 Copy-Back (DXVA2CB) decoder which enables usage of DXVA2 acceleration with madVR. The table below presents the results using DXVA2CB and madVR.

LAV Video Decoder DXVA2 Copy-Back + madVR
Stream              GPU Usage (%)   Power Consumption (W)
480i60 MPEG-2       44              76.8
576i50 H.264        24              66.2
720p60 H.264        54              102.4
1080i60 H.264 **    72              111.1
1080i60 MPEG-2 *    82              111.8
1080i60 VC-1 *      84              111.6
1080p60 H.264 **    64              110.4

There is a slight improvement in power consumption for the first few streams. A power penalty remains compared to pure hardware decode because the decoded frames have to be copied back to system memory and then sent to the GPU again for madVR to process. An unfortunate point to note here is that none of the 1080i60 / 1080p60 streams could play properly with our optimized default settings (rendering their GPU usage and power consumption values meaningless). Boosting the memory speed to DDR3-2133 reduced the number of dropped frames, but we were unable to make the four streams play perfectly even with non-default settings.
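A back-of-the-envelope sketch illustrates why copy-back hurts: a decoded NV12 (4:2:0) frame carries roughly 1.5 bytes per pixel, and with copy-back each frame travels GPU-to-system-RAM and back again. The numbers below are illustrative only; the actual traffic depends on driver behavior.

```python
# Rough estimate of the extra memory traffic DXVA2 copy-back adds
# (illustrative sketch; real driver behavior and surface padding vary).
def copyback_bandwidth_mb_s(width, height, fps, bytes_per_pixel=1.5):
    """NV12 (4:2:0) surfaces use ~1.5 bytes per pixel.

    The decoded frame goes GPU -> system RAM -> GPU for madVR, so the
    round trip roughly doubles the one-way figure."""
    one_way = width * height * bytes_per_pixel * fps / 1e6  # MB/s
    return one_way, 2 * one_way

one_way, round_trip = copyback_bandwidth_mb_s(1920, 1080, 60)
print(f"1080p60 NV12: ~{one_way:.0f} MB/s one way, ~{round_trip:.0f} MB/s round trip")
```

Nearly 400 MB/s of extra DDR3 traffic for a 1080p60 stream is consistent with both the power penalty and the sensitivity to memory speed we observed.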

EVR-CP:

For non-madVR renderers, we set Catalyst 12.8 to the default settings. The table below presents the results obtained with LAV Video Decoder set to DXVA2 Native mode. All the streams played perfectly, but the power numbers left us puzzled.

LAV Video Decoder DXVA2 Native + EVR-CP
Stream            GPU Usage (%)   Power Consumption (W)
480i60 MPEG-2     26              78.1
576i50 H.264      22              78.1
720p60 H.264      38              90.1
1080i60 H.264     69              103.9
1080i60 MPEG-2    69              102.2
1080i60 VC-1      69              104.2
1080p60 H.264     60              98.4

For SD streams, the power consumed is almost as much as madVR with software decode, though the HD streams claw back some of the deficit. A full investigation is outside the scope of this article, but we wanted to dig a little deeper, and decided to repeat the tests with the plain EVR renderer.

EVR:

With Catalyst 12.8 in default settings and LAV Video Decoder set to DXVA2 Native mode, all the streams played perfectly with low power consumption. All post-processing steps enabled in the drivers were also visible.

LAV Video Decoder DXVA2 Native + EVR
Stream            GPU Usage (%)   Power Consumption (W)
480i60 MPEG-2     27              60.6
576i50 H.264      25              60.1
720p60 H.264      35              65.7
1080i60 H.264     67              80.1
1080i60 MPEG-2    67              80.6
1080i60 VC-1      67              82.5
1080p60 H.264     59              79.2

A look at the tables above indicates that hardware decode with the right renderer makes for a really power-efficient HTPC. In some cases, there is more than a 20 W difference depending on the renderer used, and more than a 40 W difference between software and hardware decode with additional renderer steps.
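The savings can be read straight off the tables; a quick sketch using the wall-power figures we measured for the 1080i60 VC-1 stream:

```python
# Wall-power figures (W) for the 1080i60 VC-1 stream, taken from the
# tables above (software decode figures are with madVR rendering).
power = {
    "SW decode + madVR":     131.7,
    "DXVA2 native + EVR-CP": 104.2,
    "DXVA2 native + EVR":     82.5,
}

baseline = power["SW decode + madVR"]
for combo, watts in power.items():
    saving = baseline - watts
    print(f"{combo}: {watts} W (saves {saving:.1f} W vs software decode + madVR)")
```

The EVR-CP to EVR gap alone is over 20 W, and moving from software decode with madVR to DXVA2 Native with EVR saves nearly 50 W on this stream.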

49 Comments

  • Oxford Guy - Friday, September 28, 2012 - link

    4K strikes me as being completely unnecessary. 1080p is enough resolution.
  • brookheather - Friday, September 28, 2012 - link

    Is this a typo? "Intel and NVIDIA offer 50 Hz, 59 Hz and 60 Hz settings which are exactly double of the above settings" - 59 is not double 29 - did you mean 58?
  • ganeshts - Friday, September 28, 2012 - link

    Nope :) 29 Hz is 'control panel speak' for 29.97 Hz and 59 Hz is 'control panel speak' for 59.94 Hz. So, if you have a file at 29.97 fps, it can be played back without any dropped frames or uneven repetition at 59.94 Hz, since each frame just has to be 'painted' twice at that refresh rate.
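    To put exact numbers on this (a quick sketch; the NTSC rates are shorthand: 29.97 fps is really 30000/1001 and 59.94 Hz is 60000/1001):

```python
from fractions import Fraction

# Exact NTSC rates: "29.97 fps" = 30000/1001, "59.94 Hz" = 60000/1001.
content_fps = Fraction(30000, 1001)
refresh_hz  = Fraction(60000, 1001)

repeats = refresh_hz / content_fps      # refreshes per source frame
print("Each frame painted", repeats, "times")   # exactly 2: perfect cadence

# At a true 60.000 Hz refresh instead, the surplus refreshes per second:
excess = 60 - 2 * content_fps           # refreshes with no new frame
seconds_per_glitch = 1 / excess
print(f"One extra repeated frame every ~{float(seconds_per_glitch):.1f} s")
```

    At exactly 59.94 Hz the cadence divides evenly, while a true 60 Hz display has to repeat an extra frame roughly every 17 seconds, which shows up as periodic judder.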
  • cjs150 - Friday, September 28, 2012 - link

    This is exactly the standard of article I read AT for.

    I remain completely bewildered that chip manufacturers cannot get the frame rates right. It may be an odd frame rate, but it is a standard rate that has remained the same forever.

    However, the problem for AMD remains the TDP of the processors. Heat needs to be dealt with, usually by fans, and that means noise. An HTPC needs to be as close to silent as possible.

    A TDP of 65 W is simply too high. You can (as I have) buy a ridiculously overpowered i7-3770T, which has a TDP of 45 W. AMD needs to reduce the TDP to no more than 35-45 W. At that point there are various HTPC cases which can cool that completely passively.

    Overall this is yet another step forward towards the ideal HTPC, but we are still short of the promised land.
  • wwwcd - Saturday, September 29, 2012 - link

    The i7-3770T is too expensive compared to the Trinity models, and its video is much weaker. For poorer people it would not be the choice.
  • cjs150 - Saturday, September 29, 2012 - link

    I agree that the i7-3770T is too expensive at the moment compared to the AMD alternatives, but it does not have video weaknesses - check out the review on AnandTech.

    The refresh rate is close to the correct rate, but close is not good enough - it should be spot on.

    There is still a lot of work to be done to get to an ideal HTPC CPU. Both AMD and Intel are close. If anything AMD has slightly better video but, as I said, the TDP is too high.

    Of course, the other option is something like the Raspberry Pi; unfortunately, whilst the hardware is promising, the software still needs a lot of work.
  • Burticus - Friday, September 28, 2012 - link

    Put one of these on a mini-ITX board and cram it into something the size of the Shuttle HX61 that I just got and I am interested. I am so spoiled by having a small, silent, cool HTPC that I will never go back to anything louder or bigger than a 360.
  • LuckyKnight - Saturday, September 29, 2012 - link

    AMD are missing a market here: working 23.976 Hz with a 35 W TDP for a passively cooled case. That would be my choice, if it existed.

    Shame Intel can't get 23.976 Hz to work properly, despite their alleged promise!
  • Esskay02 - Saturday, September 29, 2012 - link

    "Intel started the trend of integrating a GPU along with the CPU in the processor package with Clarkdale / Arrandale. The GPU moved to the die itself in Sandy Bridge. Despite having a much more powerful GPUs at its disposal (from the ATI acquisition), AMD was a little late in getting to the CPU - GPU party."

    According to my reading, it was AMD, not Intel, that first talked about and initiated the APU (CPU + GPU). Intel saw the threat, used its manpower and resources, and came out with a CPU + GPU chip release.
