HTPC Credentials - Local Media Playback and Video Processing

Evaluation of local media playback and video processing is done by playing back files encompassing a range of relevant codecs, containers, resolutions, and frame rates. Efficiency is also noted by tracking GPU usage and the at-wall power consumption of the system. Users have their own preferences for playback software, decoder, and renderer, and our aim is to present numbers representative of commonly encountered scenarios. Towards this, we played back the test streams using the following combinations:

  • MPC-HC x64 1.8.5 + LAV Video Decoder (DXVA2 Native) + Enhanced Video Renderer - Custom Presenter (EVR-CP)
  • MPC-HC x64 1.8.5 + LAV Video Decoder (D3D11) + madVR 0.92.17 (DXVA-Focused)
  • MPC-HC x64 1.8.5 + LAV Video Decoder (D3D11) + madVR 0.92.17 (Lanczos-Focused)
  • VLC 3.0.8
  • Kodi 18.5

The thirteen test streams (each of 90s duration) were played back from the local disk with a 30-second idle interval in between. Various metrics, including GPU power consumption and at-wall power consumption, were recorded during the course of this playback. Prior to looking at the metrics, a quick summary of the decoding capabilities of the Intel UHD Graphics provides useful context.

The Intel UHD Graphics GPU is no different from the GPUs in the Bean Canyon and Baby Canyon NUCs as far as video decoding capabilities are concerned. We have hardware acceleration for all common codecs including VP9 Profile 2.
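
As an aside, a quick way to confirm which test clips actually carry VP9 Profile 2 (the 10-bit profile that relies on this newer decode block) is to query the stream metadata with ffprobe. The helper below is purely illustrative, with a placeholder file name, and is not part of the methodology used in this review:

    import json
    import subprocess

    def video_codec_and_profile(path):
        # Query only the first video stream's codec, profile, and pixel format.
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=codec_name,profile,pix_fmt",
             "-of", "json", path],
            capture_output=True, text=True, check=True).stdout
        stream = json.loads(out)["streams"][0]
        return stream.get("codec_name"), stream.get("profile"), stream.get("pix_fmt")

    print(video_codec_and_profile("4kp60_vp9_profile2.webm"))    # placeholder file name
    # A 10-bit VP9 clip should report something like ('vp9', 'Profile 2', 'yuv420p10le').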

All our playback tests were done with the desktop HDR setting turned on. It is possible for certain system configurations to have madVR automatically toggle the display's HDR capability prior to the playback of an HDR video, but we didn't take advantage of that in our testing.
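
The playback loop itself is straightforward to automate. The sketch below is a hypothetical illustration of the procedure described above, using MPC-HC as the example player, placeholder file names, and a stub standing in for the actual at-wall power meter; it is not the harness used for this review:

    import subprocess
    import time

    MPC_HC = r"C:\Program Files\MPC-HC\mpc-hc64.exe"             # assumed install path
    TEST_STREAMS = ["1080p60_h264.mkv", "2160p60_hevc10.mkv"]    # placeholder file names

    def read_wall_power_watts():
        # Stub only: replace with polling of whatever at-wall power meter is in use.
        return 0.0

    for stream in TEST_STREAMS:
        # MPC-HC command-line switches: /play starts playback immediately,
        # /close exits the player when the file ends, /fullscreen mimics HTPC viewing.
        player = subprocess.Popen([MPC_HC, stream, "/play", "/close", "/fullscreen"])
        samples = []
        while player.poll() is None:                  # sample until the 90 s clip ends
            samples.append(read_wall_power_watts())
            time.sleep(1)
        average = sum(samples) / max(len(samples), 1)
        print(f"{stream}: average at-wall power {average:.1f} W")
        time.sleep(30)                                # 30-second idle gap between clips

Sampling once a second over a 90-second clip gives enough points to see both the steady-state draw and any spikes.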

VLC and Kodi

VLC is the playback software of choice for the average PC user who doesn't need a ten-foot UI. Its install-and-play simplicity has made it extremely popular. Over the years, the software has gained the ability to take advantage of various hardware acceleration options. Kodi, on the other hand, has a ten-foot UI, making it the perfect open-source software for dedicated HTPCs. Support for add-ons makes it very extensible and customizable. We played back our test files using the default VLC and Kodi configurations, and recorded the following metrics.

Video Playback Efficiency - VLC and Kodi

VLC doesn't seem to take advantage of VP9 Profile 2 hardware acceleration, while Kodi is able to play back all streams without any hiccups.
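
Since the decode block itself supports VP9 Profile 2, this looks like a software-path limitation in VLC rather than a hardware one. One hedged way to exercise the hardware decoder outside of VLC is to push such a clip through ffmpeg's D3D11VA hwaccel (again, not part of this review's methodology; ffmpeg warns and falls back to software decoding if the hardware path fails, so the log and the GPU's video decode usage are worth watching while it runs):

    import subprocess

    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-hwaccel", "d3d11va",
         "-i", "4kp60_vp9_profile2.webm",             # placeholder clip name
         "-f", "null", "-"],                          # decode only, discard the output
        capture_output=True, text=True)
    print(result.stderr[-500:])                       # decode speed and any hwaccel warnings land on stderr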

MPC-HC

MPC-HC offers an easy way to test out different combinations of decoders and renderers. The first configuration we evaluated is the default post-install scenario, with only the in-built LAV Video Decoder forced to DXVA2 Native mode. Two additional passes were done with different madVR configurations. In the first one (DXVA-focused), we configured madVR to make use of the DXVA-accelerated video processing capabilities as much as possible. In the second (Lanczos-focused), the image scaling algorithms were set to 'Lanczos 3-tap, with anti-ringing checked'. Chroma upscaling was configured to be 'BiCubic 75 with anti-ringing checked' in both cases. The metrics collected during the playback of the test files using the above three configurations are presented below.

Video Playback Efficiency - MPC-HC with EVR-CP and madVR

LAV Filters with EVR-CP is able to play back all streams without dropped frames, but madVR is a different story. Almost all streams at 1080p and above show significant spikes in power consumption, pointing to the decode and display chain struggling to keep up with the required presentation frame rate. Given that the GPU is weaker than the one in Bean Canyon, this is not a surprise. Overall, the Frost Canyon NUC is acceptable as a vanilla decode-and-playback device without extensive video post-processing.
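
For readers reproducing this kind of testing, a simple way to quantify such spikes is to compare each stream's peak at-wall draw against its median steady-state draw from the logged samples. The snippet below is illustrative only, with made-up numbers:

    from statistics import median

    def flag_power_spikes(samples_by_stream, ratio=1.5):
        # Return streams whose peak draw exceeds ratio x their median draw.
        flagged = {}
        for stream, samples in samples_by_stream.items():
            med = median(samples)
            if med > 0 and max(samples) > ratio * med:
                flagged[stream] = (med, max(samples))
        return flagged

    # Made-up wattage samples purely for illustration:
    logs = {
        "1080p60_h264.mkv": [22, 23, 23, 24, 23],
        "2160p60_hevc10.mkv": [28, 29, 30, 52, 55],
    }
    print(flag_power_spikes(logs))    # only the 4Kp60 HEVC entry should be flagged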

Comments

  • The_Assimilator - Monday, March 2, 2020 - link

    It's not, but the point is still valid: nobody buying these things is doing so because they expect them to be graphics powerhouses.
  • HStewart - Monday, March 2, 2020 - link

    But some people are so naive and don't realize the point. I came up in the days when the card you purchased didn't even have a GPU on it. Not sure what level iGPUs are at, but they can surely run business graphics fine, and even games from a couple of years ago.
  • notb - Thursday, March 5, 2020 - link

    Horrible?
    These iGPUs can drive 3 screens with maybe 1-2W power draw. Show me another GPU that can do this.

    This is an integrated GPU made for efficient 2D graphics. There's very little potential to make it any better.
  • PaulHoule - Monday, March 2, 2020 - link

    Well, Intel's horrible iGPUs forced Microsoft to walk back the graphical complexity of Windows XP. They kept the GPU-dependent architecture, but had to downgrade to "worse than cell phone" visual quality because Intel kneecapped the graphics performance of the x86 platform. (Maybe you could get something better, but developers can't expect you to have it)
  • HStewart - Monday, March 2, 2020 - link

    I think we need actual proof for these biased statements. I think there is a big difference between running a screen at 27 or more inches and one at 6 to 8 inches, no matter what the resolution.
  • Korguz - Monday, March 2, 2020 - link

    We need proof of your biased statements, and yet you very rarely provide any... point is??
  • Samus - Monday, March 2, 2020 - link

    What does screen size have to do with anything? Intel can't make an iGPU that can drive a 4K panel fluidly; meanwhile, mainstream Qualcomm SoCs have GPU performance able to drive 4K panels using a watt of power.
  • HStewart - Tuesday, March 3, 2020 - link

    Can Qualcomm actually drive, say, a 32-inch 4K screen efficiently? Also, what is being measured here: videos or actual games? That depends on how they are written.
  • erple2 - Saturday, March 14, 2020 - link

    I'm not sure that I understand your statement here, as it doesn't seem to make any sense. I was not aware that the physical dimensions of the screen mattered at all to the GPU, apart from how many pixels it has to individually manage/draw. If your implication is that the complexity and quantity of information that can be made significant on a 32" screen is different from a 5.7" screen, then I suppose you can make that argument. However, I have to make guesses as to what you meant for this to come to that conclusion.

    Generally the graphical load to display 4k resolution is independent of whether the actual screen is 6" or 100". Unless I'm mistaken?
  • PeachNCream - Monday, March 2, 2020 - link

    For once, I agree with HStewart (feels like I've been shot into the Twilight Zone to even type that). To the point though, Windows XP was released in 2001. Phones in that time period were still using black-and-white LCD displays. Intel's graphics processors in that time period were the Intel Extreme series built into the motherboard chipset (where they would remain until around 2010, after the release of Windows 7). Sure, those video processors are slow compared to modern cell phones, but nothing a phone could do when XP was in development was anything close to what a bottom-feeder graphics processor could handle. I mean crap, Doom ran (poorly) on a 386 with minimal video hardware and that was in the early 1990s, whereas phones eight years later still didn't have color screens.
