Decoding and Rendering Benchmarks

Our decoding and rendering benchmarks consist of a set of standardized test clips (varying codecs, resolutions, and frame rates) played back through MPC-HC. GPU usage is tracked through GPU-Z logs, and power consumption at the wall is also recorded. The former provides hints on whether frame drops are likely, while the latter indicates the efficiency of the platform for the most common HTPC task - video playback.
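
As a concrete example of how the GPU-Z data is reduced, the short sketch below turns a sensor log captured during a playback run into average and peak GPU load figures. It assumes GPU-Z's "Log to file" output is comma-separated with a "GPU Load [%]" column; the exact column name and padding can differ between GPU-Z versions, so treat it as illustrative.

```python
# Minimal sketch: summarize a GPU-Z sensor log captured during a playback run.
# Assumes a comma-separated log with a "GPU Load [%]" column (GPU-Z versions
# may name or pad columns differently).
import csv

def summarize_gpu_log(path, load_column="GPU Load [%]"):
    """Return sample count, average and peak GPU load from a GPU-Z sensor log."""
    loads = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f, skipinitialspace=True):
            # GPU-Z tends to pad its headers/values with spaces, so match on
            # stripped names and skip any unnamed trailing columns.
            cleaned = {k.strip(): (v or "").strip()
                       for k, v in row.items() if isinstance(k, str)}
            value = cleaned.get(load_column, "")
            if value:
                loads.append(float(value))
    if not loads:
        raise ValueError("no GPU load samples found - check the column name")
    return {
        "samples": len(loads),
        "avg_load_pct": sum(loads) / len(loads),
        "max_load_pct": max(loads),
    }

if __name__ == "__main__":
    print(summarize_gpu_log("gpuz_sensor_log.txt"))
```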

Enhanced Video Renderer (EVR) / Enhanced Video Renderer - Custom Presenter (EVR-CP)

The Enhanced Video Renderer is the default renderer made available by Windows 8. It is lean in terms of system resource usage, since most of the work is offloaded directly to the GPU drivers. EVR is mostly used in conjunction with native DXVA2 decoding, and even with hardware decoding taking place, the GPU is not taxed much by the EVR. Deinterlacing and other post-processing options were left at the default settings in the Intel HD Graphics Control Panel (these are applicable when EVR is chosen as the renderer). EVR-CP is the default renderer used by MPC-HC. It is usually paired with MPC-HC's own video decoders, some of which are DXVA-enabled. However, for our tests, we used the DXVA2 Native mode provided by the LAV Video Decoder. In addition to DXVA2 Native, we also used the QuickSync decoder developed by Eric Gur (an Intel applications engineer) and made available to the open source community. It makes use of the specialized decoder blocks that are part of the QuickSync engine in the GPU.
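
For clarity, the sketch below writes out the renderer/decoder pairings discussed above. The labels simply mirror what one selects in MPC-HC's output settings and LAV Video Decoder options; they are descriptive strings, not programmatic identifiers, and the charts that follow may not include every possible pairing.

```python
# Rough sketch of the renderer/decoder combinations discussed in this section
# (descriptive labels only, not API or filter identifiers).
import itertools

RENDERERS = ["EVR", "EVR-CP"]
DECODERS = [
    "LAV Video - DXVA2 Native",
    "Intel QuickSync (copy-back to DRAM)",
]

for renderer, decoder in itertools.product(RENDERERS, DECODERS):
    print(f"{renderer:6s} + {decoder}")
```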

Power consumption shows a tremendous decrease across all streams compared to our Ivy Bridge HTPC build. Admittedly, the passive Ivy Bridge HTPC uses a 55W TDP Core i3-3225, but, as we will see later, power consumption at full load for the Haswell build is very close to that of the Core i3-3225 build despite the lower TDP of the Core i7-4765T.

In general, using the QuickSync decoder results in higher power consumption because the decoded frames are copied back to the DRAM before being sent to the renderer. With native DXVA decoding, the frames are passed directly to the renderer without the copy-back step. The odd man out in the power numbers is the interlaced VC-1 clip, where QuickSync decoding is more efficient than 'native DXVA2'. This is because there is currently no support in the open source native DXVA2 decoders for interlaced VC-1 on Intel GPUs, and hence it is decoded in software. The QuickSync decoder, on the other hand, is able to handle it with the VC-1 bitstream decoder in the GPU.
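
To put a rough number on the cost of that copy-back step, the sketch below estimates the extra GPU-to-DRAM traffic per stream. It assumes the decoder outputs NV12 (1.5 bytes per pixel), which is typical for hardware decoders; the real overhead also depends on the driver and memory configuration, so treat these as ballpark figures.

```python
# Ballpark estimate of the extra memory traffic caused by copying decoded
# frames back to system RAM (the QuickSync path) versus leaving them in GPU
# memory (DXVA2 Native). Assumes NV12 output, i.e. 1.5 bytes per pixel.

def copyback_traffic_mb_s(width, height, fps, bytes_per_pixel=1.5):
    return width * height * bytes_per_pixel * fps / 1e6  # MB/s

CLIPS = {
    "1080p24": (1920, 1080, 24),
    "1080p60": (1920, 1080, 60),
    "480p30 (SD)": (720, 480, 30),
}

for name, (w, h, fps) in CLIPS.items():
    print(f"{name:12s} ~{copyback_traffic_mb_s(w, h, fps):6.1f} MB/s of copy-back traffic")
```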

The GPU utilization numbers follow a similar track to the power consumption numbers. EVR is very lean on the GPU, as discussed earlier, and the utilization numbers bear this out. QuickSync appears to stress the GPU more, possibly because of the copy-back step for the decoded frames.

madVR

Videophiles often prefer madVR as their renderer because of the choice of scaling algorithms available as well as myriad other features. In our recent Ivy Bridge HTPC review, we found that with DDR3-1600 DRAM, it was straightforward to get madVR working with the default scaling algorithms for all material at 1080p60 or below. In the meantime, Mathias Rauen (the developer of madVR) has added more features. To alleviate the ringing artifacts introduced by the Lanczos algorithm, an option to enable an anti-ringing filter was introduced, and a more computationally intensive scaling algorithm (Jinc) was also added. Unfortunately, enabling either the anti-ringing filter with Lanczos or choosing any variant of Jinc resulted in a large number of dropped frames. Haswell's HD 4600 is simply not powerful enough for these madVR features.
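
The gap between Lanczos and Jinc is easier to appreciate with the kernels written out. The sketch below uses the textbook definitions and a rough per-pixel tap count; madVR's actual implementation details (windowing, exact tap counts, the anti-ringing pass) are not public, so this is purely illustrative.

```python
# Rough sketch of why Jinc (EWA) scaling is so much heavier than plain Lanczos.
# Kernel definitions follow the usual conventions; tap counts are approximate.
import math
from scipy.special import j1  # Bessel function of the first kind, order 1

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, a=3):
    # Separable 1-D kernel: applied along each axis with 2*a taps per axis.
    return sinc(x) * sinc(x / a) if abs(x) < a else 0.0

def jinc(r):
    # Radially symmetric 2-D kernel: every sample inside the radius contributes,
    # so the footprint grows with the square of the radius and is not separable.
    return 1.0 if r == 0 else 2.0 * j1(math.pi * r) / (math.pi * r)

a = 3
taps_lanczos = 2 * a + 2 * a        # two separable 1-D passes
taps_jinc = math.pi * a * a         # ~samples inside a radius-3 disc
print(f"Lanczos{a}: ~{taps_lanczos} taps/pixel (separable)")
print(f"Jinc{a}:    ~{taps_jinc:.0f} taps/pixel (non-separable)")
```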

It is not possible to use native DXVA2 decoding with madVR because the decoded frames are not made available to an external renderer directly. (Update: It is possible to use DXVA2 Native with madVR since v0.85. Future HTPC articles will carry updated benchmarks.) To work around this issue, the LAV Video Decoder offers three options: software decoding, QuickSync, or DXVA2 Copy-Back. In the latter two cases, the decoded frames are brought back to system memory for madVR to take over. One of the interesting features integrated into recent madVR releases is the option to perform DXVA scaling. This is particularly interesting for HTPCs with Intel GPUs because the Intel HD Graphics engine uses dedicated hardware to implement the DXVA scaling API calls, while AMD and NVIDIA apparently implement those calls using pixel shaders. In order to obtain a frame of reference, we repeated our benchmark process using DXVA2 scaling for both luma and chroma instead of the default settings.
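
The decode paths described above differ mainly in where the frames end up before madVR gets hold of them. A small sketch, with descriptive labels only:

```python
# Where decoded frames live for each LAV Video output path when madVR is the
# renderer, per the description above (labels are descriptive, not API names).
DECODE_PATHS = {
    "Software (avcodec)": ("CPU decode", "system RAM"),
    "Intel QuickSync":    ("GPU decode", "system RAM via copy-back"),
    "DXVA2 Copy-Back":    ("GPU decode", "system RAM via copy-back"),
    "DXVA2 Native":       ("GPU decode", "GPU memory (usable with madVR v0.85+)"),
}

for path, (decode, handoff) in DECODE_PATHS.items():
    print(f"{path:20s} {decode:11s} -> frames handed to madVR from {handoff}")
```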

One of the interesting aspects to note here is that the power consumption numbers shift much further towards the lower end when DXVA2 scaling is used. This points to the updated video post-processing logic in the GPU being more power efficient.

DXVA scaling results in much lower GPU usage, particularly for SD material, with a corresponding decrease in average power consumption. Users with Intel GPUs can continue to enjoy madVR's other features while giving up the choice of a wide variety of scaling algorithms.

Comments

  • heffeque - Monday, June 3, 2013 - link

    Well... the AMD A4-5000 seems to be perfect for HTPC and I don't see it in this comparison.
    Why not try comparing what the AMD A4-5000 can do (4k, 23Hz, etc) versus this Haswell system?
    The CPU isn't that good, but there's no need for much CPU on HTPC systems, and also... the price, just look at the price.
  • meacupla - Monday, June 3, 2013 - link

    when you playback hi10 or silverlight content, having a fast cpu helps immensely, since those formats don't have dxva support.
  • halbhh2 - Tuesday, June 4, 2013 - link

    Consider prices, at $122 suggested, the new A10 6700 is going to be interesting as the real competition to this Intel chip.
  • majorleague - Wednesday, June 5, 2013 - link

    Here is a youtube link showing 3dmark11 and windows index rating for the 4770k 3.5ghz Haswell. Not overclocked.
    This is apparently around 10-20fps slower than the 6800k in most games. And almost twice the price!!
    Youtube link:
    http://www.youtube.com/watch?v=k7Yo2A__1Xw
  • JDG1980 - Monday, June 3, 2013 - link

    You can't use madVR on ARM. And most ARM platforms are highly locked down so you may be stuck with sub-par playback software from whoever the final vendor is.
  • HisDivineOrder - Tuesday, June 4, 2013 - link

    Because we don't live in next year, Doc Brown?
  • BMNify - Wednesday, June 12, 2013 - link

    For the same reason that QS isn't being used far more today: Intel and ARM devs talk the talk, but don't listen to or even stay in contact with the number one video quality partners, the x264 and ffmpeg devs, and don't provide their ARM patches for review and official inclusion in these two key codec code bases to actually use the ARM/Intel low-level video encode/decode APIs.
  • MrSpadge - Monday, June 3, 2013 - link

    Use an i5 and the price almost drops in half. Then undervolt it a bit and each regular CPU will only draw 40 - 50 W under sustained load. Which media playback doesn't create anyway.
  • Mayuyu - Sunday, June 2, 2013 - link

    2-Pass encodes do not offer any improvements in compression efficiency in x264. The only time you would want to use a 2-Pass encode is to hit a certain file size.

    Quicksync is irrelevant because their h264 encodes are inferior in quality to xvid (which has been outdated for a long time now).
  • raulizahi - Thursday, August 29, 2013 - link

    @Mayuyu, 2-pass x264 encodes using VBR do offer improvements in compression efficiency at the same video quality. I have proven it many times. An example: target 720p50 at 3Mbps VBR, first pass I get a certain quality, second pass I get noticeably better quality.
