Testbed and Software Setup

Instead of the usual high-end desktop CPUs (77W / 95W TDPs), we have opted for the Core i7-4765T for today's review. This is a 35W TDP CPU with four cores / eight threads, expected to retail at an MSRP of $303. Intel has a number of GPU configurations doing the rounds at the Haswell launch. The i7-4765T sports the HD 4600 GPU, which is the best iGPU available in an LGA 1150 configuration (the Iris Pro 5200 GPUs are reserved for BGA configurations and are unavailable to system builders).

The table below lists the hardware components of our Haswell HTPC testbed:

Haswell HTPC Testbed Setup
Processor: Intel Core i7-4765T - 2.00 GHz (Turbo to 3.0 GHz); Intel HD Graphics 4600 - up to 1200 MHz
Motherboard: ASRock Z87E-ITX (mITX)
OS Drive: Seagate 600 SSD ST240HM000 240 GB
Memory: G.SKILL Ares Series 8 GB (2 x 4 GB) DDR3-2133 SDRAM (PC3-17000) F3-2133C9Q-16GAB, CAS 9-11-10-28 2N
Optical Drive: ASUS 8X Blu-ray Drive BC-08B1ST
Case: Antec Skeleton ATX Open Air Case
Power Supply: Antec VP-450 450W ATX
Operating System: Windows 8 Professional x64
Displays / AVRs: Onkyo TX-SR606 + Acer H243H; Pioneer Elite VSX-32 + Sony Bravia KDL46EX720; Sony XBR-84X900; Seiki Digital SE50UY04

The ASRock Z87E-ITX board comes with a Broadcom-based 2T2R 802.11ac solution. Connected to a Buffalo WZR-D1800H 802.11ac router, I was able to consistently obtain 173 Mbps of practical throughput, comfortably above the 48 Mbps peak bitrate of Blu-ray content, and streaming Blu-ray ISOs over Wi-Fi from a NAS worked without issues. The board was very simple to get up and running, and given its form factor and the 35W CPU currently installed, I hope to migrate it to a passive HTPC build soon.
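For readers who want to sanity-check their own wireless link before relying on it for ISO streaming, below is a minimal sketch of a sustained TCP throughput test. It is not the tool behind the 173 Mbps figure above; the host, port, and duration are placeholders, and for a real test the receiver would run on the NAS rather than in a local thread.

```python
# Minimal sketch of a sustained TCP throughput check (placeholder host/port).
# For a meaningful number, run the receiver on the NAS and point HOST at it;
# the local thread here only keeps the snippet self-contained.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5001   # placeholders; use the NAS address in practice
CHUNK = b"\0" * 65536            # 64 KB per send
DURATION = 5.0                   # seconds of sustained transfer

def receiver() -> None:
    """Accept one connection and drain whatever the sender pushes."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):
                pass

def measure_mbps() -> float:
    """Send data for DURATION seconds and return the achieved rate in Mbps."""
    sent = 0
    with socket.create_connection((HOST, PORT)) as sock:
        deadline = time.monotonic() + DURATION
        while time.monotonic() < deadline:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
    return sent * 8 / (DURATION * 1_000_000)

if __name__ == "__main__":
    threading.Thread(target=receiver, daemon=True).start()
    time.sleep(0.2)              # give the listener a moment to come up
    print(f"~{measure_mbps():.0f} Mbps sustained")
```

Anything sustained above roughly 48 Mbps leaves headroom for the highest-bitrate Blu-ray streams.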

The Haswell platform officially supports DDR3-1600. For our testbed, we obtained a 16 GB DDR3-2133 Ares kit from G.Skill. The kit supports XMP 1.2, and the ASRock Z87E-ITX ran it at 2133 MHz flawlessly on first boot. However, we made sure to run the memory at the officially supported 1600 MHz in order to obtain results consistent with what an average system builder (non-overclocker) would see. The Ares kit makes it possible to study HTPC behavior from a memory bandwidth perspective, but we will not cover that aspect in this launch piece.
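As a rough frame of reference for the bandwidth gap between the two settings, the quick calculation below gives the theoretical peak figures for dual-channel operation; sustained real-world bandwidth will be noticeably lower.

```python
# Theoretical peak bandwidth for dual-channel DDR3 (8-byte / 64-bit bus per channel).
def peak_bandwidth_gbs(transfer_rate_mt_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak in GB/s: transfers per second x bytes per transfer x channel count."""
    return transfer_rate_mt_s * 1_000_000 * bus_bytes * channels / 1e9

for speed in (1600, 2133):
    print(f"DDR3-{speed}: {peak_bandwidth_gbs(speed):.1f} GB/s peak")
# DDR3-1600: 25.6 GB/s peak / DDR3-2133: 34.1 GB/s peak
```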

The software setup for the Haswell HTPC testbed involved the following:

Haswell HTPC Testbed Software Setup
Intel Graphics Driver 9.18.10.3107 (Version on ASRock Motherboard DVD)
Blu-ray Playback Software CyberLink PowerDVD 13
Media Player MPC-HC v1.6.7.7114
Splitter / Decoder LAV Filters 0.57
Renderers EVR / EVR-CP (integrated in MPC-HC v1.6.7.7114)
madVR v0.86.1

The madVR renderer settings were fixed as below for testing purposes:

  • Decoding features disabled
  • Deinterlacing set to:
    • automatically activated when needed (activate when in doubt)
    • automatic source type detection (i.e., 'disable automatic source type detection' left unchecked)
    • only look at pixels in the frame center
  • Scaling algorithms were set as below (a brief Lanczos resampling sketch follows this list):
    • Chroma upscaling set to SoftCubic with softness of 100
    • Luma upscaling set to Lanczos with 4 taps (anti-ringing filter deactivated, scale in linear light unchecked) or DXVA2, depending on the test
    • Luma downscaling set to Lanczos with 4 taps (anti-ringing filter deactivated, scale in linear light unchecked) or DXVA2, depending on the test
  • Rendering parameters were set as below:
    • Automatic fullscreen exclusive mode was used
    • CPU and GPU queue sizes were set to 32 and 24 respectively
    • Under the exclusive mode settings, the seek bar was enabled, the switch from windowed to exclusive mode was delayed by 3 seconds, and 16 frames were configured to be presented in advance; the GPU flushing modes were left at their defaults
    • Smooth motion was left disabled
    • The 'trade quality for performance' settings were left at their defaults (i.e., linear light was left disabled for smooth motion frame blending, and custom pixel shader results were stored in 16-bit buffers instead of 32-bit)
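To make the 'Lanczos with 4 taps' option a little more concrete, here is a small, unoptimized 1-D resampling sketch of the windowed-sinc weighting involved. This is only an illustration of the math behind the setting; madVR itself does the work in GPU pixel shaders on both axes, with optional extras such as the anti-ringing filter.

```python
# Illustrative 1-D Lanczos resampling (a = number of taps). Not madVR's code;
# just the windowed-sinc weighting that the 'Lanczos with 4 taps' option refers to.
import math

def lanczos_weight(x: float, taps: int = 4) -> float:
    """Lanczos windowed-sinc weight for an offset x from the output sample position."""
    if x == 0.0:
        return 1.0
    if abs(x) >= taps:
        return 0.0
    px = math.pi * x
    return taps * math.sin(px) * math.sin(px / taps) / (px * px)

def upscale_1d(samples: list[float], factor: int, taps: int = 4) -> list[float]:
    """Naive integer-factor upscale of a 1-D signal using Lanczos weights."""
    out = []
    for i in range(len(samples) * factor):
        center = i / factor                      # output position in source coordinates
        lo, hi = math.floor(center) - taps + 1, math.floor(center) + taps
        weights = [(j, lanczos_weight(center - j, taps)) for j in range(lo, hi + 1)]
        norm = sum(w for _, w in weights)
        clamp = lambda j: min(max(j, 0), len(samples) - 1)   # clamp taps at the edges
        out.append(sum(samples[clamp(j)] * w for j, w in weights) / norm)
    return out

print([round(v, 3) for v in upscale_1d([0.0, 1.0, 1.0, 0.0], factor=2)])
```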

Unlike with our Ivy Bridge setup, we found windowed mode to perform considerably worse than exclusive mode.

MPC-HC and LAV Filters settings were altered from the defaults as below for testing purposes:

  • DirectShow Video Output was configured as EVR / EVR-CP / madVR under Options > Playback > Output
  • All internal source and transform filters were disabled under Options > Internal Filters
  • Under Options > External Filters, LAV Splitter, LAV Audio Decoder and LAV Video Decoder were added as Preferred filters
  • LAV Audio Decoder was set to bitstream all applicable formats
  • LAV Video Decoder settings were altered from the defaults as below:
    • Hardware Acceleration was set to DXVA2 Native / QuickSync / None depending on the aspect being tested; UHD (4K) was enabled in all cases (a clip-inspection sketch follows this list)
    • Deinterlacing mode was set to 'Aggressive'
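When cycling through the DXVA2 Native / QuickSync / software decode paths across a folder of test clips, it helps to confirm each clip's codec, resolution, frame rate, and field order up front. The helper below is a hypothetical sketch (not part of the review workflow) that uses ffprobe and assumes FFmpeg is installed and on the PATH.

```python
# Hypothetical clip-inspection helper: report the first video stream's codec,
# resolution, frame rate and field order via ffprobe (requires FFmpeg on PATH).
import json
import subprocess
import sys

def probe_video(path: str) -> dict:
    """Return selected attributes of the first video stream as a dict."""
    cmd = [
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,width,height,avg_frame_rate,field_order",
        "-of", "json", path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)["streams"][0]

if __name__ == "__main__":
    stream = probe_video(sys.argv[1])
    num, den = stream["avg_frame_rate"].split("/")
    fps = int(num) / int(den) if int(den) else 0.0
    print(f"{stream['codec_name']} {stream['width']}x{stream['height']} @ {fps:.3f} fps, "
          f"field order: {stream.get('field_order', 'unknown')}")
```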
Comments

  • HisDivineOrder - Tuesday, June 4, 2013 - link

    I've heard this song and dance before. It never happens. Plus, limiting people to GDDR5 of pre-determined amounts for a HTPC seems like an exercise in being stupid.
  • Spunjji - Tuesday, June 4, 2013 - link

    Yeah, I'm not buying that rumour. Doesn't make much sense.
  • JDG1980 - Sunday, June 2, 2013 - link

    It's good to see that Intel finally got around to fixing the 23.976 fps bug, which was the biggest show-stopper for using their integrated graphics in a HTPC.

    Regarding MadVR, I'd be interested to see more benchmarks. How good can you run the settings before hitting a wall with GPU utilization? How about on the GT3e - if this ever shows up in an all-in-one Mini-ITX board or NUC, it might be a great choice for HTPCs. Can it handle the good scaling algorithms?

    My own experience is that anti-ringing doesn't add that much GPU load. I recently upgraded to a Radeon HD 7750, and it can handle anti-ringing filters on both luma and chroma with no problem. Chroma upscaling works fine with 3-tap Jinc, and luma also can do this with SD content (even interlaced), but for the most demanding test clip I have (1440x1080 interlaced 60 fields per second) I have to downgrade luma scaling to either Lanczos 3-tap or SoftCubic 80 to avoid dropping frames. (The output destination is a 1080p TV.) I suspect a 7790 or 7850 could handle 3-tap Jinc for both chroma and luma at all resolutions and frame rates up to full HD.

    By the way, I found a weird problem with madVR - when I ran GPU-Z in the background to monitor load, all interlaced content dropped frames. Didn't matter what settings I used. Closing GPU-Z ended the problem. I was still able to monitor GPU load with Microsoft's "Process Explorer" application and this did not cause any problems.

    Regarding 4K output, did you test whether DisplayPort 60 Hz 4K works properly? This might be of interest to some users, especially if the upcoming Asus 4K monitor is released at a reasonable price point. I know people have had to use some odd tricks to get the Sharp 4K monitor to do native resolution at 60 Hz with existing cards.
  • ganeshts - Monday, June 3, 2013 - link

    This is very interesting.. What version of GPU-Z were you using? I will check whether my Jinc / anti-ringing dropped frames were due to GPU-Z running in the background. I did do the initial setup when GPU-Z wasn't active, but obviously the benchmark runs were run with GPU-Z active in the background. Did you see any difference in GPU load between GPU-Z and Process Explorer when playing interlaced content with dropped frames?
  • JDG1980 - Monday, June 3, 2013 - link

    I was using the latest version (0.7.1) of GPU-Z. The strange part is that the GPU load calculation was correct - it was just dropping frames for no reason, it wasn't showing the GPU as being maxed out. For the video card, I was using the newest stable Catalyst driver (13.4, I believe) from AMD's website. The OS is Windows 7 Ultimate (64-bit).

    The only reason I suspected GPU-Z is because after searching a bunch of forums to try to find out why interlaced content (even SD with low madVR settings) wouldn't play properly, I found one other user who said he had to turn off GPU-Z. I cannot say if this is a widespread issue and it's possible it may be limited to certain system configurations or certain GPUs. Still worth trying, though. Thanks for the follow-up!
  • tential - Sunday, June 2, 2013 - link

    I don't understand the H.264 Transcoding Performance chart at all. Can someone help?

    QuickSync does more FPS at 720p than 1080p. This makes sense.

    The x264 runs on the Core i3 and Core i7 post higher FPS at 1080p but lower at 720p. Why is this?
  • ganeshts - Monday, June 3, 2013 - link

    Maybe the downscaling of the frame from 1080p to 720p sucks up more resources, causing the drop in FPS? Remember that the source is 1080p...
  • tential - Monday, June 3, 2013 - link

    Ok so if I'm downscaling to 720p, why does FPS increase with quicksync, but decrease with the processor?

    It's OPPOSITE directions: one increases (QuickSync), one decreases (CPU). Wouldn't it be the same both ways?
  • ganeshts - Monday, June 3, 2013 - link

    Downscaling is also hardware accelerated in QS mode. Hardware transcode is faster for 720p decoded frames rather than 1080p decoded frames. The time taken to downscale is much lower than the time taken to transcode the 'extra pixels' in a 1080p version.
  • elian123 - Monday, June 3, 2013 - link

    Ganesh, you mention "The Iris Pro 5200 GPUs are reserved for BGA configurations and unavailable to system builders". Does that imply that there won't be motherboards for sale with the 4770R integrated? Will the 4770R only be available in complete systems?
