Concluding Remarks

The Haswell platform ticks all the checkboxes for the mainstream HTPC user. It fixes some nagging bugs left behind in Ivy Bridge. Setting up MPC-HC with LAV Filters was a walk in the park. With good and stable support for DXVA2 APIs in the drivers, even software like XBMC can take advantage of the GPU's capabilities. Essential video processing steps such as chroma upsampling, cadence detection and deinterlacing work beautifully. For advanced users, the GPU is capable of supporting madVR in most usage scenarios even with DDR3-1600 memory in the system.

Admittedly, there doesn't seem to be much improvement in madVR capabilities over the HD4000 in Ivy Bridge. The madVR developer has also added more complicated algorithms to the mix and made further refinements to existing ones (such as the anti-ringing filter). The improvements in the Intel GPU capabilities haven't kept up with the requirements of these updates. That said, madVR with DXVA2 scaling works well and looks good, satisfying some of the HTPC users who have moved to it from the default renderers. We could certainly complain about some missing driver features and the lack of hardware decode capabilities for 10b H.264 streams. HEVC (H.265) decode acceleration is absent too. However, let us be reasonable and accept the fact that despite anime's adoption of 10b H.264 in a big way, it is yet to gain mass-market appeal. HEVC was standardized pretty recently, and Haswell's GPU would have long been past the design stage by that time. To further Intel's defense, neither NVIDIA nor AMD support these two features either.

Speaking of display refresh rate support, Intel has finally fixed the 23.976 Hz bug that has been plaguing Intel-based HTPCs since 2008. This is going to make HTPC enthusiasts really happy. The fact that Intel achieves the closest match to the required refresh rate compared to AMD and NVIDIA cards is just icing on the cake. The 4K H.264 decode and output support from Haswell seems very promising for the 4K ecosystem. It also strengthens H.264's relevance for some time to come in the 4K arena.

The biggest disappointment with Haswell in the media department is the regression in QuickSync video transcode quality. The salt in the wound is really Intel's pre-launch claims of significant increases in QuickSync video quality. Ivy Bridge definitely produces better quality QSV accelerated video transcodes. Combine that with a lack of significant progress on the software support side until recently (hooray for Handbrake, boo for no substantial OS X deployment) and you'd almost get the impression that Intel was trying its best to ruin one of the most promising features of its Core microprocessors. Haswell doesn't ruin QuickSync; the technology is still a great way of getting your content quickly transcoded for use on mobile devices. However, in its current implementation, Haswell does absolutely nothing to further QuickSync - in fact, it's a definite step in the wrong direction.

The low power consumption of the Haswell system makes it ideal for HTPC builds, and we are very bullish on the NUC as well as the capabilities of completely passive builds as HTPC platforms. Our overall conclusion is that Haswell takes discrete GPUs out of the equation for a vast majority of HTPC users. The few who care about advanced madVR scaling algorithms (such as Jinc and the anti-ringing filters for Lanczos) may need to fork out for a discrete GPU, but even those will probably be of the higher end variety rather than the entry level GT 640s and AMD 7750s that we have been suggesting so far.

Comments

  • HisDivineOrder - Tuesday, June 4, 2013 - link

    I've heard this song and dance before. It never happens. Plus, limiting people to GDDR5 of pre-determined amounts for a HTPC seems like an exercise in being stupid.
  • Spunjji - Tuesday, June 4, 2013 - link

    Yeah, I'm not buying that rumour. Doesn't make much sense.
  • JDG1980 - Sunday, June 2, 2013 - link

    It's good to see that Intel finally got around to fixing the 23.976 fps bug, which was the biggest show-stopper for using their integrated graphics in a HTPC.

    Regarding MadVR, I'd be interested to see more benchmarks. How good can you run the settings before hitting a wall with GPU utilization? How about on the GT3e - if this ever shows up in an all-in-one Mini-ITX board or NUC, it might be a great choice for HTPCs. Can it handle the good scaling algorithms?

    My own experience is that anti-ringing doesn't add that much GPU load. I recently upgraded to a Radeon HD 7750, and it can handle anti-ringing filters on both luma and chroma with no problem. Chroma upscaling works fine with 3-tap Jinc, and luma also can do this with SD content (even interlaced), but for the most demanding test clip I have (1440x1080 interlaced 60 fields per second) I have to downgrade luma scaling to either Lanczos 3-tap or SoftCubic 80 to avoid dropping frames. (The output destination is a 1080p TV.) I suspect a 7790 or 7850 could handle 3-tap Jinc for both chroma and luma at all resolutions and frame rates up to full HD.

    By the way, I found a weird problem with madVR - when I ran GPU-Z in the background to monitor load, all interlaced content dropped frames. Didn't matter what settings I used. Closing GPU-Z ended the problem. I was still able to monitor GPU load with Microsoft's "Process Explorer" application and this did not cause any problems.

    Regarding 4K output, did you test whether DisplayPort 60 Hz 4K works properly? This might be of interest to some users, especially if the upcoming Asus 4K monitor is released at a reasonable price point. I know people have had to use some odd tricks to get the Sharp 4K monitor to do native resolution at 60 Hz with existing cards.
  • ganeshts - Monday, June 3, 2013 - link

    This is very interesting... What version of GPU-Z were you using? I will check whether my Jinc / anti-ringing dropped frames were due to GPU-Z running in the background. I did do the initial setup when GPU-Z wasn't active, but obviously the benchmark runs were run with GPU-Z active in the background. Did you see any difference in GPU load between GPU-Z and Process Explorer when playing interlaced content with dropped frames?
  • JDG1980 - Monday, June 3, 2013 - link

    I was using the latest version (0.7.1) of GPU-Z. The strange part is that the GPU load calculation was correct - it was just dropping frames for no reason, it wasn't showing the GPU as being maxed out. For the video card, I was using the newest stable Catalyst driver (13.4, I believe) from AMD's website. The OS is Windows 7 Ultimate (64-bit).

    The only reason I suspected GPU-Z is because after searching a bunch of forums to try to find out why interlaced content (even SD with low madVR settings) wouldn't play properly, I found one other user who said he had to turn off GPU-Z. I cannot say if this is a widespread issue and it's possible it may be limited to certain system configurations or certain GPUs. Still worth trying, though. Thanks for the follow-up!
  • tential - Sunday, June 2, 2013 - link

    I don't understand the H.264 Transcoding Performance chart at all can someone help?

    QuickSync does more FPS at 720p than 1080p. This makes sense.

    The x264 on the Core i3 and core i7 post higher FPS in 1080p but lower in 720p. Why is this?
  • ganeshts - Monday, June 3, 2013 - link

    Maybe the downscaling of the frame from 1080p to 720p sucks up more resources, causing the drop in FPS? Remember that the source is 1080p...
  • tential - Monday, June 3, 2013 - link

    Ok so if I'm downscaling to 720p, why does FPS increase with quicksync, but decrease with the processor?

    It's OPPOSITE directions one increases (quicksync) one decreases (cpu). Wouldn't it be the same both ways?
  • ganeshts - Monday, June 3, 2013 - link

    Downscaling is also hardware accelerated in QS mode. Hardware transcode is faster for 720p decoded frames rather than 1080p decoded frames. The time taken to downscale is much lower than the time taken to transcode the 'extra pixels' in a 1080p version.
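    The pixel arithmetic behind this explanation is easy to verify; a minimal illustrative sketch (not from the original discussion - the per-pixel cost model is a simplifying assumption):

    ```python
    # Back-of-the-envelope look at the 1080p -> 720p transcode discussion.
    # With QuickSync, the downscale is hardware accelerated and cheap, so the
    # encoder only has to chew through the smaller 720p frames. A pure CPU
    # pipeline pays for the software downscale on top of decoding the full
    # 1080p source, which can offset the savings from encoding fewer pixels.

    pixels_1080p = 1920 * 1080   # pixels per full-HD frame
    pixels_720p = 1280 * 720     # pixels per 720p frame

    ratio = pixels_1080p / pixels_720p
    print(f"A 1080p frame has {ratio:.2f}x the pixels of a 720p frame")
    # So, under a rough pixels-encoded cost model, a hardware-downscaled
    # 720p encode touches less than half the pixels of a 1080p encode.
    ```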
  • elian123 - Monday, June 3, 2013 - link

    Ganesh, you mention "The Iris Pro 5200 GPUs are reserved for BGA configurations and unavailable to system builders". Does that imply that there won't be motherboards for sale with the 4770R integrated? Will the 4770R only be available in complete systems?
