HTPC Credentials

The higher TDP of the processor in Skull Canyon, combined with the new chassis design, makes the unit a bit noisier than the traditional NUCs. It would be tempting to assume that the extra EUs in the Iris Pro Graphics 580, combined with the eDRAM, would let GPU-intensive renderers such as madVR operate more effectively. That may well be true in part (though madVR now has a DXVA2 option for certain scaling operations), but the GPU still doesn't have full HEVC 10b decoding, nor stable drivers for HEVC decoding on Windows 10. In any case, it is still worthwhile to evaluate the basic HTPC capabilities of the Skull Canyon NUC6i7KYK.

Refresh Rate Accuracy

Starting with Haswell, Intel has been on par with AMD and NVIDIA with respect to display refresh rate accuracy. The most important refresh rate for videophiles is obviously 23.976 Hz (the 23 Hz setting). As expected, the Intel NUC6i7KYK (Skull Canyon) has no trouble refreshing the display appropriately in this setting.

The gallery below presents some of the other refresh rates that we tested out. The first statistic in madVR's OSD indicates the display refresh rate.
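
As a side note for readers wondering how much a small deviation matters, the sketch below is not taken from madVR; it is just the standard arithmetic showing how often a frame has to be repeated or dropped when the measured refresh rate drifts away from the 23.976 fps content frame rate.

```python
# Rough sketch of refresh rate accuracy arithmetic: the further the measured
# display refresh rate drifts from the content frame rate, the more often a
# frame must be repeated or dropped to keep audio and video in sync.

CONTENT_FPS = 24000 / 1001          # 23.976... fps film content

def frame_slip_interval(display_hz, content_fps=CONTENT_FPS):
    """Seconds between forced frame repeats/drops at a given refresh rate."""
    drift = abs(display_hz - content_fps)   # frames of error accumulated per second
    return float("inf") if drift == 0 else 1.0 / drift

if __name__ == "__main__":
    # e.g. a display actually refreshing at 23.972 Hz needs a repeated frame
    # roughly every four minutes; 23.976 Hz pushes that out to many hours.
    for measured_hz in (23.970, 23.972, 23.976):
        print(f"{measured_hz:.3f} Hz -> one repeat/drop every "
              f"{frame_slip_interval(measured_hz):.0f} s")
```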

Network Streaming Efficiency

Evaluation of OTT playback efficiency was done by playing back our standard YouTube test stream and five minutes from our standard Netflix test title. Using HTML5, the YouTube stream plays back a 1080p H.264 encoding. Since YouTube now defaults to HTML5 for video playback, we have stopped evaluating Adobe Flash acceleration. Note that only NVIDIA exposes GPU and VPU loads separately. Both Intel and AMD bundle the decoder load along with the GPU load. The following two graphs show the power consumption at the wall for playback of the HTML5 stream in Mozilla Firefox (v 46.0.1).

YouTube Streaming - HTML5: Power Consumption

GPU load was around 13.71% for the YouTube HTML5 stream and 0.02% for the steady state 6 Mbps Netflix streaming case. The power consumption of the GPU block was reported to be 0.71W for the YouTube HTML5 stream and 0.13W for Netflix.
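
For readers who want to reproduce this kind of measurement, the following sketch (not the tooling used for this review) averages the GPU load and GPU power columns from a CSV sensor log captured with a monitoring utility such as GPU-Z or HWiNFO during a playback run. The file name and column headers are placeholders and will differ depending on the logging tool.

```python
# Rough sketch: average the GPU load and GPU power columns from a CSV sensor
# log captured during a playback run. The file name and column headers below
# are placeholders - adjust them to whatever your logging tool writes out.
import csv

LOG_FILE = "playback_sensors.csv"          # hypothetical export
LOAD_COLUMN = "GPU Load [%]"               # placeholder header
POWER_COLUMN = "GPU Power [W]"             # placeholder header

def column_average(path, column):
    """Mean of a numeric column, ignoring rows that fail to parse."""
    values = []
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            try:
                values.append(float(row[column]))
            except (KeyError, ValueError):
                continue
    return sum(values) / len(values) if values else float("nan")

if __name__ == "__main__":
    print(f"Average GPU load : {column_average(LOG_FILE, LOAD_COLUMN):.2f} %")
    print(f"Average GPU power: {column_average(LOG_FILE, POWER_COLUMN):.2f} W")
```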

Netflix streaming evaluation was done using the Windows 10 Netflix app. Manual stream selection is available (Ctrl-Alt-Shift-S) and debug information / statistics can also be viewed (Ctrl-Alt-Shift-D). Statistics collected for the YouTube streaming experiment were also collected here.

Netflix Streaming - Windows 10 App: Power Consumption

Decoding and Rendering Benchmarks

In order to evaluate local file playback, we concentrate on EVR-CP, madVR and Kodi. We already know that EVR works quite well even with the Intel IGP for our test streams. Under madVR, we used the DXVA2 scaling logic (Intel's fixed-function scaling logic, triggered via the DXVA2 APIs, is known to be quite effective). We used MPC-HC 1.7.10 x86 with LAV Filters 0.68.1 set as preferred in the options. For the madVR evaluation, we used madVR 0.90.19.

In our earlier reviews, we focused on presenting the GPU loading and power consumption at the wall in a table (with problematic streams in bold). Starting with the Broadwell NUC review, we decided to represent the GPU load and power consumption in a graph with dual Y-axes. Nine different test streams of 90 seconds each were played back with a gap of 30 seconds between each of them. The characteristics of each stream are annotated at the bottom of the graph. Note that the GPU usage is graphed in red and needs to be considered against the left axis, while the at-wall power consumption is graphed in green and needs to be considered against the right axis.
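
For those curious how such a graph can be put together, below is an illustrative matplotlib sketch with made-up sample data. It is not the script used to generate the review's graphs, but it follows the same convention: GPU load in red against the left axis, at-wall power in green against the right axis.

```python
# Illustrative sketch of the dual Y-axis plot style used for these graphs.
# The sample data is made up; in the review, readings are logged while nine
# 90-second streams play back with 30-second idle gaps in between.
import matplotlib.pyplot as plt

seconds = list(range(0, 300, 5))
gpu_load = [20 + (t % 90) / 9 for t in seconds]      # placeholder GPU load (%)
wall_power = [25 + load / 4 for load in gpu_load]    # placeholder power (W)

fig, ax_load = plt.subplots()
ax_power = ax_load.twinx()                           # second Y axis on the right

ax_load.plot(seconds, gpu_load, color="red")
ax_power.plot(seconds, wall_power, color="green")

ax_load.set_xlabel("Time (s)")
ax_load.set_ylabel("GPU load (%)", color="red")
ax_power.set_ylabel("Power at the wall (W)", color="green")
plt.title("GPU load vs. at-wall power (illustrative data)")
plt.tight_layout()
plt.show()
```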

Frame drops are evident whenever the GPU load consistently stays above the 85 - 90% mark. We did not hit that case with any of our test streams. Note that we have not officially moved to 4K for our HTPC evaluation. We did verify that HEVC 8b decoding works well (even 4Kp60 had no issues), but HEVC 10b hybrid decoding was problematic: some clips played back with heavy CPU usage, while other clips tended to result in a black screen (the same clips played back without issues on a GTX 1080).

Moving on to the codec support, the Intel Iris Pro Graphics 580 is a known quantity with respect to the scope of supported hardware accelerated codecs. DXVA Checker serves as a confirmation for the features available in driver version 15.40.23.4444.

It must be remembered that the HEVC_VLD_Main10 DXVA profile noted above uses hybrid decoding, which taxes both CPU and GPU resources.
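
One quick way to see the hybrid nature of this profile for yourself is to log CPU utilization while a Main10 clip plays back: a fully fixed-function decode keeps the CPU near idle, while hybrid decode shows up as a sustained load on top of the GPU activity. The sketch below is a minimal example using the third-party psutil package; it is not part of the review's methodology.

```python
# Minimal sketch: sample overall CPU utilization once a second while a clip
# plays back. Sustained CPU load during "hardware" HEVC Main10 playback is a
# telltale sign of hybrid decoding.
# Requires the third-party psutil package (pip install psutil).
import psutil

SAMPLE_SECONDS = 90        # matches the 90-second test streams used above

def log_cpu(samples=SAMPLE_SECONDS):
    readings = []
    for _ in range(samples):
        load = psutil.cpu_percent(interval=1)   # blocks for ~1 s per sample
        readings.append(load)
        print(f"CPU: {load:5.1f} %")
    print(f"Average over {len(readings)} s: {sum(readings) / len(readings):.1f} %")

if __name__ == "__main__":
    log_cpu()
```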

As a general note, while playing back 4K videos on a 1080p display, I found madVR with DXVA2 scaling to be more power-efficient than the EVR-CP renderer that MPC-HC uses by default.


Comments

  • Kimo19 - Monday, May 23, 2016 - link

    Thanks a lot for the review. I am thinking of getting this machine for photo editing (Lightroom) and mobile development. Would it be a good choice? I was thinking the processor/RAM/SSD are good enough to provide great performance for the next 2-3 years, and the Iris Pro could be a good GPU to drive a high-resolution monitor.
  • TheinsanegamerN - Monday, May 23, 2016 - link

    For the price you could get something much more powerful, or something with similar performance and better GPU support for a cheaper price than this NUC.
  • fanofanand - Monday, May 23, 2016 - link

    Keep in mind that you are bringing your own RAM and SSD, the kit does not include those items for the consumer. As for the iGPU providing support for high resolution, I think that will depend entirely on your workload.
  • alpha64 - Monday, May 23, 2016 - link

    Ganesh, did you get confirmation directly from Intel that PCI-E is limited on this system because it runs through the H170? From my research on ARK and other places, it appears that the H170 acts as a PCI Express passthrough, with a PCI Express 3.0 x16 connection to the CPU, and the ability to split the configuration off to smaller widths and more ports coming off the H170. It would seem the DMI3 connection is for other (non-PCI Express) peripherals. Granted, from the block diagram, it is not apparent that the H170 connects to the CPU's PCI-E x16 connection, but my guess is that it does.

    I would just like clarification, as this is a pretty big deal.
  • ganeshts - Monday, May 23, 2016 - link

    I have confirmation from the technical marketing manager for NUC products at Intel that the communication link between the H170 and the CPU is only effectively PCIe 3.0 x4 for bandwidth purposes. It is definitely not a PCIe 3.0 x16.

    H170 itself can act as a PCIe switch, but, for anything that talks to the CPU, it has to go through the DMI 3.0 lanes.
  • alpha64 - Monday, May 23, 2016 - link

    Thanks for the clarification!
  • extide - Monday, May 23, 2016 - link

    The 16 CPU lanes are entirely unused in this device. The PCH (H170 in this case) is NEVER connected by a PCIe x16 link -- it is always connected via DMI 3.0 in the H, Q, B and Z platforms. DMI 3.0 has the same B/W as PCIe 3.0 x4. All of the stuff hanging off the H170 shares that same DMI 3.0 link.
  • alpha64 - Monday, May 23, 2016 - link

    Great to know! Can you tell me what the "Processor PCI Express Port" details under "I/O Specifications" on Intel's ARK page for the H170 are for? I thought they were for connecting to the PCI Express lanes on the CPU, but would be happy to learn if I am incorrect.
  • Valantar - Monday, May 23, 2016 - link

    I'm disappointed in the lack of teardown pictures. I was at the very least expecting a look at the cpu side of the board. Is that too much to ask?

    Also, considering the massive power throttling seen in your testing, and the torture test nature of the testing, I'd love it if you could monitor clocks and temps during gaming too - I'd be interested in seeing what kind of cpu clocks this can maintain in a low-threaded gaming workload.
  • allanmac - Monday, May 23, 2016 - link

    Please run SGEMM on the HD 580 ... ASAP! :)

    https://software.intel.com/en-us/articles/sgemm-fo...
