HTPC Credentials

The higher TDP of the processor in Skull Canyon, combined with the new chassis design, makes the unit a bit noisier than the traditional NUCs. It would be tempting to assume that the extra EUs in the Iris Pro Graphics 580, combined with the eDRAM, would let GPU-intensive renderers such as madVR operate more effectively. That is partly true (though madVR now has a DXVA2 option for certain scaling operations), but the GPU still lacks full HEVC 10b decoding and stable drivers for HEVC decoding on Windows 10. In any case, it is still worthwhile to evaluate the basic HTPC capabilities of the Skull Canyon NUC6i7KYK.

Refresh Rate Accuracy

Starting with Haswell, Intel has been on par with AMD and NVIDIA with respect to display refresh rate accuracy. The most important refresh rate for videophiles is obviously 23.976 Hz (the 23 Hz setting). As expected, the Intel NUC6i7KYK (Skull Canyon) has no trouble refreshing the display appropriately in this setting.

The gallery below presents some of the other refresh rates that we tested out. The first statistic in madVR's OSD indicates the display refresh rate.
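As a back-of-the-envelope illustration of why that last decimal place matters, here is a minimal Python sketch that computes how long playback can run before the mismatch between the source frame rate and the actual display refresh rate forces a repeated or dropped frame (the same kind of figure madVR reports in its OSD). The measured refresh values below are hypothetical examples, not readings from this unit.

```python
from fractions import Fraction

def seconds_per_correction(source_fps: float, display_hz: float) -> float:
    """Time until the accumulated source/display mismatch equals one full
    frame, i.e. how often a frame has to be repeated or dropped."""
    drift_per_second = abs(display_hz - source_fps)  # frames of error per second
    if drift_per_second == 0:
        return float("inf")
    return 1.0 / drift_per_second

# 23.976 Hz content is really 24000/1001 fps.
source = float(Fraction(24000, 1001))

# Hypothetical measured display refresh rates.
for measured in (23.976, 23.972, 24.000):
    t = seconds_per_correction(source, measured)
    if t == float("inf"):
        print(f"display {measured:.3f} Hz: perfectly matched, no corrections")
    else:
        print(f"display {measured:.3f} Hz: one repeated/dropped frame every {t/60:.1f} min")
```

Even a display that measures 23.972 Hz instead of 23.976 Hz ends up repeating or dropping a frame roughly every four minutes, which is why the 23 Hz setting gets so much scrutiny.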

Network Streaming Efficiency

Evaluation of OTT playback efficiency was done by playing back our standard YouTube test stream and five minutes from our standard Netflix test title. Using HTML5, the YouTube stream plays back a 1080p H.264 encoding. Since YouTube now defaults to HTML5 for video playback, we have stopped evaluating Adobe Flash acceleration. Note that only NVIDIA exposes GPU and VPU loads separately. Both Intel and AMD bundle the decoder load along with the GPU load. The following two graphs show the power consumption at the wall for playback of the HTML5 stream in Mozilla Firefox (v 46.0.1).

YouTube Streaming - HTML5: Power Consumption

GPU load was around 13.71% for the YouTube HTML5 stream and 0.02% for the steady state 6 Mbps Netflix streaming case. The power consumption of the GPU block was reported to be 0.71W for the YouTube HTML5 stream and 0.13W for Netflix.
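The steady-state numbers above are averages over the playback window. Below is a minimal sketch of how such figures can be derived from a sample log; the CSV layout and column names are hypothetical placeholders, not the exact format of our logging setup.

```python
import csv
from statistics import mean

def steady_state_averages(log_path: str, start_s: float, end_s: float) -> dict:
    """Average logged metrics over a steady-state window [start_s, end_s].

    Assumes a hypothetical CSV with columns:
    time_s, gpu_load_pct, gpu_power_w, wall_power_w
    """
    gpu_load, gpu_power, wall_power = [], [], []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            t = float(row["time_s"])
            if start_s <= t <= end_s:
                gpu_load.append(float(row["gpu_load_pct"]))
                gpu_power.append(float(row["gpu_power_w"]))
                wall_power.append(float(row["wall_power_w"]))
    return {
        "gpu_load_pct": mean(gpu_load),
        "gpu_power_w": mean(gpu_power),
        "wall_power_w": mean(wall_power),
    }

# Example: average over a 5-minute steady-state stretch of a streaming run.
# print(steady_state_averages("netflix_run.csv", start_s=60, end_s=360))
```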

Netflix streaming evaluation was done using the Windows 10 Netflix app. Manual stream selection is available (Ctrl-Alt-Shift-S) and debug information / statistics can also be viewed (Ctrl-Alt-Shift-D). Statistics collected for the YouTube streaming experiment were also collected here.

Netflix Streaming - Windows 10 App: Power Consumption

Decoding and Rendering Benchmarks

In order to evaluate local file playback, we concentrate on EVR-CP, madVR and Kodi. We already know that EVR-CP works quite well even with the Intel IGP for our test streams. Under madVR, we used the DXVA2 scaling logic (Intel's fixed-function scaling logic triggered via the DXVA2 APIs is known to be quite effective). We used MPC-HC 1.7.10 x86 with LAV Filters 0.68.1 set as the preferred filters in the options. For the madVR tests, we used version 0.90.19.

In our earlier reviews, we focused on presenting the GPU loading and power consumption at the wall in a table (with problematic streams in bold). Starting with the Broadwell NUC review, we decided to represent the GPU load and power consumption in a graph with dual Y-axes. Nine different test streams of 90 seconds each were played back with a gap of 30 seconds between each of them. The characteristics of each stream are annotated at the bottom of the graph. Note that the GPU usage is graphed in red and needs to be considered against the left axis, while the at-wall power consumption is graphed in green and needs to be considered against the right axis.
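For readers who want to reproduce that presentation with their own logs, the following is a minimal matplotlib sketch of a dual Y-axes plot (GPU load on the left, at-wall power on the right). The per-second samples here are placeholders rather than our actual measurements.

```python
import matplotlib.pyplot as plt

# Placeholder per-second samples; substitute your own logged data.
time_s     = list(range(0, 120))
gpu_load   = [35 + (t % 90) * 0.2 for t in time_s]   # percent
wall_power = [28 + (t % 90) * 0.05 for t in time_s]  # watts

fig, ax_load = plt.subplots(figsize=(10, 4))
ax_power = ax_load.twinx()  # second Y-axis sharing the same X-axis

ax_load.plot(time_s, gpu_load, color="red", label="GPU load")
ax_power.plot(time_s, wall_power, color="green", label="At-wall power")

ax_load.set_xlabel("Time (s)")
ax_load.set_ylabel("GPU load (%)", color="red")
ax_power.set_ylabel("At-wall power (W)", color="green")
ax_load.set_ylim(0, 100)
ax_load.set_title("GPU load vs. at-wall power during playback")

fig.tight_layout()
plt.show()
```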

Frame drops are evident whenever the GPU load consistently stays above the 85 - 90% mark. We did not hit that case with any of our test streams. Note that we have not officially moved to 4K for our HTPC evaluation. We did verify that HEVC 8b decoding works well (even 4Kp60 had no issues), but HEVC 10b hybrid decoding was a bit of a mess - some clips worked OK with heavy CPU usage, while other clips tended to result in a black screen (those clips didn't have any issues with playback using a GTX 1080).
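The 85 - 90% rule of thumb is easy to check programmatically. The short sketch below (with assumed per-second load traces) flags a stream as being at risk of frame drops only when the GPU load stays above the threshold for a sustained stretch, rather than on a momentary spike.

```python
def drops_likely(gpu_load_pct, threshold=87.5, sustained_s=10):
    """Return True if GPU load stays above `threshold` for at least
    `sustained_s` consecutive one-second samples."""
    run = 0
    for sample in gpu_load_pct:
        run = run + 1 if sample > threshold else 0
        if run >= sustained_s:
            return True
    return False

# Hypothetical traces: a brief spike is fine, a sustained plateau is not.
spike   = [60] * 80 + [95] * 3 + [60] * 7
plateau = [60] * 40 + [92] * 30 + [60] * 20

print(drops_likely(spike))    # False
print(drops_likely(plateau))  # True
```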

Moving on to the codec support, the Intel Iris Pro Graphics 580 is a known quantity with respect to the scope of supported hardware accelerated codecs. DXVA Checker serves as a confirmation for the features available in driver version 15.40.23.4444.

It must be remembered that the HEVC_VLD_Main10 DXVA profile noted above utilizes hybrid decoding with both CPU and GPU resources getting taxed.
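To make the distinction explicit, the small lookup below encodes how the DXVA profiles discussed in this review map to full fixed-function decoding versus hybrid (CPU + GPU) decoding on this driver. The mapping is illustrative, based on the observations above for driver version 15.40.23.4444, and is not an exhaustive or authoritative list.

```python
# Illustrative mapping for the Iris Pro Graphics 580 with driver 15.40.23.4444,
# based on the observations in this review; other drivers/GPUs may differ.
DECODE_PATH = {
    "H264_VLD_NoFGT":  "full hardware",
    "HEVC_VLD_Main":   "full hardware",      # 8-bit HEVC, fine up to 4Kp60 here
    "HEVC_VLD_Main10": "hybrid (CPU + GPU)", # taxes both CPU and GPU resources
}

def describe(profile: str) -> str:
    return DECODE_PATH.get(profile, "unknown for this driver")

print(describe("HEVC_VLD_Main10"))  # hybrid (CPU + GPU)
```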

On a generic note, while playing back 4K videos on a 1080p display, I noted that madVR with DXVA2 scaling was more power-efficient compared to using the EVR-CP renderer that MPC-HC uses by default.

Comments

  • FlyingAarvark - Monday, May 23, 2016 - link

    All of the Skull Canyon reviews online so far have been relative failures. I hate to bash this work, but a few points need to be said.

    Only testing this with 2133MHz memory is a shame. Intel stated that 2400MHz works without an FSB OC, and you can run up to 3000MHz with one. That would change the gaming performance tremendously, but not a single site has bothered testing this.

    We just didn't learn anything that wasn't easily known from just looking at this on Newegg. We knew it would be really fast for the size/power requirements. We knew it would be fairly hot under load, based on past NUCs. But we didn't know how it would react with DDR4 2400/2800/3000+.

    The other problem is that there's concern expressed in the review that bidirectional 4GB/sec bandwidth isn't enough. It's been proven and known, if you look into it, that PCIe 1.0 x16 (4GB/sec) does not bottleneck a GTX 980. Skull Canyon should be closer to 5GB/sec than 4GB/sec as well. This wasn't tested with a Razer Core, but I think there's a really good chance this is the fastest stock gaming CPU on the market today when paired with a discrete GPU, due to the 128MB L4. It was shown that Broadwell 5775Cs were already holding that crown in the past.

    Considering how incredibly impressive this NUC already is with its small size, low power draw, and various Thunderbolt 3 options (storage/GPU/docks): testing both of these points (faster DDR4 and the 128MB L4's impact with a dGPU) would make it even more impressive than it already is and probably put it in slam-dunk territory.

    I think everyone in the tech community is massively missing the mark on this one! It just hasn't been properly tested. Intel absolutely nailed this product but is failing to properly instruct reviewers on what to test. Send me a sample, forumemail123 at g mail. I'll do it right.
  • jardows2 - Monday, May 23, 2016 - link

    So, you want this product reviewed in tests that will show it in a better light, and ignore all the standard tests that give it an apples-to-apples comparison, showing that Intel has a long ways to go to provide good value to their customers in this market segment?
  • FlyingAarvark - Monday, May 23, 2016 - link

    Ah, an AMD poverty gamer arrives. Apples to apples against what? Older NUCs? There is no other competition for this small of a form factor. Certainly not from AMD.

    I'm asking for the things that all of us who have been so excited about this product have wanted to see: DDR4-3000 IGP gaming performance and Razer Core Fury X or 980 Ti performance.

    As I noted, this review told me absolutely nothing that wasn't already known just through common sense. No one will buy this thing as a ho-hum NUC; there are already plenty of those. We're buying them for the size/performance combo and going to run 3000MHz DDR4 or a Razer Core with it.
  • JoeyJoJo123 - Monday, May 23, 2016 - link

    Nobody said anything about AMD, dude.
  • jardows2 - Tuesday, May 24, 2016 - link

    You just showed your true colors. I am anything but an "AMD poverty gamer." I need to know how this device compares to other computing devices, so I can determine if the small form factor benefit is worth the performance hit. Very few people are going to have a demand for a computer that fits a particular small form factor, and are willing to do anything to hit that size requirement. Most people just want the best value, and size is a component to that.

    If we are performing tests that show best case scenario for this unit, then we'd have to do the same for every other bare-bones unit. Then there would be no true comparison, and each piece would be no better than a cnet review bought and paid for by the manufacturer, and we would be no better informed.
  • FlyingAarvark - Tuesday, May 24, 2016 - link

    Nice attempt at digging yourself out of that hole: you'd be pissed to see this thing shown in a positive light because, as you supposedly said, "Intel has a long ways to go to provide good value". Shows how much you know, just taking the typical fanboy stance on this thing without knowing what you're even looking at, much like this review.

    The point remains: people want to see this used with varying RAM speeds. It affects the gaming performance greatly due to the IGP. Also people want to see it benched with a 980Ti / 1080 to compare to other high end gaming CPUs.
    There's absolutely no reason that's "best case" at all. It's just asking for a full review.
  • stux - Monday, May 23, 2016 - link

    I'm curious if the BIOS supports RAID0/1 and if so, what the performance from dual sm950s in RAID0 is.

    Sounds like that'd be bumping up against the DMI bottleneck.
  • revanchrist - Monday, May 23, 2016 - link

    Very disappointed, TBH. The i7-5775C performs on par with a GTX 750, so I thought this 6770HQ, packed with much stronger integrated graphics and double the eDRAM, would be a monster, but instead it performed much weaker; it can't even match a 5675C, let alone a 5775C. I guess the power limit and TDP really limit the potential of the iGPU. Sigh. We'll probably need to wait until 7nm CPUs to have a playable 1080p integrated graphics solution.
  • spikebike - Tuesday, May 24, 2016 - link

    Maybe the drivers haven't caught up. Or maybe it's heavily throttled because of heat. It seems very strange that it's not a substantial upgrade from the 5675C or 5775C. Hopefully, something else with a similar form factor will ship with the same CPU. Note the very wide difference between the two GTX 960-based units in this review.
  • ganeshts - Tuesday, May 24, 2016 - link

    This is a 45W TDP part, while the 5775C is a 65W TDP part. That is a substantial difference, as a larger TDP allows more leeway for the GPU than just the 20W difference would suggest.

    Also, not sure why multiple commenters are talking about two GTX 960s / the same GPU when it comes to the GB-BXi5G-760 and the MAGNUS EN970. They are not the same GPU at all - the former uses the Kepler GK104-based 870M, while the latter uses the Maxwell GM204-based 970M.
