HTPC Aspects : Network Streaming Performance

Windows 7-based HTPCs need hardware acceleration in both Adobe Flash and Microsoft Silverlight for optimal streaming performance with YouTube and Netflix. The move to Windows 8.1 has made Silverlight unnecessary. The Netflix app on Windows 8.x brings an HTPC's capabilities on par with dedicated streaming consoles, with support for Super HD (6 Mbps) streams as well as Dolby Digital Plus bitstreaming. The latest app also renders video in such a way that taking screenshots is an exercise in frustration.

As the above photograph shows, the Netflix app can be set to bitstream Dolby Digital Plus to the AV receiver, and the 750 Ti supports it. The video and audio streams are at 5.8 Mbps and 192 kbps respectively. It is not immediately evident whether GPU acceleration is being utilized. However, tracking the GPU / VPU loading and PC power consumption numbers makes it obvious that it is not software decode at work in the Netflix app.
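
For readers who want to replicate this hardware-decode check, below is a minimal sketch (not the tooling used for this review) that polls the 3D engine and decoder (VPU) utilization exposed by NVML. It assumes an NVIDIA GPU and the pynvml Python bindings; the one-second interval and 30-sample run are arbitrary choices. Near-zero decoder utilization during playback would point to software decode.

    # Minimal sketch: poll GPU and video engine (VPU) load during playback.
    # Assumes an NVIDIA GPU and the pynvml bindings (pip install nvidia-ml-py).
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

    for _ in range(30):  # sample once a second for 30 seconds
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # 3D/compute engine
        dec_util, _period = pynvml.nvmlDeviceGetDecoderUtilization(handle)  # VPU
        print(f"GPU: {util.gpu:3d}%  VPU (decoder): {dec_util:3d}%")
        time.sleep(1)

    pynvml.nvmlShutdown()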

Unlike Silverlight, Adobe Flash remains relevant for now. YouTube continues to use Flash to serve FLV (at SD resolutions) and MP4 (at both SD and HD resolutions) streams. YouTube's debug OSD indicates whether hardware acceleration is in use.

Similar to our Netflix streaming test, we recorded GPU / VPU loading as well as power consumption at the wall when streaming the 1080p version of the sample YouTube clip. The table below presents the relevant numbers for various configurations and streaming services.

Streaming Video Performance
                              Netflix                     YouTube
                              GPU/VPU Load    Power       GPU/VPU Load    Power
NVIDIA GeForce GTX 750 Ti     11.95/12.65%    56.44 W     16.26/15.74%    55.45 W
NVIDIA GeForce GT 640         5.99/25.80%     58.89 W     15.57/25.72%    58.93 W
AMD Radeon HD 7750            0.72%           66.79 W     3.57%           67.11 W

NVIDIA has been touting Maxwell's low-power nature, and it proves to be the best of the three candidates in terms of power efficiency when it comes to GPU-accelerated streaming services.

Comments

  • RealiBrad - Tuesday, February 18, 2014 - link

    If you were to run the AMD card 10 hrs a day at the average cost of electricity in the US, you would pay around $22 more a year in electricity (see the back-of-the-envelope sketch after this comment). The AMD card gives a 19% boost in performance for a 24.5% boost in power usage. That means that the Nvidia card is around 5% more efficient. It's nice that they got the power envelope so low, but if you look at the numbers, the difference is not huge.

    The biggest factor is the supply coming out of AMD. Unless they start making more cards, the 750 Ti will be the better buy.
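
    A back-of-the-envelope check of the $22/year figure above. The ~50 W draw difference and $0.12/kWh rate are illustrative assumptions, not numbers taken from the comment:

        # Rough annual electricity cost difference, assuming a ~50 W extra draw,
        # 10 hours/day of use, and ~$0.12/kWh (approximate US average rate).
        delta_watts = 50            # assumed extra draw of the AMD card under load
        hours_per_year = 10 * 365   # 10 hours a day, every day
        rate_per_kwh = 0.12         # $/kWh

        extra_kwh = delta_watts * hours_per_year / 1000
        print(f"{extra_kwh:.1f} kWh -> ${extra_kwh * rate_per_kwh:.2f} per year")
        # 182.5 kWh -> $21.90 per year, in line with the ~$22 quoted above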
  • Homeles - Tuesday, February 18, 2014 - link

    Your comment is very out of touch with reality with regard to power consumption/efficiency:

    http://www.techpowerup.com/reviews/NVIDIA/GeForce_...

    It is huge.
  • mabellon - Tuesday, February 18, 2014 - link

    Thank you for that link. That's an insane improvement. Can't wait to see 20nm high end Maxwell SKUs.
  • happycamperjack - Wednesday, February 19, 2014 - link

    That's for gaming only; its compute performance/watt is still horrible compared to AMD, though. I wonder when Nvidia can catch up.
  • bexxx - Wednesday, February 19, 2014 - link

    http://media.bestofmicro.com/9/Q/422846/original/L...

    260 kH/s at 60 watts is actually very high; that is basically matching the 290X in kH/W (~1000/280 watts), and beating out the R7 265 or anything else... if you only look at kH/W.
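
    A quick arithmetic check of the kH/W comparison above, using the figures as quoted in that comment:

        # kH/s per watt for the two cards mentioned (numbers quoted above).
        cards = {
            "GTX 750 Ti": (260, 60),    # 260 kH/s at 60 W
            "R9 290X":    (1000, 280),  # "~1000/280 watts" as quoted
        }
        for name, (khs, watts) in cards.items():
            print(f"{name}: {khs / watts:.2f} kH/W")
        # GTX 750 Ti: 4.33 kH/W vs R9 290X: 3.57 kH/W -- the 750 Ti edges ahead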
  • ninjaquick - Thursday, February 20, 2014 - link

    To be honest, all Nvidia did was increase the granularity of power gating and core states, so in the event of a pure burn, the TDP is hit and performance will (theoretically) drop.

    The reason the real world benefits from this is simply the way rendering works under DX11. Commands are fast and simple, so increasing the number of parallel queues allows for faster completion and lower average power. So the TDP is right, even if the working wattage per frame is just as high as on any other GPU. AMD doesn't have that granularity implemented in GCN yet, though they do have the tech for it.

    I think this is fairly silly; Nvidia is just riding the coat-tails of massive GPU stalling on frame-present.
  • elerick - Tuesday, February 18, 2014 - link

    Since the performance charts include the 650 Ti Boost, I looked up its TDP: 140W. Compared to the Maxwell 750 Ti's 60W TDP, I am in awe of the performance per watt. I sincerely hope the 760/770/780 successors move to 20nm to give the performance a sharper edge, but even if they don't, Maxwell will still give people with older graphics cards more of a reason to finally upgrade, since driver performance tuning will start favoring Maxwell over the next few years.
  • Lonyo - Tuesday, February 18, 2014 - link

    The 650 Ti / 650 Ti Boost aren't cards designed to be efficient. They are cut-down cards with sections of the GPU disabled. While 2x perf per watt might be somewhat impressive, it's not that impressive given that the comparison is made to inefficient cards.
    Comparing it to something like a regular GTX 650, which is a fully enabled GPU, might be a more apt comparison, and probably wouldn't show the same perf/watt increases.
  • elerick - Tuesday, February 18, 2014 - link

    Thanks, I haven't been following the lower-end cards from either camp. I usually buy $200-$300 class cards.
  • bexxx - Thursday, February 20, 2014 - link

    Still just over 1.8x higher perf/watt: http://www.techpowerup.com/reviews/NVIDIA/GeForce_...
