Flash acceleration has traditionally worked without issues with AMD and NVIDIA drivers, unlike Intel's. Intel and Adobe finally got it right with Ivy Bridge. Fortunately, things look good with Trinity too. As the screenshot below indicates, we get full GPU acceleration for both decoding and rendering. AMD's System Monitor shows how CPU and GPU resources are balanced when playing H.264 Flash videos.

Netflix streaming, on the other hand, uses Microsoft's Silverlight technology. Unlike Flash, hardware acceleration for the video decode process is not controlled by the user; it is up to the server-side code to attempt GPU acceleration. Thankfully, Netflix does try to take advantage of the GPU's capabilities.

This is evident from the A/V stats recorded while streaming a Netflix HD video at the maximum possible bitrate of 3.7 Mbps. The high GPU usage in the AMD System Monitor also points to hardware acceleration being utilized.

One point which deserves mention is that Flash and Silverlight acceleration works without hiccups on Trinity, unlike what we saw on the Brazos-based machines (where the CPU was too weak despite the availability of hardware acceleration through the GPU).

49 Comments

  • Oxford Guy - Friday, September 28, 2012

    4K strikes me as being completely unnecessary. 1080p is enough resolution.
  • brookheather - Friday, September 28, 2012

    Is this a typo? "Intel and NVIDIA offer 50 Hz, 59 Hz and 60 Hz settings which are exactly double of the above settings" - 59 is not double 29 - did you mean 58?
  • ganeshts - Friday, September 28, 2012

    Nope :) 29 Hz is 'control panel speak' for 29.97 Hz and 59 Hz is 'control panel speak' for 59.94 Hz. So, if you have a file at 29.97 fps, it can be played back without any dropped frames or uneven repetition at 59.94 Hz, since each frame just has to be 'painted' twice at that refresh rate (a worked example follows the comments below).
  • cjs150 - Friday, September 28, 2012

    This is exactly the standard of article I read AT for.

    I remain completely bewildered that chip manufacturers cannot get the frame rates right. It may be an odd frame rate, but it is a standard rate that has remained the same forever.

    However, the problem for AMD remains the TDP of the processors. Heat has to be dealt with, usually by fans, and that means noise. An HTPC needs to be as close to silent as possible.

    A TDP of 65W is simply too high. You can (as I have) buy a ridiculously overpowered i7-3770T, which has a TDP of 45W. AMD need to reduce the TDP to no more than 35-45W. At that point there are various HTPC cases which can cool that completely passively.

    Overall this is yet another step forward towards the ideal HTPC, but we are still short of the promised land.
  • wwwcd - Saturday, September 29, 2012

    The i7-3770T is too expensive compared to the Trinity models and has roughly half the video performance. For people on a budget it is not the right choice.
  • cjs150 - Saturday, September 29, 2012

    I agree that the i7-3770T is too expensive at the moment compared to AMD alternatives, but it does not have video weaknesses; check out the review on AnandTech.

    The refresh rate is close to the correct rate, but close is not good enough; it should be spot on (see the drift sketch after the comments).

    There is still a lot of work to be done to get to the ideal HTPC CPU. Both AMD and Intel are close. If anything, AMD has slightly better video but, as I said, the TDP is too high.

    Of course, the other option is something like the Raspberry Pi; unfortunately, whilst the hardware is promising, the software still needs a lot of work.
  • Burticus - Friday, September 28, 2012

    Put one of these on a mini-ITX board and cram it into something the size of the Shuttle HX61 that I just got, and I am interested. I am so spoiled by having a small, silent, cool HTPC that I will never go back to anything louder or bigger than a 360.
  • LuckyKnight - Saturday, September 29, 2012

    AMD are missing a market here: working 23.976 Hz output with a 35W TDP for a passively cooled case. That would be my choice, if it existed.

    Shame Intel can't get 23.976 to work properly, despite their alleged promise!
  • Esskay02 - Saturday, September 29, 2012

    "Intel started the trend of integrating a GPU along with the CPU in the processor package with Clarkdale / Arrandale. The GPU moved to the die itself in Sandy Bridge. Despite having a much more powerful GPUs at its disposal (from the ATI acquisition), AMD was a little late in getting to the CPU - GPU party."

    According to my reading, it was AMD, not Intel, that first talked about and initiated the APU (CPU + GPU) concept. Intel saw the threat, used its manpower and resources, and came out with a CPU + GPU chip.
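
To make ganeshts' cadence arithmetic above concrete, here is a minimal Python sketch (an editorial illustration, not part of the original discussion) of why 29.97 fps content maps cleanly onto a 59.94 Hz refresh rate but not onto a plain 60 Hz one:

    from fractions import Fraction

    # Exact NTSC rates: "29.97 fps" is really 30000/1001 fps,
    # and "59.94 Hz" is really 60000/1001 Hz.
    content_fps = Fraction(30000, 1001)

    for label, refresh in [("59.94 Hz", Fraction(60000, 1001)),
                           ("60 Hz", Fraction(60, 1))]:
        ratio = refresh / content_fps  # refreshes per source frame
        if ratio.denominator == 1:
            # Integer ratio: every frame is painted the same number of times.
            print(f"{label}: each frame painted exactly {ratio.numerator} times (smooth)")
        else:
            # Non-integer ratio: the repeat pattern only evens out over
            # 'denominator' source frames, so some frames get painted an
            # extra time, which is visible as judder.
            print(f"{label}: uneven cadence; pattern repeats every "
                  f"{ratio.denominator} source frames (judder)")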
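
Similarly, to put cjs150's "close is not good enough" point in numbers, here is a short sketch of how quickly a small refresh-rate error forces a frame to be dropped or repeated; the 23.970 Hz figure is an assumed example value, not a measurement from this review:

    # Exact film-on-NTSC content rate vs. a hypothetical slightly-off display refresh.
    content_fps = 24000 / 1001    # ~23.976 fps
    measured_hz = 23.970          # assumed display refresh (example value only)

    drift = abs(measured_hz - content_fps)  # frames of drift accumulated per second
    seconds_per_glitch = 1 / drift          # time until a whole frame must be dropped or repeated
    print(f"one dropped/repeated frame every {seconds_per_glitch:.0f} s "
          f"(~{seconds_per_glitch / 60:.1f} minutes)")

Even an error in the third decimal place produces a visible hiccup every few minutes, which is why HTPC enthusiasts insist on an exact 23.976 Hz output.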
