Before proceeding to the business end of the review, let us take a look at some power consumption numbers. The G.Skill RAM was set to DDR3-1600 during the measurements. We measured the average power drawn at the wall under different conditions. In the table below, the Blu-ray movie was played from the optical disc using CyberLink PowerDVD 12. The Prime95 + Furmark combination was run for an hour before any measurements were taken. The MKVs were played back from a NAS attached to the network. The testbed itself was connected to a GbE switch (as was the NAS). In all cases, a wireless keyboard and mouse were connected to the testbed.

Trinity HTPC Power Consumption

Idle: 37.2 W
Prime95 + Furmark (full load): 172.1 W
Blu-ray from optical drive: 93.1 W
Blu-ray ISO from NAS: 62.3 W
1080p24 MKV playback (MPC-HC + QuickSync + EVR-CP): 55.8 W
1080p24 MKV playback (MPC-HC + QuickSync + madVR): 58.3 W

The Trinity platform ticks all the boxes for the mainstream HTPC user. Setting up MPC-HC with LAV Filters was a walk in the park. With good, stable DXVA2 API support in the drivers, even software like XBMC can take advantage of the GPU's capabilities. Essential video processing steps such as chroma upsampling, cadence detection and deinterlacing work beautifully. For advanced users, the GPU is capable of supporting madVR in most usage scenarios even with DDR3-1600 memory in the system (provided DXVA is not used for decoding the video). Ivy Bridge wasn't a slam dunk in this scenario even with software decoding.

Does this signify the end of the road for the discrete HTPC GPU? Unfortunately, that is not the case. The Trinity platform is indeed much better than Llano, and can match or even surpass Ivy Bridge. However, it is not future-proof. While AMD will end up pleasing a large HTPC audience with Trinity, there are still a number of areas that AMD seems to have overlooked:

  • Despite the rising popularity of 10-bit H.264 encodes, the GPU doesn't seem to support decoding them in hardware. That said, software decoding of 1080p 10-bit H.264 is not complex enough to overwhelm the A10-5800K (though that may not hold for the lower-end CPUs). A quick way to check whether a file is a 10-bit encode is sketched after this list.
  • Full hardware decode of MVC 3D videos is not available. 3D Blu-rays have a slightly greater power penalty as a result. However, 3D is fast becoming an 'also-ran' feature, and we don't really fault Trinity for not having full acceleration.
  • The video industry is pushing 4K, and to a lot of people it makes more sense than the 3D push did. 4K should see a much faster rate of adoption than 3D, but Trinity seems to have missed the boat here. AMD's Southern Islands as well as NVIDIA's Kepler GPUs support 4K output over HDMI, but Trinity has neither 4K video decode acceleration nor 4K display output over HDMI.

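On the first point above, since the GPU can't decode 10-bit H.264 (Hi10P) in hardware, it helps to identify such files up front and route them to a software decoder. Here is a minimal sketch, assuming ffmpeg's ffprobe is available on the PATH (the file name is hypothetical):

```python
import json
import subprocess

def is_10bit_h264(path):
    """Ask ffprobe for the first video stream's codec, profile and
    pixel format, and flag Hi10P (10-bit H.264) content."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,profile,pix_fmt",
        "-of", "json", path,
    ])
    stream = json.loads(out)["streams"][0]
    # Hi10P shows up as the 'High 10' profile with a 10-bit pixel
    # format such as yuv420p10le.
    return stream["codec_name"] == "h264" and "10" in stream.get("pix_fmt", "")

print(is_10bit_h264("sample_1080p_hi10p.mkv"))  # hypothetical file name
```

Files flagged this way would be the ones to decode in software (for example, LAV Video with hardware acceleration disabled) rather than via DXVA.
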
Our overall conclusion is that a discrete GPU for HTPC use is only necessary if one plans to upgrade to 4K in the near term, or is set on using madVR for 1080i60 content. Otherwise, the Trinity platform has everything that a mainstream HTPC user would ever need.

Comments

  • Oxford Guy - Friday, September 28, 2012 - link

    4K strikes me as being completely unnecessary. 1080p is enough resolution.
  • brookheather - Friday, September 28, 2012 - link

    Is this a typo? "Intel and NVIDIA offer 50 Hz, 59 Hz and 60 Hz settings which are exactly double of the above settings" - 59 is not double 29 - did you mean 58?
  • ganeshts - Friday, September 28, 2012 - link

    Nope :) 29 Hz is 'control panel speak' for 29.97 Hz and 59 Hz is 'control panel speak' for 59.94 Hz. So, if you have a file at 29.97 fps, it can be played back without any dropped frames or uneven repetition at 59.94 Hz, since each frame just has to be 'painted' twice at that refresh rate.
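    To make that arithmetic concrete, here is a small sketch (Python; the only assumption is the NTSC 1000/1001 relationship described above) of how long playback runs before one source frame has to be painted an extra time:

    ```python
    from fractions import Fraction

    # NTSC rates are exactly 1000/1001 of their nominal values.
    CONTENT_FPS = Fraction(30000, 1001)  # the '29.97 fps' source

    def seconds_between_glitches(refresh_hz):
        """Seconds of playback before one frame must be painted an
        extra time (visible judder) at the given refresh rate."""
        paints = round(refresh_hz / CONTENT_FPS)    # ideal paints per frame
        drift = refresh_hz / paints - CONTENT_FPS   # leftover rate mismatch
        return float("inf") if drift == 0 else abs(float(1 / drift))

    for hz in (Fraction(60), Fraction(60000, 1001)):  # 60.000 Hz vs '59.94' Hz
        print(f"{float(hz):.3f} Hz: glitch every {seconds_between_glitches(hz):.1f} s")
    ```

    At exactly 59.94 Hz (60000/1001) the drift is zero and every frame is painted exactly twice; at a flat 60.000 Hz, one frame gets painted three times roughly every 33 seconds.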
  • cjs150 - Friday, September 28, 2012 - link

    This is exactly the standard of article I read AT for.

    I remain completely bewildered that chip manufacturers cannot get the frame rates right. It may be an odd frame rate, but it is a standard rate that has remained the same forever.

    However, the problem for AMD remains the TDP of the processors. Heat needs to be dealt with, usually by fans, and that means noise. An HTPC needs to be as close to silent as possible.

    A TDP of 65W is simply too high. You can (as I have) buy a ridiculously overpowered i7-3770T, which has a TDP of 45W. AMD needs to reduce the TDP to no more than 35-45W. At that point there are various HTPC cases which can cool it completely passively.

    Overall, this is yet another step forward towards the ideal HTPC, but we are still short of the promised land.
  • wwwcd - Saturday, September 29, 2012 - link

    The i7-3770T is too expensive compared to the Trinity models, and its video is much weaker. For people on a budget it would not be the choice.
  • cjs150 - Saturday, September 29, 2012 - link

    I agree that the i7-3770T is too expensive at the moment compared to the AMD alternatives, but it does not have video weaknesses; check out the review on AnandTech.

    The refresh rate is close to the correct rate, but close is not good enough; it should be spot on.

    There is still a lot of work to be done to get to an ideal HTPC CPU. Both AMD and Intel are close. If anything, AMD has slightly better video but, as I said, the TDP is too high.

    Of course, the other option is something like the Raspberry Pi; unfortunately, whilst the hardware is promising, the software still needs a lot of work.
  • Burticus - Friday, September 28, 2012 - link

    Put one of these on a mini-ITX board and cram it into something the size of the Shuttle HX61 that I just got, and I am interested. I am so spoiled by having a small, silent, cool HTPC that I will never go back to anything louder or bigger than a 360.
  • LuckyKnight - Saturday, September 29, 2012 - link

    AMD is missing a market here: working 23.976 Hz with a 35 W TDP for a passively cooled case. That would be my choice, if it existed.

    Shame Intel can't get 23.976 to work properly, despite their alleged promise!
  • Esskay02 - Saturday, September 29, 2012 - link

    "Intel started the trend of integrating a GPU along with the CPU in the processor package with Clarkdale / Arrandale. The GPU moved to the die itself in Sandy Bridge. Despite having a much more powerful GPUs at its disposal (from the ATI acquisition), AMD was a little late in getting to the CPU - GPU party."

    According to my reading, it was AMD, not Intel, that first talked about and initiated the APU (CPU + GPU). Intel saw the threat, used its manpower and resources, and came out with a CPU + GPU chip.
