AMD provided us with an A10-5800K APU along with the Asus F2 A85-M Pro motherboard for our test drive. Purists might balk at the idea of an overclockable 100W TDP processor being used in tests intended to analyze the HTPC capabilities. However, the A10-5800K comes with the AMD Radeon HD 7660D, the highest end GPU in the Trinity lineup. Using this as the review platform gives readers an understanding of the maximum HTPC capabilities of the Trinity lineup.

The table below presents the hardware components of our Trinity HTPC testbed:

Trinity HTPC Testbed Setup
Processor: AMD A10-5800K - 3.80 GHz (Turbo to 4.2 GHz)
GPU: AMD Radeon HD 7660D - 800 MHz (integrated)
Motherboard: Asus F2A85-M Pro uATX
OS Drive: OCZ Vertex 2 120 GB
Memory: G.SKILL Ares Series 16GB (4 x 4GB) DDR3-2133 SDRAM (PC3 17000) F3-2133C9Q-16GAB CAS 9-11-10-28 2N
Optical Drive: ASUS 8X Blu-ray Drive Model BC-08B1ST
Case: Antec Skeleton ATX Open Air Case
Power Supply: Antec VP-450 450W ATX
Operating System: Windows 7 Ultimate x64 SP1
Display / AVR: Acer H243H / Pioneer Elite VSX-32 + Sony Bravia KDL46EX720

The Trinity platform officially supports DDR3-1866 modules. Nevertheless, we obtained a 16 GB DDR3-2133 Ares kit from G.Skill for our testbed. Using this kit made it possible to study HTPC behaviour from a memory bandwidth perspective.
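As a back-of-the-envelope illustration (simple arithmetic, not a measurement), the jump from DDR3-1866 to DDR3-2133 raises the theoretical peak bandwidth of a dual-channel setup as follows:

```python
# Theoretical peak bandwidth for a dual-channel DDR3 configuration:
# transfers per second x 8 bytes per 64-bit channel x number of channels.
# Illustrative arithmetic only; real-world throughput is lower.
def ddr3_peak_bandwidth_gbs(speed_mts, channels=2):
    bytes_per_transfer = 8  # each DDR3 channel is 64 bits wide
    return speed_mts * 1e6 * bytes_per_transfer * channels / 1e9

for speed in (1866, 2133):
    print(f"DDR3-{speed}: {ddr3_peak_bandwidth_gbs(speed):.1f} GB/s")
# DDR3-1866 works out to about 29.9 GB/s, DDR3-2133 to about 34.1 GB/s
```

That extra headroom matters here because the HD 7660D has no dedicated VRAM and shares this bandwidth with the CPU cores.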

The software setup for the Trinity HTPC testbed involved the following:

Trinity HTPC Testbed Software Setup
Blu-ray Playback Software: CyberLink PowerDVD 12
Media Player: MPC-HC v1.6.3.5818
Splitter / Decoder: LAV Filters 0.51.3
Renderers: EVR / EVR-CP (integrated in MPC-HC v1.6.3.5818), madVR v0.83.4

The madVR renderer settings were fixed as below for testing purposes:

  1. Decoding features disabled
  2. Deinterlacing set to:
    • automatically activated when needed (activate when in doubt)
    • automatic source type detection (i.e., 'disable automatic source type detection' was left unchecked)
    • only look at pixels in the frame center
    • be performed in a separate thread
  3. Scaling algorithms were set as below:
    • Chroma upscaling set to SoftCubic with softness of 100
    • Luma upscaling set to Lanczos with 4 taps
    • Luma downscaling set to Lanczos with 4 taps
  4. Rendering parameters were set as below:
    • Start of playback (including post-seek) was delayed till the render queue filled up
    • Automatic fullscreen exclusive mode was used
    • A separate device was used for presentation, and D3D11 was used
    • CPU and GPU queue sizes were set to 32 and 24 respectively
    • Under exclusive mode settings, the seek bar was enabled, the switch from windowed to exclusive mode was delayed by 3 seconds, and 16 frames were configured to be presented in advance. The GPU was set to flush after the intermediate render steps, after the copy to the back buffer, and after D3D presentation. In addition, the GPU was set to wait (sleep) after the last render step.

Unlike on our Ivy Bridge setup, we found windowed mode to perform noticeably worse than exclusive mode. Also, none of the options to trade quality for performance were checked.
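To make the 'delay playback until the render queue fills' behaviour concrete, here is a minimal sketch. This is not madVR's actual code; the queue size is simply the 24-entry GPU queue from our settings, and the function only models how many frames get buffered before the first present.

```python
import collections

GPU_QUEUE_SIZE = 24  # the GPU queue size used in our madVR settings

def frames_buffered_before_first_present(decoded_frames, queue_size=GPU_QUEUE_SIZE):
    """Model of 'delay start of playback until the render queue fills up'."""
    queue = collections.deque(maxlen=queue_size)
    for frame in decoded_frames:
        queue.append(frame)
        if len(queue) == queue_size:
            break  # queue is full: playback (or post-seek resume) may begin
    return len(queue)  # short clips start once all their frames are queued
```

At 23.976 fps, filling a 24-frame queue amounts to roughly a one-second startup delay, which is the trade-off this setting makes for glitch-free playback once it begins.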

49 Comments

  • Oxford Guy - Friday, September 28, 2012 - link

    4K strikes me as being completely unnecessary. 1080p is enough resolution.
  • brookheather - Friday, September 28, 2012 - link

    Is this a typo? "Intel and NVIDIA offer 50 Hz, 59 Hz and 60 Hz settings which are exactly double of the above settings" - 59 is not double 29 - did you mean 58?
  • ganeshts - Friday, September 28, 2012 - link

    Nope :) 29 Hz is 'control panel speak' for 29.97 Hz and 59 Hz is 'control panel speak' for 59.94 Hz. So, if you have a file at 29.97 fps, it can be played back without any dropped or unsymmetrical repetition at 59.94 Hz since each frame has to be just 'painted' twice at that refresh rate.
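The arithmetic in the reply above can be verified with exact fractions (the variable names are just for illustration):

```python
from fractions import Fraction

# The 'control panel' labels are rounded: 29 Hz really means 30000/1001
# (29.97...) and 59 Hz means 60000/1001 (59.94...). Their ratio is exactly
# 2, so every source frame is painted exactly twice -- no dropped or
# unevenly repeated frames.
source_fps = Fraction(30000, 1001)   # 29.97 fps content
refresh_hz = Fraction(60000, 1001)   # 59.94 Hz display refresh

assert refresh_hz / source_fps == 2  # exact, symmetric 2:2 cadence
```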
  • cjs150 - Friday, September 28, 2012 - link

    This is exactly the standard of article I read AT for.

    I remain completely bewildered that chip manufacturers cannot get the frame rates right. It may be an odd frame rate, but it is a standard that has remained the same forever.

    However, the problem for AMD remains the TDP of the processors. Heat has to be dealt with, usually by fans, and that means noise. An HTPC needs to be as close to silent as possible.

    A TDP of 65W is simply too high. You can (as I have) buy a ridiculously overpowered i7-3770T, which has a TDP of 45W. AMD need to reduce the TDP to no more than 35-45W. At that point there are various HTPC cases which can cool it completely passively.

    Overall this is yet another step forward towards the ideal HTPC, but we are still short of the promised land.
  • wwwcd - Saturday, September 29, 2012 - link

    The i7-3770T is far too expensive compared to the Trinity models, and its video is much weaker. For people on a budget it is not an option.
  • cjs150 - Saturday, September 29, 2012 - link

    I agree that the i7-3770T is too expensive at the moment compared to AMD alternatives, but it does not have video weaknesses; check out the review on AnandTech.

    The refresh rate is close to the correct rate, but close is not good enough; it should be spot on.
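To put a number on "close is not good enough": even a tiny clock error accumulates into a visible dropped or repeated frame every few minutes. The 23.972 Hz figure below is hypothetical, purely for illustration:

```python
# Hypothetical illustration: a display refreshing at 23.972 Hz while the
# source runs at the exact NTSC film rate of 24000/1001 (23.976...) fps.
# One frame must be dropped or repeated each time a full frame period of
# drift accumulates.
source_fps = 24000 / 1001   # exact film-on-NTSC rate
display_hz = 23.972         # hypothetical slightly-off display clock

drift_hz = abs(source_fps - display_hz)   # frames of error per second
seconds_per_glitch = 1 / drift_hz         # roughly one glitch every 4 minutes
print(f"one dropped/repeated frame every {seconds_per_glitch:.0f} seconds")
```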

    There is still a lot of work to be done to get to an ideal HTPC CPU. Both AMD and Intel are close. If anything, AMD has slightly better video, but, as I said, the TDP is too high.

    Of course the other option is something like the Raspberry Pi; unfortunately, whilst the hardware is promising, the software still needs a lot of work.
  • Burticus - Friday, September 28, 2012 - link

    Put one of these on a mini-itx board and cram it into something the size of the Shuttle HX61 that I just got and I am interested. I am so spoiled by having a small, silent, cool HTPC I will never go back to anything louder or bigger than a 360.
  • LuckyKnight - Saturday, September 29, 2012 - link

    AMD are missing a market here: working 23.976 Hz output with a 35W TDP for a passively cooled case. That would be my choice, if it existed.

    Shame Intel can't get 23.976 to work properly, despite their alleged promise!
  • Esskay02 - Saturday, September 29, 2012 - link

    "Intel started the trend of integrating a GPU along with the CPU in the processor package with Clarkdale / Arrandale. The GPU moved to the die itself in Sandy Bridge. Despite having a much more powerful GPUs at its disposal (from the ATI acquisition), AMD was a little late in getting to the CPU - GPU party."

    According to my reading, it was AMD, not Intel, that first talked about and initiated the APU (CPU+GPU). Intel saw the threat, used its manpower and resources, and came out with a CPU+GPU chip first.
