Video Post-Processing: GPU Loading

We saw in our coverage of discrete HTPC GPUs last year that noise reduction loaded up the GPU heavily, and was even disabled on the low-end GPUs for want of shader resources. Starting with this review, we plan to tabulate GPU usage under various post-processing scenarios instead of running decoder benchmarks. GPU-Z gives us the necessary data for this purpose.

A 1080i60 H.264 clip (the same one used in the discrete HTPC GPU article last year) was decoded with the LAV Video Decoder (DXVA2 Copy-Back mode) using EVR-CP as the renderer in GraphStudio Next's Decoder Performance section. Various post-processing options were turned on and off in CCC, and the GPU usage was recorded in each case.
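GPU-Z can log its sensor readings to a CSV file, so averaging the load over a playback run is straightforward. A minimal sketch is below; the column header "GPU Load [%]" is an assumption and should be checked against your own log's header row.

```python
import csv
from statistics import mean

def average_gpu_load(log_path, column="GPU Load [%]"):
    """Average a GPU load column from a GPU-Z style sensor log (CSV).

    The column name is an assumption; adjust it to match the header
    actually written by your GPU-Z version.
    """
    with open(log_path, newline="") as f:
        reader = csv.DictReader(f)
        # Skip rows where the column is missing or blank.
        loads = [float(row[column].strip())
                 for row in reader
                 if row.get(column, "").strip()]
    return mean(loads) if loads else 0.0
```

In practice you would start logging, play the clip with the chosen post-processing options enabled, stop logging, and run the function over the resulting file.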

Video Post Processing GPU Usage: 1080i60 H.264
AMD Radeon HD 7750 (1GB GDDR5) / Catalyst 12.1
LAV Video Decoder DXVA2 (Copy-Back) v0.46 / EVR-CP

Post Processing Algorithm                                      | GPU Load
---------------------------------------------------------------|---------
No Video Post Processing                                       | 19%
Vector Adaptive Deinterlacing + Pulldown Detection             | 25%
Edge Enhancement                                               | 22%
Noise Reduction                                                | 48%
Dynamic Contrast and Colour                                    | 25%
All Post Processing / 'Enforce Smooth Video Playback' Disabled | 62%

We also put some Full SBS / Full TAB 3D clips (which are basically twice HD resolution) through the same process. Those progressive clips resulted in around 70% GPU load with all the post-processing steps enabled.
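The "twice HD resolution" arithmetic can be made explicit. The dimensions below are the standard full-resolution 3D frame packings for a 1080p source, not figures taken from the test clips themselves:

```python
# Frame dimensions for full-resolution 3D packaging of a 1080p source.
HD_W, HD_H = 1920, 1080

full_sbs = (2 * HD_W, HD_H)   # Full Side-by-Side: 3840 x 1080
full_tab = (HD_W, 2 * HD_H)   # Full Top-and-Bottom: 1920 x 2160

def pixels(dim):
    w, h = dim
    return w * h

# Both packings carry exactly twice the pixels of a single 1080p frame,
# which is why the decode and post-processing load rises accordingly.
assert pixels(full_sbs) == pixels(full_tab) == 2 * HD_W * HD_H
```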

It is not yet possible to use the madVR renderer from within GraphStudio Next, but in future HTPC / HTPC GPU reviews you can expect to find similar benchmarking with the madVR renderer (now that madVR can be used along with hardware accelerated decode on AMD GPUs as well). That said, we see that up to 62% of the GPU is loaded using just EVR-CP. It is not clear how much headroom is left for madVR processing, and we hope to address that question in future reviews.

Comments

  • Dianoda - Wednesday, February 15, 2012 - link

    Yeah, I jumped on that BB/Visiontek HD4850 512MB deal as well. Bought the card about a week before the official launch and at a $50 discount on top of that. Timing was perfect, too, as I had just finished my build, short one video card (borrowed a 3850 from a friend for a few weeks).

    I finally upgraded from that card to a 6950 2GB (BIOS modded to 6970) about a month ago - Skyrim was just too much for the 4850 to handle @ 2560x1440. The 6950 2GB is a great card for the price if you're willing to perform the BIOS mod (and don't mind rebate forms).
  • nerrawg - Thursday, February 16, 2012 - link

Exactly! I bought 2 4850's in the UK in 2009 for £65 ($95) each - best GPU purchase I have made in 12 years! I now have a single 6870 that I bought in 2011 for £120, but it's not really an upgrade at all. I thought I would wait and get a second one cheaper, but now I don't think that will happen.

2008-2009 was the sweet spot of the decade for desktop GPUs. The way things are going with the desktop (AMD bullsnoozer etc. etc.), I fear that it might even have been the sweet spot of GPU performance for the decade to come as well. I would love to see some massive progress in graphics, but it seems that all the "suits" care about nowadays is "smart" this and "smart" that. I can't really blame them either, because until PC programmers get their act together and actually start making apps and games that push what is possible on current hardware, I don't see any reason why I need 2x the GPU and CPU compute power every 1-2 years.

    Come on guys - we are all waiting for the next "Crysis" - if it doesn't come then it might spell the end of the enthusiast desktop
  • StrangerGuy - Wednesday, February 15, 2012 - link

    AMD having a fail product at $160 that couldn't even beat an almost 1.5 year old $150 6870 isn't surprising considering they are also the ones with the cheek to price their FX-8150 at near 2600K prices.
  • thunderising - Wednesday, February 15, 2012 - link

The only problem I have with AMD on this card is: WHY THE LOW MEMORY BANDWIDTH?

The card performs nearly 10% faster when the memory is clocked at 6GHz QDR (per TPU's review), and 15% faster with the core clock matching XFX's OCed speed.

I think 6GHz memory modules would have taken the HD7770 a long way ahead. The performance boost would have been enough to hit HD6850 performance, or beat it in all cases, and at that point this card at $159 would make sense.

Right now, until the price hits about $129, this doesn't make sense.
  • chizow - Wednesday, February 15, 2012 - link

    But you get GCN, 28nm and a bottle of verdetard?
  • Zoomer - Wednesday, February 15, 2012 - link

    GCN is worse than useless for gamers and non compute users.
  • jokeyrhyme - Wednesday, February 15, 2012 - link

    I think I've built my last system with an AMD CPU. Intel completely abused their monopoly and decimated AMD's success in the CPU department, and I don't think AMD will have an enthusiast-quality CPU ever again. :(

    That said, I think I will still use AMD GPUs for a while yet.

    nVidia's Kepler may beat AMD later this year, but AMD actually has an open-source driver developer on staff and routinely publishes hardware documentation. AMD GPUs will probably have better support for Wayland than their nVidia counterparts due to these factors. If you use Linux and want to stay on the cutting edge, then I don't think picking nVidia is particularly wise.
  • ganeshts - Wednesday, February 15, 2012 - link

    At least in the HTPC area, NVIDIA is miles ahead of AMD in the open source support department.

    Almost all Linux HTPCs capable of HD playback have NVIDIA GPUs under the hood, thanks to their well supported VDPAU feature.

    AMD started getting serious with xvBA only towards the end of last year, and they still have a lot of catching up to do [ http://phoronix.com/forums/showthread.php?65688-XB... ]
  • Ananke - Wednesday, February 15, 2012 - link

AMD is several years behind NVidia on the compute side... actually, they are nowhere as of today. The AMD 7xxx series is so ridiculously priced that it will not get a user base large enough to be attractive to developers. Actually, I am at the point of considering NVidia cards for computing, despite the fact that I hate their heat and power consumption.

    AMD had their chance and they blew it.

    Besides, we shall see where the AMD ex-VP will go - that company most likely will be the next big player in graphics and high performance computing. Probably Apple.
  • PeskyLittleDoggy - Thursday, February 16, 2012 - link

In my country, company policy dictates that you cannot leave your company and work for a competing one if you have valuable R&D knowledge. That's part of the restraint-of-trade clause in your contract.

Basically, what I'm saying is that AMD's ex-VP will not be able to work at any company with a graphics department for 2 years if his contract is similar to mine. I can't remember now, but some CEO was sued for that recently.
