Final Words

This is one of those times summing up is easy: both AMD and NVIDIA fail utterly at every test HD HQV throws at them. The advanced deinterlacing algorithms applied to standard definition video are simply not present when working with HD video, and inverse telecine and noise reduction are missing as well. Watching interlaced HD content on a progressive scan monitor using a computer is not going to deliver an optimal experience until AMD and NVIDIA get with the program and extend their image quality algorithms to HD interlaced content.
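
For readers wondering what inverse telecine actually involves, the sketch below is our own simplified Python illustration of the concept, not a description of anything AMD or NVIDIA ships: detect the repeated fields that 3:2 pulldown inserts, drop them, and weave the remaining field pairs back into the original progressive film frames.

```python
import numpy as np

def inverse_telecine(fields):
    """Simplified inverse telecine (3:2 pulldown removal).

    `fields` is a list of (parity, 2D array) tuples in display order, where
    parity is 'top' or 'bottom'.  Real cadence detection has to lock on to
    the 3:2 pattern and survive edits; this toy version just drops any field
    that exactly repeats the previous field of the same parity, then weaves
    the surviving top/bottom pairs back into progressive frames.
    """
    survivors, last = [], {'top': None, 'bottom': None}
    for parity, field in fields:
        if last[parity] is not None and np.array_equal(field, last[parity]):
            continue                              # pulldown duplicate: drop it
        survivors.append((parity, field))
        last[parity] = field

    frames, i = [], 0
    while i + 1 < len(survivors):
        (p0, f0), (p1, f1) = survivors[i], survivors[i + 1]
        if p0 == p1:                              # cadence break: resync
            i += 1
            continue
        top, bottom = (f0, f1) if p0 == 'top' else (f1, f0)
        frame = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
        frame[0::2], frame[1::2] = top, bottom    # interleave the two fields
        frames.append(frame)
        i += 2
    return frames
```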

While 1080p content played back on a modern graphics card avoids many of the issues HD HQV illuminates, noise reduction would definitely help. With higher resolution content, noise is actually more noticeable and more distracting. Ideally, movie studios would perform noise reduction on their content before encoding it for distribution, but this just doesn't happen right now.
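
To illustrate what even basic noise reduction does (and where its detail trade-off comes from), here is a minimal temporal filter sketched in Python. It is a generic technique, not any shipping decoder's implementation, and the threshold value is an arbitrary placeholder: static pixels are blended across frames so random noise cancels, while pixels that change more than the threshold are passed through so motion isn't smeared.

```python
import numpy as np

def temporal_denoise(prev, cur, motion_threshold=8.0):
    """Minimal player-side temporal noise reduction sketch.

    Pixels that barely changed since the previous frame are averaged with it,
    which cancels random noise; pixels that changed more than the threshold
    are left alone so real motion isn't smeared.  Set the threshold too high
    and fine, low-contrast texture gets averaged away too, which is exactly
    the detail-loss concern raised in the comments below.
    """
    prev_f = prev.astype(np.float32)
    cur_f = cur.astype(np.float32)
    diff = np.abs(cur_f - prev_f)
    blended = (cur_f + prev_f) / 2.0
    return np.where(diff < motion_threshold, blended, cur_f).astype(cur.dtype)
```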

With cable and satellite carrying interlaced video, watching TV and recorded content on the computer will be the bigger beneficiary of high quality deinterlacing support in graphics hardware. Of course, owners of 1080i HD DVD and Blu-ray movies won't want to be overlooked. It could be argued that good deinterlacing is more important on cable boxes or TVs, but standalone HD DVD and Blu-ray players are out there that provide higher quality deinterlacing and noise reduction than can currently be achieved on a PC. In fact, Silicon Optix not only makes the HD HQV benchmark, but its HQV line of video processors, found in high end consumer electronics devices, is designed to pass every test in the benchmark.

We certainly hope AMD and NVIDIA will pay some much needed attention to their HD video processing and address the issues we've seen here. As many of the techniques used in standard definition video processing can be extended to HD, we hope that future driver updates will improve the situation. If and when something more interesting happens on PC graphics hardware, we will revisit the issue. We also plan on comparing PC and consumer electronics hardware in the future.

27 Comments

  • bigpow - Monday, February 12, 2007 - link

    I'd like to see more results, maybe from the Xbox 360 HD DVD and Toshiba HD DVD players, before I can be convinced that ATI and NVIDIA totally suck.

  • thestain - Sunday, February 11, 2007 - link

    Suggest a redo
  • ianken - Friday, February 9, 2007 - link

    ...I meant that in the context of post processing. FWIW.
  • ianken - Friday, February 9, 2007 - link

    Since every HD DVD and BRD I've seen is authored at 1080p, I don't think 1080i film cadence support is that critical for either next-gen disc format.

    It is critical for HD broadcasts where 1080i content is derived from telecined film or HD24p content and not flagged, which is very, very common on cable and OTA feeds.

    Noise reduction: just say no. It is NOT more important for HD. Noise reduction simply replaces random noise with deterministic noise and reduces true detail; I don't care how much magic is in there. With FUBAR analog cable it can make an unwatchable image moderately palatable, but keep it away from my HD DVD, BRD, or broadcast HD content.

    On my 7800GTX I get film cadence detection and adaptive per-pixel vector deinterlacing on 1080i. The problem you're seeing may be the HD DVD/decoder app failing to properly talk to the GPU. On XP they need to support proprietary APIs to get anything beyond base VMR deinterlacing, particularly for HD. With Cyberlink there is even a "PureVideo" option in the menus for this. If they do not support PureVideo HD then you will get none of those advanced features on Nvidia hardware. Not sure what ATI does, but I do believe they only support film cadence and noise reduction on SD content.
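
For context on the "adaptive per-pixel" deinterlacing mentioned in the comment above: the basic idea is to choose, per pixel, between weaving the two fields (where nothing moved) and spatially interpolating (where something did). The Python sketch below is a toy illustration of that concept only; it is not NVIDIA's or ATI's actual algorithm, and the threshold is arbitrary.

```python
import numpy as np

def motion_adaptive_deinterlace(prev_frame, top_field, bottom_field, threshold=10.0):
    """Toy per-pixel motion-adaptive deinterlacer.

    Where a pixel looks unchanged versus the previous output frame, the two
    fields are woven together (full vertical detail); where motion is
    detected, the bottom-field lines are replaced by an average of the
    neighbouring top-field lines ("bob") to avoid combing artifacts.
    prev_frame is (h, w); each field is (h/2, w).
    """
    h, w = prev_frame.shape
    woven = np.empty((h, w), dtype=np.float32)
    woven[0::2] = top_field
    woven[1::2] = bottom_field

    # Spatial ("bob") estimate for the bottom-field lines, interpolated from
    # the top-field lines directly above and below.
    bob = woven.copy()
    bob[1:-1:2] = (woven[0:-2:2] + woven[2::2]) / 2.0

    # Per-pixel motion measure against the previous output frame.
    motion = np.abs(woven - prev_frame.astype(np.float32))

    # Weave static pixels, bob moving ones.
    return np.where(motion > threshold, bob, woven)
```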



  • peternelson - Friday, February 9, 2007 - link

    "Noise can actually be more of a problem on HD video due to the clarity with which it is rendered. While much of the problem with noise could be fixed if movie studios included noise reduction as a post processing step, there isn't much content on which noise reduction is currently performed. This is likely a combination of the cost involved in noise reduction as well as the fact that it hasn't been as necessary in the past. In the meantime, we are left with a viewing experience that might not live up to the expectations of viewers, where a little noise reduction during decoding could have a huge impact on the image quality.

    There are down sides to noise reduction, as it can reduce detail. This is especially true if noise was specifically added to the video for effect. We don't run into this problem often, but it is worth noting. On the whole, noise reduction will improve the clarity of the content, especially with the current trend in Hollywood to ignore the noise issue. "

    > Doing noise reduction at the player is less than ideal. You take noisy content and then waste much of your datarate describing noise. The NR should be done as a PRE PROCESSING (as opposed to POST) step prior to feeding the encoder, not post processing as you suggest. Any movie studio making discs without NR is just lazy, and the customer deserves better. Obviously a generous bitrate and an efficient encoding standard like MPEG-4 are desirable, but you waste the benefit if you don't either noise-reduce the content or have essentially noise-free content like CGI animation sequences from Pixar.

    Thus the workflow ought to be: telecine scan data or digital intermediate (eg 2K film res) into colour correction, into pan/scan cropping or aspect ratio conversion scaling (eg cinemascope into 16x9), then into noise reduction (spatial and temporal etc), and into the encoder.

    Done professionally, different portions of the movie can be encoded with different processing parameters which kick in at the desired timecodes. These are often hand-optimised for sequences that can benefit from them. Such setups may be called ECLs (encoder control lists), rather like EDLs (edit decision lists).

    Equipment to do excellent realtime noise reduction in high definition is readily available eg from Snell and Wilcox, and if you can't afford it you should either not be in the encoding business, or should be hiring it for the duration of the job from a broadcast hire supplier. Alternatively NR processing may be a feature of your telecine/datacine capture platform.

    Ideally the encoded streams can be compared with the source material to identify any significant encoding artifacts like noticeable DCT macroblocking. This is basic QA and can be done in software and/or visually/manually.

    If the NR is done by the studio prior to disk mastering, I see no reason to rely on the cheap and nasty NR in the player, and of course using a display capable of the proper bit depth and resolution will avoid quantisation banding and scaling degradation.

    Poor attention to production values is diminishing the experience of what ought to be great content.

    Contrary to your statement, noise reduction ought to have been used at standard definition too by anyone doing encoding professionally for DVDs etc. Even moderately expensive/affordable gear from FOR-A could do NR and colour correction using SDI digital ins and outs (that's if you can't afford the Snell gear LOL). The difference is certainly noticeable even before moving to HD content and bigger screens.

    Not all noise reduction techniques reduce detail, particularly when done at the preprocessing stage. Taking noise out makes more bits available for the denoised content to be described in MORE detail at an equivalent bitrate. Clever algorithms are able to take hairs out of frames of movie film and replace them with what ought to be there from adjacent frames (including using motion vector compensation). At this stage the maximum uncompressed source data is available on which to perform the processing, whereas NR in the player suffers from only having the bit-constrained compressed material to work from. Other pre-processing might include removing camera shake (eg Snell Shakeout) so that compression bits are not wasted on spurious motion vectors where these are undesired. Genuine pans, zooms etc can be distinguished and still get encoded.

    You rightly point out that video with deliberately added noise simulating film grain can be troublesome to encode, but there are several other techniques for making video appear film-like, eg Magic Bullet hardware or software as pioneered by The Orphanage, which can do things like alter the gamma curve and replicate various film lab processes such as bleach bypass (as in the opening sequences of Saving Private Ryan).
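
As an aside on the workflow peternelson outlines (grade, reframe, denoise, then encode), the ordering can be sketched even with commodity tools. The Python snippet below drives ffmpeg with placeholder filenames, geometry, and filter strengths; real mastering pipelines use dedicated hardware rather than ffmpeg's hqdn3d filter, but the point is that noise reduction happens before the encoder ever sees the material.

```python
import subprocess

# Placeholder pre-processing chain: colour correction, then aspect-ratio
# reframing, then spatial + temporal denoise, and only then the encoder.
filter_chain = ",".join([
    "eq=gamma=1.05:saturation=1.0",   # colour correction (placeholder values)
    "crop=2048:858",                  # cinemascope crop (placeholder geometry)
    "scale=1920:804",                 # scale to the delivery resolution
    "hqdn3d=4:3:6:4.5",               # spatial + temporal noise reduction
])

subprocess.run([
    "ffmpeg", "-i", "film_scan_2k.mov",          # hypothetical source file
    "-vf", filter_chain,
    "-c:v", "libx264", "-preset", "slow", "-crf", "18",
    "denoised_master.mkv",
], check=True)
```

Moving the denoise step to after decoding, as a PC player has to, means the encoder has already spent bits describing the noise, which is the crux of the argument above.
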
  • DerekWilson - Sunday, February 11, 2007 - link

    Thanks for the very informative post.

    I think we've got a bit of a miscommunication though ...

    I'm not referring to post processing as post-encoding -- I'm referring to it as Hollywood refers to it -- post-filming ... as in "fix it in post". You and I are referring to the same step in the overall scheme of things: after filming, before encoding.

    It seems a bit odd that I hadn't heard anyone talk about processing from the perspective of the encoding step before, as a brief look around Google shows that it is a very common way of talking about handling content pre- and post-encoding.

    In any event, it may be that studios who don't do noise reduction are just lazy. Of course, you'd be calling most of them lazy if you say that. We agree that the customer deserves better, and currently they aren't getting it. Again, go pick up X-Men 3. Not that I liked the movie, but I certainly would have appreciated better image quality.

    Does your statement "If the NR is done by the studio prior to disk mastering, I see no reason to rely on the cheap and nasty NR in the player" go the other way as well? If studios do not perform noise reduction (or, perhaps, adequate noise reduction) prior to mastering, is NR in the player useful?

    I think it is -- but I do want to be able to turn it on and off at will.
  • Wesleyrpg - Thursday, February 8, 2007 - link

    Reads more like an advertisement for Silicon Optix than an article for AnandTech?

    The future of advertising? Buy an article?
  • JarredWalton - Thursday, February 8, 2007 - link

    Hardly. People email us about all kinds of topics, and one of those has been HD video support. We've done HQV image quality comparisons before, as have many websites, and it's not too surprising that NVIDIA and ATI decoder quality improved after many of the flaws were pointed out. It appears that there are plenty of flaws with 1080i decoding now, and I'd bet that in the future it will be dramatically improved. We find the results to be useful - i.e. both ATI and NVIDIA are doing essentially nothing with HD video other than outputting it to the display. Now readers will know that, and maybe we'll see improvements. Not everyone cares about improving HD video quality, but for those who do, this is good information to have.
  • Wwhat - Sunday, February 11, 2007 - link

    quote:

    both ATI and NVIDIA are doing essentially nothing with HD video other than outputting it to the display

    Well, that's clearly not true: they both try to de-interlace the test footage, it's just not a good effort. So don't make such silly statements.


  • Wesleyrpg - Friday, February 9, 2007 - link

    Sorry Jarred, I must have woken up on the wrong side of the bed this morning; I didn't mean to take it out on you guys. I love AnandTech, and may have been a bit confused by the article.

    Sorry again
