One of the video post-processing aspects heavily emphasized by the HQV 2.0 benchmark is cadence detection. Improper cadence detection / deinterlacing leads to easily observed artifacts during video playback. So, when and where is cadence detection important? Unfortunately, much of the information about cadence detection available online is not very clear. For example, one of the top Google search results makes it appear as if telecine and pulldown are one and the same, and suggests that the opposite operations, inverse telecine and reverse pulldown, are synonymous as well. That is not exactly true.

We have already seen a high level view of how our candidates fare at cadence detection in the HQV benchmark section. In this section, we will talk about cadence detection in relation to HTPCs. After that, we will see how our candidates fare at inverse telecining.

Cadence detection refers to determining whether a repeating pattern is present in a sequence of frames. Why would there be a pattern in a sequence of frames in the first place? Most films and TV series are shot at 24 frames per second. For the purposes of this section, we will refer to anything shot at 24 fps as a movie.

In the US, TV broadcasts conform to the NTSC standard, and hence, the programming needs to be at 60 frames/fields per second. Currently, some TV stations broadcast at 720p60 (1280x720 video at 60 progressive frames per second), while other stations broadcast at 1080i60 (1920x1080 video at 60 fields per second). The filmed material must be converted to either 60p or 60i before broadcast.

Pulldown refers to the process of increasing the movie frame rate by duplicating frames / fields in a regular pattern. Telecining refers to the process of converting progressive content to interlaced while also increasing the frame rate (i.e., converting 24p to 60i). It is possible to perform pulldown without telecining, but not vice-versa.

For example, Fox Television broadcasts 720p60 content. The TV series 'House', shot at 24 fps, is subject to pulldown to be broadcast at 60 fps. However, there is no telecining involved. In this particular case, the pulldown applied is 2:3. For every two frames in the movie, we get five frames for the broadcast version by repeating the first frame twice and the second frame thrice.
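The 2:3 pattern is simple enough to capture in a few lines of code. The sketch below is purely illustrative (the function name and the placeholder frames are invented for this example), and is not how any broadcaster or GPU actually implements pulldown:

```python
# Minimal sketch of 2:3 pulldown on progressive frames (24p -> 60p).
# Frames are represented by arbitrary placeholder objects (strings here).
def pulldown_2_3(frames_24p):
    frames_60p = []
    for i, frame in enumerate(frames_24p):
        repeats = 2 if i % 2 == 0 else 3  # alternate 2x and 3x repetition
        frames_60p.extend([frame] * repeats)
    return frames_60p

# Two source frames become five output frames: ['A', 'A', 'B', 'B', 'B']
print(pulldown_2_3(["A", "B"]))
```

Over a full second, 24 such frames expand to 60, which is exactly the 720p60 broadcast rate.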

Telecining is a bit more complicated. Each frame is divided into odd and even fields (interlaced). The first two fields of the 60i video are the odd and even fields of the first movie frame. The next three fields in the 60i video are the odd, even and odd fields of the second movie frame. This way, two frames of the movie are converted to five fields in the broadcast version. Thus, 24 frames are converted to 60 fields.
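Applied to fields, the same 2:3 cadence can be sketched as below. Again, this is only an illustrative model, assuming each progressive frame splits cleanly into an odd and an even field; the function and field names are made up:

```python
# Minimal sketch of 2:3 telecine (24p -> 60i).
# Each frame is modelled as an (odd_field, even_field) tuple.
def telecine_2_3(frames_24p):
    fields_60i = []
    for i, (odd, even) in enumerate(frames_24p):
        if i % 2 == 0:
            fields_60i += [odd, even]        # two fields from this frame
        else:
            fields_60i += [odd, even, odd]   # three fields, the odd field repeated
    return fields_60i

# Frames A and B yield five fields: A-odd, A-even, B-odd, B-even, B-odd
print(telecine_2_3([("A-odd", "A-even"), ("B-odd", "B-even")]))
```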

While the progressive pulldown may just result in judder (because every alternate frame stays on the screen a little bit longer than the other), improper deinterlacing of 60i content generated by telecining may result in very bad artifacting, as shown below. The screenshot is from a sample clip in the Spears and Munsil (S&M) High Definition Benchmark Test Disc.

[Screenshots: Inverse Telecine OFF / Inverse Telecine ON]

Cadence detection tries to determine what kind of pulldown / telecine pattern was applied. When inverse telecine is applied, cadence detection is used to identify the pattern. Once the pattern is known, the appropriate fields are used to reconstruct the original frames through deinterlacing. Note that plain inverse telecine still retains the original cadence while sending out decoded frames to the display. Pullup, on the other hand, removes the superfluous repeated frames (or fields) to get back to the original movie frame rate. Unfortunately, none of the DXVA decoders are able to do pullup. This can be easily verified by taking a 1080i60 clip (of known cadence) and frame stepping through it during playback; you can additionally set the display refresh rate to the original movie frame rate. A single frame can be observed to repeat multiple times according to the cadence sequence.
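Conceptually, a cadence detector looks for the positions at which fields repeat and checks whether those positions fall into a known pattern. The sketch below is a toy illustration of that idea (the similarity test is a placeholder for the noise-tolerant pixel comparison a real detector would use), not an actual decoder implementation:

```python
# Toy cadence detection sketch: find positions where a field repeats
# (matches the field two positions earlier) and check whether the repeats
# occur once every five fields, as they would in a 2:3 telecined stream.
def fields_match(f1, f2):
    # Placeholder similarity test; a real detector compares pixel data
    # with a noise-tolerant metric, not exact equality.
    return f1 == f2

def looks_like_2_3(fields):
    repeats = [i for i in range(2, len(fields))
               if fields_match(fields[i], fields[i - 2])]
    return len(repeats) > 1 and all(b - a == 5 for a, b in zip(repeats, repeats[1:]))

# Field sequence produced by 2:3 telecining frames A, B, C and D
fields = ["A-odd", "A-even", "B-odd", "B-even", "B-odd",
          "C-odd", "C-even", "D-odd", "D-even", "D-odd"]
print(looks_like_2_3(fields))  # True: repeats land exactly where the cadence predicts
```

Once the repeat positions are known, the deinterlacer knows which pairs of fields came from the same original frame and can weave them back together without artifacts.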

Now that the terms are clear, let us take a look at how inverse telecining works in our candidates. The gallery below shows screenshots taken while playing back the 2:3 pulldown version of the wedge pattern in S&M.

This clip checks the overall deinterlacing performance for film-based material. As the wedges move, the narrow end of the horizontal wedge should have clear alternating black and white lines rather than blurry or flickering lines. The moire in the last quarter of the wedges can be ignored. It is also necessary that both wedges remain steady and not flicker for the length of the clip.

The surprising fact here is that the NVIDIA GT 430 is the only one to perfectly inverse telecine the clip. Even the 6570 fails in this particular screenshot: it momentarily lost the cadence lock, but regained it within the next five frames. During HQV benchmarking as well, we found that the NVIDIA cards locked onto the cadence sequence much faster than the AMD cards.

Cadence detection is only part of the story. The deinterlacing quality is also important. In the next section, we will evaluate that aspect.


70 Comments


  • jwilliams4200 - Monday, June 13, 2011 - link

    All the numbers add up correctly now. Thanks for monitoring the comments and fixing the errors!
  • Samus - Monday, June 13, 2011 - link

    Honestly, my Geforce 210 has been chillin' in my HTPC for 2+ years, and works perfectly :)
  • josephclemente - Monday, June 13, 2011 - link

    If I am running a Sandy Bridge system with Intel HD Graphics 3000, do these cards have any benefit over integrated graphics? What is Anandtech's HQV Benchmark score?

    I tried searching for scores, but people say this is subjective and one reviewer may differ from another. One site says 196 and another in the low 100's. What does this reviewer say?
  • ganeshts - Monday, June 13, 2011 - link

    Give me a couple of weeks. I will be getting a test system soon with the HD 3000, and I will do detailed HQV benchmarking in that review too.
  • dmsher99@gmail.com - Tuesday, June 14, 2011 - link

    I recently built an HTPC with a Core i5-2500K on an ASUS P8H67 EVO with a Ceton InfiniTV cable card. Note that the Intel driver is fundamentally flawed and will destroy a system if patched. See the Intel communities thread 20439 for more details.

    Besides causing BSODs over HDMI output when patched, the stable versions have their own sets of bugs, including a memory bleed when watching some premium content on HD channels that crashes WMC. Intel appears to have one part-time developer working on this problem, but every test driver he puts out breaks more than it fixes. Watch the same content on a system running an NVIDIA GPU and the memory bleed goes away.

    In my opinion, second-gen SB chips are just not ready for prime time in a fully loaded HTPC.
  • jwilliams4200 - Monday, June 13, 2011 - link

    "The first shot shows the appearance of the video without denoising turned on. The second shot shows the performance with denoising turned off. "

    Heads I win, tails you lose!
  • ganeshts - Monday, June 13, 2011 - link

    Again, sorry for the slip-up, and thanks for bringing it to our notice. Fixed it. Hopefully, the gallery pictures cleared up the confusion (particularly the Noise Reduction entry in the NVIDIA Control Panel).
  • stmok - Monday, June 13, 2011 - link

    Looking through various driver release README files, it appears the mobile Nvidia Quadro NVS 4200M (PCI Device ID: 0x1056) also has this feature set.

    The first stable Linux driver (x86) to introduce support for Feature Set D is the 270.41.03 release.
    => ftp://download.nvidia.com/XFree86/Linux-x86/270.41...

    It shows that only the GeForce GT 520 and Quadro NVS 4200M support Feature Set D.

    The most recent one confirms that they are still the only models to support it.
    => ftp://download.nvidia.com/XFree86/Linux-x86/275.09...
  • ganeshts - Monday, June 13, 2011 - link

    Thanks for bringing it to our notice. When that page was being written (around 2 weeks back), the README indicated that the GT 520 was the only GPU supporting Feature Set D. We will let the article stand as-is, and I am sure readers perusing the comments will become aware of this new GPU.
  • havoti97 - Monday, June 13, 2011 - link

    So basically the app store's purpose is to attract submissions of ideas for features of their next OS, uncompensated of course. All the other crap/fart apps not worthy are approved, and people make pennies off those.
