The AMD 6550D is able to detect all the cadence patterns in the HQV benchmark Blu-ray. Moving beyond HQV, we have a more strenuous clip from the Spears and Munsil High Definition Benchmark Test Disc (hereafter referred to as S&M). In this section, we first present screenshots from running the 2-3-2-3 cadence detection / deinterlacing 'Wedge' test clip.

 

It was fascinating to put this clip through its paces. When playback started in MPC-HC, the framesteps followed the 2-3 cadence pattern, indicating that the GPU had locked onto the cadence. However, as the wedge starts its descent, the lock is lost, only to be regained on the way back up. This repeats for two revolutions, but from the third revolution onwards, the lock is permanently lost. Contrast this with the GT 430, which retains the cadence lock throughout, and the 6570, which loses it only momentarily around the same timestamp but regains it later.
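For context on what 'locking onto the cadence' involves: in 3:2 pulldown, one field in every five is a repeat of the same-parity field two positions earlier, so a detector can lock by watching for low-difference field pairs recurring at a fixed phase. The sketch below is a minimal illustration of this idea, not AMD's actual algorithm; the function names and threshold are invented for illustration.

```python
REPEAT_THRESHOLD = 1.0  # mean absolute difference below this implies a repeated field

def field_difference(field_a, field_b):
    """Mean absolute difference between two same-parity fields."""
    total = sum(abs(a - b) for a, b in zip(field_a, field_b))
    return total / len(field_a)

def detect_23_cadence(fields, window=10):
    """Return True if the repeat-field signature recurs at a fixed phase.

    In 3:2 pulldown, one field out of every five repeats the field two
    positions earlier, so the low-difference positions all fall in the
    same residue class modulo 5.
    """
    repeats = [i for i in range(2, min(len(fields), window + 2))
               if field_difference(fields[i], fields[i - 2]) < REPEAT_THRESHOLD]
    # A lock requires at least two repeats, all at the same phase mod 5
    return len(repeats) >= 2 and len({i % 5 for i in repeats}) == 1
```

A real detector also needs hysteresis before dropping a lock and support for other cadences (2-2, 2-3-3-2, and so on), which is exactly where marginal content like the descending wedge can shake a lock loose.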

 

You can roll the mouse over the images below to view segments of the full-size lossless version. This makes it easier to observe the moiré pattern and how deinterlacing fails once the cadence lock is lost.

 

 

Cadence Locked

 

 

Cadence Lost

 

 

Cadence Regained

 

 

 

Cadence Permanently Lost

 

What is the takeaway from this experiment? Cadence detection will probably work for the more common scenarios, but don't be surprised if you encounter some streams which fail.

The more interesting aspect is the deinterlacing quality. Let us first take a look at the artificial Cheese Slices pattern. A sample deinterlaced frame is provided below.
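As background for the comparisons that follow: a basic motion-adaptive deinterlacer weaves fields together where nothing moves (keeping full vertical resolution) and interpolates spatially where motion is detected (avoiding combing). AMD's vector adaptive mode is reported to go further by interpolating along detected edge directions, but the generic motion-adaptive idea can be sketched as below; the function name and threshold are illustrative, not any vendor's implementation.

```python
def deinterlace_line(above, below, weave_line, prev_above, threshold=8):
    """Fill one missing scan line, motion-adaptive style (illustrative).

    Per pixel: if the same-parity line is unchanged between fields
    (|above - prev_above| is small), 'weave' the pixel from the opposite
    field, preserving full vertical detail; otherwise 'bob' by averaging
    the pixels above and below to avoid combing artifacts.
    """
    out = []
    for a, b, w, pa in zip(above, below, weave_line, prev_above):
        if abs(a - pa) <= threshold:   # static region: weave
            out.append(float(w))
        else:                          # motion detected: spatial interpolation
            out.append((a + b) / 2)
    return out
```

The Cheese Slices pattern stresses exactly this decision: its fine horizontal detail turns soft if the deinterlacer falls back to spatial interpolation too eagerly, and shows combing if it weaves where there is motion.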

 

 

 

AMD 6550D's Vector Adaptive Deinterlacing

 


Now, let us compare the Intel HD 3000 Graphics and the AMD 6550D with respect to the various deinterlacing aspects.

 

Deinterlacing - Video Reference:

 

AMD 6550D Intel HD3000

 

Deinterlacing - Cheese Slice Ticker:

 

AMD 6550D Intel HD3000

 

Deinterlacing - Noise Response:

 

AMD 6550D Intel HD3000

 

Deinterlacing - Algorithm Type:

 

AMD 6550D Intel HD3000

 

Deinterlacing - Disc Test:

 

AMD 6550D Intel HD3000


Except for a certain segment of the noise response section, the AMD algorithm appears to work much better than Intel's on the Cheese Slices test.

How does edge-adaptive deinterlacing work in practice? Here are screenshots taken while playing back the boat clip from the S&M disc on the Intel HD 3000 and the AMD 6550D.

 

AMD 6550D Intel HD3000


From a deinterlacing perspective, the AMD 6550D looks better. However, the default video post-processing settings with ESVP seem to wash out details (for example, near the shoreline) compared to the Intel HD 3000.
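For reference, the classic edge-adaptive technique is edge-based line averaging (ELA): for each pixel of a missing scan line, compare a few candidate directions between the lines above and below, and interpolate along the direction with the smallest difference, so diagonal edges stay smooth instead of turning into stair-steps. A minimal sketch on grayscale values (illustrative only, not the hardware's algorithm):

```python
def ela_interpolate(above, below):
    """Edge-based line averaging: fill one missing scan line.

    For each pixel, test three candidate directions between the line
    above and the line below, and average along the direction whose
    endpoints differ least, so interpolation follows the local edge.
    """
    width = len(above)
    out = []
    for x in range(width):
        best = None
        for d in (-1, 0, 1):  # left diagonal, vertical, right diagonal
            xa, xb = x + d, x - d
            if 0 <= xa < width and 0 <= xb < width:
                diff = abs(above[xa] - below[xb])
                if best is None or diff < best[0]:
                    best = (diff, (above[xa] + below[xb]) / 2)
        out.append(best[1])  # d == 0 is always valid, so best is never None
    return out
```

On a diagonal boundary such as the boat's rigging or the shoreline, the diagonal candidate wins and the edge is continued cleanly, whereas plain vertical averaging would blur it into jaggies.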

 

 


