Intel’s Quick Sync Technology

In recent years video transcoding has become one of the most widespread consumers of CPU power. The popularity of YouTube alone has turned nearly everyone with a webcam into a producer, and every PC into a video editing station. The mobile revolution hasn’t slowed things down either. No smartphone can play full bitrate/resolution 1080p content from a Blu-ray disc, so if you want to carry your best quality movies and TV shows with you, you’ll have to transcode to a more compressed format. The same goes for the new wave of tablets.

At a high level, video transcoding involves taking a compressed video stream and further compressing it to better match the storage and decoding abilities of a target device. The reason this is transcoding and not encoding is that the source is almost always already encoded in some sort of compressed format, the most common these days being H.264/AVC.
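Purely as an illustration of the workflow described above (a software transcode, not Quick Sync itself), here is a hedged sketch of re-compressing a Blu-ray-class H.264 source for a phone or tablet using ffmpeg invoked from Python. The file names, bitrate, and resolution are placeholder assumptions.

```python
# Sketch of a software transcode for a mobile target; assumes ffmpeg is installed.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "source_1080p.mkv",        # already-compressed H.264/AVC source
    "-c:v", "libx264",               # re-encode the video stream
    "-b:v", "4M",                    # much lower bitrate than the full-quality source
    "-vf", "scale=1280:-2",          # downscale to a resolution the device can handle
    "-c:a", "aac", "-b:a", "128k",   # re-encode the audio as well
    "mobile_720p.mp4",
], check=True)
```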

Transcoding is a particularly CPU-intensive task because of the three-dimensional nature of the compression. Each individual frame within a video can be compressed; however, since sequential frames of video typically share many of the same elements, video compression algorithms look for data that's repeated temporally as well as spatially.
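To make the spatial vs. temporal point concrete, here is a minimal numpy sketch (illustrative only, not any codec's actual algorithm): because consecutive frames share most of their content, storing the difference against the previous frame leaves mostly zeros, which compress far better than the raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)
frame0 = rng.integers(0, 256, size=(1080, 1920), dtype=np.int16)  # stand-in for a decoded luma plane
frame1 = frame0.copy()
frame1[500:540, 900:980] += 10          # only a small region changes between frames

residual = frame1 - frame0              # roughly what an inter-coded frame stores (plus motion vectors)
print("non-zero samples in the raw frame:", np.count_nonzero(frame1))
print("non-zero samples in the residual: ", np.count_nonzero(residual))
```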

I remember sitting in a hotel room in Times Square while Godfrey Cheng and Matthew Witheiler of ATI explained to me the challenges of decoding HD-DVD and Blu-ray content. ATI was about to unveil hardware acceleration for some of the stages of the H.264 decoding pipeline. Full hardware decode acceleration wouldn’t come for another year at that point.

The advent of fixed function video decode in modern GPUs is important because it helped enable GPU-accelerated transcoding. The first step of the video transcode process is to decode the source video. Since transcoding involves taking a video already in a compressed format and encoding it in a new format, hardware accelerated video decode is key. How fast the decode engine is has a tremendous impact on how fast a hardware accelerated video encode can run. This is true for two reasons.

First, unlike in a playback scenario where you only need to decode faster than the frame rate of the video, when transcoding the video decode engine can run as fast as possible. The faster frames can be decoded, the faster they can be fed to the transcode engine. The second and less obvious point is that some of the hardware you need to accelerate video encoding is already present in a video decode engine (e.g. iDCT/DCT hardware).
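The relationship between decode throughput and transcode speed is simple arithmetic: during playback the decoder only has to keep up with roughly 24 fps, but during a transcode every extra frame per second of decode headroom is more work fed to the encoder. A tiny sketch, using a made-up decode rate rather than any measured Quick Sync figure:

```python
# Illustrative arithmetic only; the decode rate is a hypothetical assumption.
PLAYBACK_FPS = 23.976            # frame rate of the source video
decode_throughput_fps = 130.0    # hypothetical sustained rate of a fixed-function decode engine

max_realtime_streams = int(decode_throughput_fps // PLAYBACK_FPS)
speedup_over_playback = decode_throughput_fps / PLAYBACK_FPS

print(f"parallel real-time streams supported: {max_realtime_streams}")
print(f"decode runs {speedup_over_playback:.1f}x faster than playback requires")
```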

With video transcoding as a feature of Sandy Bridge’s GPU, Intel beefed up the video decode engine from what it had in Clarkdale. In the first generation Core series processors, video decode acceleration was split between fixed function decode hardware and the GPU’s EU array. With Sandy Bridge and the second generation Core CPUs, video decoding is done entirely in fixed function hardware. This is not ideal from a flexibility standpoint (e.g. newer video codecs can’t be fully hardware accelerated on existing hardware), but it is the most efficient method to build a video decoder from a power and performance standpoint. Both AMD and NVIDIA have fixed function video decode hardware in their GPUs now; neither rely on the shader cores to accelerate video decode.

The resulting hardware is both performance and power efficient. To test the performance of the decode engine I launched multiple instances of a 15Mbps 1080p high profile H.264 video running at 23.976 fps. I kept launching instances of the video until the system could no longer maintain full frame rate in all of the simultaneous streams. The table below shows the maximum number of streams I could run in parallel:

                                        Intel Core i5-2500K    NVIDIA GeForce GTX 460    AMD Radeon HD 6870
Number of Parallel 1080p HP Streams     5 streams              3 streams                 1 stream

AMD’s Radeon HD 6000 series GPUs can only manage a single high profile, 1080p H.264 stream, which is perfectly sufficient for video playback. NVIDIA’s GeForce GTX 460 does much better; it could handle three simultaneous streams. Sandy Bridge however takes the cake as a single Core i5-2500K can decode five streams in tandem.

The Sandy Bridge decoder is likely helped by the very large (and high bandwidth) L3 cache connected to it. This is the first advantage Intel has in what it calls its Quick Sync technology: a very fast decode engine.

The decode engine is also reused during the actual encode phase. Once frames of the source video are decoded, they are fed to the programmable EU array to be split apart and prepared for transcoding. The data in each frame is transformed from the spatial domain (the value of each pixel at its location) to the frequency domain (how quickly pixel values change across the frame); this is done using a discrete cosine transform. You may remember that inverse discrete cosine transform hardware is necessary to decode video; that same hardware is useful for the domain transform needed when transcoding.
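The spatial-to-frequency transform is easy to see on a single 8x8 block. The sketch below uses a floating-point 2D DCT from SciPy purely for illustration; H.264's actual transform is a related integer approximation, and Quick Sync does this in hardware, not software.

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth 8x8 gradient stands in for real pixel data.
block = np.tile(np.arange(8, dtype=np.float64), (8, 1))

coeffs = dctn(block, norm="ortho")           # spatial domain -> frequency domain
# Smooth image data concentrates its energy in a few low-frequency coefficients,
# which is what makes the later quantization and entropy coding so effective.
print(np.round(coeffs, 1))

reconstructed = idctn(coeffs, norm="ortho")  # the inverse transform (what iDCT hardware does on decode)
assert np.allclose(reconstructed, block)
```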

Motion search, the most compute-intensive part of the transcode process, is done in the EU array. It's the combination of the fast decoder, the EU array, and fixed-function hardware that makes up Intel's Quick Sync engine.
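A toy full-search block matcher shows why motion search dominates the compute budget: every candidate offset for every block costs a full sum-of-absolute-differences comparison. This is only a sketch of the general technique, not of the specific search Intel's EUs perform.

```python
import numpy as np

def motion_search(ref, cur, top, left, block=8, radius=4):
    """Brute-force block matching: return (dy, dx, SAD) of the best match in the reference frame."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best = (0, 0, np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            candidate = ref[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - candidate).sum())
            if sad < best[2]:
                best = (dy, dx, sad)
    return best

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))     # the whole "frame" shifts down/right by (2, 3)
print(motion_search(ref, cur, top=24, left=24))   # best match sits at offset (-2, -3) in the reference, SAD 0
```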

Comments

  • IanWorthington - Monday, January 3, 2011 - link

    Not really: the board manufacturers seem to be adding usb3 chipsets w/o real problems. Good enough.
  • usernamehere - Monday, January 3, 2011 - link

    Sure, if you're building a desktop you can find plenty of boards with USB 3.0 support (via an NEC controller). But if you're looking for a laptop, most still won't have it. That's because manufacturers don't want to have to pay extra for features when they usually get them via the chipsets already included. Asus is coming out with a handful of notebooks in 2011 with USB 3.0 (that I know of), but widespread adoption will not be here this year.
  • JarredWalton - Monday, January 3, 2011 - link

    Most decent laptops will have USB3. ASUS, Dell, HP, Clevo, and Compal have all used the NEC chip (and probably others as well). Low-end laptops won't get USB3, but then low-end laptops don't get a lot of things.
  • TekDemon - Monday, January 3, 2011 - link

    Even netbooks usually have USB 3.0 these days, and those almost all use Intel Atom CPUs. The cost to add the controller is negligible for large manufacturers. USB is not going to be the deciding factor for purchases.
  • DanNeely - Monday, January 3, 2011 - link

    Are you sure about that? Newegg lists 99 netbooks on their site. Searching for USB 3 within netbooks returns 0 products.
  • TekDemon - Monday, January 3, 2011 - link

    Your claims are pretty silly seeing as how USB came about in the same way that Light Peak did: Intel invented USB, pushed it alongside legacy ports like PS/2, and slowly phased out support for the older ones entirely over the years. It makes no sense for them to support USB 3.0, especially without a real market of devices.
    But motherboard manufacturers will support USB 3.0 via add-in chips. I don't see how this is anti-competitive at all; why should Intel have to support a format it doesn't think makes sense? So far USB 3.0 hasn't really shown speeds close to its theoretical maximum, and the only devices that really need the higher bandwidth are external drives that are better off being run off eSATA anyway. There's no real "killer app" for USB 3.0 yet.
    BTW Light Peak will easily support adding power to devices, so it definitely does not need USB in order to provide power. There'll just be two wires running alongside the fiber optics.
  • DanNeely - Tuesday, January 4, 2011 - link

    The eSata + USB (power) connector has never gone anywhere, which means that eSata devices need at least 2 cables to work. Flash drives and 2.5" HDs don't need enough power to require an external brick, and 80-90% of eSata speed is still much better than the USB2 bottleneck. With double the amount of power over USB2, USB3 could theoretically be used to run 3.5" drives with a double socket plug freeing them from the wall as well.
  • ilkhan - Monday, January 3, 2011 - link

    I've had my P67A-UD4 for almost 3 weeks now. Let's get the chips out already!

    I'm confused, however. The first paragraph talks of a 4.1GHz turbo mode and the chart on page 2 lists 3.8GHz as the max for the 2600K. Is the chart talking about 4-core turbo or what?
  • Spike - Monday, January 3, 2011 - link

    Isn't it an i7-2600k? The article title says "i5 2600k"... just curious...
  • Ryan Smith - Monday, January 3, 2011 - link

    Oh dear...

    Fixed. Thanks for that.
