Quick Sync: The Best Way to Transcode

Currently Intel’s Quick Sync transcoding is only supported by two applications: CyberLink’s MediaEspresso 6 and ArcSoft’s Media Converter 7. Both are video-to-go applications aimed at users who want to take high resolution, high bitrate content and transcode it into more compact formats for use on smartphones, tablets, media streamers and gaming consoles. The intended market is not users who are attempting to make high quality archives of Blu-ray content. As a result, there’s no support for multi-channel audio; both applications are limited to 2-channel MP3 or AAC output. There’s also no support for transcoding to anything higher than the main profile of H.264.

Intel indicates that these are not hardware limitations of Quick Sync, but rather limitations of the transcoding software. To that end, Intel is working with developers of video editing applications to bring Quick Sync support to applications with a more quality-oriented usage model. These applications use Intel’s Media SDK 2.0, which is publicly available; Intel says any developer can get access to it and use it.
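The Media SDK exposes a plain C interface, so a transcoder built on it can probe for Quick Sync support at runtime. The snippet below is only a minimal sketch of that probe using the SDK's dispatcher calls; it isn't code from either application.

```cpp
#include <mfxvideo.h>   // Intel Media SDK dispatcher header

// Hedged sketch: try to create a hardware (Quick Sync) Media SDK session.
// If the dispatcher can't provide one, the application would fall back to
// the plain x86 codepath.
bool quickSyncAvailable()
{
    mfxVersion ver;
    ver.Major = 1;                  // request API 1.0 or later
    ver.Minor = 0;
    mfxSession session = nullptr;

    // MFX_IMPL_HARDWARE asks for the hardware implementation on the default adapter.
    mfxStatus sts = MFXInit(MFX_IMPL_HARDWARE, &ver, &session);
    if (sts != MFX_ERR_NONE)
        return false;               // software-only system, no Quick Sync

    MFXClose(session);
    return true;
}
```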

For the purposes of this comparison I’ve used Media Converter 7, but that’s purely a personal preference thing. The performance and image quality should be roughly identical between the two applications as they both use the same APIs. Jarred's look at Mobile Sandy Bridge will focus on MediaEspresso.

Where image quality isn’t consistent, however, is between transcoding methods within either application. Both applications support four codepaths: ATI Stream, Intel Quick Sync, NVIDIA CUDA, and x86. While you can set any of these codepaths to the same transcoding settings, the method by which they arrive at the transcoded image will differ. This makes sense given how different the four target architectures are (e.g. a Radeon HD 6870 doesn’t look anything like a NVIDIA GeForce GTX 460). Each codepath makes a different set of performance vs. quality tradeoffs, which we’ll explore in this section.

The first, and less obvious, difference is that transcoding on the Sandy Bridge CPU cores vs. Quick Sync actually produces a different image. The image quality is slightly better on the x86 path, but the two are similar.

The reason for the image quality difference is easy to understand. CPUs are inherently not very parallel beasts. We get tremendous speedup on highly parallel tasks on multi-core CPUs, but compared to a GPU’s ability to juggle hundreds or thousands of threads, even a 6-core CPU doesn’t look too wide. As a result of this serial vs. parallel difference, transcoding algorithms optimized for CPUs are very computationally efficient. They have to be, because you can’t rely on hundreds of cores running in parallel when you’re running on a CPU.

Take the same code and run it on a GPU and you’ll find that the majority of your execution resources are wasted. A new codepath is needed that can take advantage of the greater amount of compute at your disposal. For example, a GPU can evaluate many different compression modes in parallel whereas on a CPU you generally have to pick a balance between performance and quality up front regardless of the content you’re dealing with.
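To make that concrete, here's a deliberately simplified sketch (my own, not code from either application) of the kind of per-block mode decision an encoder runs. A CPU implementation prunes the candidate list and walks it serially; a GPU implementation can score every candidate in parallel and keep the cheapest.

```cpp
#include <cstdint>
#include <vector>

// Purely illustrative: a "candidate" stands in for anything the encoder can
// try for a block (a partition size, an intra prediction direction, a motion
// vector), with a rate-distortion style cost attached. Assumes a non-empty list.
struct Candidate { int mode; uint64_t cost; };

// CPU-style decision: walk a short, pruned candidate list one entry at a time.
// A GPU codepath can instead score every candidate at once (one thread per
// candidate) and reduce to the minimum; the arithmetic is identical, but the
// hardware can afford a far larger, unpruned search.
Candidate PickBestMode(const std::vector<Candidate>& candidates)
{
    Candidate best = candidates.front();
    for (const Candidate& c : candidates)   // strictly serial on the CPU
        if (c.cost < best.cost)
            best = c;
    return best;
}
```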

There’s also one more basic difference between code running on the CPU vs. integrated GPU. At least in Intel’s case, certain math operations can be performed with higher precision on Sandy Bridge’s SSE units vs. the GPU’s EUs.

Intel tuned the PSNR of the Quick Sync codepath to be as similar to the x86 codepath as possible, and the result, as I mentioned above, is quite close.
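PSNR (peak signal-to-noise ratio) is a straightforward per-frame metric. For reference, this is the standard calculation for 8-bit video, written as a short helper of my own rather than anything from Intel's code; it assumes both frames have the same dimensions.

```cpp
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// Standard PSNR for 8-bit samples: PSNR = 10 * log10(255^2 / MSE).
// Higher is better; identical frames have infinite PSNR.
double Psnr(const std::vector<uint8_t>& reference,
            const std::vector<uint8_t>& transcoded)
{
    double mse = 0.0;
    for (size_t i = 0; i < reference.size(); ++i)
    {
        double d = double(reference[i]) - double(transcoded[i]);
        mse += d * d;
    }
    mse /= double(reference.size());

    if (mse == 0.0)
        return std::numeric_limits<double>::infinity();  // frames identical
    return 10.0 * std::log10((255.0 * 255.0) / mse);
}
```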

Now let’s tackle the other GPUs. When I first started my Quick Sync investigations I did a little experiment. Without forming any judgments of my own, I quickly transcoded a ~15Mbps 1080p movie into an iPhone 4 compatible 720p H.264 at 4Mbps. I then trimmed it down to a single continuous 4 minute scene and passed the movie along to six AnandTech editors. I sent the editors three copies of the 4 minute scene: one transcoded on a GeForce GTX 460, one using Intel’s Quick Sync, and one using the standard x86 codepath. I named the three movies numerically and told no one which platform was responsible for each output. All I asked for was feedback on which ones they thought were best.

Here are some of the comments I received:

“Wow... there are some serious differences in quality. I'm concerned that the 1.mp4 is the accelerated transcode, in which case it looks like poop..”

“Video 1: Lots of distracting small compression blocks, as if the grid was determined pre-encoding (I know that generally there are blocks, but here the edges seem to persist constantly). Persistent artifacts after black. Quality not too amazing, I wouldn't be happy with this.”

Video one, which many assumed was Quick Sync, actually came from the GeForce GTX 460. The CUDA codepath, although extremely fast, produces a much worse image. Videos 2 and 3 were outputs from Sandy Bridge; the editors generally didn’t agree on which of those two looked better, just that they were definitely better than the first video.

To confirm whether or not this was a fluke I set up three different transcodes. Lossy video compression is hard to get right when you’re transcoding scenes that are changing quickly, so I focused on scenes with significant movement.

The first transcode involves taking the original Casino Royale Blu-ray, stripping it of its DRM using AnyDVD HD, and feeding that into MC7 as a source. The output in this case was a custom profile: 15Mbps 1080p main profile H.264. This is an unrealistic usage model simply because the output file only had 2-channel audio, making it suitable only for PC use and likely a waste of bitrate. I simply wanted to see how the various codepaths looked and performed with an original BD source.
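For context, the numbers in that custom profile map fairly directly onto the Media SDK's encoder parameter structure. The following is only my sketch of what a 15Mbps, 1080p, main-profile H.264 target could look like through the public API; it is not MC7's actual configuration, and the frame rate shown is my assumption for a film source.

```cpp
#include <cstring>
#include <mfxvideo.h>

// Hedged sketch: the 15Mbps 1080p main-profile H.264 target from this test
// expressed through mfxVideoParam. Not MC7's real settings.
mfxVideoParam MakeEncodeParams1080p15Mbps()
{
    mfxVideoParam par;
    std::memset(&par, 0, sizeof(par));

    par.mfx.CodecId           = MFX_CODEC_AVC;         // H.264
    par.mfx.CodecProfile      = MFX_PROFILE_AVC_MAIN;  // main profile only
    par.mfx.RateControlMethod = MFX_RATECONTROL_VBR;
    par.mfx.TargetKbps        = 15000;                 // 15Mbps

    par.mfx.FrameInfo.FourCC        = MFX_FOURCC_NV12;
    par.mfx.FrameInfo.ChromaFormat  = MFX_CHROMAFORMAT_YUV420;
    par.mfx.FrameInfo.Width         = 1920;
    par.mfx.FrameInfo.Height        = 1088;            // coded height, 16-aligned
    par.mfx.FrameInfo.CropW         = 1920;
    par.mfx.FrameInfo.CropH         = 1080;
    par.mfx.FrameInfo.FrameRateExtN = 24000;           // assumed ~23.976 fps film source
    par.mfx.FrameInfo.FrameRateExtD = 1001;
    par.mfx.FrameInfo.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;

    par.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY;
    return par;
}
```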

Let’s look at performance first. The entire movie has around 200,000 frames; the transcoding frame rates are below:

ArcSoft Media Converter 7—Casino Royale Transcode

As we’ve been noting in our GPU reviews for quite some time now, there’s no advantage to transcoding on a GPU faster than the $200 mainstream parts. Remember that the transcode process isn’t infinitely parallel; we are ultimately bound by the performance of the sequential components of the algorithm. As a result, the Radeon HD 6970 offers no advantage over the 6870 here. Both of these AMD GPUs end up being just as fast as a Core i5-2500K.
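That observation is just Amdahl's law at work. With some hypothetical numbers (mine, not measurements from these runs), you can see why adding more parallel hardware stops paying off once the serial portion dominates.

```cpp
#include <cstdio>

// Amdahl's law: if a fraction p of the work parallelizes with speedup s,
// overall speedup = 1 / ((1 - p) + p / s). The serial remainder (1 - p)
// quickly dominates, which is why a faster GPU stops helping.
double AmdahlSpeedup(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}

int main()
{
    const double p = 0.90;  // assume 90% of the transcode parallelizes (hypothetical)
    std::printf("8x parallel units:   %.2fx overall\n", AmdahlSpeedup(p, 8.0));    // ~4.7x
    std::printf("64x parallel units:  %.2fx overall\n", AmdahlSpeedup(p, 64.0));   // ~8.8x
    std::printf("512x parallel units: %.2fx overall\n", AmdahlSpeedup(p, 512.0));  // ~9.8x
    return 0;
}
```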

NVIDIA’s GPUs offer a 15.7% performance advantage, but as I mentioned earlier, the advantage comes at the price of decreased quality (which we’ll get to in a moment).

Intel’s Quick Sync is untouchable though. It’s 48% faster than NVIDIA’s GeForce GTX 460 and 71% faster than the Radeon HD 6970. I don’t want to proclaim that discrete GPU-based transcoding is dead, but based on these results it sure looks like it. What about image quality?

My image quality test scene isn’t anything absurd. Bond and Vesper are about to meet Mathis for the first time. Mathis walks towards the two and the camera pans to follow him. With only one character moving and the camera panning at a predictable rate, most high quality transcoders using some simple motion estimation should be able to handle this scene without getting tripped up too much.
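For readers curious what "simple motion estimation" means in practice, below is a bare-bones full-search block matcher over a small window; it's my own illustration and far cruder than what any of these encoders actually do. A steady pan produces nearly the same winning vector for every block, which is exactly the easy case.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

struct MotionVector { int dx; int dy; };

// Sum of absolute differences between a 16x16 block in the current frame at
// (cx, cy) and a block in the reference frame displaced by (dx, dy). Frames
// are 8-bit luma planes, row-major; the caller is assumed to keep the search
// inside the frame bounds for brevity.
static uint32_t BlockSad(const std::vector<uint8_t>& cur,
                         const std::vector<uint8_t>& ref,
                         int width, int cx, int cy, int dx, int dy)
{
    uint32_t sad = 0;
    for (int y = 0; y < 16; ++y)
        for (int x = 0; x < 16; ++x)
        {
            int c = cur[(cy + y) * width + (cx + x)];
            int r = ref[(cy + y + dy) * width + (cx + x + dx)];
            sad += static_cast<uint32_t>(std::abs(c - r));
        }
    return sad;
}

// Full search over a +/-8 pixel window: try every displacement, keep the one
// with the lowest SAD.
MotionVector EstimateMotion(const std::vector<uint8_t>& cur,
                            const std::vector<uint8_t>& ref,
                            int width, int cx, int cy)
{
    MotionVector best = {0, 0};
    uint32_t bestSad = BlockSad(cur, ref, width, cx, cy, 0, 0);
    for (int dy = -8; dy <= 8; ++dy)
        for (int dx = -8; dx <= 8; ++dx)
        {
            uint32_t sad = BlockSad(cur, ref, width, cx, cy, dx, dy);
            if (sad < bestSad) { bestSad = sad; best = {dx, dy}; }
        }
    return best;
}
```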

[Image quality comparison (PNG downloads): Intel Core i5-2500K (x86) vs. Intel Quick Sync vs. NVIDIA GeForce GTX 460 vs. AMD Radeon HD 6870]

Comparing the shots above, the only real outlier is NVIDIA’s GeForce GTX 460. The CUDA path clearly errs on the side of performance over quality and produces a far noisier image. The ATI Stream codepath produces an image that’s very close to the standard x86 and Quick Sync output. In fact, everything but the GTX 460 does well here.

The next test uses an already transcoded 15Mbps 1080p x264 rip of the Quantum of Solace Blu-ray. For many, this is more likely what you’ll have stored on your movie server than a full 50GB Blu-ray rip. Our destination this time is the iPhone 4, with the following settings: 4Mbps 720p H.264.

At only 4Mbps there’s a lot of compression going on, so image quality isn’t going to be nearly as good as in the previous test. Performance is considerably higher as the encoders are able to discard more data and optimize for performance over absolute quality. The entire movie has 152,000 frames that are transcoded in this test:

ArcSoft Media Converter 7—Quantum of Solace Transcode

The six-core Phenom II X6 1100T is faster than the Core i5-2500K thanks to the latter’s lack of Hyper-Threading. Both are around the speed of the Radeon HD 6870.

The GeForce GTX 460 is faster than any standalone x86 CPU, regardless of core count. However, once again Quick Sync blows them all out of the water. At 200 frames per second, Quick Sync is more than twice the speed of a standard Core i5-2500K or even the Phenom II X6 1100T, and it’s nearly twice as fast as the GTX 460.

The image quality comparison scene is also far more stressful on the transcoders. There’s a lot of unpredictable movement going on as Bond is in pursuit of a double agent at the beginning of the film.

[Image quality comparison (PNG downloads): Intel Core i5-2500K (x86) vs. Intel Quick Sync vs. NVIDIA GeForce GTX 460 vs. AMD Radeon HD 6870]

The image quality story is about the same for AMD’s GPUs and the x86 path; however, Quick Sync delivers a noticeably worse quality image. It’s nowhere near as bad as the GTX 460, but it’s just not as sharp as what you get from the software or ATI Stream codepaths.

The problem here seems to be that when transcoding from a lower quality source, the tradeoffs NVIDIA makes are amplified. Even Quick Sync isn’t perfect here, but it’s closer to the pure x86 path than CUDA is. Given the tremendous performance advantage, the tradeoff is probably worth it in this case.

For our final test we’ve got a 12Mbps 1080p x264 rip of The Dark Knight. Our target this time is a 640x480, 1.5Mbps iPod Touch compatible format.

ArcSoft Media Converter 7—Dark Knight Transcode

Surprisingly enough, the 6970 shows a slight performance advantage over the 6870 in this test, but still not enough to approach the speed of the x86 CPUs. Quick Sync is almost 4x faster than the Radeon HD 6970 and twice as fast as everything else.

Our Dark Knight image quality test is also the most strenuous of the review. We’re looking at a very dark, high motion scene with a sudden explosion. The frame we’re looking at is right after the Joker fires a rocket at the rear of a police car. The sudden explosion casts light everywhere which can’t be predicted based on the previous frame.

[Image quality comparison (PNG downloads): Intel Core i5-2500K (x86) vs. Intel Quick Sync vs. NVIDIA GeForce GTX 460 vs. AMD Radeon HD 6870]

The GeForce GTX 460 looks horrible here. The output looks like an old film; it’s simply inexcusable.

The Radeon HD 6870 produces a frame that has similar sharpness to the x86 codepath, but with muted colors. Quick Sync maintains color fidelity but loses the sharpness of the x86 path, similar to what we saw in the previous test. In this case the loss of sharpness does help smooth out some aliasing in the paint on the police car but otherwise is undesirable.

Overall, based on what I’ve seen in my testing of Quick Sync, it isn’t perfect but it does deliver a good balance of image quality and performance. With Quick Sync enabled you can transcode a ~2.5 hour Blu-ray disc in around 35 minutes. If you’ve got a lower quality source (e.g. a 15GB Blu-ray re-encode), you can plan on doing a full movie in around 13 minutes. Quick Sync will chew through TV shows in a couple of minutes, without a tremendous loss in quality.
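Those figures are consistent with the frame counts quoted earlier. A quick back-of-the-envelope check, using only the numbers from this article:

```cpp
#include <cstdio>

int main()
{
    // Casino Royale: ~200,000 frames finished in roughly 35 minutes implies
    // an average throughput in the mid-90s of frames per second.
    std::printf("Blu-ray source:   %.0f fps average\n", 200000.0 / (35.0 * 60.0));  // ~95 fps

    // Quantum of Solace: 152,000 frames at ~200 fps works out to ~12.7 minutes,
    // which matches the "around 13 minutes" figure above.
    std::printf("15Mbps re-encode: %.1f minutes\n", 152000.0 / 200.0 / 60.0);       // ~12.7 min
    return 0;
}
```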

With CUDA on NVIDIA GPUs we had to choose between high quality or high performance. (Perhaps other applications will do the transcode better as well, but at least Arcsoft's Media Converter 7 has serious image quality problems with CUDA.) With Quick Sync you can have both, and better performance than we’ve ever seen from any transcoding solution in desktops or notebooks.

Quick Sync with a Discrete GPU

There’s just one hangup to all of this Quick Sync greatness: it only works if the processor’s GPU is enabled. In other words, on a desktop with a single monitor connected to a discrete GPU, you can’t use Quick Sync.

This isn’t a problem for mobile since Sandy Bridge notebooks should support switchable graphics, meaning you can use Quick Sync without waking up the discrete GPU. However, there’s no standardized switchable graphics solution for desktops yet. Intel indicated that we may see some switchable solutions on the desktop in the coming months, but until then you either have to use the integrated GPU alone or run a multi-monitor setup with one monitor connected to Intel’s GPU in order to use Quick Sync.

Comments

  • Kevin G - Monday, January 3, 2011 - link

    There is the Z67 chipset which will allow both overclocking and integrated video. However, this chipset won't arrive until Q2.
  • Tanel - Monday, January 3, 2011 - link

    Well, yes, but one wonders who came up with this scheme in the first place. Q2 could be half a year from now.
  • teohhanhui - Monday, January 3, 2011 - link

    I've been thinking the same thing while reading this article... It makes no sense at all. Bad move, Intel.
  • micksh - Monday, January 3, 2011 - link

    Exactly my thoughts. No Quick Sync for enthusiasts right now - that's a disappointment. I think it should be stated more clearly in review.
    Another disappointment - missing 23.976 fps video playback.
  • has407 - Monday, January 3, 2011 - link

    Yeah, OK, lack of support for VT-d ostensibly sucks on the K parts, but as previously posted, I think there may be good reasons for it. But lets look at it objectively...

    1. Do you have an IO-intensive VM workload that requires VT-d?
    2. Is the inefficiency/time incurred by the lack of VT-d support egregious?
    3. Does your hypervisor, BIOS and chipset support VT-d?

    IF you answered "NO" or "I don't know" to any of those questions, THEN what does it matter? ELSE IF you answered "YES" to all of those questions, THEN IMHO SB isn't the solution you're looking for. END IF. Simple as that.

    So because you--who want that feature and the ability to OC--which is likely 0.001% of the customers who are too cheap to spend the $300-400 for a real solution--the vendor should spend 10-100X to support that capability--which will thus *significantly* increase the cost to the other 99.999% of the customers. And that makes sense how and to whom (other than you and the other 0.0001%)?

    IMHO you demand a solution at no extra cost to a potential problem you do not have (or have not articulated); or you demand a solution at no extra cost to a problem you have and for which the market is not yet prepared to offer at a cost you find acceptable (regardless of vendor).
  • Tanel - Tuesday, January 4, 2011 - link

    General best practice is not to feed the trolls - but in this case your arguments are so weak I will go ahead anyway.

    First off, I like how you - without having any insight in my usage profile - question my need for VT-d and choose to call it "lets look at it objectively".

    VT-d is excellent when...
    a) developing hardware drivers and trying to validate functionality on different platforms.
    b) fooling around with GPU passthrough, something I did indeed hope to deploy with SB.

    So yes, I am in need of VT-d - "Simple as that".

    Secondly, _all_ the figures you've presented are pulled out of your ass. I'll be honest, I had a hard time following your argument as much of what you said makes no sense.

    So I should spend more money to get an equivalent retail SKU? Well then Sir, please go ahead and show me where I can get a retail SB SKU clocked at >4.4GHz. Not only that, you're in essence implying that people only overclock because they're cheap. In case you've missed it, it's the enthusiasts buying high-end components that enable much of the next-gen research and development.

    The rest - especially the part with 10-100X cost implication for vendors - is the biggest pile of manure I've come across on Anandtech. What we're seeing here is a vendor stripping off already existing functionality from a cheaper unit while at the same time asking for a premium price.

    If I were to make a car analogy, it'd be the same as if Ferrari sold the 458 in two versions: one with a standard engine, and one with a more powerful engine that lacks headlights. By your reasoning - as my usage profile is in need of headlights - I'd just have to settle for the tame version. Not only would Ferrari lose the added money they'd get from selling a premium version, they would lose a sale as I'd rather wait until they present a version that fits my needs. I sure hope you're not running a business.

    There is no other way to put it, Intel fucked up. I'd be jumping on the SB-bandwagon right now if it wasn't for this. Instead, I'll be waiting.
  • has407 - Tuesday, January 4, 2011 - link

    Apologies, didn't mean to come across as a troll or in-your-face idjit (although I admittedly did--lesson learned ). Everyone has different requirements/demands, and I presumed and assumed too much when I should not have, and should have been more measured in my response.

    You're entirely correct to call me on the fact that I know little or nothing about your requirements. Mea culpa. That said, I think SB is not for the likes of you (or I). While it is a "mainstream" part, it has a few too many warts..

    Does that mean Intel "fucked up"? IMHO no--they made a conscious decision to serve a specific market and not serve others. And no, that "10-100X" is not hot air but based on costing from several large scale deployments. Frickin amazing what a few outliers can do to your cost/budget.
  • Akv - Monday, January 3, 2011 - link

    I didn't have time to read all reviews, and furthermore I am not sure I will be able to express what I mean with the right nuances, since English is not my first language.

    For the moment I am a bit disappointed. To account for my relative coldness, it is important to explain where I start from :

    1) For gaming, I already have more than I need with a quad core 775 and a recent 6x ati graphic card.

    2) For office work, I already have more than I need with an i3 clarkdale.

    Therefore since I am already equipped, I am of course much colder than those who need to buy a new rig just now.

    Also, the joy of trying on a new processor must be tempered with several considerations :

    1) With Sandy Bridge, you have to add a new mobo to the price of the processor. That makes it much more expensive. And you are not even sure that 1155 will be kept for Ivy Bridge. That is annoying.

    2) There are always other valuable things that you can buy for a rig, apart from the sheer processor horsepower : more storage, better monitor...

    3) The power improvement that comes with Sandy Bridge is what I call a normal improvement for a new generation of processors. It is certainly not a quantum leap in the nature of processors.

    Now, there are two things I really dislike :

    1) If you want to use P67 with a graphics card, you still have that piece of hardware, the IGP, that you actually bought and that you cannot use. That seems to me extremely inelegant compared to the 775 generation of processors. It is not an elegant architecture.

    2) If you want to use H67 and the Intel IGP for office work and movies, the improvement compared to clarkdale is not sufficient to justify the buying of a new processor and a new mobo. With H67 you will be able to do office work fluently and watch quasi perfectly, with clarkdale you already could.

    The one thing that I like is the improvement in consumption. Otherwise it all seems to me a bit awkward.
  • sviola - Monday, January 3, 2011 - link

    Well, the IGP not being removable is like having on-board sound, but also having a dedicated soundcard. Not much of a deal, since you can't buy a motherboard without integrated sound nowadays...
  • Shadowmaster625 - Monday, January 3, 2011 - link

    You say you want Intel to provide a $70 gpu. Well, here's a math problem for you: If the gpu on a 2600K is about 22% of the die, and the die costs $317 retail, then how much are you paying for the gpu? If you guessed $70, you win! Congrats, you now paid $70 for a crap gpu. The question is.... why? There is no tock here... only ridiculously high margins for Intel.
