Quick Sync: The Best Way to Transcode

Currently, Intel’s Quick Sync transcoding is only supported by two applications: CyberLink’s MediaEspresso 6 and ArcSoft’s Media Converter 7. Both are video-to-go applications targeted at users who want to take high resolution/high bitrate content and transcode it into more compact formats for use on smartphones, tablets, media streamers and gaming consoles. The intended market is not users who are attempting to make high quality archives of Blu-ray content. As a result, there’s no support for multi-channel audio; both applications are limited to 2-channel MP3 or AAC output. There’s also no support for transcoding to anything higher than the main profile of H.264.

Intel indicates that these are not hardware limitations of Quick Sync, but rather limitations of the transcoding software. To that end, Intel is working with developers of video editing applications to bring Quick Sync support to applications that have a more quality-oriented usage model. These applications use Intel’s Media SDK 2.0, which is publicly available; Intel says that any developer can get access to it and use it.
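To give a sense of what the SDK exposes, below is a minimal sketch (my own illustration, not code from either application) of a Media SDK client that requests a hardware session and asks whether the fixed-function encoder will accept an H.264 main profile target similar to the one used later in this article. Real code would also allocate surfaces and drive the asynchronous decode/encode loop.

```cpp
#include <mfxvideo.h>   // Intel Media SDK dispatcher header
#include <cstdio>

int main() {
    // Request a hardware-accelerated session; if the driver/GPU can't
    // provide one, fail rather than silently falling back to software.
    mfxVersion ver = {{0, 1}};            // require API 1.0 or later
    mfxSession session;
    if (MFXInit(MFX_IMPL_HARDWARE, &ver, &session) != MFX_ERR_NONE) {
        std::printf("No hardware Media SDK implementation available\n");
        return 1;
    }

    // Describe the target: H.264 main profile, 1080p24, 15Mbps VBR,
    // mirroring the custom profile used in the Casino Royale test below.
    mfxVideoParam in = {};
    in.mfx.CodecId                 = MFX_CODEC_AVC;
    in.mfx.CodecProfile            = MFX_PROFILE_AVC_MAIN;
    in.mfx.TargetKbps              = 15000;
    in.mfx.RateControlMethod       = MFX_RATECONTROL_VBR;
    in.mfx.FrameInfo.FourCC        = MFX_FOURCC_NV12;
    in.mfx.FrameInfo.ChromaFormat  = MFX_CHROMAFORMAT_YUV420;
    in.mfx.FrameInfo.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;
    in.mfx.FrameInfo.Width         = 1920;
    in.mfx.FrameInfo.Height        = 1088;  // dimensions padded to 16
    in.mfx.FrameInfo.CropW         = 1920;
    in.mfx.FrameInfo.CropH         = 1080;
    in.mfx.FrameInfo.FrameRateExtN = 24;
    in.mfx.FrameInfo.FrameRateExtD = 1;
    in.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY;

    // Ask the implementation whether it supports these parameters.
    mfxVideoParam out = in;
    mfxStatus sts = MFXVideoENCODE_Query(session, &in, &out);
    std::printf("Encode query: %s\n",
                sts == MFX_ERR_NONE ? "supported" : "adjusted/unsupported");

    MFXClose(session);
    return 0;
}
```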

For the purposes of this comparison I’ve used Media Converter 7, but that’s purely personal preference. The performance and image quality should be roughly identical between the two applications as they both use the same APIs. Jarred's look at Mobile Sandy Bridge will focus on MediaEspresso.

Where image quality isn’t consistent, however, is between transcoding methods in either application. Both applications support four codepaths: ATI Stream, Intel Quick Sync, NVIDIA CUDA, and x86. While you can set any of these codepaths to the same transcoding settings, the method by which they arrive at the transcoded image will differ. This makes sense given how different all four target architectures are (e.g. a Radeon HD 6870 doesn’t look anything like an NVIDIA GeForce GTX 460). Each codepath makes a different set of performance vs. quality tradeoffs, which we’ll explore in this section.

The first and less obvious difference: transcoding on the Sandy Bridge CPU cores vs. Quick Sync actually produces different images. The image quality is slightly better on the x86 path, but the two are similar.

The reason for the image quality difference is easy to understand. CPUs are inherently not very parallel beasts. We get tremendous speedup on highly parallel tasks on multi-core CPUs, but compared to a GPU’s ability to juggle hundreds or thousands of threads, even a 6-core CPU doesn’t look too wide. As a result of this serial vs. parallel difference, transcoding algorithms optimized for CPUs are very computationally efficient. They have to be, because you can’t rely on hundreds of cores running in parallel when you’re running on a CPU.

Take the same code and run it on a GPU and you’ll find that the majority of your execution resources are wasted. A new codepath is needed that can take advantage of the greater amount of compute at your disposal. For example, a GPU can evaluate many different compression modes in parallel whereas on a CPU you generally have to pick a balance between performance and quality up front regardless of the content you’re dealing with.
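To make that concrete, here’s a toy sketch (hypothetical code, not from either application) of the difference: a GPU-friendly codepath can brute-force every candidate compression mode concurrently and keep the best one, while a CPU-oriented codepath walks the candidates in a heuristic order and bails out as soon as one looks good enough.

```cpp
#include <cstdio>
#include <future>
#include <vector>

// Toy stand-in for a rate-distortion cost (lower is better). A real
// encoder would measure bits + distortion for one candidate mode.
double rdCost(int mode) {
    double c = 1000.0;
    for (int i = 0; i < 100000; ++i)        // simulate real work
        c -= ((mode * 31 + i) % 7) * 1e-4;
    return c;
}

// GPU-style: evaluate every candidate mode concurrently, keep the best.
int pickModeParallel(const std::vector<int>& modes) {
    std::vector<std::future<double>> costs;
    for (int m : modes)
        costs.push_back(std::async(std::launch::async, rdCost, m));
    int best = modes[0];
    double bestCost = costs[0].get();
    for (std::size_t i = 1; i < modes.size(); ++i) {
        double c = costs[i].get();
        if (c < bestCost) { bestCost = c; best = modes[i]; }
    }
    return best;
}

// CPU-style: walk modes in a heuristic order and stop as soon as one
// is "good enough" -- cheaper, but it can miss the best mode.
int pickModeSerial(const std::vector<int>& modes, double goodEnough) {
    int best = modes[0];
    double bestCost = rdCost(best);
    for (std::size_t i = 1; i < modes.size() && bestCost > goodEnough; ++i) {
        double c = rdCost(modes[i]);
        if (c < bestCost) { bestCost = c; best = modes[i]; }
    }
    return best;
}

int main() {
    std::vector<int> modes = {0, 1, 2, 3, 4, 5, 6, 7};
    std::printf("parallel pick: %d\n", pickModeParallel(modes));
    std::printf("serial pick:   %d\n", pickModeSerial(modes, 999.5));
    return 0;
}
```

The two functions can legitimately return different picks, which is exactly why the same settings produce different images on different codepaths.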

There’s also one more basic difference between code running on the CPU vs. integrated GPU. At least in Intel’s case, certain math operations can be performed with higher precision on Sandy Bridge’s SSE units vs. the GPU’s EUs.

Intel tuned the PSNR of the Quick Sync codepath to be as similar to the x86 codepath as possible, and the result is, as I mentioned above, quite similar.
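For reference, PSNR (peak signal-to-noise ratio) is the standard objective metric for this kind of tuning: it’s just the mean squared error between the source and transcoded frames expressed on a logarithmic scale, so higher numbers mean output closer to the reference. A minimal sketch of the computation for one pair of 8-bit luma planes:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit samples.
// Assumes both planes have the same, non-zero size.
double psnr(const std::vector<uint8_t>& ref, const std::vector<uint8_t>& test) {
    double mse = 0.0;
    for (std::size_t i = 0; i < ref.size(); ++i) {
        double d = double(ref[i]) - double(test[i]);
        mse += d * d;
    }
    mse /= double(ref.size());
    if (mse == 0.0) return INFINITY;    // bit-identical frames
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}
```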

Now let’s tackle the other GPUs. When I first started my Quick Sync investigations I did a little experiment. Without forming any judgments of my own, I quickly transcoded a ~15Mbps 1080p movie into an iPhone 4-compatible 720p H.264 file at 4Mbps. I then trimmed it down to a single continuous 4-minute scene and passed the movie along to six AnandTech editors. I sent the editors three copies of the scene: one transcoded on a GeForce GTX 460, one using Intel’s Quick Sync, and one using the standard x86 codepath. I named the three movies numerically and told no one which platform was responsible for each output. All I asked for was feedback on which ones they thought were best.

Here are some of the comments I received:

“Wow... there are some serious differences in quality. I'm concerned that the 1.mp4 is the accelerated transcode, in which case it looks like poop..”

“Video 1: Lots of distracting small compression blocks, as if the grid was determined pre-encoding (I know that generally there are blocks, but here the edges seem to persist constantly). Persistent artifacts after black. Quality not too amazing, I wouldn't be happy with this.”

Video one, which many assumed was Quick Sync, actually came from the GeForce GTX 460. The CUDA codepath, although extremely fast, actually produces a much worse image. Videos 2 and 3 were outputs from Sandy Bridge, and the editors generally didn’t agree on which one of those two looked better, just that they were definitely better than the first video.

To confirm whether or not this was a fluke I set up three different transcodes. Lossy video compression is hard to get right when you’re transcoding scenes that are changing quickly, so I focused on scenes with significant movement.

The first transcode involves taking the original Casino Royale Blu-ray, stripping it of its DRM using AnyDVD HD, and feeding that into MC7 as a source. The output in this case was a custom profile: 15Mbps 1080p main profile H.264. This is an unrealistic usage model simply because the output file only had 2-channel audio, making it suitable only for PC use and likely a waste of bitrate. I simply wanted to see how the various codepaths looked and performed with an original BD source.

Let’s look at performance first. The entire movie has around 200,000 frames; the transcoding frame rates are below:

[Chart: ArcSoft Media Converter 7—Casino Royale Transcode]

As we’ve been noting in our GPU reviews for quite some time now, there’s no advantage to transcoding on a GPU any faster than the $200 mainstream parts. Remember that the transcode process isn’t infinitely parallel; we are ultimately bound by the performance of the sequential components of the algorithm. As a result, the Radeon HD 6970 offers no advantage over the 6870 here. Both of these AMD GPUs end up being just as fast as a Core i5-2500K.
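This is Amdahl’s law in action: if only a fraction p of the pipeline parallelizes, total speedup is capped at 1/((1-p) + p/n) no matter how many shader cores n you throw at it. A quick sketch (the p value below is purely illustrative) shows how flat the curve gets:

```cpp
#include <cstdio>

// Amdahl's law: with fraction p of the work parallelizable across n
// units, the best possible speedup over serial is 1 / ((1 - p) + p / n).
double amdahl(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // With p = 0.9, quintupling the parallel hardware barely moves the needle.
    std::printf("n = 300:  %.2fx\n", amdahl(0.9, 300.0));   // ~9.71x
    std::printf("n = 1500: %.2fx\n", amdahl(0.9, 1500.0));  // ~9.94x
    return 0;
}
```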

NVIDIA’s GPUs offer a 15.7% performance advantage, but as I mentioned earlier, the advantage comes at the price of decreased quality (which we’ll get to in a moment).

Intel’s Quick Sync is untouchable though. It’s 48% faster than NVIDIA’s GeForce GTX 460 and 71% faster than the Radeon HD 6970. I don’t want to proclaim that discrete GPU based transcoding is dead, but based on these results it sure looks like it. What about image quality?

My image quality test scene isn’t anything absurd. Bond and Vesper are about to meet Mathis for the first time. Mathis walks towards the two and the camera pans to follow him. With only one character and the camera both moving at a predictable rate, most high quality transcoders should be able to handle this scene with some simple motion estimation without getting tripped up too much.
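In its textbook form that motion estimation is block matching: for each 16x16 macroblock the encoder searches a window in the reference frame for the best-matching block and encodes just the offset plus a residual. A hypothetical full-search sketch (real encoders use far smarter search patterns):

```cpp
#include <climits>
#include <cstdint>
#include <cstdlib>

struct MotionVector { int dx, dy; };

// Sum of absolute differences between a 16x16 block in the current
// frame and a candidate block in the reference frame.
int sad16x16(const uint8_t* cur, const uint8_t* ref, int stride) {
    int sad = 0;
    for (int y = 0; y < 16; ++y)
        for (int x = 0; x < 16; ++x)
            sad += std::abs(int(cur[y * stride + x]) - int(ref[y * stride + x]));
    return sad;
}

// Exhaustively search a +/-range window around the block's position.
// The caller must ensure the window stays inside the reference frame.
MotionVector fullSearch(const uint8_t* cur, const uint8_t* ref,
                        int stride, int range) {
    MotionVector best = {0, 0};
    int bestSad = INT_MAX;
    for (int dy = -range; dy <= range; ++dy) {
        for (int dx = -range; dx <= range; ++dx) {
            int sad = sad16x16(cur, ref + dy * stride + dx, stride);
            if (sad < bestSad) { bestSad = sad; best = {dx, dy}; }
        }
    }
    return best;
}
```

For a steady pan like this one, nearly every block lands on roughly the same motion vector, which is why even modest transcoders should cope.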

[Image comparison: Intel Core i5-2500K (x86), Intel Quick Sync, NVIDIA GeForce GTX 460, AMD Radeon HD 6870]

Comparing the shots above, the only real outlier is NVIDIA’s GeForce GTX 460. The CUDA path clearly errs on the side of performance over quality and produces a far noisier image. The ATI Stream codepath produces an image that’s very close to the standard x86 and Quick Sync output. In fact, everything but the GTX 460 does well here.

The next test uses an already-transcoded 15Mbps 1080p x264 rip of the Quantum of Solace Blu-ray. For many, this is likely what you’ll have stored on your movie server rather than a full 50GB Blu-ray rip. Our destination this time is the iPhone 4. The settings are as follows: 4Mbps 720p H.264.

At only 4Mbps there’s a lot of compression going on; image quality isn’t going to be nearly as good as in the previous test. Performance is considerably higher as the encoders are able to discard more data and optimize for performance over absolute quality. The entire movie has 152,000 frames that are transcoded in this test:

[Chart: ArcSoft Media Converter 7—Quantum of Solace Transcode]

The six-core Phenom II X6 1100T is faster than the Core i5-2500K thanks to the latter’s lack of Hyper-Threading. Both are around the speed of the Radeon HD 6870.

The GeForce GTX 460 is faster than any standalone x86 CPU, regardless of core count. However once again, Quick Sync blows them all out of the water. At 200 frames per second Quick Sync is more than twice the speed of a standard Core i5-2500K or even the Phenom II X6 1100T. And it’s nearly twice as fast as the GTX 460.

The image quality comparison scene is also far more stressful on the transcoders. There’s a lot of unpredictable movement going on as Bond is in pursuit of a double agent at the beginning of the film.

[Image comparison: Intel Core i5-2500K (x86), Intel Quick Sync, NVIDIA GeForce GTX 460, AMD Radeon HD 6870]

The image quality story is about the same for AMD’s GPUs and the x86 path; Quick Sync, however, delivers a noticeably worse image this time. It’s nowhere near as bad as the GTX 460, but it’s just not as sharp as what you get from the software or ATI Stream codepaths.

The problem here seems to be that when transcoding from a lower quality source, the tradeoffs NVIDIA makes are amplified. Even Quick Sync isn’t perfect here. I’d say Quick Sync is closer to the pure x86 path than CUDA, and given the tremendous performance advantage the tradeoff is probably worth it in this case.

For our final test we’ve got a 12Mbps 1080p x264 rip of The Dark Knight. Our target this time is a 640x480, 1.5Mbps iPod Touch-compatible format.

[Chart: ArcSoft Media Converter 7—Dark Knight Transcode]

Surprisingly enough, the Radeon HD 6970 shows a slight performance advantage over the 6870 in this test, but still not enough to approach the speed of the x86 CPUs. Quick Sync is almost 4x faster than the Radeon HD 6970 and twice as fast as everything else.

Our Dark Knight image quality test is also the most strenuous of the review. We’re looking at a very dark, high motion scene with a sudden explosion. The frame we’re looking at is right after the Joker fires a rocket at the rear of a police car. The sudden explosion casts light everywhere which can’t be predicted based on the previous frame.
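A sudden flash like this is what scene-change detection exists for: when a frame can’t be predicted from its predecessor, a good encoder notices and spends its bits on intra coding instead of inter prediction. A crude, purely illustrative detector:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// If the mean absolute difference between consecutive luma frames
// exceeds a threshold, inter prediction is a poor fit and the encoder
// should insert an intra (key) frame. The threshold is arbitrary here.
bool isSceneChange(const std::vector<uint8_t>& prev,
                   const std::vector<uint8_t>& cur,
                   double threshold = 30.0) {
    uint64_t sum = 0;
    for (std::size_t i = 0; i < cur.size(); ++i)
        sum += std::abs(int(cur[i]) - int(prev[i]));
    return double(sum) / double(cur.size()) > threshold;
}
```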

[Image comparison: Intel Core i5-2500K (x86), Intel Quick Sync, NVIDIA GeForce GTX 460, AMD Radeon HD 6870]

The GeForce GTX 460 looks horrible here. The output looks like an old film print; it’s simply inexcusable.

The Radeon HD 6870 produces a frame that has similar sharpness to the x86 codepath, but with muted colors. Quick Sync maintains color fidelity but loses the sharpness of the x86 path, similar to what we saw in the previous test. In this case the loss of sharpness does help smooth out some aliasing in the paint on the police car but otherwise is undesirable.

Overall, based on what I’ve seen in my testing of Quick Sync, it isn’t perfect but it does deliver a good balance of image quality and performance. With Quick Sync enabled you can transcode a ~2.5 hour Blu-ray disc in around 35 minutes. If you’ve got a lower quality source (e.g. a 15GB Blu-ray re-encode), you can plan on doing a full movie in around 13 minutes. Quick Sync will chew through TV shows in a couple of minutes, without a tremendous loss in quality.
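As a sanity check on those wall-clock times, working backwards from the frame counts quoted earlier gives the sustained throughput Quick Sync would need, and the numbers line up with the charts above:

```cpp
#include <cstdio>

// Back-of-the-envelope throughput implied by the transcode times above.
int main() {
    const double bdFrames   = 200000.0;  // full Blu-ray (Casino Royale test)
    const double bdMinutes  = 35.0;
    const double ripFrames  = 152000.0;  // 15Mbps re-encode (Quantum of Solace)
    const double ripMinutes = 13.0;

    std::printf("Blu-ray source: %.0f fps\n", bdFrames / (bdMinutes * 60.0));   // ~95
    std::printf("15GB re-encode: %.0f fps\n", ripFrames / (ripMinutes * 60.0)); // ~195
    return 0;
}
```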

With CUDA on NVIDIA GPUs we had to choose between high quality or high performance. (Perhaps other applications will handle the transcode better, but at least ArcSoft's Media Converter 7 has serious image quality problems with CUDA.) With Quick Sync you can have both, and better performance than we’ve ever seen from any transcoding solution on desktops or notebooks.

Quick Sync with a Discrete GPU

There’s just one hangup to all of this Quick Sync greatness: it only works if the processor’s GPU is enabled. In other words, on a desktop with a single monitor connected to a discrete GPU, you can’t use Quick Sync.

This isn’t a problem for mobile since Sandy Bridge notebooks should support switchable graphics, meaning you can use Quick Sync without waking up the discrete GPU. However, there’s no standardized switchable graphics solution for desktops yet. Intel indicated that we may see some switchable solutions on the desktop in the coming months, but until then you either have to use the integrated GPU alone or run a multi-monitor setup with one monitor connected to Intel’s GPU in order to use Quick Sync.

Comments

  • mosu - Monday, January 3, 2011

    If I wanted to spend a big lot of money every year on something I'll sell on eBay at half price a few months later, and if I'd like crappy quality images on my monitor, then I would buy Sandy Bridge... but sorry, I'm no no-brainer for Intel.
  • nitrousoxide - Monday, January 3, 2011

    It really impressed me as I do a lot of video transcoding and it's extremely slow on my triple-core Phenom II X3 720, even though I overclocked it to 4GHz. But there is one question: the acceleration needs the EUs in the GPU, and the GPU is disabled with the P67 chipset. Does that mean that if I pair my SNB with a P67 motherboard, I won't be able to use the transcoding accelerator?
  • nitrousoxide - Monday, January 3, 2011

    Not talking about SNB-E this time, I know it will be the performance king again. But I wonder if Bulldozer can at least gain some performance advantage over SNB, because it makes no sense that 8 cores running at a stunning 4.0GHz won't overrun 4 cores below 3.5GHz, no matter what architectural differences there are between these two chips. SNB is only the new-generation mid-range part; it will be out-performed by high-end Bulldozers. AMD will hold the low end, just as it does now; as long as Bulldozer regains some of the share the Phenoms lost in the mainstream and performance markets, things will be much better for it. The enthusiast market is not AMD's cup of tea, just as in GPUs: let NVIDIA take the performance crown and strike from the lower performance niches.
  • strikeback03 - Tuesday, January 4, 2011

    I don't think we'll know until AMD releases Bulldozer and Intel counters (if they do). Seems the SNB chips can run significantly faster than they do right now, so if necessary Intel could release new models (or a firmware update) that allow turbo modes up past 4GHz.
  • smashr - Monday, January 3, 2011

    This review and others around the web refer to the CPUs as 'launching today', but I do not see them on NewEgg or other e-tailer sites.

    When can we expect these babies at retail?
  • JumpingJack - Monday, January 3, 2011

    They are already selling in Malaysia, but if you don't live in Malaysia then you are SOL :) ... I see rumors around that the NDA was supposed to expire on the 5th with retail availability on the 9th... I was thinking about making the leap, but think I will hold off for more info on BD and Sk2011 SB.
  • slickr - Monday, January 3, 2011

    Intel has essentially shot itself in the foot this time. Between the letter restrictions, the new chipset and the crazy chipset differentiation between P and H, it's a mess.
    Not to mention they lack USB 3.0, the ability to have an overclocking mobo with integrated graphics, and the stupid turbo boost restrictions.

    I'll go even further and say that the i3 core is pure crap, and while it's better than the old Core i3 they are essentially leaving the biggest market, the one up to $200, wide open to AMD.

    Those who purchase CPUs at $200 and higher are in luck with the 2500 and 2600 variants, but for the majority of us who purchase CPUs below $200 it's crap.

    Essentially if you want gaming performance you buy the i3-2100, but if you want overall better performance go for a Phenom II.

    Hopefully AMD comes up with some great CPUs below the $200 range that come with 4 cores, unlimited turbo boost and unlocked multipliers.
  • Arakageeta - Tuesday, January 4, 2011

    It seems that these benchmarks test the CPU (cores) and GPU parts of Sandy Bridge separately. I'd like to know more about the effects of the CPU and GPU (usually data intensive) sharing the L3 cache.

    One advantage of a system with a discrete GPU is that the GPU and CPUs can happily work simultaneously without largely affecting one another. This is no longer the case with Sandy Bridge.

    A test I would like to see is a graphics intensive application running while another application performs some multi-threaded ATLAS-tuned LAPACK computations. Do either the GPU or the CPUs swamp the L3 cache? Are there any instances of starvation? What happens to the performance of each application? What happens to frame rates? What happens to execution times?
  • morpheusmc - Tuesday, January 4, 2011

    To me it seems that at Intel, marketing is now defining the processors rather than engineering. This is always the case to some extent, but I think now it is more evident than ever.

    Essentially if you want the features that the new architecture brings, you have to shell out for the higher end models.
    My ideal processor would be an i5-2520M for the desktop: reasonable clocks, good turbo speeds (could be higher for the desktop since the TDP is not as limited), HT, good graphics, etc. The combination of 2 cores and HT provides a good balance between power consumption and performance for most users.

    Its desktop equivalent price-wise is the 2500, which has no HT and a much higher TDP because of the four cores. Alternatively, maybe the 2500S, 2400S or 2390T could be considered if they aren't too overpriced.

    Intel has introduced too much differentiation in this generation, and in an Apple-like fashion, i.e. they force you to pay more for stuff you don't need, just for an extra feature (e.g. VT support, good graphics, etc.) that practically costs nothing since the silicon is already there. Bottom line: if you want the full functionality of the silicon that you get, you have to pay for the higher end models.
    Moreover, having features for specific functions (AES, transcoding etc) and good graphics makes more sense in lower-end models where CPU power is limited.

    This is becoming like the software market, where you have to pay extra for licenses for specific functionalities.
    I wouldn't be surprised if Intel starts selling "upgrade licenses" sometime in the future that will simply unlock features.

    I strongly prefer AMD's approach where all the features are available on all models.

    I am also a bit annoyed that there is very little discussion about this problem in the review. I agree that technologically Sandy Bridge is impressive, but the artificial limiting of functionality is anti-technological.
  • ac2 - Tuesday, January 4, 2011

    Agreed, but, apart from the K-series/ higher IGP/ motherboard mess up (which I think should be shortly cleared up), all the rest of it is just smart product marketing...

    It irritates readers of AnandTech, but for most people who buy off-the-shelf it's all good, with integrators patching up any shortcomings in the core/chipset.

    The focus does seem to be mobile, low-power and video transcode, almost a recipe for a MacBook!!
