Quick Sync Image Quality & Performance

Intel obviously focused on increasing GPU performance with Ivy Bridge, but a side effect of that increased GPU performance is more compute available for Quick Sync. As you may recall, Sandy Bridge's secret weapon was an on-die hardware video transcode engine (Quick Sync), designed to keep Intel's CPUs competitive when faced with the onslaught of GPU computing applications. At the time, video transcode seemed to be the most likely candidate for significant GPU acceleration so the move made sense. Plus it doesn't hurt that video transcoding is an extremely popular activity to do with one's PC these days.

The power of Quick Sync was how it combined fixed function decode (and some encode) hardware with the on-die GPU's EU array. The combination of the two resulted in some pretty incredible performance gains, not only over traditional software based transcoding but also over the fastest GPU based solutions.

Intel put to rest any concerns about image quality when Quick Sync launched, and thankfully the situation hasn't changed today with Ivy Bridge. In fact, you get a bit more flexibility than you had a year ago.

Intel's latest drivers now allow for a selectable tradeoff between image quality and performance when transcoding using Quick Sync. The option is exposed in Media Espresso and ultimately corresponds to an increase in average bitrate. To test image quality and performance, I took the last Harry Potter Blu-ray, stripped it of its DRM and used Media Espresso to make it playable on an iPad 2 (1024 x 768 preset).

In the case of our Harry Potter transcode, selecting the Better Quality option increased the average bitrate from 3.86Mbps to 5.83Mbps. The resulting file size for the entire movie grew from 3.78GB to 5.71GB. Both options produced a good quality transcode; picking one over the other really depends on how much time (and space) you have, as well as the screen size of the device you'll be watching it on. For most phone/tablet use I'd say the faster performing option is ideal.
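Those file sizes follow almost directly from bitrate and runtime. A rough sketch, assuming the roughly 130-minute runtime of the film and ignoring audio and container overhead (which account for the small remaining difference from the measured sizes):

```python
def transcode_size_gb(avg_bitrate_mbps: float, runtime_min: float) -> float:
    """Estimate output file size (decimal GB) from average video bitrate."""
    total_bits = avg_bitrate_mbps * 1e6 * runtime_min * 60  # bits in the stream
    return total_bits / 8 / 1e9                             # bits -> bytes -> GB

print(transcode_size_gb(3.86, 130))  # ~3.76 GB vs. the measured 3.78GB
print(transcode_size_gb(5.83, 130))  # ~5.68 GB vs. the measured 5.71GB
```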

[Image quality comparison: Intel Core i7 3770K (x86), Intel Quick Sync (SNB), Intel Quick Sync (IVB), Intel Quick Sync Better Quality (IVB), NVIDIA GeForce GTX 680 and AMD Radeon HD 7970, each against the original source frame]


While AMD has yet to enable VCE in any publicly available software, NVIDIA's hardware encoder built into Kepler is alive and well. Cyberlink Media Espresso 6.5 will take advantage of the 680's NVENC engine which is why we standardized on it here for these tests. Once again, Quick Sync's transcoding abilities are limited to applications like Media Espresso or ArcSoft's Media Converter—there's still no support in open source applications like Handbrake.

Compared to the output from Quick Sync, NVENC appears to produce a softer image. However, if you compare the NVENC output to what we got from the software/x86 path you'll see that the two are quite similar. It seems that Quick Sync, at least in this case, is sharpening/adding more noise beyond what you'd normally expect. I'm not sure I'd call it bad, but I need to do some more testing before I know whether or not it's a good thing.

The good news is that NVENC doesn't pose any of the horrible image quality issues that NVIDIA's CUDA transcoding path gave us last year. For getting videos onto your phone, tablet or game console I'd say the output of either of these options, NVENC or Quick Sync, is good enough.

Unfortunately AMD's solution hasn't improved. The washed out images we saw last year, particularly in dark scenes prior to a significant change in brightness are back again. While NVENC delivers acceptable image quality, AMD does not.

The performance story is unfortunately not much different from last year either. The chart below shows average frame rate over the entire encode process.

CyberLink Media Espresso 6.5—Harry Potter 8 Transcode

Just as we saw with Sandy Bridge, Quick Sync continues to be an incredible way to get video content onto devices other than your PC. One thing I wanted to make sure of was that Media Espresso wasn't somehow holding x86 performance back to make the GPU accelerated transcodes seem much better than they actually are. I asked our resident video expert, Ganesh, to clone Media Espresso's settings in a Handbrake profile. We took the profile and performed the same transcode; the result is listed above as the Core i7 3770K (Handbrake). You will notice that the Handbrake x86/x264 path is definitely faster than CyberLink's software path, by over 50% to be exact. However even using Handbrake as a reference, Quick Sync transcodes over 2x faster.

In the tests below I took the same source and varied the output quality with some custom profiles. I targeted 1080p, 720p and 480p at decreasing average bitrates to illustrate the relationship between compression demands and performance:

CyberLink Media Espresso 6.5—Harry Potter 8 Transcode

CyberLink Media Espresso 6.5—Harry Potter 8 Transcode

CyberLink Media Espresso 6.5—Harry Potter 8 Transcode

Unfortunately NVENC performance does not scale like Quick Sync's. When asked to preserve a good amount of data, both NVENC and Quick Sync perform similarly in our 1080p/13Mbps test. However, ask for more aggressive compression ratios at lower resolution/bitrate targets, and the Intel solution quickly distances itself from NVIDIA. One theory is that NVIDIA's entropy encode block could be the limiting factor here.

Ivy Bridge's improved Quick Sync appears to be aided both by an improved decoder and the HD 4000's faster/larger EU array. The graph below helps illustrate:

If we rely on software decoding but use Intel's hardware encode engine, Ivy Bridge is 18% faster than Sandy Bridge in this test (1080p 13Mbps output from BD source, same as above). If we turn on both hardware decode and encode, the advantage grows to 29%. In other words, a bit over a third of the performance advantage in this case is due to the faster decode engine; the rest comes from the improved encode path and the HD 4000's faster/larger EU array.
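A back-of-the-envelope decomposition of those two measurements makes the split explicit (a sketch; the 18% and 29% figures come from the test above, the rest is simple arithmetic):

```python
encode_only = 1.18    # IVB vs. SNB speedup: hardware encode, software decode
full_pipeline = 1.29  # IVB vs. SNB speedup: hardware decode + hardware encode

# Share of the total 29% advantage attributable to each stage
decode_share = (full_pipeline - encode_only) / (full_pipeline - 1.0)
encode_share = 1.0 - decode_share

print(f"decode: {decode_share:.0%}, encode: {encode_share:.0%}")
# decode: 38%, encode: 62%
```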



Comments

  • JarredWalton - Tuesday, April 24, 2012 - link

    I don't think it's a mystery. It's straight fact: "One problem Intel does currently struggle with is game developers specifically targeting Intel graphics and treating the GPU as a lower class citizen."

    It IS a problem, and it's one INTEL has to deal with. They need more advocates with game developers, they need to make better drivers, and they need to make faster hardware. We know exactly why this has happened: Intel IGP failed to run for so long that a lot of developers gave up and just blacklisted Intel. Now, Intel is actually capable of running most games, and so long as they aren't explicitly blacklisted things should be okay.

    In truth, the only title I can think of from recent history where Intel could theoretically work but was blacklisted by the game developer is Fallout 3. Even today, if you want to run FO3 on Intel IGP (HD 2000/3000/4000), you need to download a hacked DLL that will identify your Intel GPU as an NVIDIA GT 9800 or something.

    And really, there's no need to blacklist by game developers, because you can't predict the future. FO3 is the perfect example: it runs okay on HD 3000 and plenty fast on HD 4000, but the shortsighted developers locked out Intel for all time. It's better to pop up a warning like some games do: "Warning: we don't recognize your driver and the game may not run properly." Blacklisting is almost more of a political statement IMO.
  • craziplaya21 - Monday, April 23, 2012 - link

    I might be blind or something but did you guys not do a comparison between an original bluray IQ vs an encoded 1080p IQ by quicksync??
  • toyotabedzrock - Monday, April 23, 2012 - link

    Why is Intel disabling this on the K parts? And why disable vPro?
  • jwcalla - Monday, April 23, 2012 - link

    First, a diversion: "I was able to transcode a complete 130 minute 1080p video to an iPad friendly format..." Just kill me. Somebody please. Why do consumers put up with this crap? Even my ancient Galaxy S has better media playback support.

    It's the same story with my HP TouchPad: MP4 container or GTFO. Who can stand to re-encode their media libraries or has the patience to deal with DLNA slingers when the hardware is perfectly capable of curb-stomping any container / codec you could even conceive? Just get an Android tablet if this is the crap they force on you. Or, in the TouchPad case, wipe it and install ICS.

    As for the article... did I totally misunderstand the page about power consumption? I got the impression that idle power is relatively unchanged. I must be misreading that. Or maybe the lower-end chips will show a stark improvement. Otherwise I totally miss the point of IVB.

    I'm beginning to lose confidence in Intel, at least in terms of innovation. These tick-tock improvements are basically minor pushes in the same boring direction. From an enthusiasts' perspective, the stuff going into ARM SoCs is so much more interesting. Intel makes great high-end CPUs but it seems that these are becoming less important when looking at the consumer market as a whole.
  • Anand Lal Shimpi - Monday, April 23, 2012 - link

    Idle power didn't really go down because at idle nearly everything is power gated to begin with. Any improvements in leakage current don't help if the transistors aren't leaking to begin with :)

    Your ARM sentiments are spot on for a huge portion of the market however. Let's see what Haswell brings...

    Take care,
  • thomas-hrb - Monday, April 23, 2012 - link

    I disagree with the testing methodology for the World of Warcraft test. Firstly, no gamer buys hardware so they can go to the most isolated areas in a game. Also, the percentage of people who can pay for one of these CPUs and would be playing at 1680x1050 would be pretty small.

    I've been playing WoW for a number of years and I don't care about 60fps+ because my monitor won't display it anyway. I care about minimum fps and average fps. nVidia's new adaptive vsync is a great innovation, but I am sure there are other tests that, while not as controlled and repeatable, are much more indicative of real world performance (the actual reason behind purchasing decisions).

    One possible testing methodology you could look into is to take a character into one of the top-end 25-man raids. There are 10 classes in WoW and my experience is that a 25-man raid will show up every single possible spell/ability and effect that the game has to offer in fairly repeatable patterns.

    I agree that it is not the most scientific approach but I put more stock in a friend saying "go buy this cpu/gpu you can do all the raids and video capture and you get no lag" than you telling me that this cpu will give me 100+ fps in the middle of nowhere. There is a fine line between efficient and effective. I am just hoping that you can dial down the efficiency and come up with a testing methodology that actually produces a metric I can use in my purchasing decisions. After all that is one of the core reasons most people read reviews at all.
  • redisnidma - Monday, April 23, 2012 - link

    Expect Anand's Trinity review to be heavily biased with lots of AMD bashing.
    This site is so predictable...
  • Nfarce - Monday, April 23, 2012 - link

    Oh boy. Another delusional red label fangirl. Maybe when AMD gets their s**t together Anandtech will have something positive to review in comparison to the Intel offerings at the moment. Bulldozer bulldozed right off a cliff. And don't get me wrong: I WANT AMD to whip out some butt-kicking CPUs to keep the competition strong. But right now, Intel is not getting complacent and keeps stepping its game up when the competition isn't even on the same playing court. But that's just for now. If AMD continues to falter, Intel may not be as motivated to stay ahead and spend so much R&D in the future. After all, why put the latest F1 car on the track when the competition can only bring a NASCAR car to every track?
  • Reikon - Monday, April 23, 2012 - link

    Temperature is in the overclocking article.

  • rickthestik - Monday, April 23, 2012 - link

    An upgrade for me makes sense as my current cpu is an Intel Core 2 Quad and the new i7-3770K will be a pretty significant upgrade...2.34GHz to 3.5GHz and the heaps of additional tech to go with it.
    I could see a fair number of Sandy Bridge owners holding off for Haswell, though for me this jump is pretty big and I'm looking forward to seeing what the i7-3770K can do with the Z77 motherboards and a shiny new PCIe 3.0 GPU.
