Mostly No QuickSync

One of the most significant features of Intel's Sandy Bridge CPU is Quick Sync, the hardware-assisted video transcode engine. In our review we found it to be better than any of the currently available GPU-based transcoding methods, and far better than just running the transcode operation on your CPU. While Quick Sync's performance/quality in the pro space is unproven, there's simply no better way of taking your existing video content and transcoding it for use on mobile devices like an iPhone or an iPad.

Given how well Quick Sync is suited for moving content between i-devices, it's surprising that Apple doesn't tout it as a feature of the new 2011 MacBook Pros. Not only is Quick Sync not featured by Apple, it isn't supported by any Apple application other than FaceTime.

That means iMovie and QuickTime rely on CPU-based video encoding, not Quick Sync.
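To make the CPU vs. hardware split concrete, here's a minimal sketch of the two transcode paths using ffmpeg command lines. This is an illustration only: it assumes a modern ffmpeg build with the Quick Sync encoder (`h264_qsv`), which didn't exist in 2011, and the filenames and bitrates are hypothetical.

```python
# Sketch: building equivalent CPU-based and Quick Sync transcode commands.
# Assumes an ffmpeg build with the h264_qsv encoder; filenames hypothetical.

def transcode_cmd(src, dst, use_quicksync=False):
    """Return an ffmpeg argv list producing an iPhone/iPad-friendly H.264 file."""
    # libx264 runs entirely on the CPU; h264_qsv offloads to Quick Sync hardware
    encoder = "h264_qsv" if use_quicksync else "libx264"
    return [
        "ffmpeg", "-i", src,
        "-c:v", encoder,
        "-b:v", "2500k",              # video bitrate suited to mobile playback
        "-c:a", "aac", "-b:a", "128k",  # AAC audio, 128 kbps
        dst,
    ]

cpu_cmd = transcode_cmd("movie.mkv", "movie_ipad.mp4")
qsv_cmd = transcode_cmd("movie.mkv", "movie_ipad.mp4", use_quicksync=True)
print(" ".join(qsv_cmd))
```

The article's point is that iMovie and QuickTime in early 2011 effectively only offered the `libx264`-style path, leaving the hardware encoder idle.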

Apple has traditionally been very conservative about adopting new hardware features in software (ahem, TRIM). I'm worried that we may not see Quick Sync in iMovie until the 2012 version; however, once the rest of the Mac lineup moves to Sandy Bridge, the incentive to introduce it sooner may be there.

Apple does claim Quick Sync support in FaceTime; however, CPU utilization is still very high when using FaceTime HD.

Depending on available upstream bandwidth, I saw between 50 and 100% utilization of a single core while running FaceTime. According to Apple, FaceTime HD wasn't possible on a dual-core machine without the SNB video encoder. As to why we're seeing such high CPU utilization even with hardware-accelerated encode and decode, your guess is as good as mine.

197 Comments

  • IntelUser2000 - Friday, March 11, 2011 - link

    You don't know that; testing multiple systems over the years has shown that performance differences between manufacturers with identical hardware are minimal (<5%), meaning it's not Apple's fault. Being GPU bound doesn't mean the rest of the system would have zero effect.

    It's not like the 2820QM is 50% faster; it's 20-30% faster, and that total could have been derived from:

    1. Quad core vs. Dual core
    2. The HD 3000 in the 2820QM has a max clock of 1.3GHz vs. 1.2GHz in the 2410M
    3. The 2820QM's clock speed is quite a bit higher in gaming scenarios
    4. The LLC is shared between the CPU and graphics, and the 2410M has less than half the LLC of the 2820QM
    5. Even at 20 fps the CPU has some impact; we're not talking 3-5 fps here

    It's quite reasonable to assume that 3DMark03 and 05, which are explicitly single threaded, benefit from everything except #1, and frame rates should be high enough for the CPU to affect them. In games with bigger gaps, quad core would explain part of the difference, even if it's as little as 5%.
    Reply
  • JarredWalton - Friday, March 11, 2011 - link

    I should have another dual-core SNB setup shortly, with HD 3000, so we'll be able to see how that does.

    Anyway, we're not really focusing on 3DMarks, because they're not games. Looking just at the games, there's a larger-than-expected gap in performance. Remember: we've been largely GPU limited with something like the GeForce G 310M, comparing the Core i3-330UM ULV vs. the Core i3-370. That's a doubling of CPU clock speed, and the result was: http://www.anandtech.com/bench/Product/236?vs=244 That's a 2 to 14% difference, with the exception of the heavily CPU-dependent StarCraft II (which is 155% faster with the U35Jc).

    Or if you want a significantly faster GPU comparison (i.e. so the onus is on the CPU), look at the Alienware M11x R2 vs. the ASUS N82JV: http://www.anandtech.com/bench/Product/246?vs=257 Again, much faster GPU than the HD 3000 and we're only seeing 10 to 25% difference in performance for low detail gaming. At medium detail, the difference between the two platforms drops to just 0 to 15% (but it grows to 28% in BFBC2 for some reason).

    Compare that spread to the 15 to 33% difference between the i5-2415M and the i7-2820QM at low detail; perhaps even more telling, the difference remains large at medium settings (16.7 to 44% for the i7-2820QM, except SC2 turns the tables and leads by 37%). The theoretical clock speed difference on the IGP is only 8.3%, and we're seeing two to four times that much; the average is around 22% faster, give or take. StarCraft II is a prime example of the funkiness we're talking about: the 2820QM is 31% faster at low, but the 2415M is 37% faster at medium? That's not right....

    Whatever is going on, I can say this much: it's not just about CPU performance potential. I'll wager that when I test the dual-core SNB Windows notebook (an ASUS model), scores in gaming will be a lot closer than what the MBP13 managed. We'll see....
    Reply
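The clock-speed percentages in the comment above check out arithmetically; a quick sketch of the math (clock and performance figures taken from the comment itself):

```python
# Sanity-checking the IGP clock math: HD 3000 max turbo of 1.3GHz in the
# i7-2820QM vs. 1.2GHz in the i5-2415M.
igp_delta = (1.3 / 1.2 - 1) * 100
print(f"Theoretical IGP clock advantage: {igp_delta:.1f}%")  # 8.3%

# The observed gaming advantage averaged ~22%, i.e. well above the clock
# delta alone, which is what points to a bottleneck beyond IGP frequency.
observed = 22
print(f"Observed vs. theoretical: {observed / igp_delta:.1f}x")  # 2.6x
```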
  • IntelUser2000 - Saturday, March 19, 2011 - link

    I forgot one more thing. The quad-core Sandy Bridge mobile chips support DDR3-1600, while the dual-core ones only go up to DDR3-1333.
    Reply
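For context on that memory-speed point, the theoretical peak-bandwidth gap works out as follows (dual-channel, 64-bit per channel DDR3; these are spec-sheet maxima, not measured numbers):

```python
# Rough peak-bandwidth comparison for the DDR3 speeds mentioned above.
def peak_gbps(mt_per_s, channels=2, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s: transfers/s * bytes/transfer * channels."""
    return mt_per_s * bus_bytes * channels / 1000

ddr3_1600 = peak_gbps(1600)  # quad-core SNB: 25.6 GB/s
ddr3_1333 = peak_gbps(1333)  # dual-core SNB: ~21.3 GB/s
print(f"{ddr3_1600:.1f} vs {ddr3_1333:.1f} GB/s "
      f"({(ddr3_1600 / ddr3_1333 - 1) * 100:.0f}% more)")
```

That is roughly a 20% headroom advantage for the quad-core parts, which matters for an IGP that shares the memory bus with the CPU.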
  • mczak - Thursday, March 10, 2011 - link

    The memory bus width of the HD6490M and HD6750M is listed as 128bit/256bit. That's quite wrong; it should be 64bit/128bit.

    btw I'm wondering what the impact on battery life is for the HD6490M. It isn't THAT much faster than the HD3000, so I'm wondering if at least the power consumption isn't that much higher either...
    Reply
  • Anand Lal Shimpi - Thursday, March 10, 2011 - link

    Thanks for the correction :)

    Take care,
    Anand
    Reply
  • gstrickler - Thursday, March 10, 2011 - link

    Anand, I would like to see heat and maximum power consumption figures for the 15" with the dGPU disabled using gfxCardStatus. For those of us who aren't gamers and don't need OpenCL, the dGPU is basically just a waste of power (and therefore battery life) and a waste of money. Those should be fairly quick tests.
    Reply
  • Nickel020 - Thursday, March 10, 2011 - link

    The 2010 MacBooks with the Nvidia GPUs and Optimus switch back to the iGPU even if you don't close the application, right? Is this a general ATI issue that also occurs on Windows notebooks, or is it specific to OS X? This seems like quite an unnecessary hassle, actually having to manage it yourself. Not as bad as having to log off like on my late 2008 MacBook Pro, but still inconvenient.
    Reply
  • tipoo - Thursday, March 10, 2011 - link

    Huh? You don't have to manage it yourself.
    Reply
  • Nickel020 - Friday, March 11, 2011 - link

    Well, if you don't want to use the dGPU when it's not necessary, you kind of have to manage it yourself. If I don't want the dGPU to power up while web browsing and make the MacBook hotter, I have to manually switch to the iGPU with gfxCardStatus. I can leave it set to the iGPU, but then I'll still have to manually switch to the dGPU whenever I need it. So I have to manage it manually either way.

    I would really have liked to see more of a comparison with how the GPU switching works in the 2010 MacBook Pros. I could look it up, but I can find most of the info in the review somewhere else too; the point of a review is to have all the info in one place without having to look stuff up.
    Reply
  • tajmahal42 - Friday, March 11, 2011 - link

    I think the switching behavior should be exactly the same for the 2010 and 2011 MacBook Pros, as the switching is done by Mac OS, not by the hardware.

    Apparently, Chrome doesn't properly close down Flash when it doesn't need it anymore or something, so the OS thinks it should still be using the dGPU.
    Reply
