Ivy Bridge: Much Faster Quick Sync and 3DMark Performance

We’ve looked at a bunch of general application benchmarks, but there are two areas where Ivy Bridge really looks to improve on Sandy Bridge: Quick Sync and the integrated graphics. How important these two items are really depends on how you plan to use your laptop. If you’re only going to surf the web, watch some YouTube/Hulu/Netflix streams, and work in Microsoft Office applications, many of the improvements in Ivy Bridge won’t really matter. If you might play some games or convert and upload videos to YouTube, however, these last two improvements represent the biggest change relative to Sandy Bridge.

We’ll start with the video encoding tests, using ArcSoft MediaConverter 7 and CyberLink MediaEspresso 6.5. We’ve looked at both utilities in the past, and while there are some minor changes, the basic goal remains the same: simple video transcoding. If you’re after maximum quality, a fixed-function encoder like Quick Sync isn’t going to beat a good CPU-based encoder. Instead, Quick Sync is all about speed, and sacrificing a bit of quality in order to get your videos converted faster is considered an acceptable tradeoff.

MediaConverter 7 doesn’t give much in the way of options, so we tested with CPU-based encoding and with GPU- or Quick Sync-accelerated encoding. For all tests we used GPU-accelerated decoding, as disabling/enabling this feature didn’t appear to affect quality or performance much, and it’s enabled by default. MediaEspresso has a few more options, depending on how you’re doing the transcoding. For CPU-based and Quick Sync encoding, you can choose between speed and quality for the transcode; for NVIDIA or AMD GPU encoding, you don’t get a choice—we assume here that the encoding for AMD and NVIDIA GPUs is more or less equivalent to the “Faster” encoding setting of the CPU/Quick Sync encodes. Here are the three charts, showing performance for the dual-core Sandy Bridge i5-2520M, quad-core Sandy Bridge i7-2820QM, quad-core Ivy Bridge i7-3720QM, Ivy Bridge with the GT 630M active, Llano A8-3500M, and Llano A8-3500M with GPU-accelerated encoding.
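As an aside, if you want to run this sort of CPU vs. Quick Sync comparison yourself outside of the GUI utilities, recent ffmpeg builds expose Quick Sync as the h264_qsv encoder (support that arrived well after the utilities tested here). Here’s a minimal sketch; the filename, bitrate, and preset are placeholder assumptions on our part rather than what MediaConverter or MediaEspresso actually use, and ffmpeg must be built with Quick Sync (libmfx/libvpl) support:

```python
import subprocess

SRC = "input.mp4"  # placeholder source clip

# CPU-based encode (libx264) -- roughly analogous to the utilities' CPU path
subprocess.run(
    ["ffmpeg", "-y", "-i", SRC,
     "-c:v", "libx264", "-preset", "medium", "-b:v", "4M",
     "out_cpu.mp4"],
    check=True,
)

# Quick Sync encode (h264_qsv) -- the iGPU's fixed-function hardware does the
# work; -hwaccel qsv also offloads the decode, mirroring our test settings
subprocess.run(
    ["ffmpeg", "-y", "-hwaccel", "qsv", "-i", SRC,
     "-c:v", "h264_qsv", "-b:v", "4M",
     "out_qsv.mp4"],
    check=True,
)
```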

[Chart: Assisted Video Transcoding - ArcSoft MediaConverter 7]

[Chart: Assisted Video Transcoding - CyberLink MediaEspresso 6.5]

[Chart: Assisted Video Transcoding - CyberLink MediaEspresso 6.5]

First things first, either AMD’s GPUs don’t handle this sort of task very well, or MediaEspresso and MediaConverter aren’t at all optimized for AMD’s GPUs—or at least they need far more than the 400 GPU cores in the HD 6620G. AMD has their VCE (Video Codec Engine) in Southern Islands, but so far we have yet to see a demonstration of it working—yes, it’s over four months after the launch of HD 7970 and VCE is still MIA; that’s as bad as it sounds, and we’re starting to wonder if the VCE hardware even works properly at this point. As for NVIDIA, their CUDA-based encoding works a little better (though as we noted in the past, quality may be a bit lacking relative to other encoding solutions); however, with only 96 CUDA cores in the GT 630M it still can’t match the quad-core Ivy Bridge CPU encoding, let alone Quick Sync. That means that for now, Intel stands alone with their highly efficient Quick Sync encoder, and where it was already quite fast in Sandy Bridge, it’s even faster in Ivy Bridge—anywhere from 70 to 105% faster, depending on which application and settings we’re looking at.

We also get a second look at CPU performance gains relative to Sandy Bridge in video encoding. Here the i7-3720QM is 15% faster than the i7-2820QM in ArcSoft’s MediaConverter, but interestingly the i7-2820QM actually comes out 10 to 15% faster in CyberLink’s MediaEspresso. I can’t really come up with a good reason why Ivy Bridge would be slower in that test, but perhaps there are some Sandy Bridge-specific optimizations that don’t carry over to Ivy Bridge right now. As for AMD’s Llano, with no real benefit from GPU-assisted encoding it ends up taking almost three times as long as Ivy Bridge/Sandy Bridge in this particular set of tests, and the quality-based encoding is even worse, requiring over six minutes to complete our test encode compared to just over one minute on the Sandy Bridge/Ivy Bridge quad-cores. Even the dual-core Sandy Bridge chip is significantly faster than Llano.

Something else worth noting is that Intel's Quick Sync performance is completely separate from the CPU side of the equation; it's a fixed-function encoder that resides in the GPU portion of the die. What that means is that you typically get the same performance whether you have a high-end quad-core CPU or a lower-end dual-core CPU. The latter is where Quick Sync is really useful; you can see in our charts that Quick Sync is a lot faster than quad-core CPU transcoding, but if you have a quad-core CPU you're not really waiting that long for most transcodes. Dual-core processors, on the other hand, are about half as fast as the quad-core offerings, and the result is that Quick Sync on a dual-core Ivy Bridge processor (never mind the ULV parts) will be many times faster than CPU-based transcoding.
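If you want to sanity-check that behavior on your own hardware, all it takes is wall-clocking the two paths from the sketch above on a dual-core and a quad-core system: the Quick Sync times should land within a few percent of each other, while the CPU encode times roughly double going from four cores to two. Again, filenames and settings are placeholders:

```python
import subprocess
import time

def timed_encode(label: str, args: list[str]) -> float:
    """Run an ffmpeg transcode and report its wall-clock time."""
    start = time.perf_counter()
    subprocess.run(args, check=True, capture_output=True)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.1f} s")
    return elapsed

# CPU encode time scales with core count...
timed_encode("CPU (libx264)",
             ["ffmpeg", "-y", "-i", "input.mp4",
              "-c:v", "libx264", "-preset", "faster", "out_cpu.mp4"])

# ...while the fixed-function Quick Sync time should be essentially flat
# across dual- and quad-core parts of the same generation.
timed_encode("Quick Sync (h264_qsv)",
             ["ffmpeg", "-y", "-hwaccel", "qsv", "-i", "input.mp4",
              "-c:v", "h264_qsv", "out_qsv.mp4"])
```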

[Chart: Futuremark 3DMark 11]

[Chart: Futuremark 3DMark Vantage]

[Chart: Futuremark 3DMark Vantage]

[Chart: Futuremark 3DMark06]

As noted earlier, Ivy Bridge appears to be more about graphics than CPU performance, and we get a taste of that in the 3DMark results. If games actually echo what we’re seeing with 3DMark, Ivy Bridge’s HD 4000 is set to marginalize anything below the level of AMD’s HD 6630M or NVIDIA’s GT 525M. In some cases we even see over a 100% improvement relative to the Sandy Bridge i7-2820QM (e.g. in 3DMark Vantage), and Intel is now able to run 3DMark 11, which requires DX11 hardware. 3DMark06 probably paints the more realistic picture, however, with performance about 50% higher than Sandy Bridge and at times even ahead of the Llano A8. But then, we all know how much 3DMark means when it comes to actual gaming, right? So let’s move on to the gaming benchmarks.



Comments

  • krumme - Monday, April 23, 2012 - link

    There is a reason Intel is bringing 14nm to Atom in 2014.

    The product here doesn't make sense. It's expensive and no better than the one before it, except for better gaming - that is, if the drivers work.

    I don't know if the SB notebooks I have in the house are the same as the ones Jarred has. Mine didn't bring a revolution, but solid battery life, like the Penryn notebook and Core Duo I also have. In my world it's more or less all the same for normal office work once you add an SSD.

    Loads of utterly uninteresting benchmarks don't mask the facts. This product excels where it's not needed, and fails where it should excel most: battery life.

    Tri-gate is mostly a failure right now. There is no need to call it otherwise, and the "preview" looks 10% like a press release in my world. At least tri-gate is not living up to expectations. Sometimes that happens with technology development; it's a wonder it's normally so smooth for Intel, and a testament to their huge expertise. When the technology matures and Intel makes better use of it in the architecture, we will see huge improvements. Spare the praise until then; this is just wrong and bad.
  • JarredWalton - Monday, April 23, 2012 - link

    Seriously!? You're going to mention Atom as the first comment on Ivy Bridge? Atom is such a dog as far as performance is concerned that I have to wonder what planet you're living on. 14nm Atom is still going to be a slow product; it might double the performance of Cedar Trail. Heck, it could triple the performance of Cedar Trail, which would make it about as fast as Core 2 CULV from three years ago. Hmmm.....

    If Sandy Bridge wasn't a revolution, offering twice the performance of Clarksfield at the high end and triple the potential battery life (though much of that is because Clarksfield was paired with power-hungry GPUs), I'm not sure what would be a revolution. Dual-core SNB wasn't as big of a jump, but it was still a solid 15-25% faster than Arrandale and offered 5% to 50% better battery life--the 50% figure coming in H.264 playback; 10-15% better battery life was typical of office workloads.

    Your statement with regards to battery life basically shows you either don't understand laptops, or you're being extremely narrow-minded about Ivy Bridge. I was hoping for more, but we're looking at one set of hardware (i7-3720QM, 8GB RAM, 750GB 7200RPM HDD, switchable GT 630M GPU, and a 15.6" LCD that can hit 430 nits), and we're looking at it several weeks before it goes on sale. It's no real surprise that battery life isn't a huge leap forward.

    SNB laptops draw around 10W at idle, and 6-7W of that goes to everything besides the CPU. That means SNB CPUs draw around 2-3W at idle. This particular IVB laptop draws around 10W at idle, and all of the other components (especially the LCD) will easily draw at least 6-7W, which means once again the CPU is using 2-3W at idle. Even if IVB drew 0W at idle, the best we could hope for would be a 50% improvement in battery life.
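    To make that arithmetic explicit, here's the back-of-the-envelope version (the wattages are the rough figures above, not precise measurements):

    ```python
    # Rough idle figures from the paragraph above -- illustrative, not measured
    platform_idle_w = 10.0                     # whole-laptop idle draw
    non_cpu_w = 6.5                            # LCD, storage, chipset, etc. (6-7W)
    cpu_idle_w = platform_idle_w - non_cpu_w   # leaves ~2-3W for the CPU

    # Battery life scales inversely with total draw, so even a hypothetical
    # 0W CPU only drops the platform from 10W to ~6.5W:
    upper_bound = platform_idle_w / non_cpu_w - 1
    print(f"Best-case idle battery life gain: {upper_bound:.0%}")  # ~54%
    ```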

    As for the final comment, 22nm and tri-gate transistors are hardly a failure. They're not the revolution many hoped for, at least not yet. Need I point out that Intel's first 32nm parts (Arrandale) also failed to eclipse their outgoing and mature 45nm parts? I'm not sure what the launch time frame is for ULV IVB, but I suspect by the time we see those chips 22nm will be performing a lot better than it is in the first quad-core chips.

    From my perspective, to shrink a process node, improve performance of your CPU by 5-25%, and keep power use static is still a definite success and worthy of praise. When we get at least three or four other retail IVB laptops in for review, then we can actually start to say with conviction how IVB compares to SNB. I think it's better and a solid step forward for Intel, especially for lower cost laptops and ultrabooks.

    If all you're doing is office work, which is what it sounds like, you're right: Core 2, Arrandale, Sandy Bridge, etc. aren't a major improvement. That's because if all you're doing is office work, 95% of the time the computer is waiting for user input. It's the times where you really tax your PC that you notice the difference between architectures, and the change from Penryn to Arrandale to Sandy Bridge to Ivy Bridge represents about a doubling in performance just for mundane tasks like office work...and a lot of people would still be perfectly content to run Word, Excel, etc. on a Core 2 Duo.
  • usama_ah - Monday, April 23, 2012 - link

    Tri-gate is not a failure; this move to tri-gate wasn't expected to bring any crazy performance benefits. Tri-gate was necessary because of the leakage limitations of ever smaller transistors. Tri-gate has nothing to do with the architecture of the processor per se; it's about how each individual transistor is created at such a small scale. Architectural improvements are key to significant performance gains.

    Sandy Bridge was great because it was a brand new architecture. If you have been even half-reading what they post on AnandTech, you'd know Intel's tick-tock strategy dictates that this move to Ivy Bridge brings only small improvements BY DESIGN.

    You will see improvements in battery life with the NEW architecture AFTER Ivy Bridge (when Intel stays at 22nm) - the so-called "tock," named Haswell. And yes, tri-gate will still be in use at that time.
  • krumme - Monday, April 23, 2012 - link

    As I understand tri-gate, it provides the opportunity for even finer-grained power control of the individual transistor, by using different numbers of gates. If you design your architecture for the process (using that opportunity - as IB does not, but the first 22nm Atom apparently does), there should be "huge" savings.

    I assume that by BY DESIGN you mean "by process", btw.

    In my world, process improvement is key to most industrial production, with tools often being the weak link. The process decides what is possible in your design. That's why Intel has spent billions "just" mounting the right equipment.
  • JarredWalton - Monday, April 23, 2012 - link

    No, he means Ivy Bridge is not the huge leap forward by design -- Intel intentionally didn't make IVB a more complex, faster CPU. That will be Haswell, the 22nm tock to the Ivy Bridge tick. Making large architectural changes requires a lot of time and effort, and making the switch between process nodes also requires time and effort. If you try to do both at the same time, you often end up with large delays, and so Intel has settled on a "tick tock" cadence where they only do one at a time.

    But this is all old news and you should be fully aware of what Intel is doing, as you've been around the comments for years. And why is it you keep bringing up Atom? It's a completely different design philosophy from Ivy Bridge, Sandy Bridge, Merom/Conroe, etc. Atom is more a competitor to ARM SoCs, which have roughly an order of magnitude less compute performance than Ivy Bridge.
  • krumme - Monday, April 23, 2012 - link

    - Intel is speeding up Atom development - no longer relegating it to depreciated equipment going forward.
    - Intel invests heavily to get into new business areas, and has done so for years.
    - Haswell will probably be slimmer on the CPU side.

    The reason they're doing so is that the need for CPU power outside of the server market is stagnating. New third-world markets are emerging. And everything is turning mobile - it's all over your front page now, I can see.

    The new Atom will probably be adequate for most (like, say, Core 2 CULV). Then they will have the perfect product. It's about mobility and price and price. Haswell will probably be the product for the rest of the mainstream market, leaving even less room for the dedicated GPU.

    IB is an old-style desktop CPU, maturing a not-quite-ready 22nm tri-gate process, designed to fight a BD that never arrived. That's why it does not impress. And you can tell Intel knows it, because the mobile lineup is so slim.

    The market has changed. AMD's share price has rocketed even though their high-end CPU failed, because the Atom-sized Bobcat and the old-technology Llano could enter the new market. I could not have imagined the success of Llano. I didn't understand the purpose of it, with Trinity coming so close behind. But the numbers speak for themselves. People buy a user experience where it matters at the lowest cost, not PCMark, encoding times, zip, unzip.

    You have to use new benchmarks. And they have to be reinvented again. They have to make sense. Obviously the CPU has to play a lesser role and the rest a bigger one. You have a very strong team, if not the strongest out there. Benchmark methodology should be at the top of your list and take up a lot of your development time.
  • JarredWalton - Monday, April 23, 2012 - link

    The only benchmarks that would make sense under your new paradigm are graphics and video benchmarks (and battery life as well), because those are the only areas where a better GPU matters. Unless you have some other suggestions? Saying "CPU speed is reaching the point where it really doesn't matter much for a large number of people" is certainly true, and I've said as much on many occasions. Still, there's a huge gulf between Atom and Core 2, and there are many tasks where CULV would prove insufficient.

    By the time the next Atom comes out, maybe it will be fixed in the important areas so that stuff like YouTube/Netflix/Hulu all work without issue. Hopefully it also supports at least 4GB RAM, because right now the 2GB limit along with bloated Windows 7 makes Atom a horrible choice IMO. Plus, margins are so low on Atom that Intel doesn't really want to go there; they'd rather figure out ways to get people to continue paying at least $150 per CPU, and I can't fault their logic. If CULV became "fast enough" for everyone Intel's whole business model goes down the drain.

    Funny thing is that even though we're discussing Atom and by extension ARM SoCs, those chips are going through the exact same rapid increases in performance. And they need it. Tablets are fine for a lot of tasks, but opening many web sites on a tablet is still far slower than opening the same sites on a Windows laptop. Krait and Tegra 3 still deliver only about a third of the performance I want from a CPU.

    As for your talk about AMD share prices, I'd argue that AMD share prices have increased because they've rid themselves of the albatross that was their manufacturing division. And of course, GF isn't publicly traded and Abu Dhabi has plenty of money to invest in taking over CPU manufacturing. It's a win-win scenario for those directly involved (AMD, UAE), though I'm not sure it's necessarily a win for everyone.
  • bhima - Monday, April 23, 2012 - link

    I figure Intel wants everyone to want their CULV processors, since they seem to charge OEMs the most for them. Or are the profit margins not that great because they're a more difficult/expensive processor to make?
  • krumme - Tuesday, April 24, 2012 - link

    Yes - video and gaming are what matter for the consumer now; everything else is okay, as it will - hopefully - be in 2014. What matters is SSD, screen quality, and everything else - just not CPU power. The CPU just needs to take up far less space. The CPU having so much space is just old habit for us old geeks.

    AMD getting rid of the GF burden has been in the plan for years. It's known and cannot influence the share price. Basically the (late) move to a mobile focus, and the excellent execution of those consumer-shaped (not reviewer-shaped) APUs, is part of the reason.

    The reviewers need to move their mindset :) - btw, it's my impression Dustin is more in line with what the general consumer wants. Ask him if he thinks the consumer wants a new SSD benchmark with 100 hours of 4K reading and writing.
  • MrSpadge - Monday, April 23, 2012 - link

    No, the finer granularity is just a nice side effect (which could probably be used more aggressively in the future). However, the main benefit of tri-gate is better control over the channel, which enables IB to reach high clock speeds at comparatively low voltages and with very low leakage.
