Final Words

Based on these early numbers, Ivy Bridge is pretty much right where we expected it on the CPU side: a 5 - 15% increase in CPU performance over Sandy Bridge at a similar price point. I have to say I'm pretty impressed by the gains we've seen here today. It's quite difficult to extract tangible IPC improvements from a modern architecture, particularly on such a tight, nearly annual schedule. For a tick in Intel's cadence, Ivy Bridge is quite good. It feels a lot like Penryn did after Conroe, but better.

The improvement on the GPU side is significant, although not nearly the jump we saw going to Sandy Bridge last year. Ivy's GPU finally puts Intel's processor graphics into the realm of reasonable for a system with low-end GPU needs. Based on what we've seen, discrete GPUs below the $50 - $60 mark don't make sense if you've got Intel's HD 4000 inside your system. The discrete market above $100, however, remains fairly safe.

With Ivy Bridge you aren't limited to playing older titles, although you are still limited to relatively low quality settings in newer games. If you're willing to trade off display resolution you can reach a much better balance. We are finally able to get acceptable performance at or above 1366 x 768. With the exception of Metro 2033, the games we tested even managed greater than 30 fps at 1680 x 1050. The fact that we were able to run Crysis: Warhead at 1680 x 1050 at over 50 fps on free graphics from Intel is sort of insane when you think about where Intel was just a few years ago.

Whether or not this is enough for mainstream gaming really depends on your definition of that segment of the market. Being able to play brand new titles at reasonable frame rates at realistic resolutions is a bar that Intel has safely met. I hate to sound like a broken record, but Ivy Bridge continues Intel's march in the right direction when it comes to GPU performance. Personally, I want more, and I suspect that Haswell will deliver much of that. It is worth pointing out that Intel is progressing at a faster rate than the discrete GPU industry at this point. Admittedly the gap is downright huge, but from what I've heard, even the significant gains we're seeing here with Ivy will pale in comparison to what Haswell provides.

What Ivy Bridge does not appear to do is catch up to AMD's A8-series Llano APU. It narrows the gap, but for systems whose primary purpose is gaming, AMD will still likely hold a significant advantage with Trinity. The fact that Ivy Bridge hasn't progressed enough to challenge AMD on the GPU side is good news. The last thing we need is for a single company to dominate on both fronts. At least today we still have some degree of competition in the market. To Intel's credit, however, it's just as unlikely that AMD will surpass Intel in CPU performance this next round with Trinity. Both sides are getting more competitive, but it still boils down to what matters more to you: GPU or CPU performance. Similarly, there's also the question of which one (CPU or GPU) approaches "good enough" first. I suspect the answer to this is going to continue to vary wildly depending on the end user.

The power savings from 22nm are pretty good on the desktop. Under heavy CPU load we measured a ~30W decrease in total system power consumption compared to a similar Sandy Bridge part. If this is an indication of what we can expect from notebooks based on Ivy Bridge, I'd say you shouldn't expect significant gains in battery life under light workloads, but you may see improvement in worst-case battery life. For example, in our Mac battery life suite we pegged the Sandy Bridge MacBook Pro at around 2.5 hours in our heavy multitasking scenario. That's the number I'd expect to see improve with Ivy Bridge. We only had a short amount of time with the system and couldn't validate Intel's claims of significant gains in GPU power efficiency, but we'll hopefully be able to do that closer to launch.

There's still more to learn about Ivy Bridge, including how it performs as a notebook chip. If the results today are any indication, it should be a good showing all around. Lower power consumption and better performance at the same price as last year's parts - it's the Moore's Law way. There's not enough of an improvement to make existing SNB owners want to upgrade, but if you're still clinging to an old Core 2 (or earlier) system, Ivy will be a great step forward.


  • tipoo - Wednesday, March 07, 2012 - link

    Thankfully the comments of a certain troll were removed, so mine no longer makes sense for any future readers.
  • Articuno - Tuesday, March 06, 2012 - link

    Just like how overclocking a Pentium 4 resulted in it beating an Athlon 64 and had lower power consumption to boot-- oh wait.
  • SteelCity1981 - Tuesday, March 06, 2012 - link

    That's a stupid comment only a stupid fanboy would make. AMD is way ahead of Intel in the graphics department and is very competitive with Intel in the mobile segment now.
  • tipoo - Tuesday, March 06, 2012 - link

    Your comments would do nothing to inform regular readers of sites like this; we already know more. So please, can it.
  • tipoo - Tuesday, March 06, 2012 - link

    Not what I asked, little troll. Give a source that says Apple will get a special HD4000 like no other.
  • Operandi - Tuesday, March 06, 2012 - link

    What are you talking about? As long as AMD has a better iGPU there is plenty of reason for them to be a viable choice today. And if gaming iGPU performance holds up against Intel, there is more than just hope of them getting back in the game in terms of high performance compute tomorrow.
  • tipoo - Tuesday, March 06, 2012 - link

    I'm pretty sure even 16x AF has a sub-2% performance hit on even the lowest end of today's GPUs; is it different with the HD Graphics? If not, why not just enable it like most people would? Even on something like a 4670 I max out AF without thinking twice about it; AA still hurts performance though.
  • IntelUser2000 - Tuesday, March 06, 2012 - link

    AF has a greater performance impact on low end GPUs, typically about 10-15%. It's less on the HD Graphics 3000 only because its 16x AF really only works at much lower levels. It's akin to having the option for a 1280x1024 resolution but performing like 1024x768, because it effectively looks like the latter.

    If Ivy Bridge improved AF quality to be on par with AMD/Nvidia, the performance loss should be similar as well.
  • tipoo - Wednesday, March 07, 2012 - link

    Hmm, I did not know that. What component of the GPU is involved in that performance hit (shaders, ROPs, etc.)? My card is fairly low end and 16x AF performs nearly no differently than 0x.
  • Exophase - Wednesday, March 07, 2012 - link

    AF requires more samples in cases of high anisotropy, so I'd guess the TMU load increases, which may also increase bandwidth requirements since it could force higher LOD in these cases. You'll only see a performance difference if the AF causes the scene to be TMU/bandwidth limited instead of, say, ALU limited. I'd expect this to happen more as you move up in performance, not down, since the ALU:TEX ratio tends to go up toward the higher end, but APUs can be more bandwidth sensitive and I think Intel's IGPs never had a lot of TMUs.

    Of course it's also very scene dependent. And maybe an inferior AF implementation could end up sampling more than a better one.
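The sampling cost Exophase describes can be made concrete with a back-of-envelope sketch. This is an illustration, not a model of any specific GPU: it assumes each anisotropic probe is a trilinear probe costing 8 bilinear taps, and that hardware clamps the probe count at the driver's max AF setting. Both numbers are simplifying assumptions for the sake of the arithmetic.

```python
# Back-of-envelope: worst-case texture taps per fragment under anisotropic
# filtering. Assumes each probe is trilinear (2 mip levels x 4 bilinear
# taps = 8) and that the probe count is clamped at the selected max AF.
TAPS_PER_TRILINEAR_PROBE = 8


def af_taps(surface_anisotropy: float, max_af: int) -> int:
    """Worst-case taps for a fragment whose footprint has the given
    anisotropy ratio, with the driver set to max_af (1 = trilinear)."""
    probes = min(max(1, round(surface_anisotropy)), max_af)
    return probes * TAPS_PER_TRILINEAR_PROBE


# A head-on surface (ratio ~1) costs the same at any AF setting, which is
# why 16x AF can be nearly free in scenes with little anisotropy, while a
# grazing-angle surface at 16x costs up to 16x the taps of plain trilinear.
for setting in (1, 4, 16):
    print(setting, af_taps(1.0, setting), af_taps(16.0, setting))
```

Under these assumptions the extra taps only cost frame time when the texture units or memory bandwidth are the bottleneck, which matches the point above about TMU/bandwidth-limited scenes.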
