Final Words

Ivy Bridge will bring about higher clock speeds thanks to its 22nm process, but the gains will likely be modest; Intel hasn't been too keen on pursuing clock speed for quite some time now. Clock-for-clock performance will go up by a small amount over Sandy Bridge (4 - 6%); combine that with slightly higher clock speeds and we may see CPU performance gains of around 10% at the same price point with Ivy Bridge. The bigger news will be around power consumption and graphics performance.
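As a rough sanity check of how those two figures compound (taking 5% IPC and 4.5% frequency as illustrative midpoints, not numbers from Intel): $(1 + 0.05) \times (1 + 0.045) \approx 1.097$, or just under the 10% overall gain estimated above.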

Ivy Bridge will be Intel's flagship 22nm CPU for some time. The chip was originally due out at the end of this year but likely slipped due to issues with both the fab process and the chip itself. The move to 22nm is a significant leap. Not only are these new transistors aggressively small, but the introduction of Intel's tri-gate technology is a major departure from previous designs. Should the fab engineers at Intel do their job well, Ivy Bridge could deliver much better power characteristics than Sandy Bridge. As we've already seen, introducing a 35W quad-core part could enable Apple (and other OEMs) to ship a quad-core IVB in a 13-inch system.

Ivy Bridge's GPU performance is particularly intriguing. With a 33% increase in execution hardware and a near doubling of performance per EU, it's clear that Intel is finally taking GPU performance seriously. If Intel can hit its clock and performance targets, Ivy Bridge could deliver GPU performance on par with AMD's Llano. By the time Ivy Bridge arrives, however, AMD will have already taken another step forward with Trinity. The question is who will address its performance deficit quicker: will AMD improve x86 performance faster than Intel can improve GPU performance? Does it even matter if both companies end up at the same point down the road? Short of 3D gaming workloads, I believe that x86 CPU performance is what sells CPUs today. Intel's embrace of OpenCL, however, and AMD's efforts in that space imply that this is finally changing.
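The OpenCL point is worth making concrete. Below is a minimal sketch (illustrative, not from the article) of the kind of vendor-neutral compute code both companies now support: it enumerates every OpenCL platform and device on a machine, and the same source runs unchanged against Intel's or AMD's runtime. It assumes an OpenCL SDK providing CL/cl.h and linking with -lOpenCL.

```c
/* Minimal sketch: enumerate OpenCL platforms and devices.
 * The same binary works with any vendor's OpenCL runtime.
 * Build (assumed): gcc clquery.c -lOpenCL -o clquery
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;

    /* Ask the runtime for up to 8 installed platforms (Intel, AMD, ...). */
    if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
        fprintf(stderr, "No OpenCL platforms found.\n");
        return 1;
    }

    for (cl_uint p = 0; p < nplat; p++) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        printf("Platform: %s\n", name);

        cl_device_id devices[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           8, devices, &ndev) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < ndev; d++) {
            char dname[256];
            cl_uint cu = 0;
            /* Device name and compute-unit count (EUs map onto these). */
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(dname), dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(cu), &cu, NULL);
            printf("  Device: %s (%u compute units)\n", dname, cu);
        }
    }
    return 0;
}
```

Code written against this API, rather than against x86 or a particular GPU, is exactly what lets workloads land on whichever vendor delivers the better silicon.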

Sandy Bridge brought about a significant increase in CPU performance, but Ivy seems almost entirely dedicated to addressing Intel's aspirations in graphics. With two architectures in a row focused on improving GPU performance, I do wonder if we might see this trend continue with Haswell. Intel implied that upward scalability was a key goal of the Ivy Bridge GPU design; perhaps we will see that happen in 2013.

Ivy Bridge should do very well in notebooks. A more efficient chip built on lower-power transistors should positively impact battery life and thermal output. Desktop users who already upgraded to Sandy Bridge may not feel much pressure to upgrade again, but better graphics shipping on all new systems can only be good for the industry.

Comments

  • NeBlackCat - Wednesday, September 21, 2011 - link

    The GPU part would be streets ahead, the drivers would be good, Tegra 3 (4..5...) on the 22nm trigate process is an absolutely mouth-watering proposition, and who knows what else could have been accomplished with the engineering effort saved on Intel GPUs and the (so far) fruitless efforts to push x86 into smart consumer devices.

    On the downside, there'd be no AMD.
  • mrpatel - Wednesday, September 21, 2011 - link

    The iMac 2011 27" model ships with the Z68 chipset.

    So the question is whether it would support Ivy Bridge CPUs (given that all the other requirements, like TDP, match up).

    I wonder if Ivy Bridge CPUs would require a full EFI or kernel module upgrade to be supported? (I really don't care whether the USB 3.0 works, but I do care about the new design, GPU performance, and the lower power-to-performance ratio compared to Sandy Bridge!)
  • caggregate - Friday, September 23, 2011 - link

    Given that this is a current/future platform, what's the big deal about support for DDR3L (a standard ratified in July 2010)? I realize the DDR3U ("ultra low voltage," 1.25V) spec isn't final yet, but you'd think it would be implemented given that DDR3U has been available to engineers (according to Hynix/Google) since June 2010.
  • fb39ca4 - Sunday, September 25, 2011 - link

    No OpenGL 4 support? Seriously?
  • OCguy - Tuesday, September 27, 2011 - link

    Are they even trying anymore?
  • Olbi - Tuesday, October 18, 2011 - link

    I wonder why Intel added DX11 but not OpenGL 4? Both are needed by app developers, and DX11 isn't needed by almost any app. OpenGL 4 is needed by Linux desktops like KDE 4, GNOME, Xfce and others. So why does Intel still not support it?
  • tkafafi - Tuesday, March 20, 2012 - link

    Why do the new Intel chipsets (series 7) still contain so many (10) USB 2.0 ports? Would any PC/laptop manufacturer choose to use a USB 2.0 port instead of an available USB 3.0 port from the chipset? For example, would they use 2 USB 2.0 + 2 USB 3.0 instead of 4 USB 3.0 ports from the chipset?

    I know PC manufacturers are using this configuration (2 USB 2.0 + 2 USB 3.0) because they currently have to support USB 3.0 through an external controller, so they save cost by using a two-port controller. But once series 7 chipsets arrive with native USB 3.0 support, there would be no cost advantage in doing this. Is it to de-risk any interoperability issues with older USB 2.0 devices (i.e., if for some reason USB 3.0 ports don't work well with some existing USB 2.0 devices)?

    Thanks
