Final Words

Ivy Bridge will bring higher clock speeds thanks to its 22nm process, but the gains will likely be modest; Intel hasn't been keen on chasing clock speed for quite some time now. Clock-for-clock performance will go up by a small amount over Sandy Bridge (4 - 6%). Combine that with slightly higher clock speeds and we may see CPU performance gains of around 10% at the same price point with Ivy Bridge. The bigger news will be around power consumption and graphics performance.
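As a rough back-of-envelope, and assuming (purely for illustration) a ~5% clock-for-clock improvement paired with a ~4 - 5% frequency bump that Intel hasn't confirmed: 1.05 x 1.045 works out to roughly 1.10, or about a 10% overall gain.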

Ivy Bridge will be Intel's flagship 22nm CPU for some time. The chip was originally due out at the end of this year but was likely delayed by issues with the fab process and the chip itself. The move to 22nm is a significant leap. Not only are these new transistors aggressively small, but the introduction of Intel's tri-gate technology is a major departure from previous designs. Should the fab engineers at Intel do their job well, Ivy Bridge could deliver much better power characteristics than Sandy Bridge. As we've already seen, introducing a 35W quad-core part could enable Apple (and other OEMs) to ship a quad-core IVB in a 13-inch system.

Ivy Bridge's GPU performance is particularly intriguing. With a 33% increase in execution hardware and a near doubling of performance per EU, it's clear that Intel is finally taking GPU performance seriously. If Intel can hit its clock and performance targets, Ivy Bridge could deliver GPU performance on par with AMD's Llano. By the time Ivy Bridge arrives, however, AMD will have already taken another step forward with Trinity. The question is who will address their weaknesses quicker: will AMD improve x86 performance faster than Intel can improve GPU performance? And does it even matter if both companies end up at the same point down the road? Outside of 3D gaming workloads, I believe x86 CPU performance is still what sells CPUs today, but Intel's embrace of OpenCL and AMD's efforts in that space imply things are finally changing in that regard.
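To put those numbers in perspective, and assuming (a big if) that both figures hold and scale linearly: 1.33x the execution hardware multiplied by roughly 2x the per-EU throughput works out to somewhere around 2.5x Sandy Bridge's GPU performance, which is the kind of jump it would take to approach Llano.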

Sandy Bridge brought a significant increase in CPU performance, but Ivy Bridge seems almost entirely dedicated to addressing Intel's aspirations in graphics. With two architectures in a row focused on improving GPU performance, I do wonder if we'll see this trend continue with Haswell. Intel implied that upward scalability was a key goal of the Ivy Bridge GPU design; perhaps we'll see that happen in 2013.

Ivy Bridge should do very well in notebooks. A more efficient chip built on lower power transistors should positively impact battery life and thermal output. Desktop users who have already upgraded to Sandy Bridge may not feel much pressure to upgrade again, but better graphics shipping on all new systems can only be good for the industry.

Comments

  • Arnulf - Sunday, September 18, 2011 - link

    "Voltage changes have a cubic affect on power, so even a small reduction here can have a tangible impact."

    P = V^2/R

    Quadratic relationship rather than cubic?
  • damianrobertjones - Sunday, September 18, 2011 - link

    " As we've already seen, introducing a 35W quad-core part could enable Apple (and other OEMs) to ship a quad-core IVB in a 13-inch system."

    Is Apple the only company that can release a 13" system?
  • medi01 - Monday, September 19, 2011 - link

    No. But it's the only one that absolutely needs to be commented on in orgasmic tones in the US press (and a big chunk of the EU press, too).
  • JonnyDough - Monday, September 19, 2011 - link

    They're the only ones who will market it with a flashy Apple logo light on a pretty aluminum case. Everyone knows that lightweight pretty aluminum cases are a great investment on a system that is outdated after just a few years. I wish Apple would make cars instead of PCs so we could bring the DeLorean back. Something about that stainless steel body just gets me so hot. Sure, it would get horrible gas mileage and be less safe in an accident. But it's just so pretty! Plus, although it would use a standard engine made by Ford or GM under the hood, its drivers would SWEAR that Apple builds its own superior hardware!
  • cldudley - Sunday, September 18, 2011 - link

    Am I the only one who thinks Intel is really wasting a lot of time and money on improvements to their on-die GPU? They keep adding features and improvements to the onboard video, right up to DirectX 11 support, but isn't this really all an exercise in futility?

    Ultimately a GPU integrated with the CPU is going to be bottlenecked by the simple fact that it does not have access to any local memory of its own. Every time it rasterizes a triangle or performs a texture operation, it does so through the same memory bus the CPU is using to fetch instructions, read and write data, etc.

    I read that the GPU is taking a larger proportion of the die space in Ivy Bridge, and all I see is a tragic waste of space that would have been better spent on another core (or pair of cores) or more L1/L2 cache.

    I can see the purpose of integrated graphics in the lowest-end SKUs for budget builds, and there are certainly power and TDP advantages, and things like Quick-Sync are a great idea, but why stuff a GPU in a high-end processor that will be blown away by a comparatively middle-of-the-road discrete GPU?
  • Death666Angel - Sunday, September 18, 2011 - link

    I disagree. AMD has shown that on-die GPUs can already compete with middle-of-the-road discrete graphics in notebooks. Trinity will probably take on middle-of-the-road in the current desktop space.
    Your memory bandwidth argument doesn't seem to be correct, either. Except for some AMD mainboard graphics with dedicated sideport memory, all IGPs use system RAM, and a lot of them are doing fine. It's also nice to finally see higher clocked RAM being taken advantage of (see Llano with DDR3-1600 vs. DDR3-1866). DDR4 will add bandwidth as well.
    Once bandwidth becomes a bottleneck you can address that, but Intel doesn't seem to be there yet, so they keep addressing their other GPU issues. What is wrong with that?
    Also, how many people who buy high-end CPUs end up gaming 90% of the time on them? A lot of people need high-end CPUs for work-related stuff: coding, CAD, etc. Why should they have to buy a discrete graphics card?

    Overall, you are doing a lot of generalization and you don't take into account quite a few things. :-)
  • cldudley - Sunday, September 18, 2011 - link

    Ironically, I spend lots of time in AutoCAD, and a discrete graphics board makes a tremendous difference there. Gamer-grade stuff is usually not the best thing in that arena, though; it needs to be the special "workstation" cards (Quadro or FireGL), which have very different drivers.

    I agree with you on the work usage and gaming workloads not being 90% of the time, but on the other hand, workstations tend to have Xeons in them, with discrete graphics cards.
  • platedslicer - Sunday, September 18, 2011 - link

    Buyers who want power above everything else have become a shrinking fraction of the computer market. Mobility is so important for OEMs now that fitting already-existing performance levels into smaller, cheaper devices matters more than pushing the envelope. I still remember a time when hardly anybody gave a rat's ass about how much power a CPU consumed as long as it didn't melt down. Today, power consumption is a crucial factor due to battery life and heat.

    Personally these developments make me rather sad, partly because I like ever-shinier games, and (more importantly) because seeing the unwashed masses talk about computers as if they were clothing brands makes me want to rip out their throats. That's how the world works, though. Hopefully the chip makers will realize that there's still a market for power over fluff.

    Looking at it on the bright side, CPU power stagnation might make game designers pay more attention to content. Hey, you have to look on the bright side of life.
  • KPOM - Monday, September 19, 2011 - link

    I think that's largely because, for the average consumer, PCs have reached the point where CPU capabilities are no longer the bottleneck. Look at the success of the 2010 MacBook Air, which had a slow C2D but a speedy SSD, and sold well enough to last into mid-2011. Games are the next major hurdle, but that's a GPU problem rather than a CPU problem, hence the bigger focus on graphics in Ivy Bridge (as there was in Sandy Bridge compared to Westmere).

    The emphasis now is on making the power we already have last longer and fit into smaller, more portable devices.
  • JonnyDough - Monday, September 19, 2011 - link

    You're missing the point. They aren't trying to beef up the power of the CPU; CPUs are already quite powerful for most tasks. They are trying to lower energy usage and sell en masse to businesses that use thousands of computers.
