The Prelude

As Intel got into the chipset business it quickly found itself faced with an interesting problem. As the number of supported IO interfaces increased (back then we were talking about things like AGP and the FSB), the size of the North Bridge die had to grow to accommodate all of the external facing IO. Eventually Intel ended up in a situation where IO dictated a minimum die area for the chipset, but the actual controllers driving that IO didn't need all of that area. Intel effectively had some free space on its North Bridge die to do whatever it wanted with. In the late 90s Micron saw the same problem and contemplated throwing some L3 cache onto its North Bridges. Intel's solution was to give graphics away for free.

The budget for Intel graphics was always whatever free space remained once all of the other necessary controllers in the North Bridge were accounted for. As a result, Intel's integrated graphics was never particularly good. Intel didn't care about graphics; it just had some free space on a necessary piece of silicon and decided to do something with it. High-performance GPUs need lots of transistors, something Intel would never give its graphics architects - they only got the bare minimum. It also didn't make sense to focus on things like driver optimization and image quality. Investing in people and infrastructure to support something you're giving away for free never made a lot of sense.

Intel hired some very passionate graphics engineers, who always petitioned Intel management to give them more die area to work with, but the answer always came back no. Intel was a pure-blooded CPU company, and the GPU industry wasn't interesting enough at the time. Intel's GPU leadership needed a break.

A few years ago they got that break. Once again, it had to do with IO demands on chipset die area. Intel's chipsets were always built on an n-1 or n-2 process: if Intel was building a 45nm CPU, the chipset would be built on 65nm or 90nm. This waterfall effect let Intel get more mileage out of its older fabs, which made the accountants at Intel quite happy, as those $2 - $3B buildings are painfully useless once obsolete. As the PC industry grew, so did shipments of Intel chipsets. Each Intel CPU sold needed at least one other Intel chip built on a previous-generation node. Interface widths as well as the number of IOs required on chipsets continued to increase, driving chipset die areas up once again. This time, however, the problem wasn't as easy to deal with as giving the graphics guys more die area to work with. Looking at demand for Intel chipsets and the increasing die area, it became clear that one of two things had to happen: Intel would either have to build more fabs on older process nodes to keep up with demand, or it would have to integrate parts of the chipset into the CPU.

Not wanting to invest in older fab technology, Intel management green-lit the second option: to move the Graphics and Memory Controller Hub onto the CPU die. All that would remain off-die would be a lightweight IO controller for things like SATA and USB. PCIe, the memory controller, and graphics would all move onto the CPU package, and then eventually share the same die with the CPU cores.

Pure economics and an unwillingness to invest in older fabs made the GPU a first-class citizen in Intel silicon terms, but Intel management still didn't have the motivation to dedicate more die area to the GPU. That encouragement would come externally, from Apple.

Looking at the past few years of Apple products, you'll recognize one common thread: Apple as a company values GPU performance. While Apple was a small customer of Intel's, its GPU desires didn't really matter, but as Apple grew, so did its influence within Intel. With every microprocessor generation, Intel talks to its major customers and uses their input to help shape the designs. There's no sense in building silicon that no one wants to buy, so Intel engages its customers and rolls their feedback into silicon. Apple eventually got to the point where it was buying enough high-margin Intel silicon to influence Intel's roadmap. That's how we got Intel's HD 3000. And that's how we got here.

Comments

  • DanaGoyette - Saturday, June 1, 2013

    Any idea if this IGP supports 30-bit color and/or 120Hz displays?
    Currently, laptops like the HP EliteBook 8770w and Dell Precision M6700 haven't been able to use Optimus if you opt for such displays. It would be nice to see that question addressed...
  • DickGumshoe - Saturday, June 1, 2013

    I have been planning on getting a Haswell rMBP 15". I was holding out for Haswell mainly due to the increased iGPU performance. My primary issue with the current Ivy Bridge rMBP is the lagginess in much of the UI, especially when there are multiple open windows.

    However, I'm a bit concerned about how the Haswell CPUs will compare with the Ivy Bridge CPUs that Apple is currently shipping in the rMBP. The Haswell equivalents of the current rMBP Ivy Bridge CPUs do not have Iris Pro; they only have the "slightly improved" HD 4600.

    Obviously, we still need to wait until WWDC, but based on the released Haswell info, will Haswell only be a slight bump in performance for the 15" rMBP? If so, that is *very* disappointing news.
  • hfm - Saturday, June 1, 2013

    This is a huge win for Intel, definitely performance on par with a 650M. It's just as playable in nearly all of those games at 1366x768. Even though the 650M pulls away at 1600x900, I wouldn't call either GPU playable in most of those games at that resolution.

    If you look at it intelligently, this is a huge win by Intel. The 750M may save them, but if I were in the market for an Ultrabook to complement my gaming notebook, I would definitely go with Iris Pro. Hell, even if I didn't have a dedicated gaming notebook I would probably get Iris Pro in my Ultrabook just for the power savings; it's not that much slower at playable resolutions.
  • IntelUser2000 - Tuesday, June 4, 2013

    Iris Pro 5200 with eDRAM is only for the quad-core standard notebook parts. The highest available for Ultrabooks is the 28W version, the regular Iris 5100. Preliminary results show the Iris 5100 to be roughly on par with the desktop HD 4600.
  • smilingcrow - Saturday, June 1, 2013

    For those commenting about pricing: Intel has only released data for the high-end Iris Pro enabled SKUs at this point, and cheaper ones are due later.
    The high-end chips are generally best avoided due to being poor value, so stay tuned.
  • whyso - Saturday, June 1, 2013

    Yes, the rMBP is clearly using 90 watts on an 85 watt power adapter for the WHOLE SYSTEM!
  • gxtoast - Sunday, June 2, 2013

    Question for Anand:

    I'm looking at getting a Haswell 15" Ultrabook with 16GB RAM and plenty of SSD to run some fairly sophisticated Cisco, Microsoft and VMware cloud labs.

    Is it likely that the Crystalwell cache could offset the lower performance specifications of the 4950HQ and make it as competitive as, or more competitive than, the 4900MQ in this scenario?

    It would also be good to understand what performance improvement the HQ part might have over the 4900MQ in non-gaming video tasks on an FHD panel. If the advantage isn't there, then unless Crystalwell makes a big difference, the 4900MQ is likely the one to get.

    Cheers
  • piesquared - Sunday, June 2, 2013

    Question: why in the Kabini reviews did we get the standard "just wait til intel releases their next gen parts to see the real competition OMGBBSAUCE!!" marketing spiel, with no mention that Haswell's competition is Kaveri?
  • IntelUser2000 - Sunday, June 2, 2013

    Uhh, because the Haswell launch was less than a month away from Kabini's, while Kaveri is 6+ months away from Haswell?

    AMD paper-launched Kabini and Richland in March, and products are coming now. Kaveri is claimed for late Q4 on desktop and early Q1 next year for mobile. If they do the same thing, that means Feb-March for desktop Kaveri and April/May for mobile. Yeah... perhaps you should think about that.
  • JarredWalton - Sunday, June 2, 2013

    The Kabini article never said, "just wait and see what Intel has coming!" so much as it said, "We need to see the actual notebooks to see how this plays out, and with Intel's Celeron and Pentium ULV parts already at Kabini's expected price point, it's a tough row to hoe." Kabini is great as an ARM or Atom competitor; it's not quite so awesome compared to Core i3, unless the OEMs pass the price savings along in some meaningful way. I'd take Kabini with a better display over Core i3 ULV, but I'll be shocked if we actually see a major OEM do Kabini with a quality 1080p panel for under $500.
