Final Words

For the past few years Intel has been threatening to make discrete GPUs obsolete with its march towards higher performing integrated GPUs. Given what we know about Iris Pro today, I'd say NVIDIA is fairly safe. The highest performing implementation of NVIDIA's GeForce GT 650M remains appreciably quicker than Iris Pro 5200 on average. Intel does catch up in some areas, but that's by no means the norm. NVIDIA's recently announced GT 750M should increase the margin a bit as well. Haswell doesn't pose any imminent threat to NVIDIA's position in traditional gaming notebooks. OpenCL performance is excellent, which is surprising given how little public attention Intel has given to the standard from a GPU perspective.

Where Iris Pro is dangerous is when you take into account form factor and power consumption. The GT 650M is a 45W TDP part; pair that with a 35 - 47W CPU and an OEM either has to accept throttling or design a cooling system that can deal with both. Iris Pro, on the other hand, shares its TDP with the rest of the 47W Haswell part. From speaking with OEMs, Iris Pro seems to offer substantial power savings in light usage (read: non-gaming) scenarios. In our 15-inch MacBook Pro with Retina Display review we found that simply having the discrete GPU enabled could reduce web browsing battery life by ~25%. Presumably that delta would disappear with the use of Iris Pro instead.
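The battery life math is simple to sketch. In the rough model below, every figure (battery capacity, power draw) is an illustrative assumption rather than a measurement from our review; the point is just how a ~25% battery-life penalty falls out of a higher average platform draw:

```python
# Rough model: battery life = battery energy / average platform power draw.
# Battery capacity and power figures below are illustrative assumptions.

def battery_life_hours(battery_wh, avg_power_w):
    return battery_wh / avg_power_w

battery_wh = 95.0    # 15-inch rMBP-class battery capacity (assumed)
igpu_only_w = 13.0   # light-usage draw with only the iGPU active (assumed)

# A ~25% battery-life reduction implies average draw rises by 1/0.75x.
dgpu_enabled_w = igpu_only_w / 0.75

print(round(battery_life_hours(battery_wh, igpu_only_w), 1))    # -> 7.3
print(round(battery_life_hours(battery_wh, dgpu_enabled_w), 1)) # -> 5.5
```

Dropping the discrete GPU entirely, as Iris Pro would allow, reclaims that lost ~1.8 hours in this sketch.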

Lower thermal requirements can also enable smaller cooling solutions, leading to lighter notebooks. While Iris Pro isn't the fastest GPU on the block, it is significantly faster than any other integrated solution and gets within striking distance of the GT 650M in many cases. Combine that with the fact that you get all of this in a thermal package that a mainstream discrete GPU can't fit into, and the decision suddenly becomes much more difficult for an OEM to make.

Without a doubt, gaming focused notebooks will have to stick with discrete GPUs - but what about notebooks like the 15-inch MacBook Pro with Retina Display? I have a dedicated PC for gaming; I use the rMBP for work and just need a GPU that's good enough to drive everything else in OS X. Intel's HD 4000 comes close, and I suspect Iris Pro will completely negate the need for a discrete GPU for non-gaming use in OS X. Iris Pro should also be competent enough to make modern gaming possible on the platform. Just because it's not as fast as a discrete GPU doesn't mean it's not a very good integrated graphics solution. And all of this should come at a much lower power/thermal profile than the current IVB + GT 650M combination.

Intel clearly has some architectural (and perhaps driver) work to do with its Gen7 graphics. It needs more texture hardware per sub-slice to remain competitive with NVIDIA. It's also possible that greater pixel throughput would be useful as well but that's a bit more difficult to say at this point. I would also like to see an increase in bandwidth to Crystalwell. While the 50GB/s bi-directional link is clearly enough in many situations, that's not always the case.
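To put the 50GB/s figure in perspective, a back-of-envelope estimate helps; the resolution, frame rate, and overdraw factor below are all assumptions, not measurements. Raw framebuffer writes turn out to be cheap - it's texture, depth, and compute traffic stacked on top that can saturate the link:

```python
# Back-of-envelope framebuffer bandwidth estimate. All parameters are
# illustrative assumptions, not measured figures from the review.

def framebuffer_gbps(width, height, bytes_per_pixel, fps, overdraw):
    bytes_per_frame = width * height * bytes_per_pixel * overdraw
    return bytes_per_frame * fps / 1e9  # GB/s

# 1080p, 32-bit color, 60 fps, ~4x overdraw / read-modify-write traffic
print(round(framebuffer_gbps(1920, 1080, 4, 60, 4), 1))  # -> 2.0
```

Color writes alone barely dent 50GB/s; once texture fetches, depth traffic, and GPGPU workloads pile on, the link can saturate, which is consistent with the cases where Crystalwell's bandwidth isn't enough.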

Intel did the right thing by making Crystalwell an L4 cache. This is absolutely the right direction for mobile SoCs going forward and I expect Intel will try something similar with its low power smartphone and tablet silicon in the next 18 - 24 months. I'm pleased with the size of the cache and the fact that it caches both CPU and GPU memory. I'm also beyond impressed that Intel committed significant die area to both GPU and eDRAM in its Iris Pro enabled Haswell silicon. The solution isn't perfect, but it is completely unlike Intel to put this much effort towards improving graphics performance - and in my opinion, that's something that should be rewarded. So I'm going to do something I've never actually done before and give Intel an AnandTech Editors' Choice Award for Haswell with Iris Pro 5200 graphics.
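The appeal of an L4 is easy to see in an average memory access time (AMAT) model. The hit rates and latencies below are made-up illustrative numbers, not measured Haswell figures, but they show why a large eDRAM level that absorbs most L3 misses pays off:

```python
# Average memory access time (AMAT) with and without an L4 (eDRAM) level.
# Hit rates and latencies are illustrative assumptions, not Haswell specs.

def amat(levels):
    """levels: list of (hit_rate, latency_ns); last level must have rate 1.0."""
    total, reach = 0.0, 1.0
    for hit_rate, latency_ns in levels:
        total += reach * hit_rate * latency_ns  # fraction served at this level
        reach *= (1.0 - hit_rate)               # fraction that misses onward
    return total

# Without L4: L1 -> L2 -> L3 -> DRAM
no_l4 = amat([(0.90, 1.0), (0.70, 4.0), (0.50, 12.0), (1.0, 80.0)])
# With a 128MB L4 absorbing 80% of L3 misses before DRAM
with_l4 = amat([(0.90, 1.0), (0.70, 4.0), (0.50, 12.0), (0.80, 30.0), (1.0, 80.0)])

print(round(no_l4, 2), round(with_l4, 2))  # -> 2.56 1.96
```

Even with an 80% L4 hit rate pulled out of thin air, average access time drops by roughly a quarter in this sketch, because far fewer requests pay the full DRAM penalty.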

This is exactly the type of approach to solving problems I expect from a company that owns around a dozen modern microprocessor fabs. Iris Pro is the perfect example of what Intel should be doing across all of the areas it competes in. Throw smart architecture and silicon at the problem and don't come back whining to me about die area and margins. It may not be the fastest GPU on the block, but it's definitely the right thing to do.

I'm giving Intel our lowest award under the new system because the solution needs to be better. Ideally I wouldn't want a regression from GT 650M performance, but in a pinch for a mostly work notebook I'd take lower platform power/better battery life as a trade in a heartbeat. This is absolutely a direction that I want to see Intel continue to explore with future generations. I also feel very strongly that we should have at least one (maybe two) socketed K-series SKUs with Crystalwell on-board for desktop users. It is beyond unacceptable for Intel to not give its most performance hungry users the fastest Haswell configuration possible. Most companies tend to lose sight of their core audience as they pursue new markets, and this is a clear example of Intel doing just that. Desktop users should at least have the option of buying a part with Crystalwell on-board.

So much of Intel's march towards improving graphics has been driven by Apple that I worry about what might happen to Intel's motivation should Apple no longer take such an aggressive position in the market. My hope is that Intel has finally realized the value of GPU performance and will continue to push forward on its own.

  • Elitehacker - Tuesday, September 24, 2013 - link

    Even at a given power budget the 650M isn't at the top of the list for high-end discrete GPUs.... The top at the moment for performance per watt would be the 765M; even the Radeon HD 7750 draws less power and has a tad more performance than the 650M. Clearly someone did not do their research before opening their mouth.

    I'm gonna go out on a limb and say that vFunct is one of those Apple fanboys that knows nothing about performance. You can get a PC laptop in the same size and have better performance than any Macbook available for $500 less. Hell you can even get a Tablet with an i7 and 640M that'll spec out close to the 650M for less than a Macbook Pro with 650M.
  • Eric S - Tuesday, June 25, 2013 - link

    The Iris Pro 5200 would be ideal for both machines. Pro users would benefit from ECC memory for the GPU. The Iris chip uses ECC memory making it ideal for OpenCL workloads in Adobe CS6 or Final Cut X. Discrete mobile chips may produce errors in the OpenCL output. Gamers would probably prefer a discrete chip, but that isn't the target for these machines.
  • Eric S - Monday, July 1, 2013 - link

    I think Apple cares more about the OpenCL performance which is excellent on the Iris. I doubt the 15" will have a discrete GPU. There isn't one fast enough to warrant it over the Iris 5200. If they do ever put a discrete chip back in, I hope they go with ECC GDDR memory. My guess is space savings will be used for more battery. It is also possible they may try to reduce the display bezel.
  • emptythought - Tuesday, October 1, 2013 - link

    It's never had the highest end chip, just the best "upper midrange" one. Above the 8600m GT was the 8800m GTX and GTS, and above the 650m there was the 660, a couple 670 versions, the 675 versions, and the 680.

    They chose the highest performance part that hit a specific TDP, stretching a bit from time to time. It was generally the case that anything which outperformed the MBP was either a thick brick, or had perpetual overheating issues.
  • CyberJ - Sunday, July 27, 2014 - link

    Not even close, but whatever floats your boat.
  • emptythought - Tuesday, October 1, 2013 - link

    It wouldn't surprise me if the 15in just had the "beefed up" Iris Pro, honestly. They might even get their own special version, clocked even higher than the 55W part.

    Mainly, because it wouldn't be without precedent. Remember when the 2009 15in macbook pro had a 9400m still? Or when they dropped the 320m for the hd3000 even though it was slightly slower?

    They sometimes make lateral, or even slightly backwards moves when there are other motives at play.
  • chipped - Sunday, June 2, 2013 - link

    That's just crazy talk; they won't drop dedicated graphics. The difference is still too big, plus you can't sell a $2000 laptop without dedicated GFX.
  • shiznit - Sunday, June 2, 2013 - link

    Considering Apple specifically asked for eDRAM, and since there is no dual-core version yet for the 13", I'd say there is a very good chance.
  • mavere - Sunday, June 2, 2013 - link

    "The difference is still too big"

    The difference in what?

    Something tells me Apple and its core market are more concerned with rendering/compute performance than with Crysis 3 performance...
  • iSayuSay - Wednesday, June 5, 2013 - link

    If it plays Crysis 3 well, it can render/compute/do whatever intensive fine.
