Final Words

For the past few years Intel has been threatening to make discrete GPUs obsolete with its march toward higher-performing integrated GPUs. Given what we know about Iris Pro today, I'd say NVIDIA is fairly safe. The highest-performing implementation of NVIDIA's GeForce GT 650M remains appreciably quicker than Iris Pro 5200 on average. Intel does catch up in some areas, but that's by no means the norm. NVIDIA's recently announced GT 750M should increase the margin a bit as well. Haswell doesn't pose any imminent threat to NVIDIA's position in traditional gaming notebooks. OpenCL performance is excellent, which is surprising given how little public attention Intel has paid to the standard from a GPU perspective.
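
For anyone who wants to poke at that OpenCL support themselves, a minimal device-enumeration probe is a good place to start. This is a sketch of my own in C, assuming a stock OpenCL 1.x header and ICD loader (link with -lOpenCL); it isn't taken from Intel's documentation:

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "no OpenCL platforms found\n");
        return 1;
    }
    for (cl_uint i = 0; i < num_platforms; i++) {
        char plat_name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(plat_name), plat_name, NULL);

        /* Enumerate GPU devices only; an Iris Pro part should show up here
         * if the vendor's OpenCL GPU driver is installed. */
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 8, devices, &num_devices) != CL_SUCCESS)
            continue;
        for (cl_uint j = 0; j < num_devices; j++) {
            char dev_name[256];
            cl_uint compute_units = 0;
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME, sizeof(dev_name), dev_name, NULL);
            clGetDeviceInfo(devices[j], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(compute_units), &compute_units, NULL);
            printf("%s / %s: %u compute units\n", plat_name, dev_name, compute_units);
        }
    }
    return 0;
}
```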

Where Iris Pro is dangerous is when you take form factor and power consumption into account. The GT 650M is a 45W TDP part; pair that with a 35 - 47W CPU and an OEM either has to accept throttling or design a cooling system that can deal with both. Iris Pro, on the other hand, shares a single 47W TDP with the rest of the Haswell part. From speaking with OEMs, Iris Pro seems to offer substantial power savings in light usage (read: non-gaming) scenarios. In our 15-inch MacBook Pro with Retina Display review we found that simply having the discrete GPU enabled could reduce web browsing battery life by ~25%. Presumably that delta would disappear with the use of Iris Pro instead.
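
Putting rough numbers on that (a back-of-the-envelope sketch; pairing the 45W GPU with the top 47W CPU is my worst-case assumption, not an OEM figure):

```latex
\[
\underbrace{47\,\mathrm{W}}_{\text{quad-core CPU}} + \underbrace{45\,\mathrm{W}}_{\text{GT 650M}}
  = 92\,\mathrm{W}\ \text{of silicon to cool}
\qquad \text{vs.} \qquad
\underbrace{47\,\mathrm{W}}_{\text{Haswell + Iris Pro, shared TDP}}
\]
```

In other words, an Iris Pro design has roughly half the worst-case heat to remove, which is where the smaller cooling solutions discussed next come from.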

Lower thermal requirements can also enable smaller cooling solutions, leading to lighter notebooks. While Iris Pro isn't the fastest GPU on the block, it is significantly faster than any other integrated solution and gets within striking distance of the GT 650M in many cases. Combine that with the fact that you get all of this in a thermal package a mainstream discrete GPU can't fit into, and this suddenly becomes a more difficult decision for an OEM to make.

Without a doubt, gaming-focused notebooks will have to stick with discrete GPUs - but what about notebooks like the 15-inch MacBook Pro with Retina Display? I have a dedicated PC for gaming; I use the rMBP for work and just need a GPU that's good enough to drive everything else in OS X. Intel's HD 4000 comes close, and I suspect Iris Pro will completely negate the need for a discrete GPU for non-gaming use in OS X. Iris Pro should also be competent enough to make modern gaming possible on the platform. Just because it's not as fast as a discrete GPU doesn't mean it's not a very good integrated graphics solution. And all of this should come at a much lower power/thermal profile than the current IVB + GT 650M combination.

Intel clearly has some architectural (and perhaps driver) work to do with its Gen7 graphics. It needs more texture hardware per sub-slice to remain competitive with NVIDIA. Greater pixel throughput might be useful as well, but that's more difficult to say at this point. I would also like to see an increase in bandwidth to Crystalwell. While the 50GB/s bi-directional link is clearly enough in many situations, that's not always the case.
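
A rough bytes-per-FLOP comparison makes the concern concrete. The figures below are commonly cited peak numbers that I'm treating as assumptions rather than measurements:

```latex
% Assumed peaks: Iris Pro 5200 ~832 GFLOPS (40 EUs x 16 FLOPS/clk x 1.3 GHz);
% dual-channel DDR3-1600 ~25.6 GB/s for eDRAM misses.
\[
\frac{50\ \mathrm{GB/s}}{832\ \mathrm{GFLOPS}} \approx 0.06\ \mathrm{B/FLOP}\ \text{(Crystalwell hit)}
\qquad
\frac{25.6\ \mathrm{GB/s}}{832\ \mathrm{GFLOPS}} \approx 0.03\ \mathrm{B/FLOP}\ \text{(miss to DDR3)}
\]
```

A working set that spills past the 128MB of eDRAM roughly halves the effective bytes per FLOP, which would help explain the cases where the link stops being enough.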

Intel did the right thing by making Crystalwell an L4 cache. This is absolutely the right direction for mobile SoCs going forward, and I expect Intel will try something similar with its low power smartphone and tablet silicon in the next 18 - 24 months. I'm pleased with the size of the cache and the fact that it caches both CPU and GPU memory. I'm also beyond impressed that Intel committed significant die area to both the GPU and eDRAM in its Iris Pro enabled Haswell silicon. The solution isn't perfect, but it is completely unlike Intel to put this much effort toward improving graphics performance - and in my opinion, that's something that should be rewarded. So I'm going to do something I've never actually done before and give Intel an AnandTech Editors' Choice Award for Haswell with Iris Pro 5200 graphics.
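
As an aside on what a cache like Crystalwell looks like from the software side, a streaming-read microbenchmark is the classic way to find it. This is a minimal sketch of my own (buffer sizes, pass counts, and the 64-byte line size are illustrative assumptions, not Intel tooling); on an eDRAM-equipped part you'd expect an extra bandwidth plateau for working sets up to ~128MB before falling to DDR3 speeds:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    /* Sweep working-set sizes from 1MB to 512MB; each cache level should
     * appear as a step in the measured bandwidth. */
    for (size_t mb = 1; mb <= 512; mb *= 2) {
        size_t n = mb * 1024 * 1024 / sizeof(long);
        long *buf = malloc(n * sizeof(long));
        if (!buf) return 1;
        for (size_t i = 0; i < n; i++) buf[i] = (long)i;  /* fault pages in */

        volatile long sum = 0;  /* volatile keeps the reads from being optimized out */
        double t0 = now_sec();
        for (int pass = 0; pass < 8; pass++)
            for (size_t i = 0; i < n; i += 8)  /* one read per 64-byte cache line */
                sum += buf[i];
        double dt = now_sec() - t0;

        /* 8 passes x mb MB of cache-line traffic per pass */
        printf("%4zu MB: %6.1f GB/s\n", mb, (8.0 * mb / 1024.0) / dt);
        free(buf);
    }
    return 0;
}
```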

This is exactly the type of approach to solving problems I expect from a company that owns around a dozen modern microprocessor fabs. Iris Pro is the perfect example of what Intel should be doing across all of the areas it competes in. Throw smart architecture and silicon at the problem and don't come back whining to me about die area and margins. It may not be the fastest GPU on the block, but it's definitely the right thing to do.

I'm giving Intel our lowest award under the new system because the solution needs to be better. Ideally I wouldn't want a regression from GT 650M performance, but in a pinch, for a mostly-work notebook, I'd take lower platform power and better battery life as a trade in a heartbeat. This is absolutely a direction I want to see Intel continue to explore with future generations. I also feel very strongly that we should have at least one (maybe two) socketed K-series SKUs with Crystalwell on-board for desktop users. It is beyond unacceptable for Intel not to give its most performance-hungry users the fastest Haswell configuration possible. Most companies tend to lose sight of their core audience as they pursue new markets, and this is a clear example of Intel doing just that. Desktop users should at least have the option of buying a part with Crystalwell on-board.

So much of Intel's march toward improving graphics has been driven by Apple that I worry about what might happen to Intel's motivation should Apple no longer take such an aggressive position in the market. My hope is that Intel has finally realized the value of GPU performance and will continue to push forward on its own.

Comments
  • boe - Monday, June 3, 2013

    As soon as Intel CPUs have video performance that exceeds NVIDIA and AMD flagship video cards I'll get excited. Until then I think of them as something to be disabled on workstations and tolerated on laptops that don't have better GPUs on board.
  • MySchizoBuddy - Monday, June 3, 2013

    So Intel just took the OpenCL crown. Never thought this day would come.
  • prophet001 - Monday, June 3, 2013

    I have no idea whether or not any of this article is factually accurate.

    However, the first page was a treat to read. Very well written.

    :)
  • Teemo2013 - Monday, June 3, 2013

    Great success by Intel.
    The HD 4600 is near the GT 630 and HD 4650 (much better than the 6450, which sells for $15 at Newegg)
    The 5200 is better than the GT 640 and HD 6670 (which currently sells for about $50 at Newegg)
    Intel's integrated graphics used to be worthless compared with discrete cards. It has slowly caught up over the past 3 years, and now the 5200 is beating a $50 card. Can't wait for next year!
    Hopefully this will finally push AMD and NVIDIA to come up with meaningful upgrades to their low-end product lines.
  • Cloakstar - Monday, June 3, 2013

    A quick check for my own sanity:
    Did you configure the A10-5800K with 4 sticks of RAM in bank+channel interleave mode, or did you leave it memory bandwidth starved with 2 sticks or locked in bank interleave mode?

    The numbers look about right for 2 sticks, and if that is the case, it would leave Trinity at about 60% of its actual graphics performance.

    I find it hard to believe that the 5800K delivers about a quarter the performance per watt of the 4950HQ in graphics, even with the massive, server-crushing cache.
  • andrerocha - Monday, June 3, 2013

    Is this new CPU faster than the 4770K? It sure costs more.
  • zodiacfml - Monday, June 3, 2013

    Impressive, but one has to take advantage of the compute/Quick Sync performance to justify the increase in price over the HD 4600.
  • ickibar1234 - Tuesday, June 4, 2013

    Well, my Asus G50VT laptop is officially obsolete! An NVIDIA 512MB GDDR3 9800GS is completely pwned by this integrated GPU, and the CPU is about 50-65% faster clock for clock than the last-generation Core 2 Duo Penryn chips. Sure, my X9100 can overclock stably to 3.5GHz, but this one can get close even with all cores fully taxed.

    Can't wait to see what the Broadwell die shrink brings, maybe a 6-core with Iris or a higher clocked 4-core?

    I also think dual-core versions of mobile Haswell with this integrated GPU would be beneficial. They could go into small 4.5-pound laptops.

    AMD.....WTH are you going to do.
  • zodiacfml - Tuesday, June 4, 2013

    AMD has to create a Crystalwell of their own. I never thought Intel would beat them to it, since AMD's integrated GPUs have always been starved for bandwidth.
  • Spunjji - Tuesday, June 4, 2013

    They also need to find a way past their manufacturing process disadvantage, which may not be possible at all. We're comparing 22nm Apples to 32/28nm Pears here; it's a relevant comparison because those are the realities of the marketplace, but it's worth bearing in mind when comparing architecture efficiencies.
