Metro: Last Light

Metro: Last Light is the latest entry in the Metro series of post-apocalyptic shooters from developer 4A Games. Like its predecessor, Last Light sets a high bar for visual quality, and at its highest settings an equally high bar for system requirements thanks to its advanced lighting system. That doesn't preclude it from running on iGPUs, as the game scales down rather well, but it does mean we have to run at fairly low resolutions to get a playable framerate.

[Metro: Last Light benchmark chart – 1366 x 768]

Metro is a demanding game to begin with, but Iris Pro starts off with an extremely good showing here. In its 55W configuration, Iris Pro is only 5% slower than the GeForce GT 650M; at 47W, however, the gap grows to 11%. At 1366 x 768 the difference seems less related to memory bandwidth and more to the efficiency of the graphics hardware itself.

The comparison to mobile Trinity is a walk in the park for Iris Pro. Even a 100W desktop Trinity part is appreciably slower here.

[Metro: Last Light benchmark chart – higher resolution and quality settings]

Increasing the resolution and quality settings changes things quite a bit. The GT 650M pulls ahead, and the Iris Pro 5200 now basically equals the performance of the GT 640. Intel claims a very high hit rate on the L4 cache; however, it could be that 50GB/s simply isn't enough bandwidth between the GPU and Crystalwell. Compared to all other processor graphics solutions, regardless of TDP, Iris Pro still comes out well ahead: the i7-4950HQ holds a 50% advantage over the desktop i7-4770K and is almost 2x the speed of the i7-3770K.
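As a quick sanity check on how the relative-performance figures in this section are derived, the sketch below computes a percentage advantage and a speedup multiple from frame rates. The frame-rate values are hypothetical placeholders chosen for illustration, not the review's measured data:

```python
def pct_advantage(fps_a: float, fps_b: float) -> float:
    """Percentage by which fps_a leads fps_b."""
    return (fps_a / fps_b - 1.0) * 100.0

def speedup(fps_a: float, fps_b: float) -> float:
    """Multiple by which fps_a outpaces fps_b."""
    return fps_a / fps_b

# Hypothetical frame rates for illustration only (not measured values):
iris_pro_fps = 30.0  # e.g. i7-4950HQ (Iris Pro 5200)
hd_fps = 20.0        # e.g. a slower processor graphics part

print(pct_advantage(iris_pro_fps, hd_fps))  # 50.0 -> "a 50% advantage"
print(speedup(iris_pro_fps, hd_fps))        # 1.5x
```

So "a 50% advantage" and "1.5x the speed" describe the same 30 vs. 20 fps gap, just expressed two ways.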

Comparing mobile to mobile, Iris Pro delivers over 2x the frame rate of Trinity.

Comments

  • HisDivineOrder - Saturday, June 1, 2013 - link

    I see Razer making an Edge tablet with an Iris-based chip. In fact, it seems built for that idea more than anything else. That or a NUC HTPC run at 720p with no AA ever. You've got superior performance to any console out there right now and it's in a size smaller than an AppleTV.

    So yeah, the next Razer Edge should include this as an optional way to lower the cost of the whole system. I also think the next Surface Pro should use this. So high end x86-based laptops with Windows 8 Pro.

    And NUC/BRIX systems that are so small they don't have room for discrete GPUs.

    I imagine some thinner than makes sense ultrathins could also use this to great effect.

    All that said, most systems people can afford and will use on a regular basis won't have this chip. I think that's sad, but that's the way it will be until Intel stops treating Iris as a bonus for high-end users and instead tries to put discrete GPUs out of business by putting these on every chip they make, so people start seeing it CAN do a decent job on its own within its specific limitations.

    Right now, no one's going to see that, except those few fringe cases. Strictly speaking, while it might not have matched the 650m (or its successor), it did a decent job with the 640m and that's a lot better than any other IGP by Intel.
  • Spunjji - Tuesday, June 4, 2013 - link

    You confused me here on these points:

    1) The NUC uses a 17W TDP chip and overheats. We're not going to have Iris in that form factor yet.
    2) It would increase the cost of the Edge, not lower it. Same TDP problem too.

    Otherwise I agree, this really needs to roll down lower in the food chain to have a serious impact. Hopefully they'll do that with Broadwell, when the die area used by the GPU effectively becomes free thanks to the process switch.
  • whyso - Saturday, June 1, 2013 - link

    So Intel was right. Iris Pro pretty much matches a 650m at playable settings (30 fps+). Note that anandtech is being full of BullS**t here and comparing it to an OVERCLOCKED 650m from Apple. Let's see: when Intel made that 'equal to a 650m' claim it was talking about a standard 650m, not an overclocked 650m running at 900/2500 (GDDR5) vs the normal 835/1000 (GDDR5, with boost at full; no boost = 735 MHz core). If you look at a standard-clocked GDDR3 variant, Iris Pro 5200 and the 650m are pretty similar (depending on the game), within around 10%. New Intel drivers should further narrow the gap (given that Intel is quite good in compute).
  • JarredWalton - Sunday, June 2, 2013 - link

    http://www.anandtech.com/bench/Product/814

    For the games I tested, the rMBP15 isn't that much faster in many titles. Iris isn't quite able to match the GT 650M, but it's pretty close all things considered.
  • Spunjji - Tuesday, June 4, 2013 - link

    I will believe this about new Intel drivers when I see them. I seriously, genuinely hope they surprise me, though.
  • dbcoopernz - Saturday, June 1, 2013 - link

    Are you going to test this system with madVR?
  • Ryan Smith - Sunday, June 2, 2013 - link

    We have Ganesh working to answer that question right now.
  • dbcoopernz - Sunday, June 2, 2013 - link

    Cool. :)
  • JDG1980 - Saturday, June 1, 2013 - link

    I would have liked to see some madVR tests. It seems to me that the particular architecture of this chip - lots of computing power, somewhat less memory bandwidth - would be very well suited to madVR's better processing options. It's been established that difficult features like Jinc scaling (the best quality) are limited by shader performance, not bandwidth.
    The price is far steeper than I would have expected, but once it inevitably drops a bit, I could see mini-ITX boards with this become a viable solution for high-end, passively-cooled HTPCs.
    By the way, did they ever fix the 23.976 fps error that has been there since Clarkdale?
  • dbcoopernz - Saturday, June 1, 2013 - link

    Missing Remote reports that 23.976 timing is much better.

    http://www.missingremote.com/review/intel-core-i7-...
