Metro: Last Light

Metro: Last Light is the latest entry in the Metro series of post-apocalyptic shooters from developer 4A Games. Like its predecessor, Last Light sets a high bar for visual quality, and at its highest settings an equally high bar for system requirements thanks to its advanced lighting system. It scales down well enough that iGPUs aren't ruled out, but getting a playable frame rate on them means running at fairly low resolutions.

Metro: Last Light

Metro is a pretty heavy game to begin with, but Iris Pro starts off with an extremely good showing here. In its 55W configuration, Iris Pro is only 5% slower than the GeForce GT 650M; at 47W, however, the gap widens to 11%. At 1366 x 768 the difference appears to be less about memory bandwidth and more about the efficiency of the graphics hardware itself.

The comparison to mobile Trinity is a walk in the park for Iris Pro. Even a 100W desktop Trinity part is appreciably slower here.

Metro: Last Light

Increasing the resolution and quality settings changes things quite a bit. The 650M pulls ahead, and the Iris Pro 5200 now essentially matches the GT 640. Intel claims a very high hit rate in the L4 cache, but it may be that 50GB/s simply isn't enough bandwidth between the GPU and Crystalwell. Against all other processor graphics solutions, regardless of TDP, Iris Pro still comes out well ahead: the i7-4950HQ holds a 50% advantage over the desktop i7-4770K and is almost twice as fast as the i7-3770K.

Comparing mobile to mobile, Iris Pro delivers over 2x the frame rate of Trinity.

177 Comments


  • kyuu - Saturday, June 1, 2013 - link

    It's probably a habit that comes from evading censorship.
  • maba - Saturday, June 1, 2013 - link

    To be fair, there is only one data point (GFXBenchmark 2.7 T-Rex HD - 4X MSAA) where the 47W cTDP configuration is more than 40% slower than the tested GT 650M (rMBP15 90W).
    Actually we have the following [min, max, avg, median] for 47W (55W):
    games: 61%, 106%, 78%, 75% (62%, 112%, 82%, 76%)
    synth.: 55%, 122%, 95%, 94% (59%, 131%, 102%, 100%)
    compute: 85%, 514%, 205%, 153% (86%, 522%, 210%, 159%)
    overall: 55%, 514%, 101%, 85% (59%, 522%, 106%, 92%)
    So typically around 75% for games with a considerably lower TDP - not that bad.
    I do not know whether Intel claimed equal or better performance given a specific TDP or not. With the given 47W (55W) compared to a 650M it would indeed be a false claim.
    But my point is that with at least ~60% performance, and typically ~75%, it is admittedly much closer than you stated.
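
    The summary statistics in this comment (min, max, average, median of Iris Pro performance as a percentage of the GT 650M) can be reproduced with a short script. The frame rates below are hypothetical placeholders for illustration, not AnandTech's actual benchmark data:

    ```python
    # Sketch of deriving relative-performance summary stats.
    # NOTE: the frame rates here are made-up placeholder values,
    # not the numbers from the review.
    from statistics import mean, median

    # Hypothetical per-benchmark frame rates (same ordering in both lists).
    iris_pro_47w = [30.0, 45.0, 52.0, 61.0]   # placeholder values
    gt_650m      = [40.0, 42.5, 60.0, 80.0]   # placeholder values

    # Iris Pro performance as a percentage of the GT 650M, per benchmark.
    relative = [100.0 * a / b for a, b in zip(iris_pro_47w, gt_650m)]

    summary = {
        "min": min(relative),
        "max": max(relative),
        "avg": mean(relative),
        "median": median(relative),
    }
    print(summary)
    ```

    The same four-number summary would be computed separately for the games, synthetic, and compute groups to get the rows quoted above.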
  • whyso - Saturday, June 1, 2013 - link

    Note your average 650m is clocked lower than the 650m reviewed here.
  • lmcd - Saturday, June 1, 2013 - link

    If I recall correctly, the rMBP 650m was clocked as high as or slightly higher than the 660m (which was really confusing at the time).
  • JarredWalton - Sunday, June 2, 2013 - link

    Correct. GT 650M by default is usually 835MHz + Boost, with 4GHz RAM. The GTX 660M is 875MHz + Boost with 4GHz RAM. So the rMBP15 is a best-case for GT 650M. However, it's not usually a ton faster than the regular GT 650M -- benchmarks for the UX51VZ are available here:
    http://www.anandtech.com/bench/Product/814
  • tipoo - Sunday, June 2, 2013 - link

    I think any extra power just went to the rMBP scaling operations.
  • DickGumshoe - Sunday, June 2, 2013 - link

    Do you know if the scaling algorithms are handled by the CPU or the GPU on the rMBP?

    The big thing I am wondering is: if Apple releases a higher-end model with the MQ CPUs, would the HD 4600 be enough to eliminate the UI lag currently present on the rMBP's HD 4000?

    If it's done on the GPU, then having the HQ CPUs might actually get *better* UI performance than the MQ CPUs for the rMBP.
  • lmcd - Sunday, June 2, 2013 - link

    No, because these benchmarks would change the default resolution, which, as I understand it, is something the panel would compensate for?

    Wait, aren't these typically done while the laptop screen is off and an external display is used?
  • whyso - Sunday, June 2, 2013 - link

    You got this wrong. 650m is 735/1000 + boost to 850/1000. 660m is 835/1250 boost to 950/1250.
  • jasonelmore - Sunday, June 2, 2013 - link

    The worst mistake Intel made was that demo with DiRT side by side with a 650M laptop. That set people's expectations, and now that it falls short in the reviews, people are dogging it. If they had just kept quiet, people would be praising them up and down right now.
