Battlefield 3

Our multiplayer action game benchmark of choice is Battlefield 3, DICE's 2011 multiplayer military shooter. Time and driver updates have dulled its ability to challenge high-end GPUs, but it remains a demanding test for more entry-level parts such as the iGPUs found in Intel's and AMD's latest processors. Our goal here is to crack 60fps in our benchmark: our rule of thumb, based on experience, is that multiplayer framerates in intense firefights bottom out at roughly half the benchmark average, so a merely medium-high average here is not necessarily good enough.
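
To put that rule of thumb in concrete terms, here is a minimal sketch; the benchmark averages and the 30fps floor below are illustrative assumptions, not figures from our testing:

    # Rule of thumb: intense multiplayer firefights bottom out at roughly
    # half the benchmark average, so a 60fps average implies ~30fps lows.
    PLAYABLE_FLOOR_FPS = 30.0  # assumed playability floor, pick your own

    def estimated_firefight_minimum(benchmark_avg_fps):
        """Estimate worst-case multiplayer fps from a benchmark average."""
        return benchmark_avg_fps / 2.0

    for avg in (45.0, 60.0, 75.0):  # hypothetical benchmark averages
        low = estimated_firefight_minimum(avg)
        verdict = "playable" if low >= PLAYABLE_FLOOR_FPS else "too low"
        print("avg %5.1f fps -> est. low %4.1f fps (%s)" % (avg, low, verdict))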

[Battlefield 3 performance chart]

The move to 55W brings Iris Pro much closer to the GT 650M, with NVIDIA's advantage falling to less than 10%. At 47W, Iris Pro isn't able to remain at max turbo for as long; the bump to a 55W cTDP alone is responsible for nearly a 15% increase in performance here.
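
As a rough illustration of why the higher power ceiling matters, consider a toy turbo-residency model; this is not Intel's actual turbo algorithm, and every number in it is made up:

    # Toy model: the GPU only sustains max turbo while the power budget
    # allows, so a higher TDP buys a larger fraction of time at turbo.
    def average_clock_mhz(tdp_w, turbo_draw_w=60.0, base_mhz=800.0,
                          turbo_mhz=1300.0):
        """Average clock when turbo residency is capped by TDP/turbo draw."""
        residency = min(1.0, tdp_w / turbo_draw_w)  # fraction of time at turbo
        return residency * turbo_mhz + (1.0 - residency) * base_mhz

    for tdp in (47, 55):
        print("%dW cTDP -> average clock ~%.0fMHz" % (tdp, average_clock_mhz(tdp)))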

Iris Pro continues to put all other integrated graphics solutions to shame. The 55W 5200 is over 2x the speed of the desktop HD 4000, and the same goes for mobile Trinity. There's even a healthy gap between it and desktop Trinity/Haswell.

[Battlefield 3 performance chart - higher resolution and quality settings]

Ramp up the resolution and quality settings and Iris Pro once again looks far less like a discrete GPU, with NVIDIA holding over a 50% advantage here. Once again, I don't believe this is memory bandwidth related; Crystalwell appears to be doing its job. Instead, it looks like a fundamental GPU architecture issue.

[Battlefield 3 performance chart - higher resolution still]

The gap narrows slightly with the increase in resolution, perhaps indicating that as the bottleneck shifts toward memory bandwidth, Crystalwell is able to win back some ground. Overall, though, NVIDIA's architecture holds an appreciable advantage here.

The iGPU comparison continues to be an across-the-board win for Intel. It's amazing what can happen when you actually dedicate transistors to graphics.


177 Comments


  • kyuu - Saturday, June 1, 2013 - link

    It's probably a habit that comes from evading censoring.
  • maba - Saturday, June 1, 2013 - link

    To be fair, there is only one data point (GFXBenchmark 2.7 T-Rex HD - 4X MSAA) where the 47W cTDP configuration is more than 40% slower than the tested GT 650M (rMBP15 90W).
    Actually we have the following [min, max, avg, median] for 47W (55W):
    games: 61%, 106%, 78%, 75% (62%, 112%, 82%, 76%)
    synth.: 55%, 122%, 95%, 94% (59%, 131%, 102%, 100%)
    compute: 85%, 514%, 205%, 153% (86%, 522%, 210%, 159%)
    overall: 55%, 514%, 101%, 85% (59%, 522%, 106%, 92%)
    So typically around 75% for games at a considerably lower TDP - not that bad.
    I do not know whether Intel claimed equal or better performance at a given TDP. If the claim was for 47W (55W) against a 650M, it would indeed be false.
    But my point is that, with at least ~60% of the performance and typically ~75%, it is admittedly much closer than you stated (see the sketch after the comments).
  • whyso - Saturday, June 1, 2013 - link

    Note that your average 650M is clocked lower than the 650M reviewed here.
  • lmcd - Saturday, June 1, 2013 - link

    If I recall correctly, the rMBP 650m was clocked as high as or slightly higher than the 660m (which was really confusing at the time).
  • JarredWalton - Sunday, June 2, 2013 - link

    Correct. GT 650M by default is usually 835MHz + Boost, with 4GHz RAM. The GTX 660M is 875MHz + Boost with 4GHz RAM. So the rMBP15 is a best-case for GT 650M. However, it's not usually a ton faster than the regular GT 650M -- benchmarks for the UX51VZ are available here:
    http://www.anandtech.com/bench/Product/814
  • tipoo - Sunday, June 2, 2013 - link

    I think any extra power just went to the rMBP's scaling operations.
  • DickGumshoe - Sunday, June 2, 2013 - link

    Do you know if the scaling algorithms are handled by the CPU or the GPU on the rMBP?

    The big thing I am wondering is: if Apple releases a higher-end model with the MQ CPUs, would the HD 4600 be enough to eliminate the UI lag currently present on the rMBP's HD 4000?

    If it's done on the GPU, then having the HQ CPUs might actually get *better* UI performance than the MQ CPUs for the rMBP.
  • lmcd - Sunday, June 2, 2013 - link

    No, because these benchmarks would change the default resolution, which as I understand it is something the panel would compensate for?

    Wait, aren't these typically done while the laptop screen is off and an external display is used?
  • whyso - Sunday, June 2, 2013 - link

    You got this wrong. The 650M is 735/1000 + boost to 850/1000; the 660M is 835/1250 + boost to 950/1250 (core/memory, MHz).
  • jasonelmore - Sunday, June 2, 2013 - link

    The worst mistake Intel made was that side-by-side demo of DiRT against a GT 650M laptop. That set people's expectations, so when Iris Pro falls short in the reviews, people dog it. If they had just kept quiet, people would be praising them up and down right now.
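
For reference, the summary figures maba quotes earlier in the thread are simply the min/max/average/median over per-benchmark performance ratios (the Iris Pro result divided by the GT 650M result). Here is a minimal sketch of that ratio math, with hypothetical ratios standing in for the real data:

    # Hypothetical Iris Pro 47W / GT 650M performance ratios; these are
    # placeholder values, not the review's actual results.
    from statistics import mean, median

    ratios = {"game A": 0.61, "game B": 0.75, "game C": 1.06}
    values = list(ratios.values())
    print("min %.0f%%, max %.0f%%, avg %.0f%%, median %.0f%%" % (
        min(values) * 100, max(values) * 100,
        mean(values) * 100, median(values) * 100))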
