Crysis Warhead

Up next is our legacy title for 2013, Crysis: Warhead. A stand-alone expansion to 2007's Crysis, Warhead is now over four years old and can still beat most systems down. Crysis was intended to be forward-looking as far as performance and visual quality go, and it has clearly achieved that. We've only just reached the point where high-end single-GPU cards can hit 60fps at 1920 with 4xAA, while low-end GPUs are only now hitting 60fps at lower quality settings and resolutions.

[Chart: Crysis: Warhead]

I can't believe it. An Intel integrated solution actually beats out an NVIDIA discrete GPU in a Crysis title. The Iris Pro 5200 does well here, outperforming the GT 650M by 12% in its highest TDP configuration. I couldn't run any of the AMD parts here, as Bulldozer-based parts seem to have a problem with our Crysis benchmark for some reason.

Crysis: Warhead is likely one of the simpler tests in our suite, which helps explain Intel's performance a bit. It's also possible that older titles have been Intel optimization targets for longer.
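For reference, the percentage leads quoted here (and debated in the comments below) are simple ratios of average frame rates against the GT 650M baseline. A minimal sketch of that arithmetic, using hypothetical placeholder fps figures rather than the measured results:

    # Hypothetical average frame rates (fps); placeholders, not the review's measured data.
    results_fps = {
        "Iris Pro 5200 (55W)": 52.0,
        "Iris Pro 5200 (47W)": 48.0,
        "GeForce GT 650M": 46.4,
    }

    baseline = results_fps["GeForce GT 650M"]

    for gpu, fps in results_fps.items():
        lead = (fps / baseline - 1.0) * 100.0  # positive = faster than the GT 650M
        print(f"{gpu}: {fps:.1f} fps ({lead:+.1f}% vs. GT 650M)")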

[Chart: Crysis: Warhead]

Ramping up the resolution erases the gap between the highest-end Iris Pro configuration and the GT 650M.

[Chart: Crysis: Warhead]

Moving to higher quality settings and a higher resolution gives NVIDIA the win once more. The margin of victory isn't huge, but the added special effects definitely stress whatever Intel is lacking within its GPU architecture.

Comments

  • kyuu - Saturday, June 1, 2013 - link

It's probably a habit that comes from evading censorship.
  • maba - Saturday, June 1, 2013 - link

    To be fair, there is only one data point (GFXBenchmark 2.7 T-Rex HD - 4X MSAA) where the 47W cTDP configuration is more than 40% slower than the tested GT 650M (rMBP15 90W).
    Actually we have the following [min, max, avg, median] for 47W (55W):
    games: 61%, 106%, 78%, 75% (62%, 112%, 82%, 76%)
    synth.: 55%, 122%, 95%, 94% (59%, 131%, 102%, 100%)
    compute: 85%, 514%, 205%, 153% (86%, 522%, 210%, 159%)
    overall: 55%, 514%, 101%, 85% (59%, 522%, 106%, 92%)
    So typically around 75% for games with a considerably lower TDP - not that bad.
    I do not know whether Intel claimed equal or better performance given a specific TDP or not. With the given 47W (55W) compared to a 650M it would indeed be a false claim.
    But my point is that, with at least ~60% performance and typically ~75%, it is admittedly much closer than you stated (see the sketch after the comments for how these aggregates can be reproduced).
  • whyso - Saturday, June 1, 2013 - link

    Note your average 650m is clocked lower than the 650m reviewed here.
  • lmcd - Saturday, June 1, 2013 - link

    If I recall correctly, the rMBP 650m was clocked as high as or slightly higher than the 660m (which was really confusing at the time).
  • JarredWalton - Sunday, June 2, 2013 - link

    Correct. GT 650M by default is usually 835MHz + Boost, with 4GHz RAM. The GTX 660M is 875MHz + Boost with 4GHz RAM. So the rMBP15 is a best-case for GT 650M. However, it's not usually a ton faster than the regular GT 650M -- benchmarks for the UX51VZ are available here:
    http://www.anandtech.com/bench/Product/814
  • tipoo - Sunday, June 2, 2013 - link

    I think any extra power just went to the rMBP scaling operations.
  • DickGumshoe - Sunday, June 2, 2013 - link

    Do you know if the scaling algorithms are handled by the CPU or the GPU on the rMBP?

    The big thing I am wondering is: if Apple releases a higher-end model with the MQ CPUs, would the HD 4600 be enough to eliminate the UI lag currently present on the rMBP's HD 4000?

    If it's done on the GPU, then having the HQ CPUs might actually get *better* UI performance than the MQ CPUs for the rMBP.
  • lmcd - Sunday, June 2, 2013 - link

    No, because these benchmarks would change the default resolution, which as I understand it is something the panel would compensate for?

    Wait, aren't these typically done while the laptop screen is off and an external display is used?
  • whyso - Sunday, June 2, 2013 - link

    You got this wrong. The 650M is 735/1000 + boost to 850/1000; the 660M is 835/1250 + boost to 950/1250.
  • jasonelmore - Sunday, June 2, 2013 - link

    The worst mistake Intel made was that demo with DiRT running side by side with a 650M laptop. That set people's expectations, and now that it falls short in the reviews, people are dogging it. If they had just kept quiet, people would be praising them up and down right now.
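On maba's percentage summary above: a minimal sketch of how those per-category aggregates (min, max, average, and median of Iris Pro performance relative to the GT 650M) could be reproduced. The ratios below are hypothetical placeholders standing in for the per-benchmark results, not the article's data:

    from statistics import mean, median

    # Hypothetical relative-performance ratios (Iris Pro 5200 47W result / GT 650M result),
    # grouped the same way as in the comment; not the actual review data.
    relative = {
        "games": [0.61, 0.75, 0.78, 1.06],
        "synth.": [0.55, 0.94, 1.22],
        "compute": [0.85, 1.53, 5.14],
    }

    for category, ratios in relative.items():
        pct = [r * 100 for r in ratios]
        print(f"{category}: min {min(pct):.0f}%, max {max(pct):.0f}%, "
              f"avg {mean(pct):.0f}%, median {median(pct):.0f}%")

    # Aggregate across every benchmark in the suite for the "overall" row
    all_pct = [r * 100 for ratios in relative.values() for r in ratios]
    print(f"overall: min {min(all_pct):.0f}%, max {max(all_pct):.0f}%, "
          f"avg {mean(all_pct):.0f}%, median {median(all_pct):.0f}%")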
