Crysis Warhead

Up next is our legacy title for 2013, Crysis: Warhead. The stand-alone expansion to 2007's Crysis, Warhead is now over four years old and can still bring most systems to their knees. Crysis was intended to be forward-looking in both its performance demands and its visual quality, and it has clearly achieved that: we've only just reached the point where high-end single-GPU cards can hit 60fps at 1920 with 4xAA, while low-end GPUs are only now reaching 60fps at lower quality settings and resolutions.

[Chart: Crysis: Warhead]

I can't believe it: an Intel integrated solution actually beats an NVIDIA discrete GPU in a Crysis title. The Iris Pro 5200 does well here, outperforming the GT 650M by 12% in its highest TDP configuration. I couldn't run any of the AMD parts, as Bulldozer-based chips seem to have a problem with our Crysis benchmark for some reason.

Crysis: Warhead is likely one of the simpler tests in our suite, which helps explain Intel's performance here. It's also possible that older titles have been Intel optimization targets for longer.

[Chart: Crysis: Warhead]

Ramping up the resolution erases the gap between the highest-end Iris Pro and the GT 650M.

[Chart: Crysis: Warhead]

Moving to higher quality settings at a higher resolution gives NVIDIA the win once more. The margin of victory isn't huge, but the added special effects clearly stress whatever Intel's GPU architecture is lacking.

177 Comments

  • whyso - Saturday, June 1, 2013 - link

    They are completely different systems making power consumption values irrelevant.
  • codedivine - Saturday, June 1, 2013 - link

    Hi folks. Can you post the OpenCL extensions supported? You can use something like "GPU Caps viewer" from Geeks3d (or query them programmatically; see the sketch after the comments).
  • tipoo - Saturday, June 1, 2013 - link

    Interesting that its compute performance punches above its gaming performance weight. I wonder if they could put more EUs on a chip, maybe with a larger eDRAM, and put it on a board as a compute card.
  • lmcd - Saturday, June 1, 2013 - link

    They already have a compute card called Xeon Phi if I remember correctly.
  • Klimax - Sunday, June 2, 2013 - link

    Different architecture (x86 in Phi).
  • tipoo - Sunday, June 2, 2013 - link

    I'm aware, but the Xeon Phi requires completely different programming than a GPU like this, which can just use OpenCL.
  • Soul_Master - Saturday, June 1, 2013 - link

    What's the point of comparing a desktop GPU with a mid-range mobile GPU? The CPUs in the two systems aren't equal.
  • Soul_Master - Saturday, June 1, 2013 - link

    Sorry, I misunderstood; the i7-4950HQ is a high-end quad-core processor for laptops.
  • Ryan Smith - Sunday, June 2, 2013 - link

    It's what we had available. We wanted to test a DDR3 version of GK107, and that's what was on-hand.
  • tipoo - Saturday, June 1, 2013 - link

    Hmm, so it's heavily hinted that the next rMBP will ditch discrete graphics. The 5200 is good, but that would still be a regression in performance. It wouldn't be the first time Apple has done that: there was the Radeon cut from the Mini, the 320M to the HD 3000, even the bottom rung of the newest iMac with the 640M. I wonder if it would at least be cheaper to make up for it.
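
Editor's note on codedivine's question above: beyond GPU Caps Viewer, the supported extensions can also be pulled straight from the OpenCL runtime. Below is a minimal sketch in C, assuming an OpenCL SDK and runtime are installed (header path and library name vary by platform); it simply enumerates every platform and device and prints each device's CL_DEVICE_EXTENSIONS string. The file name and any build command shown are illustrative, not from the article.

/* list_cl_ext.c - enumerate OpenCL devices and print their extension strings.
   Build (Linux, illustrative): gcc list_cl_ext.c -lOpenCL */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void) {
    /* Find all OpenCL platforms on the system. */
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, NULL, &num_platforms);
    cl_platform_id *platforms = malloc(num_platforms * sizeof(*platforms));
    clGetPlatformIDs(num_platforms, platforms, NULL);

    for (cl_uint p = 0; p < num_platforms; p++) {
        /* Find all devices (CPU, GPU, accelerator) on this platform. */
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);
        cl_device_id *devices = malloc(num_devices * sizeof(*devices));
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, num_devices, devices, NULL);

        for (cl_uint d = 0; d < num_devices; d++) {
            char name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);

            /* Query the size of the extension string, then fetch it. */
            size_t ext_size = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_EXTENSIONS, 0, NULL, &ext_size);
            char *extensions = malloc(ext_size);
            clGetDeviceInfo(devices[d], CL_DEVICE_EXTENSIONS, ext_size, extensions, NULL);

            printf("%s:\n  %s\n", name, extensions);
            free(extensions);
        }
        free(devices);
    }
    free(platforms);
    return 0;
}

Run on the Iris Pro system, this would list exactly which cl_khr_* and vendor-specific extensions Intel's driver exposes, which is the information the comment is asking for.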
