Crysis: Warhead

Up next is our legacy title for 2013, Crysis: Warhead. A stand-alone expansion to 2007's Crysis, Warhead is now over four years old and can still beat most systems down. Crysis was intended to be forward-looking as far as performance and visual quality go, and it has clearly achieved that. Only now have high-end single-GPU cards arrived that can hit 60fps at 1920x1080 with 4xAA, while low-end GPUs are just reaching 60fps at lower quality settings and resolutions.

[Chart: Crysis: Warhead]

I can't believe it: an Intel integrated solution actually beats an NVIDIA discrete GPU in a Crysis title. Iris Pro 5200 does well here, outperforming the GT 650M by 12% in its highest TDP configuration. I couldn't run any of the AMD parts, as Bulldozer-based parts seem to have a problem with our Crysis benchmark.

Crysis: Warhead is likely one of the simpler tests in our suite, which helps explain Intel's performance here. It's also possible that older titles have been Intel optimization targets for longer.

[Chart: Crysis: Warhead]

Ramping up the resolution erases the gap between the highest-end Iris Pro and the GT 650M.

[Chart: Crysis: Warhead]

Moving to higher quality settings and a higher resolution gives NVIDIA the win once more. The margin of victory isn't huge, but the added special effects clearly stress whatever Intel's GPU architecture is lacking.
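To put rough numbers on why resolution and anti-aliasing hit a bandwidth-constrained GPU so hard, here is a back-of-the-envelope sketch in Python. It is purely illustrative and not part of our test methodology: the frame_mbytes helper and the per-sample byte counts are assumptions, and real GPUs compress and cache much of this traffic, but the scaling trend is the point.

    # Approximate color+depth bytes touched per frame, ignoring
    # compression, overdraw, and texture reads.
    def frame_mbytes(width, height, msaa=1, bytes_color=4, bytes_depth=4):
        samples = width * height * msaa
        return samples * (bytes_color + bytes_depth) / 1e6

    for w, h, aa in [(1366, 768, 1), (1600, 900, 1), (1920, 1080, 4)]:
        mb = frame_mbytes(w, h, aa)
        print(f"{w}x{h} {aa}xAA: ~{mb:.0f} MB/frame, ~{mb * 60 / 1000:.1f} GB/s at 60fps")

Going from 1366x768 with no AA to 1920x1080 with 4xAA multiplies raw framebuffer traffic by roughly 8x, which is exactly the kind of load a dual-channel DDR3 interface struggles with and that Crystalwell's eDRAM and the GT 650M's dedicated GDDR5 are there to absorb.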

Comments (177)

  • s2z.domain@gmail.com - Friday, February 21, 2014 - link

    I wonder where this is going. Yes the multi core and cache on hand and graphics may be goody, ta.
    But human interaction in actual products?
    I weigh in at 46kg but think nothing of running with a Bergen/burden of 20kg, so a big heavy laptop with an integrated 10hr battery and an 18.3" screen would be efficacious.
    What is all this current affinity with small screens?
    I could barely discern the vignette of the feathers of a water fowl at no more than 130m yesterday, morning run in the Clyde Valley woodlands.
    For the "laptop", > 17" screen, desktop 2*27", all discernible pixels, every one of them to be a prisoner. 4 core or 8 core and I bore the poor little devils with my incompetence with DSP and the Julia language. And spice etc.

    P.S. Can still average 11mph @ 50+ years of age. Some things one does wish to change. And thanks to the Jackdaws yesterday morning whilst I was fertilizing a Douglas Fir; they took the boredom out of an otherwise perilous predicament.
  • johncaldwell - Wednesday, March 26, 2014 - link

    Hello,
    Look, 99% of all the comments here are out of my league. Could you answer a question for me, please? I use an open-source 3D animation and modeling program called Blender3d. Users of this program say that the GTX 650 is the best GPU for it, citing that it works best for compute-intensive tasks such as rendering with HDR, fluids, and other particle effects, and they say that other cards that work great for gaming and video fall short in that program. Could you tell me how this Intel Iris Pro would do in a case such as this? Would the tests made here be relevant to that case?
  • jadhav333 - Friday, July 11, 2014 - link

    Same here, johncaldwell. I would like to know the same.

    I am a Blender 3d user and work with the Cycles renderer, which also uses the GPU to process its renders. I am planning to invest in a new workstation: either custom-built hardware for a Linux box or the latest MacBook Pro from Apple. In the case of the latter, how useful will it be, in terms of performance, for GPU rendering in Blender?

    Anyone care to comment on this, please.
  • HunkoAmazio - Monday, May 26, 2014 - link

    Wow, I can't believe I understood this. My computer architecture class paid off... except I got lost when they were talking about N1/N2 nodes... that must have been a post-2005 feature in CPU northbridge/southbridge technology.
  • systemBuilder - Tuesday, August 5, 2014 - link

    I don't think you understand the difference between DRAM circuitry and arithmetic circuitry. A DRAM foundry process is tuned for high capacitance so that the memory lasts longer between refreshes. High capacitance is DEATH to high-speed arithmetic circuitry, which is tuned for very low capacitance, ergo tuned for speed. By using DRAM instead of SRAM (which could have been built on-chip with low-capacitance foundry processes), Intel enlarged the cache by 4x+, since an SRAM cell is about 4x+ larger than a DRAM cell. (A quick back-of-the-envelope on that density ratio follows after these comments.)
  • Fingalad - Friday, September 12, 2014 - link

    CHEAP SLI! They should make a cheap IRIS pro graphics card and do a new board where you can add that board for SLI.
  • P39Airacobra - Thursday, January 8, 2015 - link

    Not a bad GPU at all. On a small laptop screen you can game just fine, but it should be paired with a lower-end CPU, and the i3, i5, and i7 should have NVIDIA or AMD solutions.
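Following up on systemBuilder's density point above: here is a minimal back-of-the-envelope sketch in Python of what that ~4x cell-size ratio buys in a fixed silicon budget. The ratio is the commenter's estimate and the numbers are illustrative only; real cell sizes vary by process and design.

    # Iris Pro 5200's Crystalwell package carries 128 MB of eDRAM.
    # Using the comment's ~4x SRAM-vs-DRAM cell-size estimate, the
    # same cell area would hold only about a quarter as much SRAM.
    EDRAM_MB = 128          # Crystalwell eDRAM capacity
    SRAM_CELL_RATIO = 4.0   # assumption: one SRAM cell ~ 4x one DRAM cell

    sram_equivalent_mb = EDRAM_MB / SRAM_CELL_RATIO
    print(f"{EDRAM_MB} MB of eDRAM ~= cell area of {sram_equivalent_mb:.0f} MB of SRAM")

In other words, the eDRAM choice is what makes a cache of this capacity practical at all; the trade is the refresh logic and slower cells the comment describes.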
