Battlefield 3

Our multiplayer action game benchmark of choice is Battlefield 3, DICE's 2011 multiplayer military shooter. Its ability to pose a significant challenge to GPUs has been dulled somewhat by time and driver improvements at the high end, but it remains a challenge for more entry-level GPUs such as the iGPUs found in Intel's and AMD's latest parts. Our goal here is to crack 60fps in our benchmark: our rule of thumb, based on experience, is that multiplayer framerates in intense firefights bottom out at roughly half our benchmark average, so merely hitting medium-high framerates here is not necessarily good enough.
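As a rough illustration of that rule of thumb, here is a minimal sketch in Python using hypothetical benchmark averages (not measured results from this review): a ~60fps benchmark average maps to a ~30fps worst-case floor in heavy multiplayer firefights.

```python
def estimated_firefight_floor(benchmark_avg_fps: float, ratio: float = 0.5) -> float:
    """Estimate the worst-case multiplayer framerate from a benchmark average,
    assuming intense firefights bottom out at roughly half the benchmark result."""
    return benchmark_avg_fps * ratio

if __name__ == "__main__":
    # Hypothetical benchmark averages, not measured results from this review.
    for avg in (40.0, 60.0, 75.0):
        floor = estimated_firefight_floor(avg)
        verdict = "comfortable" if floor >= 30.0 else "likely too slow in firefights"
        print(f"benchmark avg {avg:5.1f} fps -> estimated floor {floor:5.1f} fps ({verdict})")
```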

[Battlefield 3 benchmark chart]

The move to 55W brings Iris Pro much closer to the GT 650M, with NVIDIA's advantage falling to less than 10%. At 47W, Iris Pro isn't able to remain at max turbo for as long; the soft configurable TDP is responsible for nearly a 15% increase in performance here.

Iris Pro continues to put all other integrated graphics solutions to shame. The 55W 5200 is over twice the speed of the desktop HD 4000, and the same holds against mobile Trinity. There's even a healthy gap between it and desktop Trinity/Haswell.

[Battlefield 3 benchmark chart]

Ramp up the resolution and quality settings and Iris Pro once again looks far less like a discrete GPU. NVIDIA holds over a 50% advantage here. Once again, I don't believe this is memory bandwidth related; Crystalwell appears to be doing its job. Instead it looks like a fundamental GPU architecture issue.

[Battlefield 3 benchmark chart]

The gap narrows slightly with an increase in resolution, perhaps indicating that as the bottleneck shifts toward memory bandwidth, Crystalwell is able to win back some ground. Overall, though, NVIDIA's architecture simply holds an appreciable advantage here.

The iGPU comparison continues to be an across-the-board win for Intel. It's amazing what can happen when you actually dedicate transistors to graphics.


177 Comments


  • s2z.domain@gmail.com - Friday, February 21, 2014 - link

    I wonder where this is going. Yes the multi core and cache on hand and graphics may be goody, ta.
    But human interaction in actual products?
    I weigh in at 46kg but think nothing of running with a Bergen/burden of 20kg, so a big heavy laptop with an integrated 10hr battery and an 18.3" screen would be efficacious.
    What is all this current affinity with small screens?
    I could barely discern the vignette of the feathers of a water fowl at no more than 130m yesterday, morning run in the Clyde Valley woodlands.
    For the "laptop", > 17" screen, desktop 2*27", all discernible pixels, every one of them to be a prisoner. 4 core or 8 core and I bore the poor little devils with my incompetence with DSP and the Julia language. And spice etc.

    P.S. Can still average 11mph @ 50+ years of age. Some things one does wish to change. And thanks to the Jackdaws yesterday morning whilst I was fertilizing a Douglas Fir; they took the boredom out of an otherwise perilous predicament.
  • johncaldwell - Wednesday, March 26, 2014 - link

    Hello,
    Look, 99% of all the comments here are out of my league. Could you answer a question for me please? I use an open source 3D computer animation and modeling program called Blender3d. The users of this program say that the GTX 650 is the best GPU for it, citing that it works best for compute-intensive tasks such as rendering with HDR, fluids, and other particle effects, and they say that other cards that work great for gaming and video fall short for that program. Could you tell me how this Intel Iris Pro would do in a case such as this? Would the tests made here be relevant to this case?
  • jadhav333 - Friday, July 11, 2014 - link

    Same here johncaldwell. I would like to know the same.

    I am a Blender 3d user and work with the Cycles renderer, which also uses the GPU to process its renders. I am planning to invest in a new workstation, either custom-built hardware for a Linux box or the latest MacBook Pro from Apple. In the case of the latter, how useful will it be, in terms of performance, for GPU rendering in Blender?

    Anyone care to comment on this, please.
  • HunkoAmazio - Monday, May 26, 2014 - link

    Wow, I can't believe I understood this. My computer architecture class paid off... except I got lost when they were talking about N1/N2 nodes... that must have been a post-2005 feature in CPU northbridge/southbridge technology.
  • systemBuilder - Tuesday, August 5, 2014 - link

    I don't think you understand the difference between DRAM circuitry and arithmetic circuitry. A DRAM foundry process is tuned for high capacitance so that the memory lasts longer before refresh. High capacitance is DEATH to high-speed circuitry for arithmetic execution; that circuitry is tuned for very low capacitance, ergo tuned for speed. By using DRAM instead of SRAM (which could have been built on-chip with low-capacitance foundry processes), Intel enlarged the cache by 4x+, since an SRAM cell is about 4x+ larger than a DRAM cell. (See the sketch after these comments.)
  • Fingalad - Friday, September 12, 2014 - link

    CHEAP SLI! They should make a cheap Iris Pro graphics card and a new board where you can add that card for SLI.
  • P39Airacobra - Thursday, January 8, 2015 - link

    Not a bad GPU at all. On a small laptop screen you can game just fine, but it should be paired with a lower-end CPU, and the i3, i5, and i7 should have Nvidia or AMD solutions.
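
Regarding systemBuilder's point about cell density above, here is a minimal back-of-the-envelope sketch in Python. It is purely illustrative: it takes Crystalwell's 128MB eDRAM capacity as given and treats the ~4x SRAM-to-DRAM cell-area ratio cited in the comment as an assumption.

```python
# Crystalwell's eDRAM capacity is 128MB; the ~4x SRAM-to-DRAM cell-area ratio
# is taken from the comment above and treated here as an assumption.
EDRAM_CAPACITY_MB = 128
SRAM_TO_DRAM_CELL_AREA_RATIO = 4.0

def equivalent_sram_capacity_mb(edram_mb: float,
                                area_ratio: float = SRAM_TO_DRAM_CELL_AREA_RATIO) -> float:
    """Capacity an SRAM array occupying the same die area would offer."""
    return edram_mb / area_ratio

print(f"{EDRAM_CAPACITY_MB} MB of eDRAM ~= "
      f"{equivalent_sram_capacity_mb(EDRAM_CAPACITY_MB):.0f} MB of SRAM in the same die area "
      f"(assuming a {SRAM_TO_DRAM_CELL_AREA_RATIO:.0f}x cell-area ratio)")
```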
