Battlefield 3

Our multiplayer action benchmark of choice is Battlefield 3, DICE's 2011 military shooter. Its ability to pose a significant challenge to GPUs has been dulled somewhat by time and driver improvements at the high end, but it remains a challenge for entry-level GPUs such as the iGPUs found in Intel and AMD's latest parts. Our goal here is to crack 60fps in our benchmark: our rule of thumb, based on experience, is that multiplayer framerates in intense firefights will bottom out at roughly half our benchmark average, so a merely decent framerate here is not necessarily good enough.
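To make that rule of thumb concrete, here is a minimal sketch of the arithmetic (illustrative only, not part of our test suite): if firefights bottom out at roughly half the benchmark average, a 60fps average implies dips to around 30fps, and actually holding 60fps in a firefight would require roughly a 120fps benchmark average.

# Sketch of the "firefights bottom out at ~half the benchmark average" rule of thumb.
# Purely illustrative; the 0.5 ratio is the rough experience-based figure described above.

def expected_firefight_low(benchmark_avg_fps: float, ratio: float = 0.5) -> float:
    """Estimate the worst-case multiplayer framerate from a benchmark average."""
    return benchmark_avg_fps * ratio

def required_benchmark_avg(target_firefight_fps: float, ratio: float = 0.5) -> float:
    """Benchmark average needed to keep intense firefights at the target framerate."""
    return target_firefight_fps / ratio

if __name__ == "__main__":
    avg = 60.0  # the 60fps goal mentioned above
    print(f"~{expected_firefight_low(avg):.0f} fps expected in heavy firefights")
    print(f"~{required_benchmark_avg(60.0):.0f} fps average needed to hold 60 fps in firefights")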

[Chart: Battlefield 3]

The move to 55W brings Iris Pro much closer to the GT 650M, with NVIDIA's advantage falling to less than 10%. At 47W, Iris Pro isn't able to remain at max turbo for as long. The soft configurable TDP is responsible for nearly a 15% increase in performance here.

Iris Pro continues to put all other integrated graphics solutions to shame. The 55W 5200 is over twice the speed of the desktop HD 4000, and it holds a similar lead over mobile Trinity. There's even a healthy gap between it and desktop Trinity/Haswell.

[Chart: Battlefield 3]

Ramp up the resolution and quality settings and Iris Pro once again looks far less like a discrete GPU; NVIDIA holds over a 50% advantage here. Once again, I don't believe this is memory bandwidth related, as Crystalwell appears to be doing its job. Instead, it looks like a fundamental GPU architecture issue.

[Chart: Battlefield 3]

The gap narrows slightly with the increase in resolution, perhaps indicating that as the bottleneck shifts toward memory bandwidth, Crystalwell is able to win back some ground. Overall, though, there's still an appreciable advantage to NVIDIA's architecture here.

The iGPU comparison continues to be an across-the-board win for Intel. It's amazing what can happen when you actually dedicate transistors to graphics.

Comments (177)

  • boe - Monday, June 3, 2013 - link

    As soon as Intel CPUs have video performance that exceeds NVIDIA and AMD flagship video cards I'll get excited. Until then I think of them as something to be disabled on workstations and tolerated on laptops that don't have better GPUs on board.
  • MySchizoBuddy - Monday, June 3, 2013 - link

    So Intel just took the OpenCL crown. Never thought this day would come.
  • prophet001 - Monday, June 3, 2013 - link

    I have no idea whether or not any of this article is factually accurate.

    However, the first page was a treat to read. Very well written.

    :)
  • Teemo2013 - Monday, June 3, 2013 - link

    Great success by Intel.
    The 4600 is near the GT 630 and HD 4650 (much better than the 6450, which sells for $15 at Newegg).
    The 5200 is better than the GT 640 and HD 6670 (which currently sells for about $50 at Newegg).
    Intel's integrated graphics used to be worthless compared with discrete cards. It has slowly caught up over the past 3 years, and now the 5200 is beating a $50 card. Can't wait for next year!
    Hopefully this will finally push AMD and Nvidia to come up with meaningful upgrades to their low-end product lines.
  • Cloakstar - Monday, June 3, 2013 - link

    A quick check for my own sanity:
    Did you configure the A10-5800K with 4 sticks of RAM in bank+channel interleave mode, or did you leave it memory-bandwidth starved with 2 sticks or locked in bank interleave mode?

    The numbers look about right for 2 sticks, and if that is the case, it would leave Trinity at about 60% of its actual graphics performance.

    I find it hard to believe that the 5800K is about a quarter the performance per watt of the 4950HQ in graphics, even with the massive, server-crushing cache.
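
A quick aside on the memory-configuration question above: the sketch below computes the theoretical peak bandwidth of single- versus dual-channel DDR3. The DDR3-1866 speed and 64-bit channel width are assumptions chosen purely for illustration, not a statement of how the review system was configured; the point is simply that an APU locked out of channel interleaving effectively sees something closer to the single-channel figure.

# Theoretical peak DDR3 bandwidth, single vs. dual channel.
# Assumptions for illustration: DDR3-1866 and a 64-bit (8-byte) channel.

def ddr3_bandwidth_gbs(transfer_rate_mts: float, channels: int, bus_bytes: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s for a DDR3 configuration."""
    return transfer_rate_mts * 1e6 * bus_bytes * channels / 1e9

if __name__ == "__main__":
    single = ddr3_bandwidth_gbs(1866, channels=1)
    dual = ddr3_bandwidth_gbs(1866, channels=2)
    print(f"Single-channel DDR3-1866: ~{single:.1f} GB/s")
    print(f"Dual-channel DDR3-1866:   ~{dual:.1f} GB/s")
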
  • andrerocha - Monday, June 3, 2013 - link

    Is this new CPU faster than the 4770K? It sure costs more.
  • zodiacfml - Monday, June 3, 2013 - link

    Impressive, but one has to take advantage of the compute/Quick Sync performance to justify the increase in price over the HD 4600.
  • ickibar1234 - Tuesday, June 4, 2013 - link

    Well, my Asus G50VT laptop is officially obsolete! An NVIDIA 512MB GDDR3 9800GS is completely pwned by this integrated GPU, and the CPU is about 50-65% faster clock for clock than the last-generation Core 2 Duo Penryn chips. Sure, my X9100 can overclock stably to 3.5GHz, but this one can get close even with all cores fully taxed.

    Can't wait to see what the Broadwell die shrink brings, maybe a 6-core with Iris or a higher-clocked 4-core?

    I also think that dual-core versions of mobile Haswell with this integrated GPU would be beneficial. They could go into small 4.5-pound laptops.

    AMD.....WTH are you going to do.
  • zodiacfml - Tuesday, June 4, 2013 - link

    AMD has to create a Crystalwell of their own. I never thought Intel would beat them to it, since it's AMD's integrated GPUs that have always needed the extra bandwidth.
  • Spunjji - Tuesday, June 4, 2013 - link

    They also need to find a way past their manufacturing process disadvantage, which may not be possible at all. We're comparing 22nm Apples to 32/28nm Pears here; it's a relevant comparison because those are the realities of the marketplace, but it's worth bearing in mind when comparing architecture efficiencies.
