Sleeping Dogs

A Square Enix title, Sleeping Dogs is one of the few open world games to ship with any kind of built-in benchmark, giving us a rare opportunity to test the genre. Like most console ports, Sleeping Dogs' base assets are not extremely demanding, but it makes up for that with an interesting anti-aliasing implementation: a mix of FXAA and SSAA that, at its highest settings, does an impeccable job of removing jaggies. However, by effectively rendering the game world multiple times over, the highest AA modes can also require a very powerful video card.
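Why SSAA is so expensive is easy to see with some back-of-the-envelope arithmetic. The sketch below is illustrative only; the resolution and SSAA factor are assumptions for the example, not the game's actual internal settings:

```python
def shaded_samples(width: int, height: int, ssaa_factor: int) -> int:
    """Samples shaded per frame under supersample anti-aliasing (SSAA).

    SSAA renders the scene at a multiple of the output resolution and then
    downsamples, so per-frame shading and fill work scale roughly linearly
    with the SSAA factor. (Post-process FXAA, by contrast, adds only a
    small fixed cost per output pixel.)
    """
    return width * height * ssaa_factor

native = shaded_samples(1920, 1080, 1)  # ~2.07M samples per frame
ssaa4x = shaded_samples(1920, 1080, 4)  # ~8.29M samples per frame
print(ssaa4x / native)  # → 4.0: four times the pixel work
```

This linear scaling is why a GPU that is comfortable at native resolution can fall well below playable frame rates once high SSAA modes are enabled.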


At 1366 x 768 with medium quality settings, there doesn't appear to be much of a memory bandwidth limitation here at all. Vsync was disabled, but there's a definite clustering of performance close to 60 fps. The gap between the 650M and Iris Pro is just under 7%. Compared to the 77W HD 4000, Iris Pro is good for almost a 60% increase in performance; the same goes for the mobile Trinity comparison.


At higher resolutions and quality settings, there's a much larger gap between the 650M and Iris Pro 5200. At the High quality defaults both FXAA and SSAA are enabled, which, given Iris Pro's inferior texture sampling and pixel throughput, results in a much larger victory for the 650M. NVIDIA maintains a 30–50% performance advantage here. The move from a 47W TDP to 55W gives Iris Pro an 8% performance uplift. And if we look at the GT 640's performance relative to the 5200, it's clear that memory bandwidth alone isn't responsible for the performance delta here (although it does play a role).

Once more, compared to all other integrated solutions, Iris Pro has no equal. At roughly 2x the performance of the 77W HD 4000, 20% better than desktop Trinity, and 40% better than mobile Trinity, Iris Pro looks very good.
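To put the memory bandwidth question in context, peak theoretical DRAM bandwidth is simple to compute from the transfer rate, bus width, and channel count. The DDR3-1600 figures below are illustrative assumptions for the example, not the exact configurations of the systems tested here:

```python
def ddr3_peak_gbs(transfer_rate_mts: float, channels: int = 2,
                  bus_width_bits: int = 64) -> float:
    """Peak theoretical DRAM bandwidth in GB/s.

    Each channel moves bus_width_bits / 8 bytes per transfer;
    transfer_rate_mts is in megatransfers per second
    (e.g. 1600 for DDR3-1600).
    """
    return transfer_rate_mts * 1e6 * (bus_width_bits // 8) * channels / 1e9

dual = ddr3_peak_gbs(1600, channels=2)    # 25.6 GB/s
single = ddr3_peak_gbs(1600, channels=1)  # 12.8 GB/s: half the peak
```

Crystalwell's on-package eDRAM gives Iris Pro a separate, much faster pool of bandwidth on top of the DDR3 interface, which is part of why external memory bandwidth alone doesn't determine its standing against cards like the GT 640.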


173 Comments


  • prophet001 - Monday, June 03, 2013 - link

    I have no idea whether or not any of this article is factually accurate.

    However, the first page was a treat to read. Very well written.

    :)
    Reply
  • Teemo2013 - Monday, June 03, 2013 - link

    Great success by Intel.
    The 4600 is near the GT 630 and HD 4650 (much better than the 6450, which sells for $15 at Newegg).
    The 5200 is better than the GT 640 and HD 6670 (which currently sells for about $50 at Newegg).
    Intel's integrated graphics used to be worthless compared with discrete cards. It has slowly caught up over the past 3 years, and now the 5200 is beating a $50 card. Can't wait for next year!
    Hopefully this will finally push AMD and NVIDIA to come up with meaningful upgrades to their low-end product lines.
    Reply
  • Cloakstar - Monday, June 03, 2013 - link

    A quick check for my own sanity:
    Did you configure the A10-5800K with 4 sticks of RAM in bank+channel interleave mode, or did you leave it memory bandwidth starved with 2 sticks or locked in bank interleave mode?

    The numbers look about right for 2 sticks, and if that is the case, it would leave Trinity at about 60% of its actual graphics performance.

    I find it hard to believe that the 5800K delivers about a quarter the performance per watt of the 4950HQ in graphics, even with the massive, server-crushing cache.
    Reply
  • andrerocha - Monday, June 03, 2013 - link

    Is this new CPU faster than the 4770K? It sure costs more. Reply
  • zodiacfml - Monday, June 03, 2013 - link

    Impressive, but one has to take advantage of the compute/Quick Sync performance to justify the price increase over the HD 4600. Reply
  • ickibar1234 - Tuesday, June 04, 2013 - link

    Well, my Asus G50VT laptop is officially obsolete! An NVIDIA 512MB GDDR3 9800GS is completely pwned by this integrated GPU, and the CPU is about 50-65% faster clock for clock than the last-generation Core 2 Duo Penryn chips. Sure, my X9100 can overclock stably to 3.5 GHz, but this one can get close even with all cores fully taxed.

    Can't wait to see what the Broadwell die shrink brings, maybe a 6-core with Iris or a higher clocked 4-core?

    I too think dual-core versions of mobile Haswell with this integrated GPU would be beneficial. They could go into small 4.5-pound laptops.

    AMD.....WTH are you going to do.
    Reply
  • zodiacfml - Tuesday, June 04, 2013 - link

    AMD has to create a Crystalwell of their own. I never thought Intel would beat them to it, since AMD's integrated GPUs have always needed more bandwidth. Reply
  • Spunjji - Tuesday, June 04, 2013 - link

    They also need to find a way past their manufacturing process disadvantage, which may not be possible at all. We're comparing 22nm Apples to 32/28nm Pears here; it's a relevant comparison because those are the realities of the marketplace, but it's worth bearing in mind when comparing architecture efficiencies. Reply
  • Death666Angel - Tuesday, June 04, 2013 - link

    "What Intel hopes however is that the power savings by going to a single 47W part will win over OEMs in the long run, after all, we are talking about notebooks here."
    This, plus simpler board designs, fewer voltage regulators, and less board space used.
    And I agree, I want this in a K-SKU.
    Reply
  • Death666Angel - Tuesday, June 04, 2013 - link

    And doesn't MacOS support Optimus?
    RE: "In our 15-inch MacBook Pro with Retina Display review we found that simply having the discrete GPU enabled could reduce web browsing battery life by ~25%."
    Reply
