Sleeping Dogs

While not necessarily a game on everybody’s lips, Sleeping Dogs is a demanding title with a fairly hardcore benchmark that scales well with additional GPU power. The team over at Adrenaline.com.br deserve credit for their easy-to-use benchmark GUI, which let a numpty like me charge ahead with a set of four 1440p runs at maximum graphical settings.

One 7970

Sleeping Dogs - One 7970, 1440p, Max Settings

Sleeping Dogs taxes the CPU so little that the only CPU to fall behind, and then only by the smallest of margins, is the E6400 (the G465 would not run the benchmark at all). Intel technically takes the top spots, but AMD is right in the mix, with less than 0.5 FPS separating the X2-555 BE and the i7-3770K.

Two 7970s

Sleeping Dogs - Two 7970s, 1440p, Max Settings

A split starts to develop between Intel and AMD again, although you would be hard pressed to choose between the CPUs, as everything above an i3-3225 scores 50-56 FPS. The X2-555 BE unfortunately drops off, suggesting that Sleeping Dogs is a fan of cores and this little dual-core is lacking.

Three 7970s

Sleeping Dogs - Three 7970, 1440p, Max Settings

At three GPUs the gap is clear, with the best Intel processors more than 10% ahead of the best AMD parts. Neither PCIe lane allocation nor memory seems to be playing a part; it comes down to thread count first, then single-thread performance.

Four 7970s

Sleeping Dogs - Four 7970, 1440p, Max Settings

Despite our Beast machine having double the threads, an i7-3960X in PCIe 3.0 mode takes top spot.

It is worth noting the scaling in Sleeping Dogs. The i7-3960X moved from 28.2 -> 56.23 -> 80.85 -> 101.15 FPS, gaining roughly 71% of a single card's performance when moving from three GPUs to four. This speaks of a well-written game more than anything.
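As a quick sanity check on that arithmetic, the gain from each extra card can be expressed as the FPS added divided by the single-card result. A minimal sketch in Python using the i7-3960X figures quoted above (the rounding lands within a point of the +71% figure):

```python
# Scaling of the i7-3960X with 1-4 HD 7970s, using the FPS figures quoted above.
fps = [28.2, 56.23, 80.85, 101.15]

for cards, (prev, curr) in enumerate(zip(fps, fps[1:]), start=2):
    # Express each step's gain as a fraction of the single-card result.
    gain_vs_single_card = (curr - prev) / fps[0] * 100
    print(f"{cards - 1} -> {cards} cards: +{gain_vs_single_card:.0f}% of a single card's FPS")
```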

One 580

Sleeping Dogs - One 580, 1440p, Max Settings

There is almost nothing to separate any of the CPUs when using a single GTX 580.

Two 580s

Sleeping Dogs - Two 580s, 1440p, Max Settings

Same thing with two GTX 580s – even an X2-555 BE is within 1 FPS (3%) of an i7-3960X.

Sleeping Dogs Conclusion

Due to the successful scaling and GPU-limited nature of Sleeping Dogs, almost any CPU you throw at it will get the same result. When you move into three-or-more GPU territory, the single-thread speed of an Intel processor earns a few more FPS at the end of the day.
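To make that reasoning concrete, one rough way to read the charts is: if the FPS spread across CPUs at a given GPU count is small relative to the average, the configuration is effectively GPU-bound and the CPU choice barely matters. A minimal sketch of that heuristic, where the function name, the 12% threshold, and the sample numbers are illustrative assumptions rather than chart values:

```python
def looks_gpu_bound(cpu_fps, spread_threshold=0.12):
    """Treat a configuration as GPU-bound when the FPS spread across
    CPUs is small relative to the average frame rate."""
    spread = max(cpu_fps) - min(cpu_fps)
    mean = sum(cpu_fps) / len(cpu_fps)
    return spread / mean <= spread_threshold

# One or two cards: the field sits in a narrow 50-56 FPS band -> GPU-bound.
print(looks_gpu_bound([50.0, 52.0, 54.0, 56.0]))  # True
# Three or four cards: the best Intel chips pull 10%+ ahead -> the CPU starts to matter.
print(looks_gpu_bound([80.0, 90.0, 101.0]))       # False
```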

Comments

  • TheJian - Thursday, June 6, 2013 - link

    http://www.tomshardware.com/reviews/a10-6700-a10-6...
    Check the Tom's A10-6800 review. With only a 6670D card, the i3-3220 STOMPS the A10-6800K with the same 6670 Radeon card at 1080p in F1 2012. 68 fps to 40 fps is a lot, right? Both chips are roughly $145. Skyrim shows the 6800K well, but you need 2133 memory to do it. But faster Intel CPUs will leave this in the dust with a better GPU anyway.

    http://www.guru3d.com/articles_pages/amd_a10_6800k...
    You can say 100 fps is a lot in Far Cry 2 (it is), but you can see how a faster CPU is NOT limiting the GTX 580 here, as all resolutions run faster. The i7-4770 allows the GTX 580 to really stretch its legs to 183 fps, dropping to 132 fps at 1920x1200. The FX-8350, however, is pegged at 104 for all four resolutions. Even a GTX 580 is held back, never mind what you'd be doing to a 7970 GHz etc. All the AMD CPUs here are limiting the GTX 580 while the Intels run up the fps. Sure, there are GPU-limited games, but I'd rather be using the chip that runs away from slower models when this isn't the case. From what all the data shows amongst various sites, you'll be caught with your pants down a lot more than AnandTech is suggesting here. Hopefully that's enough games for everyone to see it's far more than Civ5, even with different cards affecting things. If both GPU sides double their GPU cores, we could have a real CPU shootout in many things at 1440p (and of course below this they will all spread widely, even more than I've shown with many links/games).
  • roedtogsvart - Tuesday, June 4, 2013 - link

    Hey Ian, how come no Nehalem or Lynnfield data points? There are a lot of us on these platforms who are looking at this data to weigh vs. the cost of a Haswell upgrade. With the ol' 775 geezers represented it was disappointing not to see 1366 or 1156. Superb work overall however!
  • roedtogsvart - Tuesday, June 4, 2013 - link

    Other than 6 core Xeon, I mean...
  • A5 - Tuesday, June 4, 2013 - link

    Hasn't had time to test it yet, and hardware availability. He covers that point pretty well in this article and the first one.
  • chizow - Tuesday, June 4, 2013 - link

    Yeah I understand and agree, would definitely like to see some X58 and Kepler results.
  • ThomasS31 - Tuesday, June 4, 2013 - link

    Seems 1440p is too demanding on the GPU side to show the real gaming difference between these CPUs.

    Is 1440p that common in gaming these days?

    I have the impression (from my CPU change experiences) that we would see different differences at 1080p for example.
  • A5 - Tuesday, June 4, 2013 - link

    Read the first page?
  • ThomasS31 - Tuesday, June 4, 2013 - link

    Sure. Though still, for single GPU it would be wiser to be "realistic" and test at 1080p, which is more common (the single-monitor, average-Joe-gamer type of scenario).
    And go 1440p (or higher) for multi-GPU and enthusiast setups.

    The purpose of the article is choosing a CPU, and that needs to show some sort of scaling in near-real-life scenarios; but if the GPU is the limit from the start, it will not be possible to evaluate the CPU's part of the performance equation in games.

    Or maybe it would be good to show some sort of combined score from all the tests, so that Civ V and the other games show some differentiation in the recommendations as well.
  • core4kansan - Tuesday, June 4, 2013 - link

    The G2020 and G860 might well be the best bang-for-buck CPUs, especially if you tested at 1080p, where most budget-conscious gamers would be anyway.
  • Termie - Tuesday, June 4, 2013 - link

    Ian,

    A couple of thoughts for you on methodology:

    (1) While I understand the issue of MCT is a tricky one, I think you'd be better off just shutting it off, or, if you test with it, noting the actual core speeds that your CPUs are operating at, which should be 200MHz above nominal Turbo.

    (2) I don't understand the reference to an i3-3225+, as MCT should not have any effect on a dual-core chip, since it has no Turbo mode.

    (3) I understand the benefit of using time demos for large-scale testing like what you're doing, but I do think you should use at least one modern game. I'd suggest replacing Metro2033, which has incredibly low fps results due to a lack of engine optimization, with Tomb Raider, which has a very simple, quick, and consistent built-in benchmark.

    Thanks for all your hard work to add to the body of knowledge on CPUs and gaming.

    Termie
