Civilization V

A game that has plagued my testing over the past twelve months is Civilization V. Being on the older 12.3 Catalyst drivers was something of a nightmare: they gave no multi-GPU scaling, and as a result I dropped the game from my test suite after only a couple of reviews. With the later drivers used for this review the situation has improved, but only slightly, as you will see below. Civilization V seems to run into a scaling bottleneck very early on, and adding further GPUs only makes performance worse.

Our Civilization V testing uses Ryan’s GPU benchmark, all wrapped up in a neat batch file. We test at 1440p and report the average frame rate over a five-minute run.

One 7970

Civilization V - One 7970, 1440p, Max Settings

Civilization V is the first game where we see a gap when comparing processor families. A big part of what makes Civ5 perform at its best seems to be PCIe 3.0, followed by raw CPU performance – our PCIe 2.0 Intel processors sit a little behind the PCIe 3.0 models. Because we did not have a PCIe 3.0 AMD motherboard in for testing, AMD takes the bad rap here until PCIe 3.0 becomes part of its mainstream platforms.

Two 7970s

Civilization V - Two 7970s, 1440p, Max Settings

The power of PCIe 3.0 is more apparent with two 7970 GPUs; however, it is worth noting that only processors of the i5-2500K’s caliber and above actually improve their performance with the second GPU. Everything else stays relatively similar.

Three 7970s

Civilization V - Three 7970s, 1440p, Max Settings

More cores and PCIe 3.0 are the winners here, but no configuration scales beyond two GPUs.

Four 7970s

Civilization V - Four 7970s, 1440p, Max Settings

Again, no scaling.

One 580

Civilization V - One 580, 1440p, Max Settings

While the top-end Intel processors again take the lead, an interesting point is that, now that all the results are on PCIe 2.0, the non-Hyper-Threaded i5-2500K takes the top spot, 10% ahead of the FX-8350.

Two 580s

Civilization V - Two 580s, 1440p, Max Settings

We have another Intel/AMD split, simply because none of the AMD processors scale beyond the first GPU. On the Intel side, you need at least an i5-2500K to see scaling, similar to what we saw with the 7970s.

Civilization V conclusion

Intel processors are the clear winners here, though no single one stands out from the rest. PCIe 3.0 seems to be the deciding factor for Civilization V, but in most cases multi-GPU scaling is still off the table unless you have a monster machine.

Comments

  • Patrese - Wednesday, May 8, 2013

    Awesome article, thanks! Is it possible to include some sort of gaming physics testing? Now that PhysX is beginning to catch some momentum, it’d be great to see if an 8-core AMD processor handles physics stuff better than a comparable 4-core Intel one, and at what point a dedicated physics card starts to make sense, if at all.

    It’d also be nice if a “mainstream gaming” article could be made too. Benchmarks at 1080p with cards like the 660Ti and 7850, for instance. No need for 3-way SLI/CF on those, so you'll not need as much time in Narnia. :)
  • araczynski - Wednesday, May 8, 2013

    interesting read, although i find it too focused to be of much general use (or useful future reference). i'd like to have seen, for example, how an E8500 holds up (too big of a gap between the E6500 and the i5-2500K), as well as at least ONE game i would even bother playing (Skyrim/Witcher/etc). and of course, like you mentioned, even a slightly bigger sampling of graphics cards.

    anywho, i realize this wasn't meant to be anything exhaustive (i do appreciate having the CPU/GPU benches available here as a good reference though), and i do like the detail/explanation length you went into.

    so thanks :)
  • xinthius - Wednesday, May 8, 2013

    But AMD offers good price to performance at lower tiers; they should be recommended.
  • yougotkicked - Wednesday, May 8, 2013

    Regarding your comments on the role of artificial intelligence in game performance/programming: I've just finished a course in AI, and while implementations may vary quite a bit from game to game, many AI programs can be reduced to highly parallel brute-force computation, simply evaluating the resulting states of many potential decisions for a numerical representation of their desirability, then selecting the best option from the set of evaluated actions. Obviously this is something that will vary greatly from game to game, but in games with many independent AI-managed elements, I would expect a certain amount of the processing to be offloaded to the GPU.

    Other than that I agree with you on the demands of AI in games; my professor (who specializes in game AI and has experience in the industry) said that the AI is usually given about 10% of the CPU time in a game, so it's rarely a limiting factor.

    I'm still working through the whole article (really enjoying it so far) so I'm sure I'll have many more comments/questions later.
  • IanCutress - Wednesday, May 8, 2013

    Based on previous CUDA experience, CUDA doesn't like a lot of IF statements in its routines. So if you're offloading different AI parts onto the GPU, unless all the elements are being put through the same set of if commands (and states), it won't work too well, with some warps taking a lot longer than others if there is large branch divergence. It's a task suited to MIMD environments, like a CPU. Then again, it really depends on the game. Clever AI is already here, because we confine it to a self-created system. One could argue that the bots in Counter-Strike are not particularly smart, but the system can put their accuracy up to 100% to make it harder. It's a lot of give and take, perhaps. It is times like these I wish I did CompSci rather than Chemistry :) I need to take one of those MIT online AI courses. You know, in between testing!

    Ian
  • yougotkicked - Wednesday, May 8, 2013

    I suppose conditionals would make offloading some AI components to the GPU impractical, but there still remains a subset of AI computations which seem very GPU friendly to me. State evaluation functions seem like a prime example: the CPU would be responsible for deciding which options to evaluate, building an array of states to be evaluated by the GPU (a minimal sketch of this pattern appears after the comments). These situations probably don't come up very often in FPSs, but in something like Civilization I can see it being quite common.

    I've actually got to head over to that class now, I'll ask the professor if he knows of any AIs using GPU computing in modern games.
  • airmantharp - Wednesday, May 8, 2013

    Like Ian said, GPUs aren't good 'branch' processors, but I do see where you're coming from. Things like real physics, audio environment maps, and pre-rendered lighting maps could be fed to AI routines running on the CPU. This would allow for much greater 'simulation awareness' in AI actions.
  • yougotkicked - Wednesday, May 8, 2013

    I spoke with my professor and he said that as far as he knows, many people have discussed the prospect of using GPUs for AI, but nobody has actually done so yet. He's going to ask some friends of his at some major game studios to see if they are working on it.

    He did agree with me that there are some aspects that could be computed on a GPU, but a lot of the existing AI methods are inherently sequential, so offloading it to the GPU will require new algorithms in many cases.
  • TheQweaker - Thursday, May 9, 2013

    You may wish to check nVidia's GTC conference website, where you can find some GPU AI research. Also, nVidia has published various PDF slides on GPU path planning.

    If you look deeper into some specific AI domains such as, say, AI planning (first used in F.E.A.R. in 2005, lately used in Killzone 3 and Transformers: Fall of Cybertron), you can find papers investigating the use of GPUs.

    One bottom line of current GPU AI research is that GPUs crunch large amounts of data very fast, so, currently, there is not much hope in using the many GPU threads on the tiny amounts of data involved in state-space search.

    Hoping this helps.

    -- The Qweaker.
  • yougotkicked - Thursday, May 9, 2013

    Thanks for pointing me towards those papers, they look pretty interesting and I've been looking for a topic to write my final paper on ;)
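
The pattern discussed in the comments above is easier to picture with code. What follows is a minimal, hypothetical CUDA sketch of GPU-offloaded state evaluation as yougotkicked describes it: the CPU enumerates candidate states, the GPU scores them all in parallel with a deliberately branch-free evaluation function (sidestepping the warp divergence Ian mentions), and the CPU then picks the winner. GameState, its fields, evaluateStates, and the scoring weights are all invented for illustration; this is not how Civilization V or any shipping game actually implements its AI.

// Hypothetical sketch: brute-force scoring of candidate AI states on the GPU.
// All names and weights here are invented for illustration.
#include <cstdio>
#include <cuda_runtime.h>

// A toy game state: a few numeric features for the evaluator to weight.
struct GameState {
    float material;   // e.g. total unit strength
    float territory;  // e.g. tiles controlled
    float economy;    // e.g. gold per turn
};

// Branch-free scoring: apart from the uniform bounds check, every thread
// in a warp does identical work, so there is no divergence penalty.
__global__ void evaluateStates(const GameState* states, float* scores, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        scores[i] = 1.0f * states[i].material
                  + 0.6f * states[i].territory
                  + 0.8f * states[i].economy;
}

int main() {
    const int n = 1 << 16;  // 65,536 candidate states

    // The CPU side would normally generate these by expanding legal moves;
    // here we just fill in dummy values.
    GameState* h_states = new GameState[n];
    for (int i = 0; i < n; ++i)
        h_states[i] = { float(i % 97), float(i % 53), float(i % 29) };

    GameState* d_states = nullptr;
    float* d_scores = nullptr;
    cudaMalloc(&d_states, n * sizeof(GameState));
    cudaMalloc(&d_scores, n * sizeof(float));
    cudaMemcpy(d_states, h_states, n * sizeof(GameState), cudaMemcpyHostToDevice);

    // Score every candidate state in parallel.
    evaluateStates<<<(n + 255) / 256, 256>>>(d_states, d_scores, n);

    float* h_scores = new float[n];
    cudaMemcpy(h_scores, d_scores, n * sizeof(float), cudaMemcpyDeviceToHost);

    // The cheap, sequential part (picking the best move) stays on the CPU.
    int best = 0;
    for (int i = 1; i < n; ++i)
        if (h_scores[i] > h_scores[best]) best = i;
    printf("best state: %d (score %.1f)\n", best, h_scores[best]);

    cudaFree(d_states); cudaFree(d_scores);
    delete[] h_states; delete[] h_scores;
    return 0;
}

The final argmax stays on the CPU because it is cheap and inherently sequential, which matches the thread's point that only the embarrassingly parallel scoring step is a natural fit for the GPU; anything dominated by per-element conditionals or sequential dependencies is better left on the CPU.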
