Civilization V

A game that has plagued my testing over the past twelve months is Civilization V. Being on the older 12.3 Catalyst drivers was somewhat of a nightmare, giving no multi-GPU scaling, and as a result I dropped it from my test suite after only a couple of reviews. With the later drivers used for this review the situation has improved, but only slightly, as you will see below. Civilization V seems to run into a scaling bottleneck very early on, and adding further GPUs only makes performance worse.

Our Civilization V testing uses Ryan’s GPU benchmark test all wrapped up in a neat batch file. We test at 1440p, and report the average frame rate of a 5 minute test.
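For context, the averaging step behind a scripted run like this is straightforward. Below is a minimal Python sketch, not the actual batch file used in testing; the function name and the steady frame-time figures are illustrative assumptions:

```python
# Illustrative sketch: compute the average frame rate for a timed run
# from per-frame render times, as a scripted benchmark harness might.
# This is not the actual test script; numbers below are hypothetical.

def average_fps(frame_times_ms):
    """Average frame rate over a run, given per-frame times in milliseconds."""
    total_seconds = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_seconds

# Example: a ~5 minute run at a steady 16.7 ms per frame (18000 frames)
samples = [16.7] * 18000
print(round(average_fps(samples), 1))  # ~59.9 FPS
```

In practice a harness would parse these frame times from a FRAPS-style log and average across multiple runs to smooth out variance.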

One 7970

Civilization V - One 7970, 1440p, Max Settings

Civilization V is the first game where we see a gap when comparing processor families. A big part of getting the best frame rates out of Civ5 seems to be PCIe 3.0, followed by CPU performance – our PCIe 2.0 Intel processors sit a little behind the PCIe 3.0 models. As we did not have a PCIe 3.0 AMD motherboard in for testing, AMD takes the blame here by default until PCIe 3.0 reaches its mainstream platforms.
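The raw bandwidth gap between the two interface generations is easy to quantify. A quick back-of-envelope calculation using the standard per-lane transfer rates and encoding overheads (these are spec figures, not measurements from our testbeds):

```python
# Back-of-envelope usable bandwidth for an x16 slot, per direction.
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding   -> 500 MB/s per lane
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~985 MB/s per lane

def x16_bandwidth_gbs(gt_per_s, payload_bits, line_bits):
    """Usable GB/s for 16 lanes: transfer rate x encoding efficiency."""
    per_lane_gbs = gt_per_s * (payload_bits / line_bits) / 8  # bits -> bytes
    return 16 * per_lane_gbs

print(round(x16_bandwidth_gbs(5, 8, 10), 1))     # PCIe 2.0 x16 -> 8.0 GB/s
print(round(x16_bandwidth_gbs(8, 128, 130), 1))  # PCIe 3.0 x16 -> 15.8 GB/s
```

Roughly double the per-direction bandwidth, which matters most when multiple GPUs are shuffling frame data across the bus.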

Two 7970s

Civilization V - Two 7970s, 1440p, Max Settings

The power of PCIe 3.0 is more apparent with two 7970 GPUs, however it is worth noting that only processors such as the i5-2500K and above have actually improved their performance with the second GPU. Everything else stays relatively similar.

Three 7970s

Civilization V - Three 7970s, 1440p, Max Settings

More cores and PCIe 3.0 are winners here, but no GPU configuration has scaled above two GPUs.

Four 7970s

Civilization V - Four 7970s, 1440p, Max Settings

Again, no scaling.

One 580

Civilization V - One 580, 1440p, Max Settings

While the top-end Intel processors again take the lead, an interesting point is that now that all results are on PCIe 2.0 for comparison, the non-Hyper-Threaded i5-2500K takes the top spot, 10% ahead of the FX-8350.

Two 580s

Civilization V - Two 580s, 1440p, Max Settings

We have another Intel/AMD split, as none of the AMD processors scaled beyond the first GPU. On the Intel side, you need at least an i5-2500K to see scaling, similar to what we saw with the 7970s.

Civilization V conclusion

Intel processors are the clear winners here, though no single model stands out over the others. PCIe 3.0 seems to be the deciding factor for Civilization V, but in most cases multi-GPU scaling is still off the table unless you are running a monster machine.


242 Comments


  • JarredWalton - Wednesday, May 8, 2013 - link

    "While I haven't programmed AI..." Doesn't that make most of your other assumptions and guesses related to this area invalid?

    As for the rest, the point of the article isn't to compare HD 7970 with GTX 580, or to look at pure CPU scaling; rather, it's to look at CPU and GPU scaling in games at settings people are likely to use with a variety of CPUs, which necessitates using multiple motherboards. Given that in general people aren't going to buy two or three GPUs to run at lower resolutions and detail settings, the choice to run 1440p makes perfect sense: it's not so far out of reach that people don't use it, and it will allow the dual, triple, and quad GPU setups room to stretch (when they can).

    The first section shows CPU performance comparison, just as a background to the gaming comparisons. We can see how huge the gap is in CPU performance between a variety of processors, but how does that translate to gaming, and in particular, how does it translate to gaming with higher performance GPUs? People don't buy a Radeon HD 5450 for serious gaming, and those who do buy one likely aren't playing demanding games.

    For the rest: there is no subset of games that properly encompass "what people actually play". But if we're looking at what people play, it's going to include a lot of Flash games and Facebook games that work fine on Intel HD 4000. I guess we should just stop there? In other words, we know the limitations of the testing, and there will always be limitations. We can list many more flaws or questions that you haven't, but if you're interested in playing games on a modern PC, and you want to know a good choice for your CPU and GPU(s), the article provides a good set of data to help you determine if you might want to upgrade or not. If you're happy playing at 1366x768 and Medium detail, no, this won't help much. If you want minimum detail and maximum frame rate at 1080p, it's also generally useless. I'd argue however that the people looking for either of those are far less in number, or at least if they do exist they're not looking to research gaming performance until it affects them.
  • wcg66 - Wednesday, May 8, 2013 - link

    Ian, thanks for this. I'd really like to see how these tests change at even higher resolutions, 3 monitor setups of 5760x1080, for example. There are folks claiming that the additional PCIe lanes in the i7 e-series make for significantly better performance. Your results don't bear this out. If anything the 3930K is behind or sometimes barely ahead (if you consider error margins, arguably it's on par with the regular i7 chips.) I own an i7 2700K and 3930K.
  • Moon Patrol - Wednesday, May 8, 2013 - link

    Awesome review! Very impressed with the effort and time put into this! Thanks a lot!
    It be cool if you could maybe somewhere fit an i7 860 in somewhere over there. Socket 1156 is feeling left out :P I have i7 860...
  • Quizzical - Wednesday, May 8, 2013 - link

    Great data for people who want to overload their video card and figure out which CPU will help them do it. But it's basically worthless for gamers who want to make games run smoothly and look nice and want to know what CPU will help them do it.

    Would you do video card benchmarks by running undemanding games at minimum settings and using an old single core Celeron processor? That's basically the video card equivalent to treating this as a CPU benchmark. The article goes far out of its way to make things GPU-bound so that you can't see differences between CPUs, both by the games chosen and the settings within those games.

    But hey, if you want to compare a Radeon HD 7970 to a GeForce GTX 580, this is the definitive article for it and there will never be a better data set for that.
  • JarredWalton - Wednesday, May 8, 2013 - link

    Troll much? The article clearly didn't go too far out of the way to make things GPU bound, as evidenced by the fact that two of the games aren't GPU bound even with a single 7970. How many people out there buy a 7970 to play at anything less than 1080p -- or even at 1080p? I'd guess most 7970 owners are running at least 1440p or multi-monitor...or perhaps just doing Bitcoin, but that's not really part of the discussion here, unless the discussion is GPU hashing prowess.
  • Quizzical - Wednesday, May 8, 2013 - link

    If they're not GPU bound with a single 7970, then why does adding a second 7970 (or a second GTX 580) greatly increase performance in all four games? That can't happen if you're looking mostly at a CPU bottleneck, as it means that the CPU is doing a lot more work than before in order to deliver those extra frames. Indeed, sometimes it wouldn't happen even if you were purely GPU bound, as CrossFire and SLI don't always work properly.

    If you're trying to compare various options for a given component, you try to do tests where the different benchmark results will mostly reflect differences in the particular component that you're trying to test. If you're trying to compare video cards, you want differences in scores to mostly reflect video card performance rather than being bottlenecked by something else. If you're trying to compare solid state drives, you want differences in scores to mostly reflect differences in solid state drive performance rather than being bottlenecked by something else. And if you're trying to compare processors, you want differences in scores to mostly reflect differences in CPU performance, not to get results that mostly say, hey, we managed to make everything mostly limited by the GPU.

    When you're trying to do benchmarks to compare video cards, you (or whoever does video card reviews on this site) understand this principle perfectly well. A while back, there was a review on this site in which the author (which might be you; I don't care to look it up) specifically said that he wanted to use Skyrim, but it was clearly CPU-bound for a bunch of video cards, so it wasn't included in the review.

    If you're not trying to make the games largely GPU bound, then why do you go to max settings? Why don't you turn off the settings that you know put a huge load on the GPU and don't meaningfully affect the CPU load? If you're doing benchmarking, the only reason to turn on settings that you know put a huge load on the GPU and no meaningful load on anything else is precisely that you want to be GPU bound. That makes sense for a video card review. Not so much if you're trying to compare processors.
  • JarredWalton - Wednesday, May 8, 2013 - link

    You go to max settings because that's what most people with a 7970 (or two or three or four) are going to use. This isn't a purely CPU benchmark article, and it's not a purely GPU benchmark article; it's both, and hence, the benchmarks and settings are going to have to compromise somewhat.

    Ian could do a suite of testing at 640x480 (or maybe just 1366x768) in order to move the bottleneck more to the CPU, but no one in their right mind plays at that resolution with a high-end GPU. On a laptop, sure, but on a desktop with an HD 7970 or a GTX 580? Not a chance! And when you drop settings down to minimum (or even medium), it does change the CPU dynamic a lot -- less textures, less geometry, less everything. I've encountered games where even when I'm clearly CPU limited, Ultra quality is half the performance of Medium quality.
  • IndianaKrom - Friday, May 10, 2013 - link

    Basically for the most part the single GPU game tests tell us absolutely nothing about the CPU because save for a couple especially old or low end CPUs, none of them even come close to hindering the already completely saturated GPU. The 2-4 GPU configurations are much more interesting because they show actual differences between different CPU and motherboard configurations. I do think it would be interesting to also show a low resolution test which would help reveal the impact of crossfire / SLI overhead versus a single more powerful GPU and could more directly expose the CPU limit.
  • Zink - Wednesday, May 8, 2013 - link

    You should use a DSLR and edit the pictures better. The cover image is noisy and lacks contrast.
  • makerofthegames - Wednesday, May 8, 2013 - link

    Very interesting article. And a lot of unwarranted criticism in the comments.

    I'm kind of disappointed that the dual Xeons failed so many benchmarks. I was looking to see how I should upgrade my venerable 2x5150 machine - whether to go with fast dual-cores, or with similar-speed quad-cores. But all the benchmarks for the Xeons were either "the same as every other CPU", or "no results".

    Oh well, I have more important things to upgrade on it anyways. And I realize that "people using Xeon 5150s for gaming" is a segment about as big as "Atom gamers".
