Civilization: Beyond Earth

Shifting gears from action to strategy, we have Civilization: Beyond Earth, the latest entry in the Civilization series of strategy games. Civilization is not quite as GPU-demanding as some of our action games, but at Ultra quality it can still pose a challenge for even high-end video cards. Meanwhile, as the first Mantle-enabled strategy title, Civilization gives us an interesting look at low-level API performance in a larger-scale game, along with a look at developer Firaxis's use of split-frame rendering with Mantle to reduce latency rather than to improve framerates.
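
For readers unfamiliar with the distinction, the toy calculation below sketches why split-frame rendering (SFR) reduces latency while the more common alternate-frame rendering (AFR) mainly raises throughput. The 20ms frame time and the ideal 2x scaling are illustrative assumptions, not measurements from the game or from Firaxis's implementation.

```python
# Illustrative only: why SFR shortens the time to finish a frame, while AFR
# mostly increases how many frames are in flight. Assumes two GPUs and a
# hypothetical 20 ms single-GPU render time with perfect load balancing.
SINGLE_GPU_FRAME_MS = 20.0
NUM_GPUS = 2

# AFR: each GPU renders a whole frame and the frames are interleaved.
# Throughput doubles, but any individual frame still takes the full 20 ms.
afr_frame_interval_ms = SINGLE_GPU_FRAME_MS / NUM_GPUS
afr_latency_ms = SINGLE_GPU_FRAME_MS

# SFR: the GPUs split the same frame, so that frame is finished sooner.
# (Real splits are never perfectly balanced, so the gain is smaller in practice.)
sfr_frame_interval_ms = SINGLE_GPU_FRAME_MS / NUM_GPUS
sfr_latency_ms = SINGLE_GPU_FRAME_MS / NUM_GPUS

print(f"AFR: {afr_frame_interval_ms:.0f} ms between frames, {afr_latency_ms:.0f} ms to finish each frame")
print(f"SFR: {sfr_frame_interval_ms:.0f} ms between frames, {sfr_latency_ms:.0f} ms to finish each frame")
```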

Civilization: Beyond Earth - 3840x2160 - Ultra Quality

Civilization: Beyond Earth - 2560x1440 - Ultra Quality

Though not as demanding as Crysis 3 or Shadow of Mordor, Civilization still requires a very powerful GPU to hit 60fps at 4K. In fact, among our single-GPU configurations the GTX Titan X is the only card to crack 60fps, delivering 69fps at the game's most extreme settings. This once again puts it well ahead of the GTX 980 – beating it by 31% at 4K – and 40%+ ahead of the GK110 cards. On the other hand, this is the closest AMD's R9 290XU will get, with the GTX Titan X beating it by only 23% at 4K.
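
As a quick sanity check, the implied framerates of the runners-up can be back-calculated from the percentages above; these are approximations derived from the 69fps Titan X result rather than separately quoted numbers.

```python
# Back-calculating the implied 4K framerates from the leads quoted above.
titan_x_fps = 69.0
gtx_980_fps = titan_x_fps / 1.31   # Titan X leads the GTX 980 by 31%
r9_290xu_fps = titan_x_fps / 1.23  # Titan X leads the R9 290XU by 23%
print(f"GTX 980:  ~{gtx_980_fps:.0f} fps")   # roughly 53 fps
print(f"R9 290XU: ~{r9_290xu_fps:.0f} fps")  # roughly 56 fps
```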

Meanwhile at 1440p it’s entirely possible to play Civilization at 120fps, making it one of the few games where the GTX Titan X can keep up with high refresh rate 1440p monitors.

Civilization: Beyond Earth - Min. Frame Rate - 3840x2160 - Ultra Quality

Civilization: Beyond Earth - Min. Frame Rate - 2560x1440 - Ultra Quality

When it comes to minimum framerates the GTX Titan X doesn’t dominate quite like it does at average framerates, but it still handily takes the top spot. Even at its worst, the GTX Titan X can still deliver 44fps at 4K under Civilization.

Comments

  • looncraz - Tuesday, March 17, 2015 - link

    If the most recent slides (allegedly leaked from AMD) hold true, the 390X will be at least as fast as the Titan X, though with only 8GB of RAM (but HBM!).

    A straight 4096SP GCN 1.2/3 GPU would be a close match-up already, but any other improvements made along the way will potentially give the 390X a fairly healthy launch-day lead.

    I think nVidia wanted to keep AMD in the dark as much as possible so that they could not position themselves to take more advantage of this, but AMD apparently decided to hold out until May/June (even though they seem to already have some inventory on hand) rather than give nVidia a chance to revise the Titan X before launch.

    nVidia blinked, it seems, after it became apparent AMD was just going to wait out the clock with their current inventory.
  • zepi - Wednesday, March 18, 2015 - link

    Unless AMD has achieved a considerable increase in perf/W, they are going to have a really hard time running those 4096 shaders at a reasonable frequency without ending up with a 450W card.

    Not that a 450W card is necessarily a deal breaker for everyone, but in practice cooling a 450W card without ear-shattering levels of noise is very difficult compared to cooling a 250W card.

    Let us wait and hope, since AMD really would need to get a break and make some money on this one...
  • looncraz - Wednesday, March 18, 2015 - link

    Very true. We know that with HBM there should already be a fairly beefy power savings (~20-30W vs. the 290X, it seems).

    That doesn't buy them room for 1,280 more SPs, of course, but it should get them a healthy 256 of them. Then, GCN 1.3 vs 1.1 should have power advantages as well. GCN 1.2 vs 1.0 (R9 285 vs R9 280), at the same 1792 SPs, showed a 60W improvement; if we assume GCN 1.1 to GCN 1.3 follows a similar trend, the 390X should be pulling only about 15W more than the 290X with the rumored specs, before any other improvements.

    Of course, the same math says the 290X should be drawing 350W, but that's because it assumes all the power is in the SPs. Still, I do think it shows that AMD could plausibly do this without drawing much, if any, more power, and without making any unprecedented improvements.
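
    A rough worked version of the arithmetic above, using publicly listed board powers as assumptions (R9 280 ~250W, R9 285 ~190W, R9 290X ~290W) and the 1792/2816/4096 SP counts; every figure here is approximate, and the result swings noticeably depending on which TDP numbers are plugged in.

    ```python
    # Back-of-the-envelope estimate of 390X board power, following the per-SP
    # scaling argument above. All inputs are assumptions, not official figures.
    R9_280_W, R9_285_W, SPS_280 = 250.0, 190.0, 1792   # GCN 1.0 -> 1.2 at the same SP count
    R9_290X_W, SPS_290X = 290.0, 2816                  # GCN 1.1 baseline
    SPS_390X = 4096                                    # rumored 390X shader count

    # Per-SP saving observed going from GCN 1.0 to 1.2 (R9 280 -> R9 285).
    saving_per_sp = (R9_280_W - R9_285_W) / SPS_280    # ~0.033 W per SP

    # Naive per-SP power of the 290X, pretending all board power is in the SPs.
    watts_per_sp_290x = R9_290X_W / SPS_290X           # ~0.10 W per SP

    extra_sp_power = (SPS_390X - SPS_290X) * watts_per_sp_290x   # cost of 1,280 more SPs
    gcn_saving = SPS_390X * saving_per_sp                        # GCN 1.1 -> 1.3 saving
    hbm_saving = 25.0                                            # ~20-30 W claimed above

    estimate_390x = R9_290X_W + extra_sp_power - gcn_saving - hbm_saving
    print(f"Rough 390X estimate: {estimate_390x:.0f} W ({estimate_390x - R9_290X_W:+.0f} W vs. the 290X)")
    ```

    On numbers like these the rumored part lands in roughly the same power envelope as the 290X, which is the conclusion being argued above.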
  • Braincruser - Wednesday, March 18, 2015 - link

    Yeah, but the question is: how well will the memory survive on top of a 300W GPU?
    Because the first part of a graphics card to die from high temperatures is usually the VRAM.
  • looncraz - Thursday, March 19, 2015 - link

    It will be to the side, on a 2.5D interposer, I believe.

    GPU thermal energy will move along the path of least resistance (technically, toward the area with the greatest ΔT, regulated by each material's thermal conductivity), which should be up into the heatsink or water block. I'm not sure, but I'd think the memory chips could operate in the same temperature range as the GPU; if not, it may be necessary to keep them thermally isolated. That shouldn't be too difficult, and might be as simple as not using thermal pads on the memory at all and letting it dissipate heat passively (or through interposer-mounted heatsinks).

    It will be interesting to see what they have done to solve the potential issues, that's for sure.
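
    To put a very rough number on the "path of least resistance" point, the sketch below applies simple one-dimensional Fourier conduction (Q = k·A·ΔT/L) to two paths: up through a thermal interface into the cooler, and sideways through the interposer toward a memory stack. Every dimension, material value, and temperature delta here is an illustrative guess rather than a figure for any real interposer or HBM package.

    ```python
    # Illustrative 1-D conduction comparison; all geometry and material values are guesses.
    def conduction_w(k_w_per_mk, area_m2, length_m, delta_t_k):
        """Steady-state 1-D conduction: Q = k * A * dT / L."""
        return k_w_per_mk * area_m2 * delta_t_k / length_m

    K_SILICON = 150.0   # W/(m*K), bulk silicon (approximate)
    K_TIM = 5.0         # W/(m*K), assumed thermal interface material

    # Upward path: a ~600 mm^2 die face through a ~50 um TIM layer into the cooler.
    up = conduction_w(K_TIM, area_m2=600e-6, length_m=50e-6, delta_t_k=10)

    # Sideways path: a thin interposer cross-section (~25 mm wide, ~100 um thick)
    # conducting across a ~3 mm gap toward a neighbouring memory stack.
    side = conduction_w(K_SILICON, area_m2=25e-3 * 100e-6, length_m=3e-3, delta_t_k=10)

    print(f"Up through the TIM into the cooler: ~{up:.0f} W per 10 K of delta-T")
    print(f"Sideways through the interposer:    ~{side:.1f} W per 10 K of delta-T")
    ```

    On assumptions like these, the overwhelming majority of the heat exits through the cooler rather than migrating toward the memory, which is the point being made above.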
  • Xenonite - Thursday, March 19, 2015 - link

    Yes, I agree that AMD would be able to absolutely destroy NVIDIA on the performance front if they designed a 500W GPU and left the PCB and waterblock design to their AIB partners.

    I would also absolutely love to see what kind of performance a 500W or even a 1kW graphics card would be able to muster; however, since a relatively constant 60fps presented with less than about 100ms of total system latency has been deemed sufficient for a "smooth and responsive" gaming experience, I simply can't imagine such a card ever seeing the light of day.
    And while I can understand everyone likes to pretend that they are saving the planet with their <150W GPUs, the argument that such a TDP would be very difficult to cool does not really hold much water IMHO.

    If, for instance, the card were designed from the ground up to spread its heat load over multiple 200W~300W GPUs, connected via a very-high-speed, N-directional data interconnect bus, the card could easily and (most importantly) quietly be cooled with chilled watercooling dissipating into a few "quad-fan" radiators. Practically, four GM200-size GPUs could be placed back-to-back on the PCB, with each one rendering a quarter of the current frame via shared, high-speed frame buffers (thereby eliminating SLI-induced microstutter and "frame-pacing" lag). Cooling would then be as simple as installing four standard GPU water-cooling loops, with each loop's radiator only having to dissipate the TDP of a single GPU module.
  • naxeem - Tuesday, March 24, 2015 - link

    They did solve that problem with a water-cooling solution. A 390X WCE is probably what we'll get.
  • ShieTar - Wednesday, March 18, 2015 - link

    Who says they don't allow it? EVGA has already announced two special models, a superclocked one and one with a water-cooling block:

    http://eu.evga.com/articles/00918/EVGA-GeForce-GTX...
  • Wreckage - Tuesday, March 17, 2015 - link

    If by fast you mean June or July. I'm more interested in a 980ti so I don't need a new power supply.
  • ArmedandDangerous - Saturday, March 21, 2015 - link

    There won't ever be a 980 Ti if you understand Nvidia's naming scheme. Ti models are further-unlocked versions of an existing part, and there's nothing left to unlock on the 980's GM204.
