Total War: Rome 2

The second strategy game in our benchmark suite, Total War: Rome 2 is the latest entry in the Total War franchise. Total War games have traditionally been a mix of CPU and GPU bottlenecks, so it takes a good system on both ends of the equation to do well here. In this case the game comes with a built-in benchmark that plays out over a forested area with a large number of units, which stresses the GPU in particular.


For this game in particular we have also turned shadows down to medium. Rome's shadows are extremely CPU intensive (as opposed to GPU intensive), so this keeps us from becoming CPU bottlenecked nearly as easily.

Total War: Rome 2 - 3840x2160 - Extreme Quality + Med. Shadows

Total War: Rome 2 - 3840x2160 - Very High Quality + Med. Shadows

Total War: Rome 2 - 2560x1440 - Extreme Quality + Med. Shadows

Total War: Rome 2 - 1920x1080 - Extreme Quality + Med. Shadows

Of all of our games, there is no better set of benchmarks for the GTX 980 than Total War: Rome 2. Against both AMD's and NVIDIA's last-generation cards, it never wins by as much as it wins here.

Compared to the GTX 780 Ti, the GTX 980 is a consistent 16-17% ahead at all resolutions. Meanwhile against the R9 290XU this is an 18% lead at 1080p and 1440p. The R9 290XU only begins to catch up at 4K with Very High quality, where the GTX 980 still leads by a respectable 8%.

This is also a very strong showing compared to the GTX 680. The overall lead is 80-95% depending on the resolution. The GTX 980 was not necessarily meant to double the GTX 680’s performance, but it comes very close to doing so here at 1440p.
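
As a quick sanity check on the arithmetic, a percentage lead converts to a raw performance multiplier as shown in this trivial sketch (the underlying framerates are in the charts above):

```python
# Convert a percentage lead into a performance multiplier.
# A 95% lead means 1.95x the framerate of the comparison card.
def lead_to_multiplier(lead_percent):
    return 1.0 + lead_percent / 100.0

print(lead_to_multiplier(80))  # 1.8  (low end of the 80-95% range)
print(lead_to_multiplier(95))  # 1.95 (1440p, just shy of doubling)
```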

Given what happens to the GK104 cards in this game, I suspect we're looking at the result of either the ROP advantage or a very good case of CUDA core occupancy improvements. The fact that the lead over the GTX 780 Ti is so consistent across all resolutions does point to the CUDA core theory, but we can't really rule out the ROPs with the information we have.

As for results on an absolute basis, not even the mighty GTX 980 is going to crack 30fps at 4K with Extreme settings. In lieu of that, Very High quality comes off quite well at 49fps, and we're just shy of hitting 60fps at 1440p with Extreme.

Comments

  • kron123456789 - Friday, September 19, 2014 - link

    Look at the "Load Power Consumption - FurMark" test. It's 80W lower with the 980 than with the 780 Ti.
  • Carrier - Friday, September 19, 2014 - link

    Yes, but the 980's clock is significantly lowered for the FurMark test, down to 923MHz. The TDP should be fairly measured at speeds at which games actually run, 1150-1225MHz, because that is the amount of heat that we need to account for when cooling the system.
  • Ryan Smith - Friday, September 19, 2014 - link

    It doesn't really matter what the clockspeed is. The card is gated by both power and temperature. It can never draw more than its TDP.

    FurMark is a pure TDP test. All NVIDIA cards will reach 100% TDP, making it a good way to compare their various TDPs.
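
    Conceptually the behavior looks something like this simplified sketch (illustrative only, not the actual GPU Boost algorithm; the clock bins, limits, and sensor values are made up):

    ```python
    # Toy model of a power/temperature-gated boost loop. All limits and
    # step sizes are hypothetical, not real GPU Boost 2.0 values.
    TDP_WATTS = 165        # hypothetical board power limit
    TEMP_LIMIT_C = 80      # hypothetical temperature target
    CLOCK_STEP_MHZ = 13    # hypothetical boost bin size
    MIN_MHZ, MAX_MHZ = 540, 1225

    def next_clock(current_mhz, power_w, temp_c):
        """Back off one bin if over the power or temperature limit,
        otherwise step up one bin toward the maximum boost clock."""
        if power_w >= TDP_WATTS or temp_c >= TEMP_LIMIT_C:
            return max(MIN_MHZ, current_mhz - CLOCK_STEP_MHZ)
        return min(MAX_MHZ, current_mhz + CLOCK_STEP_MHZ)

    # Under a pure power virus like FurMark the power reading stays pinned
    # at the limit, so the clock settles well below the gaming range while
    # board power holds at ~100% of TDP.
    ```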
  • Carrier - Friday, September 19, 2014 - link

    If that is the case, then the charts are misleading. GTX 680 has a 195W TDP vs. GTX 770's 230W (going by Wikipedia), but the 680 uses 10W more in the FurMark test.

    I eagerly await your GTX 970 report. Other sites say that it barely saves 5W compared to the GTX 980, even after they correct for factory overclock. Or maybe power measurements at the wall aren't meant to be scrutinized so closely :)
  • Carrier - Friday, September 19, 2014 - link

    To follow up: in your GTX 770 review from May 2013, you measured the 680 at 332W in FurMark, and the 770 at 383W in FurMark. Those numbers seem more plausible.
  • Ryan Smith - Saturday, September 20, 2014 - link

    680 is a bit different because it's a GPU Boost 1.0 card. 2.0 included the hard TDP and did away with separate power targets. Actually what you'll see is that GTX 680 wants to draw 115% TDP with NVIDIA's current driver set under FurMark.
  • Carrier - Saturday, September 20, 2014 - link

    Thank you for the clarification.
  • wanderer27 - Friday, September 19, 2014 - link

    Power at the wall (AC) is going to be different from power at the GPU, which is coming from the DC side of the PSU.

    There are losses and efficiency differences in converting from AC to DC (the PSU), plus a little wiggle from the motherboard and so forth.
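
    As a rough back-of-the-envelope (the efficiency and rest-of-system numbers here are just illustrative):

    ```python
    # Rough estimate of GPU DC power from a wall (AC) reading.
    # PSU efficiency and rest-of-system draw are illustrative placeholders.
    def estimate_gpu_watts(wall_ac_watts, psu_efficiency=0.88, rest_of_system_watts=120):
        """Scale the wall reading by PSU efficiency to get total DC power,
        then subtract the CPU/motherboard/drive share."""
        dc_total = wall_ac_watts * psu_efficiency
        return dc_total - rest_of_system_watts

    # A 50W gap at the wall is only ~44W of DC power through an 88%-efficient
    # PSU, and the rest-of-system share also shifts when a faster card makes
    # the CPU work harder.
    print(estimate_gpu_watts(340))  # ~179W attributed to the GPU
    ```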
  • solarscreen - Friday, September 19, 2014 - link

    Here you go:

    http://books.google.com/books?id=v3-1hVwHnHwC&...
  • PhilJ - Saturday, September 20, 2014 - link

    As stated in the article, the power figures are total system power draw. The GTX 980 is throwing out nearly double the FPS of the GTX 680, so this is causing the rest of the system (mostly the CPU) to work harder to feed the card. This in turn drives total system power consumption up, despite the fact that the GTX 980 itself is drawing less power than the GTX 680.
