Total War: Warhammer II (DX11)

Last in our 2018 game suite is Total War: Warhammer II, built on the same engine as Total War: Warhammer. While there is a more recent Total War title, Total War Saga: Thrones of Britannia, that game was built on the 32-bit version of the engine. The first TW: Warhammer, though a DX11 game, was to some extent developed with DX12 in mind, with preview builds showcasing DX12 performance. In Warhammer II, however, the matter appears to have been dropped: DX12 mode is still marked as beta, and it brings performance regressions for both vendors.

It's unfortunate because Creative Assembly themselves have acknowledged the CPU-bound nature of their games, and with game engines being re-used for spin-offs, DX12 optimization would have continued to pay dividends, especially if the future of graphics in RTS-type games leans towards low-level APIs.

There are now three benchmarks with varying graphics and processor loads; we've opted for the Battle benchmark, which appears to be the most graphics-bound.

Total War: Warhammer II - 3840x2160 - Ultra Quality

Total War: Warhammer II - 2560x1440 - Ultra Quality

Total War: Warhammer II - 1920x1080 - Ultra Quality

Wrapping things up for games is Total War: Warhammer II, a title that is less than ideal for the matchup between the Radeon VII and its competitors. Much like in GTA V, the card makes substantial improvements over its predecessor; at 4K, this amounts to an astounding 47% increase over the RX Vega 64. Nevertheless, given the RX Vega 64's starting point, the Radeon VII is still somewhere around 15% behind the reference RTX 2080, meaning that it performs a shade faster than the reference RTX 2070 at 4K/1440p. That's not where the Radeon VII ideally wants to be, despite it being a respectable way to iterate on the RX Vega 64.


Comments

  • Kevin G - Thursday, February 7, 2019 - link

    Not a bad showing by AMD but this card isn't the victory that they needed either. The gaming side is OK and lines up with the GTX 1080 Ti and RTX 2080 fairly well. On the compute side it is actually very good with the extra memory capacity and more bandwidth. I have a feeling that this card should have shipped with 128 ROPs which would have given it an edge at higher resolutions.

    I'm also curious as to how this card would fare at even higher resolutions like 5K and 8K. The memory bandwidth is there to humor that idea, and it might be feasible to get playable frame rates in specific modern games. It'd also be interesting to see how it'd fare with some older, less demanding titles at these resolutions too.
  • Holliday75 - Friday, February 8, 2019 - link

    This card feels like it's meant to fill the gap and not allow Nvidia to be the only player in the game for an extended period of time. This buys them time for their next architecture release.
  • brokerdavelhr - Thursday, February 7, 2019 - link

    Can you please retest running the Radeon VII (an AMD part) on a Ryzen 2 with an X470 board and 16GB of RAM? You always run AMD parts on a non-AMD processor. Please retest and post results!
  • mkaibear - Thursday, February 7, 2019 - link

    The point of comparative benchmarking is to change just one thing so you can see the impact of the thing you're changing.
  • brokerdavelhr - Thursday, February 7, 2019 - link

    So why do they only test on Intel machines? Why not run the same tests on a Ryzen/Nvidia and Ryzen/Radeon combo? My point is that it simply never happens. Put aside the fact that Radeon always fares better in an AMD machine; it just seems odd is all. For the longest time, nearly every Intel machine ran Nvidia graphics. You are more likely to find a Radeon in an AMD machine than in an Intel one.
    See my point?
  • DominionSeraph - Thursday, February 7, 2019 - link

    Even AMD benches their video cards on Intel processors. Intel is just faster.
  • brokerdavelhr - Thursday, February 7, 2019 - link

    What link is that, DS? And if you ask me to Google it, I will not take anything you say seriously. Or are you deliberately trolling? I know they do a side-by-side of Intel processors with their own to show the difference, but that's all. What is the link to the tests you are referring to? Either way, it is unbiased since they bench with both. Not so here, which was my point.
  • Klimax - Friday, February 8, 2019 - link

    So can you post AMD's PR results that use AMD CPUs?
  • krazyfrog - Sunday, February 10, 2019 - link

    From AMD's Radeon VII page:

    "Testing done by AMD performance labs 1/21/19 on an Intel Core i7 7700k, 16GB DDR4 3000MHz, Radeon VII, Radeon RX Vega 64, AMD Driver 18.50 and Windows 10. Using Resident Evil 2 @ 3840x2160, Max settings, DirectX® 11:Radeon VII averaged 53 fps. Radeon RX Vega 64 averaged 41 fps. PC manufacturers may vary configurations yielding different results. All scores are an average of 3 runs with the same settings. Performance may vary based on use of latest drivers. RX-291"
  • mkaibear - Thursday, February 7, 2019 - link

    Because they are for the most part running gaming tests, and if you want to remove CPU bottlenecks you pick the fastest CPU you have for games.

    Which is Intel.

    If you pick anything else then you are artificially constraining performance, which tends to show a regression to the mean - in other words, it'll make the difference between AMD and nVidia smaller (whichever one wins).

    Equally the fact that AMD works best with AMD means they absolutely should *not* put an AMD processor in the system - that way they are artificially boosting system performance and skewing their benchmarks.

    You really need to do some reading on how to do A/B testing. Wikipedia has a good article.
