Total War: Warhammer II (DX11)

Last in our 2018 game suite is Total War: Warhammer II, built on the same engine as Total War: Warhammer. While there is a more recent Total War title, Total War Saga: Thrones of Britannia, that game was built on the 32-bit version of the engine. The first TW: Warhammer was a DX11 game that was, to some extent, developed with DX12 in mind, with preview builds showcasing DX12 performance. In Warhammer II, however, the matter appears to have been dropped: DX12 mode is still marked as beta, and it brings a performance regression for both vendors.

It's unfortunate because Creative Assembly themselves have acknowledged the CPU-bound nature of their games, and with game engines being re-used for spin-offs, DX12 optimization would have continued to provide benefits, especially if the future of graphics in RTS-type games leans towards low-level APIs.

There are now three benchmarks with varying graphics and processor loads; we've opted for the Battle benchmark, which appears to be the most graphics-bound.

Total War: Warhammer II - 3840x2160 - Ultra Quality

Total War: Warhammer II - 2560x1440 - Ultra Quality

Total War: Warhammer II - 1920x1080 - Ultra Quality

Wrapping things up for games is Total War: Warhammer II, and this title is one that is less than ideal for the matchup between the Radeon VII and its competitors. Much like GTA V, the card is able to make substantial improvements over its predecessor; at 4K, this is an astounding 47% increase over the RX Vega 64. Nevertheless, given the starting point of the RX Vega 64, the Radeon VII is still somewhere around 15% behind the reference RTX 2080, meaning that it performs a shade faster than the reference RTX 2070 at 4K and 1440p. That's not where the Radeon VII ideally wants to be, even if it is a respectable way to iterate on the RX Vega 64.

Comments

  • Alistair - Thursday, February 7, 2019 - link

    Because everyone is already playing Anthem at 4k 60fps with a $400 card? Ray tracing is totally useless and we need way more rasterization performance per dollar than we have right now. Give me a 7nm 2080 ti without the RT cores for $699 and then we'll talk.
  • eva02langley - Friday, February 8, 2019 - link

    Fair, the main objective of a gaming GPU is shaders per $. Gameworks gimmicks are not something I'd call a selling factor... and Nvidia is forced to cook their books because of it.
  • RSAUser - Thursday, February 7, 2019 - link

    Why are you adding the Final Fantasy benchmark when it has known bias issues?
  • Zizy - Thursday, February 7, 2019 - link

    Eh, 2080 is slightly better for games and costs the same, while unfortunately MATLAB supports just CUDA so I can't even play with compute.
  • Hul8 - Thursday, February 7, 2019 - link

    On page 19, the "Load GPU Temperature - FurMark" graph is duplicated.
  • Ryan Smith - Thursday, February 7, 2019 - link

    Thanks. The FurMark power graph has been put back where it belongs.
  • schizoide - Thursday, February 7, 2019 - link

    Man, I've never seen such a hostile response to an Anandtech article. People need to relax, it's just a videocard.

    I don't see this as a win for AMD. Using HBM2 the card is expensive to produce, so they don't have a lot of freedom to discount it. Without a hefty discount, it's louder, hotter, and slower than a 2080 at the same price. And of course no ray-tracing, which may or may not matter, but I'd rather have it just in case.

    For OpenCL work it's a very attractive option, but again, that's a loser for AMD because they ALREADY sold this card as a workstation product for a lot more money. Now it's discounted to compete with the 2080, meaning less revenue for AMD.

    Even once the drivers are fixed, I don't see this going anywhere. It's another Vega64.
  • sing_electric - Thursday, February 7, 2019 - link

    There's still a lot of people for whom a Radeon Instinct was just never going to happen, INCLUDING people who might have a workstation where they write code that will mostly run on servers, and it means you can run/test your code on your workstation with a fairly predictable mapping to final server performance.

    As Nate said in the review, it's also very attractive to academics, which benefits AMD in the long run if say, a bunch of professors and grad students learn to write ML/CL on Radeon before say, starting or joining companies.
  • schizoide - Thursday, February 7, 2019 - link

    Yes, it's attractive to anyone who values OpenCL performance. They're getting workstation-class hardware on the cheap. But that does devalue AMD's workstation product line.
  • Manch - Thursday, February 7, 2019 - link

    Not really. The instinct cards are still more performant. They tend to be bought by businesses where time/perf is more important than price/perf.
