Ashes of the Singularity: Escalation (DX12)

A veteran of both our 2016 and 2017 game lists, Ashes of the Singularity: Escalation remains the DirectX 12 trailblazer, with developer Oxide Games having designed its Nitrous Engine around low-level APIs from the start. The game makes the most of DX12's key features, from asynchronous compute to multi-threaded work submission and high batch counts. And with full Vulkan support, Ashes provides good common ground between today's forward-looking APIs. Its built-in benchmark tool is still one of the most versatile ways of measuring in-game workloads in terms of output data, automation, and analysis; by offering such a tool publicly, part-and-parcel with the game, Oxide sets an example that other developers should take note of.
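As a rough illustration of the automation side, the sketch below aggregates a per-frame log into headline numbers. To be clear, the file name and the frame_time_ms column here are hypothetical stand-ins; the benchmark's actual output format differs in its details.

```python
import csv
import statistics

def summarize_frame_log(path):
    """Aggregate a per-frame benchmark log into headline numbers.

    Assumes a hypothetical CSV with a 'frame_time_ms' column; the
    real Ashes output format is not reproduced here.
    """
    with open(path, newline="") as f:
        frame_times = [float(row["frame_time_ms"]) for row in csv.DictReader(f)]

    avg_ms = statistics.fmean(frame_times)
    return {
        "frames": len(frame_times),
        "avg_fps": 1000.0 / avg_ms,     # average framerate over the run
        "worst_ms": max(frame_times),   # single slowest frame
    }

if __name__ == "__main__":
    print(summarize_frame_log("ashes_benchmark_log.csv"))
```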

Settings and methodology remain identical to the game's usage in our 2016 GPU suite. To note, we are utilizing the original Ashes Extreme graphical preset, which is equivalent to the current Extreme preset with MSAA dialed down from 4x to 2x, as well as an adjusted Texture Rank (MipsToRemove in settings.ini).
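For those replicating the tweak, a minimal sketch of scripting the MipsToRemove change is below, assuming the file parses as standard INI; the [Video] section name is our assumption for illustration, so check the actual file for where the key lives.

```python
from configparser import ConfigParser

def set_texture_rank(ini_path, mips_to_remove):
    """Set MipsToRemove in settings.ini.

    The '[Video]' section name is an assumption for illustration;
    inspect the actual file for the section holding MipsToRemove.
    """
    cfg = ConfigParser()
    cfg.optionxform = str  # preserve key capitalization such as MipsToRemove
    cfg.read(ini_path)
    if not cfg.has_section("Video"):
        cfg.add_section("Video")
    cfg.set("Video", "MipsToRemove", str(mips_to_remove))
    with open(ini_path, "w") as f:
        cfg.write(f)

set_texture_rank("settings.ini", 1)
```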

We've updated some of the benchmark automation and data processing steps, so 1080p results may vary slightly compared to previously published data.

Ashes of the Singularity: Escalation - 3840x2160 - Extreme Quality

Ashes of the Singularity: Escalation - 2560x1440 - Extreme Quality

Ashes of the Singularity: Escalation - 1920x1080 - Extreme Quality

For the Radeon VII, the intended goal was to equal or trade blows with the RTX 2080. The situation in Ashes: Escalation is still in line with that intention at 4K, where, despite trailing the GTX 1080 Ti FE/RTX 2080 duo, the card is comfortably ahead of the RTX 2070 and RX Vega 64. The lead begins to dwindle at lower resolutions, but the Radeon VII can still claim a 20% speedup over the RX Vega 64 at 1440p.

Ashes: Escalation - 99th Percentile - 3840x2160 - Extreme Quality

Ashes: Escalation - 99th Percentile - 2560x1440 - Extreme Quality

Ashes: Escalation - 99th Percentile - 1920x1080 - Extreme Quality
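For readers working from their own frame time logs, the 99th percentile figures express the frame time at the slowest-1% boundary as a framerate. A minimal sketch follows; it uses a nearest-rank percentile, which may not match the exact interpolation of our own processing scripts.

```python
def p99_fps(frame_times_ms):
    """Convert per-frame times into a 99th-percentile framerate.

    Takes the frame time at the 99th percentile (the slowest-1%
    boundary) and expresses it as FPS. Uses nearest-rank selection;
    our actual processing pipeline may interpolate differently.
    """
    ordered = sorted(frame_times_ms)
    rank = max(0, min(len(ordered) - 1, round(0.99 * len(ordered)) - 1))
    return 1000.0 / ordered[rank]

# e.g. a run averaging 20 ms with 2% of frames spiking to 35 ms
times = [20.0] * 98 + [35.0] * 2
print(f"{p99_fps(times):.1f} fps")  # the spikes set the 99th percentile: 28.6 fps
```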

Comments

  • mapesdhs - Friday, February 8, 2019 - link

    It's going to be hilariously funny if the Ryzen 3000 series reverses this accepted norm. :)
  • mkaibear - Saturday, February 9, 2019 - link

    I'd not be surprised - given AnandTech's love for AMD (take a look at the "best gaming CPUs" article released today...)

    Not really "hilariously funny", though. More "logical and methodical".
  • thesavvymage - Thursday, February 7, 2019 - link

    It's not like it'll perform any better, though... Intel still has generally better gaming performance. There's no reason to artificially hamstring the card, as it introduces a CPU bottleneck.
  • brokerdavelhr - Thursday, February 7, 2019 - link

    Once again: in gaming, for the most part. Try again with other apps and there are marked differences, many of which are in AMD's favor.
  • jordanclock - Thursday, February 7, 2019 - link

    In every scenario worth testing a VIDEO CARD in, Intel CPUs offer the best performance.
  • ballsystemlord - Thursday, February 7, 2019 - link

    Their choice of processor is kind of strange. An 8-core Intel on *plain* 14nm, now 2(!) years old, with rather low clocks at 4.3GHz, is not ideal for a gaming setup. I would have used a 9900K or 2700X personally[1].
    For a content creator I'd be using a Threadripper or similar.
    Re-testing would be an undertaking for AT though. Probably too much to ask. Maybe next time they'll choose a saner processor.
    [1] The 9900K is 4.7GHz all-core. The 2700X runs at 4.0GHz turbo, so you'd lose frequency, but then you could use faster RAM.
    For citations see:
    https://www.intel.com/content/www/us/en/products/p...
    https://images.anandtech.com/doci/12625/2nd%20Gen%...
    https://images.anandtech.com/doci/13400/9thGenTurb...
  • ToTTenTranz - Thursday, February 7, 2019 - link

    Page 3 table:
    - The MI50 uses a Vega 20, not a Vega 10.
  • Ryan Smith - Thursday, February 7, 2019 - link

    Thanks!
  • FreckledTrout - Thursday, February 7, 2019 - link

    I wonder why this card absolutely dominates in the "LuxMark 3.1 - LuxBall and Hotel" HDR test? It's pulling in numbers 1.7x higher than the RTX 2080 on that test. That's a funky outlier.
  • Targon - Thursday, February 7, 2019 - link

    How much video memory is used? That is the key. Since many games and benchmarks are set up to test with a fairly low amount of video memory being needed (so those 3GB 1050 cards can run the test), what happens when you try to load 10-15GB into video memory for rendering? Cards with 8GB and under (the majority) will suddenly look a lot slower in comparison.
