Ashes of the Singularity: Escalation (DX12)

A veteran from both our 2016 and 2017 game lists, Ashes of the Singularity: Escalation remains the DirectX 12 trailblazer, with developer Oxide Games having designed the Nitrous Engine around such low-level APIs from the outset. The game makes the most of DX12's key features, from asynchronous compute to multi-threaded work submission and high batch counts. And with full Vulkan support, Ashes provides good common ground between today's forward-looking APIs. Its built-in benchmark tool remains one of the most versatile ways of measuring in-game workloads in terms of output data, automation, and analysis; by offering such a tool publicly and as part and parcel of the game, it sets an example that other developers should take note of.
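To illustrate the kind of analysis that exported per-frame data enables, here is a minimal sketch in Python, assuming the benchmark writes a CSV with one frame time in milliseconds per row; the file name and column name used below are hypothetical rather than the tool's actual format.

```python
import csv
import statistics

def summarize_frame_times(csv_path, column="frame_time_ms"):
    """Compute average FPS and 99th-percentile FPS from a per-frame export.

    The CSV layout and the column name are assumptions for illustration.
    """
    frame_times_ms = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            frame_times_ms.append(float(row[column]))

    avg_fps = 1000.0 / statistics.mean(frame_times_ms)

    # The "99th percentile" figure in GPU reviews is the frame time that 99%
    # of frames beat, expressed as an FPS value (a proxy for worst-case smoothness).
    p99_frame_time = sorted(frame_times_ms)[int(0.99 * (len(frame_times_ms) - 1))]
    p99_fps = 1000.0 / p99_frame_time

    return avg_fps, p99_fps

if __name__ == "__main__":
    avg, p99 = summarize_frame_times("ashes_benchmark_output.csv")  # hypothetical file name
    print(f"Average FPS: {avg:.1f}, 99th percentile FPS: {p99:.1f}")
```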

Settings and methodology remain identical to their usage in the 2016 GPU suite. To note, we are utilizing the original Ashes Extreme graphical preset, which is equivalent to the current Extreme preset with MSAA dialed down from 4x to 2x and with Texture Rank (MipsToRemove in settings.ini) adjusted.
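For anyone replicating that tweak by hand, a minimal sketch of scripting it is below; MipsToRemove is the key named above, but the section name, the MSAA key, and the values shown are assumptions about the file layout rather than confirmed settings.

```python
import configparser

# Hypothetical sketch of adjusting Ashes' settings.ini to mirror the original
# Extreme preset. Only MipsToRemove is a key named in the text; the section
# name, the MSAA key, and the values used here are assumptions.
SETTINGS_PATH = "settings.ini"  # the actual location under the game's profile folder is assumed

config = configparser.ConfigParser()
config.optionxform = str  # preserve key capitalization (e.g. MipsToRemove)
config.read(SETTINGS_PATH)

if "Graphics" not in config:              # assumed section name
    config["Graphics"] = {}
config["Graphics"]["MSAA"] = "2"          # assumed key: 2x MSAA instead of 4x
config["Graphics"]["MipsToRemove"] = "0"  # key named above; value is illustrative only

with open(SETTINGS_PATH, "w") as f:
    config.write(f)
```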

[Benchmark charts: Ashes of the Singularity at 1920x1080, 2560x1440, and 3840x2160, showing Average FPS and 99th Percentile frame rates]

For Ashes, the 20 series cards fare a little worse in their gains over the 10 series, with an advantage at 4K of around 14 to 22%. Here, the Founders Edition power and clock tweaks are essential in keeping the 2080 FE from outright losing to the 1080 Ti, though our results put the two Founders Editions essentially neck-and-neck.

337 Comments

  • dustwalker13 - Sunday, September 23, 2018 - link

    Way too expensive. If those cards were the same price as the 1080 / Ti, it would be a generational change.

    This is actually a regression: the price has increased out of all proportion, even if you were to completely ignore that newer generations are EXPECTED to be a lot faster than the older ones for essentially the same price (plus a bit of inflation at most).

    Paying 50% more for a 30% increase is a simple ripoff. No one should buy these cards ... sadly, a lot of people will let themselves get ripped off once more by NVIDIA.

    And no: ray tracing is not an argument here. The feature is not supported anywhere yet, and by the time it is adopted (if ever), years will have gone by and these cards will be old and obsolete. All of this is just marketing and hot air.
  • mapesdhs - Thursday, September 27, 2018 - link

    There are those who claim buying RTX to get the new features is sensible for future-proofing; I don't understand why they ignore that the performance of said features on these cards is so poor. NVIDIA spent years pushing gamers toward high-refresh monitors, 4K, and VR; now they're trying to flip everyone round the other way and pretend that sub-60Hz 1080p is okay.

    And by the way, it's a lot more than 50%. Where I am (UK), the 2080 Ti is almost 100% more than the 1080 Ti launch price. It's also crazy that the 2080 Ti does not have more RAM; it should have had at least 16GB. My guess is NVIDIA knows that by pushing gamers back down to lower resolutions, they don't need so much VRAM. People, though, have gotten used to the newer display tech, and those who've adopted high-refresh displays physically cannot go back.
  • milkod2001 - Monday, September 24, 2018 - link

    I wonder if NV hiked prices this much on purpose so they don't have to lower existing GTX prices as much. There is still a ton of GTX stock out there.
  • poohbear - Friday, September 28, 2018 - link

    Why the tame conclusion? Do us a favor, will you? Just come out and say that you'd have to be mental to pay these prices for features that aren't even available yet, and that you'd be better off buying previous-gen video cards that are currently heavily discounted.
  • zozaino - Thursday, October 4, 2018 - link

    I really want to use it.
  • Luke212 - Wednesday, October 24, 2018 - link

    Why does the 2080 Ti not use the tensor cores for GEMM? I have not seen any benchmark anywhere showing it working. It would be a good news story if NVIDIA secretly gimped the tensor cores.
