Ashes of the Singularity: Escalation (DX12)

A veteran from both our 2016 and 2017 game lists, Ashes of the Singularity: Escalation remains the DirectX 12 trailblazer, with developer Oxide Games tailoring and designing the Nitrous Engine around such low-level APIs. The game makes the most of DX12's key features, from asynchronous compute to multi-threaded work submission and high batch counts. And with full Vulkan support, Ashes provides a good common ground between the forward-looking APIs of today. Its built-in benchmark tool is still one of the most versatile ways of measuring in-game workloads in terms of output data, automation, and analysis; by offering such a tool publicly and as part-and-parcel of the game, it's an example that other developers should take note of.
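
To put the "multi-threaded work submission" point in concrete terms, the sketch below shows the generic Direct3D 12 pattern such an engine leans on (a hedged illustration only, not Oxide's actual Nitrous Engine code; the function name and thread count are placeholders): each worker thread records its slice of the frame into its own command list, and the finished lists are then handed to the GPU queue in one batched call.

    // Hedged sketch of DX12 multi-threaded command list recording and batched
    // submission. Names here (RecordAndSubmit, threadCount) are illustrative.
    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>

    using Microsoft::WRL::ComPtr;

    void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, UINT threadCount)
    {
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
        std::vector<std::thread>                       workers;

        for (UINT i = 0; i < threadCount; ++i)
        {
            // Each thread gets its own allocator and command list, so recording
            // needs no cross-thread synchronization.
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));

            workers.emplace_back([&, i]() {
                // ... record this thread's share of barriers, pipeline state,
                // and the many draw calls (batches) Ashes is known for ...
                lists[i]->Close();
            });
        }
        for (auto& t : workers) t.join();

        // Hand the whole batch of recorded lists to the GPU queue at once.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }

Asynchronous compute, the other headline feature, follows a related idea on the GPU side: compute work is submitted to a separate D3D12_COMMAND_LIST_TYPE_COMPUTE queue so that it can overlap with work on the graphics queue.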

Settings and methodology remain identical to its usage in the 2016 GPU suite. To note, we are utilizing the original Ashes Extreme graphical preset, which corresponds to the current Extreme preset with MSAA dialed down from 4x to 2x and the Texture Rank (MipsToRemove in settings.ini) adjusted.

[Charts: Ashes of the Singularity: Escalation - Extreme Quality at 3840x2160, 2560x1440, and 1920x1080]

Somewhat surprisingly, the RTX 2060 (6GB) performs poorly in Ashes, landing closer to the GTX 1070 than the GTX 1070 Ti. Although it is still ahead of the RX Vega 56, it's not an ideal showing, with the card's lead over the GTX 1060 6GB cut to around 40%.

[Charts: Ashes: Escalation - 99th Percentile - Extreme Quality at 3840x2160, 2560x1440, and 1920x1080]

 

Comments

  • Hameedo - Monday, January 7, 2019 - link

    Final Fantasy 15 already supports DLSS via a recently released patch; please correct that info.
  • benedict - Monday, January 7, 2019 - link

    Right now the Radeon RX 580 (8GB) has the best price/performance. The 590, 1060, and 2060 are much more expensive but not that much faster.
  • Hameedo - Monday, January 7, 2019 - link

    The 2060 is on par with the 1070 Ti, which means it's much faster than the 580 and 1060 (by about 60%).
  • sing_electric - Monday, January 7, 2019 - link

    FWIW, the RX 580 can regularly be had for ~$200 these days, so TECHNICALLY benedict is right - the 580 goes for ~57% of the 2060's MSRP ($349) but offers more like 60-65% of the performance (and that's assuming you can find the 2060 for MSRP, which we'll have to see).

    IMO, they're just in different markets. The 590 is looking like a misstep by AMD at this point, since it's midway between the 580 and the 2060 in price, but isn't much faster than the 580 in real world performance. The Vega series is interesting, but AMD and partners probably have limited ability to lower prices due to how expensive the HBM2 memory on those boards is.
  • Bluescreendeath - Monday, January 7, 2019 - link

    No, Benedict is not right. At best, he is only partially right on one out of at least three points, and only when viewed in the most favorable light. He said 1) the 2060 is "not much faster," 2) the 1060 is much more expensive, and 3) the 580 has the best price/performance ratio. The RTX 2060 is clearly significantly faster than the RX 580 (by more than 50%), so his first point is flat out wrong. His second point is also wrong because the GTX 1060 costs about the same as the RX 580 in terms of market price and performs about the same. His third point is debatable and may be right. The 580 does have a marginally better price/performance ratio if you compare the market price of the RX 580 ($200) to the MSRP of the RTX 2060 ($350). However, if you compare the MSRP RX 580 ($240+) to the MSRP FE RTX 2060 ($350), then the equation changes: the RX 580 no longer has the best price/performance either, as the 2060 is 46% pricier but performs 50-60% better. And other models of the 2060 will surely be cheaper than the FE version.
  • Ananke - Monday, January 7, 2019 - link

    He is completely right - the RTX 2060 has 48 ROPs and 6GB of VRAM, which is pointless at $349, regardless of how "fast" the card is. Ray tracing is also pointless on such a scarcely resourced card.
    At this point a person should either buy a 2080 Ti, if money is no object, or buy a Radeon 580 8GB for under $200. Most modern games simply manage huge textures in VRAM - you need more memory, fast memory buses, and ROPs for that.
    I am no particular fan, but AMD will kill NVidia any moment with the announcement of 7nm chips - at a smaller process, brute force will have the advantage no matter how smart NVidia's prefetch, bus compression, and all the other "smart" stuff is. The same happened to Intel. And TSMC is already occupied working for AMD, Samsung is busy, and any excess is reserved by Intel, so NVidia will have no access to a 7nm fab for a while.
  • Bluescreendeath - Monday, January 7, 2019 - link

    Real world results matter more than on-paper specs. The actual real world benchmarks show the RTX 2060 is 50-60% faster than the RX 580. You seem to be overly hung up on VRAM and ROPs. AnandTech and TechSpot have done plenty of tests on VRAM, and modern games don't use nearly as much VRAM as you think - even at 1440p. And take a look at the 4K benchmarks in the article. The 6GB of VRAM is clearly sufficient. Furthermore, the RX 580 8GB costs about $200-250 new, whereas the 2060 is $350 for the more expensive Founders Edition. Realistically, the aftermarket cards will be something like $200-250 for new RX 580s vs. $300 for a new non-FE 2060.

    As for your mention of future AMD chips, we are comparing the 2060 vs. the 1060 vs. the RX 580.
  • heavyarms1912 - Monday, January 7, 2019 - link

    Reality check: 8GB RX 580s can be had below $200, even the top-end aftermarket ones, and they come bundled with 3 game titles.
  • wintermute000 - Monday, January 7, 2019 - link

    And it's what, 50% slower, so what's your point?
  • Ananke - Tuesday, January 8, 2019 - link

    There are several RX 580 8GB cards on sale today for around $160-170. It might be old, it might be hot, but it runs any game at 1080p just fine, and it costs half as much.
    Next generation consoles are already well into development, and developers are developing games for consoles, not for PCs, which will never change since the scale of that business is tenfold if not larger. Next might be game services directly integrated into the TV or whatever device reaches a mass market, but it will never be the PC.
    Anyway, new consoles will have A LOT more VRAM, and 6GB will simply not cut it. And they are coming later this year/early next year, not ten years from now. I have lived through enough NVidia "cycles" and can tell you this is typical NVidia greed - a narrow memory bus and limited memory - exactly like a decade ago, when AMD had nothing but came out with the rough, hot, unoptimized 5850 and collected half of the market.
    The card is OK; it is just not worth $350 today. That's my point.
