Ashes of the Singularity: Escalation

Seen as the holy child of DirectX 12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many of DirectX 12's features as it possibly could. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second allows the engine to work with substantial unit depth and effects that other RTS titles could only achieve by batching draw calls together, which ultimately made some combined unit structures very rigid.

Stardock clearly understands the importance of an in-game benchmark, ensuring that such a tool was available and capable from day one. With all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark runs a four-minute fixed-seed battle with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run a fixed v2.11 version of the game due to some peculiarities of the splash screen added after the merger with the standalone Escalation expansion, and have an automated tool to call the benchmark on the command line. (Prior to v2.11, the benchmark also supported 8K/16K testing, however v2.11 has odd behavior which nukes this.)
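The automation described above amounts to invoking the game's benchmark mode repeatedly and collecting one frame-time log per run. As a purely illustrative sketch: the executable name and every flag below are hypothetical placeholders, not Ashes' documented command-line interface, and only the shape of the harness is the point.

```python
import subprocess
from pathlib import Path

def benchmark_command(game_dir, log_path):
    # NOTE: executable name and all flags here are hypothetical
    # placeholders, not the game's real CLI.
    return [
        str(Path(game_dir) / "AshesEscalation.exe"),  # placeholder name
        "-benchmark",                                 # placeholder flag
        "-quality", "extreme",                        # placeholder flag
        "-output", str(log_path),                     # placeholder flag
    ]

def run_all(game_dir, out_dir, runs=3):
    # One fixed-settings invocation per run, each writing its
    # frame-time log to a separate file for later analysis.
    logs = []
    for i in range(runs):
        log = Path(out_dir) / f"run_{i}.csv"
        subprocess.run(benchmark_command(game_dir, log), check=True)
        logs.append(log)
    return logs
```

Keeping the command construction separate from the subprocess call makes the harness easy to adapt when a game version changes its flags, as happened here between benchmark revisions.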

At both 1920x1080 and 4K resolutions, we run the same settings. Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, and Textures, plus separate options for the terrain. There are several presets, from Very Low to Extreme: we run our benchmarks at the Extreme settings, and take the frame-time output for our average, percentile, and time-under analysis.
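The three metrics mentioned above all fall out of the same per-frame timing data. A minimal sketch, assuming the log has already been parsed into a list of frame times in milliseconds (the function name and interface are our own, not the game's):

```python
def summarize_frametimes(frametimes_ms):
    """Reduce a list of per-frame render times (in ms) to average FPS,
    99th-percentile FPS, and a 'time under' helper: the fraction of
    total runtime spent below a given FPS threshold."""
    total_ms = sum(frametimes_ms)
    avg_fps = 1000.0 * len(frametimes_ms) / total_ms

    # 99th percentile: the frame time that 99% of frames beat,
    # converted back to an instantaneous frame rate.
    ordered = sorted(frametimes_ms)
    idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
    p99_fps = 1000.0 / ordered[idx]

    def time_under(threshold_fps):
        # Share of total runtime spent on frames slower than the
        # threshold, i.e. frames taking more than 1000/threshold ms.
        cutoff_ms = 1000.0 / threshold_fps
        slow_ms = sum(t for t in frametimes_ms if t > cutoff_ms)
        return slow_ms / total_ms

    return avg_fps, p99_fps, time_under
```

The 'time under' view weights slow frames by how long they actually lasted, which is why a handful of long stalls shows up more clearly there than in a plain average.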

For all our results, we show the average frame rate at 1080p first. Mouse over the other graphs underneath to see 99th percentile frame rates and 'Time Under' graphs, as well as results for other resolutions. All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance (1080p, 4K)

ASUS GTX 1060 Strix 6GB Performance (1080p, 4K)

Sapphire R9 Fury 4GB Performance (1080p, 4K)

Sapphire RX 480 8GB Performance (1080p, 4K)

Ashes Conclusion

Pretty much across the board, no matter the GPU or the resolution, Intel gets the win here. This is most noticeable in the time-under analysis, although AMD seems to do better when the faster cards are running at the lower resolution. That's nothing to brag about, though.

Comments (176)

  • Gulagula - Wednesday, July 26, 2017 - link

Can anyone explain to me how the 7600K, and in some cases the 7600, is beating the 7700K almost consistently? I don't doubt the Ryzen results, but the Intel side of the results confuses the heck out of me.
  • Ian Cutress - Wednesday, July 26, 2017 - link

Sustained turbo, temperatures, quality of chips from binning (a good 7600 chip will turbo much longer than a 7600K will), time of day (air temperature is sometimes a pain - air conditioning doesn't really exist in the UK, especially in an old flat in London), speed shift response, uncore response, data locality (how often does the system stall, how long does it take to get the data), how clever the prefetchers are, how a motherboard BIOS ramps up and down the turbos or how accurate its thermal sensors are (I try and keep the boards constant for a full generation because of this). If it's only a small margin between the data, there's not much to discuss.
  • Funyim - Thursday, August 10, 2017 - link

Are you absolutely sure your 7700K isn't broken? It sure looks like it is. I understand your point about margins, but numbers are numbers and yours look wrong. No other benchmarks I've seen to date align with your findings. And please, for the love of god, amend this article if it is.
  • Hurr Durr - Monday, July 24, 2017 - link

One wonders why you would relegate yourself to the subpar performance of AMD processors.
  • Alistair - Tuesday, July 25, 2017 - link

    Your constant refrain belonged in the bulldozer era (when the single threaded performance difference was on the order of 80-100 percent). Apparently you can't move past the Ryzen launch. If a different company such as Samsung had launched these CPUs the reception would have been very different. I've never bought AMD before but my Ryzen 1700 is incredible for its price, and I had to be disillusioned by my terrible Skylake upgrade first before I was willing to purchase from AMD.
  • Gothmoth - Tuesday, July 25, 2017 - link

don't argue with trolls....
  • StevoLincolnite - Tuesday, July 25, 2017 - link

    Why would Intel enable HT when they could sell it as DLC?
    https://www.engadget.com/2010/09/18/intel-wants-to...
  • coolhardware - Tuesday, July 25, 2017 - link

    Glad to hear that the benchmarking is (becoming) less of a chore :-) Kudos and thank you for the great article!
  • fallaha56 - Tuesday, July 25, 2017 - link

    Surely that AVX drop -10 when overclocking was too much?

    What about delidding?
  • Samus - Monday, July 24, 2017 - link

It still stands that the best value in this group is the Ryzen 1600X, mostly because its platform cost is a third that of Intel's HEDT. So unless you need those platform advantages (PCIe lanes, which even X299 doesn't completely offer on these KBL-X CPUs), it really won't justify spending $300 more on a system, even if single-threaded performance is 15-20% better.

    Just the fact an AMD system of less than half the cost can ice a high end Intel system in WinRAR speaks a lot to AMD's credibility here.
