Ashes of the Singularity: Escalation

Seen as the poster child of DirectX 12, Ashes of the Singularity (AotS, or just Ashes) was the first title to actively explore as many of DirectX 12's features as it could. Oxide Games, the developer behind the Nitrous engine that powers the game (published by Stardock), has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, both in wide-open shots and in concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second lets the engine handle substantial unit depth and effects; other RTS titles have had to rely on combined draw calls to achieve the same, which makes some combined unit structures ultimately very rigid.

Stardock clearly understands the importance of an in-game benchmark, and such a tool has been available and capable from day one. This mattered all the more given the additional DX12 features in use: being able to characterize how those features affected the title was important to the developer. The in-game benchmark performs a four-minute, fixed-seed battle with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run a fixed v2.11 version of the game due to some peculiarities of the splash screen added after the merger with the standalone Escalation expansion, and we use an automated tool to call the benchmark on the command line. (Prior to v2.11, the benchmark also supported 8K/16K testing; however, v2.11 has odd behavior which breaks this.)
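An automation wrapper of this kind boils down to assembling a command line, launching the game, and waiting for the fixed-seed run to finish. The sketch below shows the shape of such a tool; note that the executable path and every flag name (`-benchmark`, `-preset`, `-output`) are placeholders we made up for illustration, not the game's documented switches.

```python
import subprocess

# Hypothetical install path -- substitute your own.
GAME_EXE = r"C:\Games\AshesEscalation\AshesEscalation_DX12.exe"

def build_benchmark_cmd(preset="Extreme", output_dir=r"C:\bench\out"):
    """Assemble the command line the automation tool passes to the game.
    The flag names here are assumptions, not verified AotS switches."""
    return [GAME_EXE, "-benchmark", f"-preset={preset}", f"-output={output_dir}"]

def run_benchmark(cmd):
    # Launch the fixed-seed battle and block until the ~4 minute run ends.
    # check=True raises if the game exits with a non-zero status.
    return subprocess.run(cmd, check=True)
```

In practice the wrapper would then pick up the frame-time log the benchmark writes and feed it into the analysis stage.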

At both 1920x1080 and 4K resolutions, we run the same settings. Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, Textures, and separate options for the terrain. There are several presets, from Very Low to Extreme: we run our benchmarks at the Extreme preset, and take the frame-time output for our average, percentile, and time-under analysis.
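All three summary metrics fall straight out of the per-frame times. A minimal sketch is below; the function name and the 16.67 ms threshold (i.e. 60 FPS) are our own choices for illustration, not values taken from the benchmark tool.

```python
def analyze_frametimes(frametimes_ms, threshold_ms=16.67):
    """Summarize a list of per-frame render times (milliseconds) into
    average FPS, 99th-percentile FPS, and 'time under': the fraction of
    total run time spent on frames slower than the threshold."""
    total_ms = sum(frametimes_ms)
    avg_fps = 1000.0 * len(frametimes_ms) / total_ms

    # 99th-percentile frame *time* (one of the slowest 1% of frames)
    # inverts to the frame rate sustained 99% of the time.
    ordered = sorted(frametimes_ms)
    p99_ms = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    p99_fps = 1000.0 / p99_ms

    # Time spent below the target frame rate, as a fraction of the run.
    time_under = sum(t for t in frametimes_ms if t > threshold_ms) / total_ms
    return avg_fps, p99_fps, time_under
```

The distinction matters: two cards with identical averages can differ sharply in the percentile and time-under numbers, which is where stutter shows up.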

For all our results, we show the average frame rate at 1080p first. Mouse over the other graphs underneath to see 99th percentile frame rates and 'Time Under' graphs, as well as results for other resolutions. All of our benchmark results can also be found in our benchmark engine, Bench.

[Graphs: MSI GTX 1080 Gaming 8G performance, 1080p and 4K]

[Graphs: ASUS GTX 1060 Strix 6GB performance, 1080p and 4K]

[Graphs: Sapphire R9 Fury 4GB performance, 1080p and 4K]

[Graphs: Sapphire RX 480 8GB performance, 1080p and 4K]

Ashes Conclusion

Pretty much across the board, no matter the GPU or the resolution, Intel gets the win here. This is most noticeable in the time-under analysis, although AMD seems to do better when the faster cards are running at the lower resolution. That's nothing to brag about, though.

176 Comments

  • Firebat5 - Tuesday, July 25, 2017 - link

    Ian,

    i'm interested in the details of the agility benchmark? how many photos are in your dataset and at what resolution? am doing similar work and i notice the working time doesn't seem to be linear with the number of photos.
  • Firebat5 - Tuesday, July 25, 2017 - link

    agisoft* autocorrect strikes again.
  • damianrobertjones - Thursday, July 27, 2017 - link

    Capitals can be a good thing.
  • Gothmoth - Tuesday, July 25, 2017 - link

    reading this article again i must say im realyl ashamed. anandtech was once a great place but now it´s just like car magazines. who pays best gets the best reviews. where is the criticism? everyone and his grandmother things intel has big issues (tim, heat, pci lanes nonsense product) are you bend over so intel can inject more money more easily?
  • damianrobertjones - Thursday, July 27, 2017 - link

    Is your shift key broke? Where's are your capitals?
  • zodiacfml - Wednesday, July 26, 2017 - link

    Impressive benchmarks. I could not ask for more. This revealed that Intel clearly doesn't have the premium or value position anymore. It is simply not there. They have to be in the 10nm process now to be superior in value and/or performance.
  • Walkeer - Wednesday, July 26, 2017 - link

    Hi, what RAM frequency is the AMD platform running on? if its the official maximum of 2666MHz, you can get +10-15% more performance using 3200MHz or faster memory
  • edsib1 - Wednesday, July 26, 2017 - link

    Please redo the Ryzen benchmarks using DDR3200 now it is officially supported, and also use the latest updates of the games - eg ROTR v770.1+ where Ryzen gets a 25% increase.

    You can't compare one platform with the latest updates, and the other without - thats pointless
