Ashes of the Singularity: Escalation

Seen as the holy child of DirectX12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many of DirectX12's features as it possibly could. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX12 at the helm, the higher draw call throughput allows the engine to render substantial unit depth and effects that other RTS titles could only achieve by batching draw calls together, which ultimately made some combined unit structures very rigid.

Stardock clearly understands the importance of an in-game benchmark, ensuring that such a tool was available and capable from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark performs a four-minute, fixed-seed battle environment with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run a fixed v2.11 version of the game due to some peculiarities of the splash screen added after the merger with the standalone Escalation expansion, and have an automated tool to call the benchmark on the command line. (Prior to v2.11, the benchmark also supported 8K/16K testing; however, v2.11 has odd behavior which nukes this.)
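
For readers automating similar runs, the sketch below shows the general shape of such a harness: launch the game's benchmark from the command line with fixed settings and collect the output file. Note that the executable path and every flag name here are illustrative assumptions, not the game's documented interface.

```python
import subprocess
from pathlib import Path

# Paths and flag names below are illustrative assumptions, not the game's
# documented interface -- substitute your install's actual benchmark options.
GAME_EXE = Path(r"C:\Games\AshesEscalation\AshesEscalation_DX12.exe")
OUTPUT_DIR = Path(r"C:\Benchmarks\Ashes")

def run_benchmark(preset: str = "Extreme", resolution: str = "1920x1080") -> Path:
    """Launch one fixed-seed benchmark pass and return the result file path."""
    width, height = resolution.split("x")
    out_file = OUTPUT_DIR / f"ashes_{preset}_{resolution}.csv"
    args = [
        str(GAME_EXE),
        "-benchmark",             # assumed flag: run the built-in benchmark
        "-preset", preset,        # assumed flag: graphics quality preset
        "-width", width,          # assumed flags: render resolution
        "-height", height,
        "-output", str(out_file), # assumed flag: where to write frame data
    ]
    subprocess.run(args, check=True, timeout=600)  # ~4-minute run plus loading
    return out_file

if __name__ == "__main__":
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for res in ("1920x1080", "3840x2160"):
        print("Result file:", run_benchmark(resolution=res))
```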

At both 1920x1080 and 4K resolutions, we run the same settings. Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, Textures, and separate options for the terrain. There are several presets, from Very Low to Extreme: we run our benchmarks at Extreme settings, and take the frame-time output for our average, percentile, and time-under analysis.
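
To make those three metrics concrete, here is a minimal sketch of how average frame rate, 99th percentile frame rate, and 'time under' a frame-rate threshold can be derived from a list of per-frame times; the sample numbers and the 60 FPS threshold are illustrative, not our actual pipeline.

```python
def analyze(frame_times_ms: list[float], fps_threshold: float = 60.0) -> dict:
    """Derive average FPS, 99th percentile FPS, and time-under from frame times."""
    total_ms = sum(frame_times_ms)
    avg_fps = 1000.0 * len(frame_times_ms) / total_ms

    # 99th percentile frame rate: 99% of frames render at least this fast,
    # so it is set by the frame time at the 99th percentile of slowness.
    slow_ms = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms))]
    p99_fps = 1000.0 / slow_ms

    # 'Time under': share of wall-clock time spent below the FPS threshold,
    # i.e. on frames that took longer than 1000/threshold milliseconds.
    limit_ms = 1000.0 / fps_threshold
    under_ms = sum(t for t in frame_times_ms if t > limit_ms)
    return {"avg_fps": avg_fps, "p99_fps": p99_fps,
            "time_under_pct": 100.0 * under_ms / total_ms}

# Example with made-up frame times (ms): mostly ~60 FPS, a few 30 FPS frames.
print(analyze([16.7] * 95 + [33.3] * 5))
```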

For all our results, we show the average frame rate at 1080p first. Mouse over the other graphs underneath to see 99th percentile frame rates and 'Time Under' graphs, as well as results for other resolutions. All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance [graphs: 1080p, 4K]

ASUS GTX 1060 Strix 6GB Performance [graphs: 1080p, 4K]

Sapphire R9 Fury 4GB Performance [graphs: 1080p, 4K]

Sapphire RX 480 8GB Performance [graphs: 1080p, 4K]

Ashes Conclusion

Pretty much across the board, no matter the GPU or the resolution, Intel gets the win here. This is most noticeable in the time-under analysis, although AMD seems to do better when the faster cards are running at the lower resolution. That's nothing to brag about, though.

Comments (176)

  • Chaser - Monday, July 24, 2017 - link

    Go 2600K. LMAO!
  • YukaKun - Monday, July 24, 2017 - link

    Hey, I'm still using my 4.6GHz 2700K, so these numbers bring joy to me!

    Cheers! :P
  • mapesdhs - Monday, July 24, 2017 - link

    4.6? Outrageous! I would be offended if I were a 2700K at a mere 4.6! Get that thing up to 5.0 asap. 8) Mbd-dependent I suppose, but I've built seven 2700K systems so far, 5.0 every time, low noise and good temps. Marvelous chip. And oh yeah, 2GB/sec with a 950 Pro. 8)
  • lowlymarine - Tuesday, July 25, 2017 - link

    Either you're water cooling those systems, or you should consider investing in lottery tickets. My 2600K wouldn't push past 4.4 without very worrying amounts of voltage (1.4V+), and even 4.4 ran so hot on my 212+ that I settled for 4.2 to keep the core under 1.3V.
  • soliloquist - Monday, July 24, 2017 - link

    Yeah, Sandy Bridge is holding up nicely. It's pretty ridiculous, actually.
  • colonelclaw - Monday, July 24, 2017 - link

    Wait, am I reading these graphs correctly? Unless I'm going mad, they seem to say that for gaming there's no need to upgrade if you already have a 2600K. Huh?

    If true, and I have no reason to doubt the data, that would make the 2600K one of the greatest processors ever?
  • Icehawk - Monday, July 24, 2017 - link

    Yup, it's been said many times - if you have an i7 processor you really don't need to upgrade it for gaming; spend the money on a new GPU every few years. I have a 3770K & GF970, and other than the video card the system is 6yrs old - I used to build a new one every other year. I've been considering the 7800/7820 though, as I do a lot of encoding.
  • Gothmoth - Monday, July 24, 2017 - link

    "...Intel’s official line is about giving customers options. ..."

    yeah like.. if you want more PCIe lanes to use all your mainboard features just buy the $999 CPU..... LOL.
  • mapesdhs - Monday, July 24, 2017 - link

    Indeed, just like the "option" of a CPU like the 4820K (4-core but with 40 lanes) suddenly vanished after X79. :D Intel's current lineup is an insult.
  • Kalelovil - Monday, July 24, 2017 - link

    Some mistakes for the Ryzen entries in the comparisons on page 1.
    PCI-E (Ryzen die has 20 lanes non-chipset, not 16), clockspeeds (too high), TDP (1700 is 65W).

    Also, I see your point of comparing non-sale prices, but the 1700X seems to be widely and consistently available at near the i7-7740x MSRP. It's all but an official price cut.
