Ashes of the Singularity: Escalation

Seen as the holy child of DirectX 12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many of DirectX 12's features as it possibly could. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second allows the engine to work with substantial unit depth and effects that other RTS titles could only achieve by batching draw calls, which ultimately made some combined unit structures very rigid.

Stardock clearly understands the importance of an in-game benchmark, ensuring that such a tool was available and capable from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important to the developer. The in-game benchmark performs a four-minute fixed-seed battle with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run a fixed v2.11 version of the game due to some peculiarities of the splash screen added after the merger with the standalone Escalation expansion, and have an automated tool to call the benchmark on the command line. (Prior to v2.11, the benchmark also supported 8K/16K testing, however v2.11 has odd behavior which nukes this.)
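The automation described above amounts to launching the game with benchmark switches and then collecting the frame-time log. The sketch below illustrates the idea in Python; the executable path and the `-benchmark`/`-quality` flags are hypothetical placeholders, as the real command-line switches vary by game version.

```python
import subprocess
from pathlib import Path

# Hypothetical path and flags for illustration only; the real
# command-line switches differ between game versions.
GAME_EXE = Path(r"C:\Games\Ashes\AshesEscalation_DX12.exe")
LOG_DIR = Path(r"C:\Bench\ashes-logs")

def build_command(exe, preset="Extreme"):
    """Assemble the benchmark invocation (pure function, easy to test)."""
    return [str(exe), "-benchmark", f"-quality={preset}"]

def run_benchmark(preset="Extreme", timeout_s=600):
    """Launch the in-game benchmark and return the newest frame-time log."""
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    subprocess.run(build_command(GAME_EXE, preset), timeout=timeout_s, check=True)
    # The game writes one frame-time log per run; pick up the newest file.
    logs = sorted(LOG_DIR.glob("*.txt"), key=lambda p: p.stat().st_mtime)
    return logs[-1] if logs else None
```

Wrapping the invocation in a timeout also catches the occasional hung run without stalling an overnight benchmark queue.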

At both 1920x1080 and 4K resolutions, we run the same settings. Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, Textures, and separate options for the terrain. There are several presets, from Very Low to Extreme: we run our benchmarks at Extreme settings, and take the frame-time output for our average, percentile, and 'Time Under' analysis.
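The three headline metrics follow directly from the frame-time output. As a minimal sketch (not the actual analysis scripts used here), assuming the log has been parsed into a list of per-frame times in milliseconds:

```python
import statistics

def summarize(frame_times_ms, threshold_fps=60.0):
    """Reduce a run's frame times (ms per frame) to the three headline metrics."""
    total_s = sum(frame_times_ms) / 1000.0
    # Average frame rate: frames rendered divided by wall time.
    avg_fps = len(frame_times_ms) / total_s
    # 99th percentile frame time: only 1% of frames were slower than this.
    p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]
    # 'Time Under': seconds spent rendering frames slower than the threshold.
    limit_ms = 1000.0 / threshold_fps
    time_under_s = sum(t for t in frame_times_ms if t > limit_ms) / 1000.0
    return avg_fps, p99_ms, time_under_s
```

Note that the percentile is taken over frame *times*, not frame rates, so a higher 99th percentile number means a worse worst-case stutter.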

For all our results, we show the average frame rate at 1080p first. Mouse over the other graphs underneath to see 99th percentile frame rates and 'Time Under' graphs, as well as results for other resolutions. All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance

[1080p and 4K results graphs]

ASUS GTX 1060 Strix 6GB Performance

[1080p and 4K results graphs]

Sapphire R9 Fury 4GB Performance

[1080p and 4K results graphs]

Sapphire RX 480 8GB Performance

[1080p and 4K results graphs]

Ashes Conclusion

Pretty much across the board, no matter the GPU or the resolution, Intel gets the win here. This is most noticeable in the Time Under analysis, although AMD seems to do better when the faster cards are running at the lower resolution. That's nothing to brag about, though.


  • Gothmoth - Tuesday, July 25, 2017 - link

    so why not test at 640x480... shifts the bottleneck even more to the cpu... you are kidding yourself.
  • silverblue - Tuesday, July 25, 2017 - link

    Not really. If the GPU becomes the bottleneck at or around 1440p, and as such the CPU is the limiting factor below that, why go so far down when practically nobody games below 1080p anymore?
  • Zaxx420 - Monday, July 24, 2017 - link

    "Over the last few generations, Intel has increased IPC by 3-10% each generation, making a 30-45% increase since 2010 and Sandy Bridge..."

    I have an old Sandy i5 2500K on an Asus Z68 that can do 5GHz all day on water and 4.8 on air. I know it's ancient IP...but I wonder if it could hold it's own vs a stock clocked Skylake i5? hmmmm...
  • hbsource - Tuesday, July 25, 2017 - link

    Great review. Thanks.

    I think I've picked the best nit yet: On the Civ 6 page, you inferred that Leonard Nimoy did the voiceover on Civ 5 when he actually did it on Civ 4.
  • gammaray - Tuesday, July 25, 2017 - link

    it's kind of ridiculous to see the Sandy bridge chip beating new cpus at 4k gaming...
  • Zaxx420 - Tuesday, July 25, 2017 - link

    Kinda makes me grin...

    I have an old Sandy i5 2500K on an Asus Z68 that can do 5GHz all day on water and 4.8 on air. I know it's ancient IP...but I wonder if it could hold it's own vs a stock clocked Skylake i5? hmmmm...
  • Mugur - Tuesday, July 25, 2017 - link

    Much ado about nothing. So the best case for 7740 is Office applications or opening PDF files? The author seems to have lost the sight of the forest because of the trees.

    Some benchmarks are odd, some are useless in the context. I watched the YouTube version of this: https://www.techspot.com/review/1442-intel-kaby-la... and it looked like a more realistic approach for a 7740k review.
  • Gothmoth - Tuesday, July 25, 2017 - link

    well i guess intel is putting more advertising money on anandtech.

otherwise i can't explain how an overpriced product with heat problems and artificially crippled pci lanes on an enthusiast platform(!) can get so much praise without much criticism.

  • jabber - Tuesday, July 25, 2017 - link

    I miss the days when you saw a new bunch of CPUs come out and the reviews showed that there was a really good case for upgrading if you could afford to. You know a CPU upgrade once or twice a year. Now I upgrade (maybe) once every 6-7 years. Sure it's better but not so much fun.
  • Dragonstongue - Tuesday, July 25, 2017 - link

    Intel wins for the IO and chipset, offering 24 PCIe 3.0 lanes for USB 3.1/SATA/Ethernet/storage, while AMD is limited on that front, having 8 PCIe 2.0 from the chipset.

Funny that is, seeing as AM4 has 16 pci-e lanes available to it unless when you go down the totem pole those lanes get segregated differently. Even going from the above table, Intel is offering 16 for x299, not 24 as you put directly into words, so who wins in IO? No one, they both offer 16 lanes. Now if you are comparing via price, x299 is obviously a premium product, at least compare to current AM4 premium end which is x370 chipset, pretty even footing on the motherboards when compared similar "specs". Comparing the best AMD will offer in the form of x399, it makes the best "specs" of x299 laughable.

    AMD seems to NOT be shortchanging pci-e lanes, DRAM ability (or speed) functional, proper thermal interface used etc etc.

    Your $ by all means, but seriously folks need to take blinders off, how much raw power is this "95w TDP" processors using when ramped to 5+Ghz, sure in theory it will be the fastest for per core performance, but how long will the cpu last running at that level, how much extra power will be consumed, what price of an acceptable cooler is needed to maintain it within thermal spec and so forth.

    Interesting read, but much seems out of context to me. May not like it, but AMD has given a far better selection of product range this year for cpu/motherboard chipsets, more core, more threads, lots of IO connectivity options, fair pricing overall (the $ value in Canada greed of merchants does not count as a fault against AMD)

    Am done.
