Ashes of the Singularity: Escalation

Seen as the holy child of DirectX 12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many of DirectX 12's features as it possibly could. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open views and concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second allows the engine to render substantial unit depth and effects that other RTS titles could only achieve by batching draw calls together, which made some combined unit structures ultimately very rigid.

Stardock clearly understands the importance of an in-game benchmark, ensuring that a capable tool was available from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark performs a four-minute, fixed-seed battle with a variety of camera shots, and outputs a vast amount of data to analyze.

For our benchmark, we run a fixed v2.11 version of the game due to some peculiarities of the splash screen added after the merger with the standalone Escalation expansion, and we use an automated tool to call the benchmark on the command line. (Prior to v2.11, the benchmark also supported 8K/16K testing; however, v2.11 has odd behavior that breaks this.)

At both 1920x1080 and 4K resolutions, we run the same settings. Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, and Textures, plus separate options for the terrain. There are several presets, from Very Low to Extreme; we run our benchmarks at the Extreme preset, and take the frame-time output for our average, percentile, and time-under analysis.
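The average, percentile, and time-under metrics mentioned above can all be derived from a single frame-time log. As a minimal sketch (the function name and the 95th-percentile choice are illustrative, not our actual tooling):

```python
# Summarize a benchmark run from its per-frame render times (milliseconds):
# average frame rate, a high-percentile frame time, and "time under" --
# the fraction of the run spent on frames slower than a target frame rate.

def analyze_frametimes(frametimes_ms, target_fps=60.0):
    total_ms = sum(frametimes_ms)
    avg_fps = 1000.0 * len(frametimes_ms) / total_ms

    # Percentile frame time via the nearest-rank method on sorted times.
    ordered = sorted(frametimes_ms)
    idx = min(len(ordered) - 1, int(0.95 * len(ordered)))
    p95_ms = ordered[idx]

    # Frames longer than the target budget (16.67 ms for 60 FPS)
    # count toward "time under" the target frame rate.
    threshold_ms = 1000.0 / target_fps
    slow_ms = sum(t for t in frametimes_ms if t > threshold_ms)
    time_under_pct = 100.0 * slow_ms / total_ms

    return avg_fps, p95_ms, time_under_pct
```

Note that the average is computed from total time rather than by averaging per-frame FPS values, which would over-weight fast frames.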

All of our benchmark results can also be found in our benchmark engine, Bench.

[Graphs: MSI GTX 1080 Gaming 8G performance at 1080p and 4K]
222 Comments

  • Chaser - Thursday, October 5, 2017 - link

    Thank you AMD.
  • vanilla_gorilla - Thursday, October 5, 2017 - link

    Exactly! No matter what side you're on, you gotta love the fact that competition is back in the x86 desktop space! And it looks like AMD 1700X is now under $300 on Amazon. Works both ways!
  • TEAMSWITCHER - Friday, October 6, 2017 - link

    I just don't see it this way. Since the release of Haswell-E in 2014 we've had sub $400 six core processors. While some like to compartmentalize the industry into mainstream and HEDT, the fact is, I built a machine with similar performance three years ago, for a similar price. Today's full featured Z370 motherboards (like the ROG Maximus X) cost nearly as much as X99 motherboards from 2014. To say that Intel was pushed by AMD is simply not true.
  • watzupken - Friday, October 6, 2017 - link

    I feel the fact that Intel had to rush a 6-core mainstream processor out in the same year they introduced Kaby Lake is a sign that AMD is putting pressure on them. You may find a Haswell-E chip for sub 400 bucks in 2014, but you need to be mindful that Intel has historically only increased prices due to the lack of competition. Now you are seeing a 6-core mainstream chip from both AMD and Intel below 200 bucks. Motherboard prices are difficult to compare since there are lots of motherboards out there that are over-engineered and cost significantly more. Assuming you pick the cheapest Z370 motherboard out there, I don't believe it's more expensive than an X99 board.
  • mapesdhs - Friday, October 6, 2017 - link

    KL-X is dead, that's for sure. Some sites claim CFL was not rushed, in which case Intel knew KL-X would be pointless when it was launched. People claiming Intel was not affected by AMD have to choose: either CFL was rushed because of pressure from AMD, or Intel released a CPU for a mismatched platform they knew would be irrelevant within months.

    There's plenty of evidence Intel was in a hurry here, especially the way X299 was handled, and the horrible heat issues, etc. with SL-X.
  • mapesdhs - Friday, October 6, 2017 - link

    PS. Is it just me or are we almost back to the days of the P4, where Intel tried to maintain a lead by doing little more than raising clocks? It wasn't that long ago there was much fanfare when Intel released its first minimum-4GHz part (4790K IIRC), even though we all knew they could run their CPUs way quicker than that if need be (stock-voltage oc'ing has been very productive for a long time). Now all of a sudden Intel is nearing 5GHz speeds, but it's kinda weird there's no accompanying fanfare given the reaction to their finally reaching 4GHz with the 4790K. At least in the mainstream, has Intel really just reverted to a MHz race to keep its performance up? Seems like it, but OS issues, etc. are preventing those higher bins from kicking in.
  • KAlmquist - Friday, October 6, 2017 - link

    Intel has been pushing up clock speeds, but (unlike the P4), not at the expense of IPC. The biggest thing that Intel has done to improve performance in this iteration is to increase the number of cores.
  • mapesdhs - Tuesday, October 10, 2017 - link

    Except in reality it's often not that much of a boost at all, and in some cases slower because of how the OS is affecting turbo levels.

    Remember, Intel could have released a CPU like this a very long time ago. As I keep having to remind people, the 3930K was an 8-core chip with two cores disabled. Back then, AMD couldn't even compete with SB, never mind SB-E, so Intel held back, and indeed X79 never saw a consumer 8-core part, even though the initial 3930K was a XEON-sourced crippled 8-core.

    Same applies to the mainstream, we could have had 6 core models ages ago. All they've really done to counter the lack of IPC improvements is boost the clocks way up. We're approaching standard bin levels now that years ago were considered top-notch oc's unless one was definitely using giant air coolers, decent AIOs or better.
  • wr3zzz - Thursday, October 5, 2017 - link

    I hope Anandtech solves the Civ6 AI benchmark soon. It's almost as important as compression and encoding benchmarks for me to decide CPU price-performance options as I am almost always GPU constrained in games.
  • Ian Cutress - Saturday, October 7, 2017 - link

    We finally got in contact with the Civ6 dev team to integrate the AI benchmark into our suite better. You should see it moving forward.
