Ashes of the Singularity: Escalation

Seen as the holy child of DirectX12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many of DirectX12's features as it possibly could. Stardock, the developer behind the Nitrous engine that powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX12 at the helm, the ability to issue more draw calls per second lets the engine render substantial unit depth and effects individually, where other RTS titles had to rely on combined draw calls, which ultimately made some combined unit structures very rigid.

Stardock clearly understands the importance of an in-game benchmark and ensured that such a tool was available and capable from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark performs a four-minute, fixed-seed battle with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run a fixed v2.11 version of the game due to some peculiarities of the splash screen added after the merger with the standalone Escalation expansion, and we have an automated tool to call the benchmark from the command line. (Prior to v2.11, the benchmark also supported 8K/16K testing; however, v2.11 has odd behavior which nukes this.)
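As a rough illustration of what that kind of automation can look like, the sketch below launches the benchmark executable and collects a frame-time log. The executable path and command-line switches are hypothetical placeholders rather than the game's documented options, so treat it as a template only.

import subprocess
from pathlib import Path

# Hypothetical paths and switches for illustration; the real benchmark
# options exposed by the game (and by our internal tool) may differ.
GAME_EXE = Path(r"C:\Games\AshesEscalation\AshesEscalation_DX12.exe")
OUTPUT_DIR = Path(r"C:\Benchmarks\Ashes")

def run_ashes_benchmark(preset="Extreme", resolution="1920x1080"):
    """Run one fixed-seed benchmark pass and return the path to its frame-time log."""
    width, height = resolution.split("x")
    log_file = OUTPUT_DIR / f"ashes_{preset}_{resolution}.csv"
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    cmd = [
        str(GAME_EXE),
        "-benchmark",            # assumed switch names, substitute as needed
        f"-preset={preset}",
        f"-width={width}",
        f"-height={height}",
        f"-output={log_file}",
    ]
    subprocess.run(cmd, check=True, timeout=15 * 60)  # four-minute run plus load time
    return log_file

if __name__ == "__main__":
    for res in ("1920x1080", "3840x2160"):
        print("Frame-time log:", run_ashes_benchmark(resolution=res))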

At both 1920x1080 and 4K resolutions, we run the same settings. Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, Textures, and separate options for the terrain. There are several presets, from Very Low to Extreme: we run our benchmarks at the Extreme preset, and take the frame-time output for our average, percentile, and time-under analysis.
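For reference, the sketch below shows one way those three metrics can be computed from a frame-time log. It assumes a simple one-column CSV of per-frame times in milliseconds and a 60 FPS threshold for the 'Time Under' figure; both are illustrative assumptions rather than the exact pipeline used here.

import csv

def load_frame_times_ms(path):
    """Read per-frame times (milliseconds) from a one-column CSV log."""
    with open(path, newline="") as f:
        return [float(row[0]) for row in csv.reader(f) if row]

def summarize(frame_times_ms, threshold_fps=60.0):
    total_ms = sum(frame_times_ms)
    avg_fps = 1000.0 * len(frame_times_ms) / total_ms
    # 99th-percentile frame time: only 1% of frames took longer than this.
    ordered = sorted(frame_times_ms)
    p99_ms = ordered[int(0.99 * (len(ordered) - 1))]
    # 'Time Under': share of the run spent on frames slower than the threshold.
    limit_ms = 1000.0 / threshold_fps
    time_under_ms = sum(t for t in frame_times_ms if t > limit_ms)
    return {
        "average_fps": avg_fps,
        "99th_percentile_fps": 1000.0 / p99_ms,
        "time_under_pct": 100.0 * time_under_ms / total_ms,
    }

if __name__ == "__main__":
    print(summarize(load_frame_times_ms("ashes_Extreme_1920x1080.csv")))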

For all our results, we show the average frame rate at 1080p first. Mouse over the other graphs underneath to see 99th percentile frame rates and 'Time Under' graphs, as well as results for other resolutions. All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance [1080p and 4K graphs]

ASUS GTX 1060 Strix 6GB Performance [1080p and 4K graphs]

Sapphire R9 Fury 4GB Performance [1080p and 4K graphs]

Sapphire RX 480 8GB Performance [1080p and 4K graphs]

Ashes Conclusion

Pretty much across the board, no matter the GPU or the resolution, Intel gets the win here. This is most noticeable in the time-under analysis, although AMD seems to do better when the faster cards are running at the lower resolution. That's nothing to brag about, though.

Comments

  • Santoval - Tuesday, July 25, 2017 - link

    That is not how IPC works, since it explicitly refers to single-core, single-thread performance. As the number of cores rises, the performance of a *single* task never scales linearly, because there is always some single-threaded code involved (Amdahl's law). For example, if your task has 90% parallel and 10% serial code, its performance will max out at ~10x that of a single core at ~512 cores. From then on, even if you had a CPU with infinite cores, you couldn't extract half an ounce of additional performance. If your code were 95% parallel, the performance of your task would plateau at ~20x. For that, though, you would need ~2048 cores. And so on.

    Of course Amdahl's law does not provide a complete picture. It assumes, for example, that your task and its code will remain fixed no matter how many cores you throw at it. And it disregards the possibility of computing distinct tasks in parallel on separate cores. That's where Gustafson's Law comes in. This "law" is not concerned with speeding up the performance of tasks, but with computing larger and more complex tasks in the same amount of time.

    An example given in Wikipedia involves boot times: Amdahl's law states that you can speed up the boot process, assuming it can be made largely parallel, up to a certain number of cores. Beyond that (when you become limited by the serial code of your bootloader) adding more cores does not help. Gustafson's law, on the contrary, states that instead of speeding up the boot process by adding more cores and computing resources, you could add colorful GUIs, increase the resolution, etc., while keeping the boot time largely the same. This idea could be applied to many (but not all) computing tasks, for example ray tracing (for more photorealistic renderings) and video encoding (for smaller files or videos with better quality), and many other heavily multi-threaded tasks.
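A minimal sketch of the Amdahl's law arithmetic described in the comment above, assuming a fixed workload with the stated parallel fractions:

def amdahl_speedup(parallel_fraction, cores):
    """Speedup over one core for a fixed workload (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# 90% parallel code: the ceiling is 10x, and ~9.83x is already reached at 512 cores.
print(round(amdahl_speedup(0.90, 512), 2))   # 9.83
# 95% parallel code: the ceiling is 20x; ~19.82x at 2048 cores.
print(round(amdahl_speedup(0.95, 2048), 2))  # 19.82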
  • Rickyxds - Monday, July 24, 2017 - link

    I just agree XD.
  • Diji1 - Wednesday, July 26, 2017 - link

    "Overall speed increase 240%."

    LMAO. Ridiculous.
  • Alistair - Wednesday, July 26, 2017 - link

    No reason to laugh. I compared the 6600k vs the Ryzen 1700. 1 year speed increase of 144 percent (2.44 times the speed). Same as this: 1135 vs 466 points.

    http://cpu.userbenchmark.com/Compare/Intel-Core-i5...
  • Dr. Swag - Tuesday, July 25, 2017 - link

    I disagree; the best value is the 1600, as it overclocks as well as the 1600X, comes with a decent stock cooler, and is cheaper.
  • vext - Monday, July 24, 2017 - link

    Interesting article, but it seems intended to play down the extremely bad press X299 has received, which is all over the internet and YouTube.

    Once you get past Mr. Cutress' glowing review, it's clear that the i5-7640X is not worth the money because of lackluster performance, the i7-7740X is marginally faster than the older 7700K, and the i7-7800X is regularly beaten by the 7740X in many benchmarks that actually count, and is a monstrously inefficient energy pig. Therefore the only Intel CPUs of this batch worth buying are the 7700K/7740X, and there is no real advantage to X299. In summary, it doesn't actually change anything.

    It's very telling that Mr. Cutress doesn't comment on the absolutely egregious energy consumption of the 7800X. The Test Bed setup section doesn't list the 7800X at all. The 7640X and 7740X are using a Thermalright True Copper (great choice!) but there is no info on the 7800X cooler. Essentially, the 7800X cameo appearance is only to challenge the extremely strong Ryzen multi-threaded results, but its negative aspects are not discussed, perhaps because they might frighten people away from X299. Tsk, tsk. As my 11-year-old daughter would say, "No fair." By the way, the 7800X is selling for ~$1060 right now on Newegg, not $389.

    Proudly typed on my Ryzen 1800x/Gigabyte AB350 Gaming 3. # ;-)
  • Ian Cutress - Monday, July 24, 2017 - link

    You may not have realised but this is the Kaby Lake-X review, so it focuses on the KBL-X parts. We already have a Skylake-X review for you to mull over. There are links on the first page.
  • mapesdhs - Monday, July 24, 2017 - link

    Nevertheless, the wider picture is relevant here. The X299 platform is a mess. Intel is aiming KL-X at a market which doesn't exist, they've locked out features that actually make it useful, it's more power hungry, and a consumer needs a lot of patience and plenty of coffee to work out what the heck works and what doesn't on a mbd with a KL-X fitted.

    This is *exactly* the sort of criticism of Intel which should have been much stronger in the tech journalism space when Intel started pulling these sorts of stunts back with the core-crippled 3930K, heat-crazy IB and PCIe-crippled 5820K. Instead, with few exceptions, the tech world has been way too forgiving of Intel's treading-on-water attitude ever since SB, and now they've panicked in response to Ryzen and released a total hodgepodge of a chipset and CPU lineup which makes no sense at all. And if you get any disagreement about what I've said by anyone at Intel, just wave a 4820K in their face and say, well, explain this then (quad-core chip with 40 PCIe lanes, da daa!).

    I've been a big fan of Z68 and X79, but nothing about Intel's current lineup appeals in the slightest.
  • serendip - Tuesday, July 25, 2017 - link

    There's also the funny bit about motherboards potentially killing KBL-X CPUs if a Skylake-X was used previously.

    What's with Intel's insane product segmentation strategy with all the crippling and inconsistent motherboard choices? It's like they want to make it hard to choose, so buyers either get the cheapest or most expensive chip.
  • Haawser - Tuesday, July 25, 2017 - link

    'EmergencyLake-X' is just generally embarrassing. Intel should just find a nearby landfill site and quietly bury it.
