Ashes of the Singularity Escalation

Seen as the holy child of DirectX 12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many of DirectX 12's features as it possibly could. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second allows the engine to work with substantial unit depth and effects that other RTS titles could only achieve by batching draw calls together, which ultimately made some combined unit structures very rigid.

Stardock clearly understands the importance of an in-game benchmark, ensuring that such a tool was available and capable from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark performs a four-minute fixed-seed battle environment with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run a fixed v2.11 version of the game due to some peculiarities of the splash screen added after the merger with the standalone Escalation expansion, and have an automated tool to call the benchmark on the command line. (Prior to v2.11, the benchmark also supported 8K/16K testing; however, v2.11 has odd behavior which breaks this.)

At both 1920x1080 and 4K resolutions, we run the same settings. Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, Textures, and separate options for the terrain. There are several presets, from Very Low to Extreme: we run our benchmarks at Extreme settings, and take the frame-time output for our average, percentile, and Time Under analysis.
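As a rough illustration of how these three metrics relate, here is a minimal sketch of how average FPS, a 99th-percentile frame time, and a "time under" figure can be derived from a frame-time log. The frame times and the 60 FPS (16.67 ms) threshold below are illustrative assumptions, not actual Ashes benchmark output:

```python
# Sketch: deriving average FPS, 99th-percentile frame time, and "time under"
# from a list of per-frame render times in milliseconds.

def analyze(frame_times_ms, threshold_ms=16.67):
    total_ms = sum(frame_times_ms)
    # Average FPS over the whole run: frames divided by elapsed seconds.
    avg_fps = 1000.0 * len(frame_times_ms) / total_ms
    # 99th-percentile frame time: a value 99% of frames complete within.
    ordered = sorted(frame_times_ms)
    p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    # "Time under": total time spent on frames slower than the threshold,
    # i.e. frames that would miss a 60 FPS target.
    time_under_ms = sum(t for t in frame_times_ms if t > threshold_ms)
    return avg_fps, p99, time_under_ms

# Illustrative frame times (ms) with two slow frames mixed in.
fps, p99, under = analyze([12.1, 14.9, 33.4, 15.2, 16.0, 41.7, 13.8, 15.5])
print(round(fps, 1), p99, round(under, 1))  # prints: 49.2 41.7 75.1
```

The percentile index here is a simple nearest-rank pick; real analysis tools often interpolate between ranks, but the idea is the same.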

All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance (1080p and 4K charts)

ASUS GTX 1060 Strix 6G Performance (1080p and 4K charts)

Sapphire Nitro R9 Fury 4G Performance (1080p and 4K charts)

Sapphire Nitro RX 480 8G Performance (1080p and 4K charts)

AMD gets in the mix a lot with these tests, and in a number of cases pulls ahead of the Ryzen chips in the Time Under analysis.

Comments (347)

  • Kjella - Thursday, August 10, 2017 - link

In the not-so-distant past - like last year - you'd have to pay Intel some seriously overpriced HEDT money for 6+ cores. Ryzen gave us 8 cores and most games can't even use that. ThreadRipper is a kick-ass processor in the workstation market. Why anyone would consider it for gaming I have no idea. It's giving you tons of PCIe lanes just as AMD is downplaying CF with Vega, NVIDIA has officially dropped 3-way/4-way support, and even 2-way CF/SLI has been a hit-and-miss experience. I went from a dual card setup to a single 1080 Ti, don't think I'll ever do multi-GPU again.
  • tamalero - Thursday, August 10, 2017 - link

    Probably their target is for those systems that have tons of cards with SATA RAID ports or PCI-E accelerators like AMD's or Nvidia's?
  • mapesdhs - Thursday, August 10, 2017 - link

    And then there's GPU acceleration for rendering (eg. CUDA) where the SLI/CF modes are not needed at all. Here's my old X79 CUDA box with quad 900MHz GTX 580 3GB:

    http://www.sgidepot.co.uk/misc/3930K_quad580_13.jp...

    I recall someone who does quantum chemistry saying they make significant use of multiple GPUs, and check out the OctaneBench CUDA test, the top spot has eleven 1080 Tis. :D (PCIe splitter boxes)
  • GreenMeters - Thursday, August 10, 2017 - link

    There is no such thing as SHED. Ryzen is a traditional desktop part. That it raises the bar in that segment compared to Intel's offering is a good thing--a significant performance and feature boost that we haven't seen in years. Threadripper is a HEDT part. That it raises the bar in that segment compared to Intel's offering is a good thing--a significant performance and feature boost that we haven't seen in years.
  • Ian Cutress - Thursday, August 10, 2017 - link

Ryzen 7 was set as a HEDT platform directly against Intel's HEDT competition. This is a new socket and a new set over and above that, and not to mention that Intel will be offering its HCC die on a consumer platform for the first time, increasing the consumer core count by 8 in one generation, which has never happened before. If what used to be HEDT is still HEDT, then this is a step above.

    Plus, AMD call it something like UHED internally. I prefer SHED.
  • FreckledTrout - Thursday, August 10, 2017 - link

I think AMD has the better division of what is and isn't HEDT. Going forward, Intel really should follow suit and make it 8+ cores to get into the HEDT lineup, as what they have done this go-around is just confusing and a bit goofy.
  • ajoy39 - Thursday, August 10, 2017 - link

    Small nitpick but

    "AMD could easily make those two ‘dead’ silicon packages into ‘real’ silicon packages, and offer 32 cores"

    That's exactly what the already-announced EPYC parts are doing, is it not?

    Great review otherwise, these parts are intriguing but I don't personally have a workload that would suit them. Excited to see what sort of innovation this brings about though, about time Intel had some competition at this end of the market.
  • Dr. Swag - Thursday, August 10, 2017 - link

    I assume they're referring to putting 32 cores on TR4
  • mapesdhs - Thursday, August 10, 2017 - link

    Presumably a relevant difference being that such a 32c TR would have the use of all of its I/O connections, instead of some of them being used to connect to other EPYC units. OTOH, with a 32c TR, how the heck could motherboard vendors cram enough RAM slots on a board to feed the 8 channels? Either that or stick with 8 slots and just fiddle around somehow so that the channel connections match the core count in a suitable manner, e.g. one per channel for 32c, 2 per channel for 16c, etc.

    Who knows whether AMD would ever release a full 32c TR for the TR4 socket, but at least the option is there, I suppose, if enough people would happily go for a 32c part (depends on the task).
  • smilingcrow - Thursday, August 10, 2017 - link

    Considering the TDP with just a 16C chip, going 32C would hit the clock speeds badly, unless they were able to keep the turbo speeds when only 16 or fewer of the cores are loaded. The 32C server parts seemingly have much lower max turbo speeds even when lightly loaded.
