Ashes of the Singularity: Escalation

Seen as the holy child of DirectX 12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many of DirectX 12's features as it possibly could. Oxide Games and Stardock, the teams behind the Nitrous engine which powers the game, have ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second allows the engine to support substantial unit depth and effects that other RTS titles could only achieve by combining draw calls, which ultimately made some combined unit structures very rigid.

Stardock clearly understands the importance of an in-game benchmark, ensuring that such a tool was available and capable from day one. With all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark performs a four-minute fixed-seed battle with a variety of camera shots, and outputs a vast amount of data to analyze.

For our benchmark, we run a fixed v2.11 version of the game, due to some peculiarities of the splash screen added after the merger with the standalone Escalation expansion, and we use an automated tool to call the benchmark from the command line. (Prior to v2.11, the benchmark also supported 8K/16K testing; however, v2.11 has odd behavior which nukes this.)
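To give an idea of the shape of that automation, a minimal Python sketch is below; the executable path and all of the command-line switches here are hypothetical stand-ins for illustration, not the game's actual flags.

    import subprocess
    from pathlib import Path

    # Hypothetical paths and flags for illustration only -- the game's
    # real benchmark switches are not documented here.
    GAME_EXE = Path(r"C:\Games\AshesEscalation\AshesEscalation.exe")
    OUTPUT_DIR = Path(r"C:\Benchmarks\Ashes")

    def run_ashes_benchmark(resolution: str, preset: str = "Extreme") -> Path:
        """Launch one fixed-seed benchmark pass and return the frame-time log."""
        out_file = OUTPUT_DIR / f"ashes_{resolution}_{preset}.csv"
        subprocess.run(
            [str(GAME_EXE),
             "-benchmark",               # hypothetical: run the canned battle
             "-resolution", resolution,  # e.g. "1920x1080" or "3840x2160"
             "-preset", preset,          # hypothetical: quality preset name
             "-output", str(out_file)],  # hypothetical: frame-time CSV path
            check=True,
        )
        return out_file

    for res in ("1920x1080", "3840x2160"):
        print(f"{res}: frame times written to {run_ashes_benchmark(res)}")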

At both 1920x1080 and 4K resolutions, we run the same settings. Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, and Textures, along with separate options for the terrain. There are several presets, from Very Low to Extreme; we run our benchmarks at the Extreme settings, and take the frame-time output for our average, percentile, and time-under analysis.
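For reference, the kind of post-processing then applied to the frame-time log looks roughly like the sketch below; the log format (one frame time in milliseconds per entry) is an assumption for this example.

    import statistics

    def analyze_frame_times(frame_times_ms, threshold_fps=60.0):
        """Average FPS, 99th-percentile frame time, and time spent below a target FPS."""
        total_s = sum(frame_times_ms) / 1000.0
        avg_fps = len(frame_times_ms) / total_s
        # The 99th percentile of frame times captures the slowest 1% of frames.
        p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]
        # "Time under" sums the wall-clock time spent on frames slower than the target.
        limit_ms = 1000.0 / threshold_fps
        time_under_s = sum(t for t in frame_times_ms if t > limit_ms) / 1000.0
        return avg_fps, p99_ms, time_under_s

    # Example with made-up frame times: mostly 16 ms frames plus a few 40 ms stutters.
    frames = [16.0] * 200 + [40.0] * 5
    avg, p99, under = analyze_frame_times(frames)
    print(f"avg {avg:.1f} FPS, 99th pct {p99:.1f} ms, {under:.3f} s under 60 FPS")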

All of our benchmark results can also be found in our benchmark engine, Bench.

MSI GTX 1080 Gaming 8G Performance
[Graphs: 1080p and 4K results]

ASUS GTX 1060 Strix 6G Performance
[Graphs: 1080p and 4K results]

Sapphire Nitro R9 Fury 4G Performance
[Graphs: 1080p and 4K results]

Sapphire Nitro RX 480 8G Performance
[Graphs: 1080p and 4K results]

AMD gets in the mix a lot with these tests, and in a number of cases the Threadripper chips pull ahead of the Ryzen 7 chips in the Time Under analysis.

Comments

  • blublub - Sunday, August 13, 2017 - link

    From what I have read, all TR do 3.9 GHz and some even 4-4.1 GHz on all cores.

    What are your temps when running all 10 cores at 4.6 GHz in Prime95 for 1-2 hrs?
  • Zingam - Sunday, August 13, 2017 - link

    Ian, how about testing mobile CPUs, both for games and for office work? Aren't mobile CPUs selling in much larger numbers than desktop ones these days?
    I can't find a single benchmark comparing the i5-7300HQ vs i7-7700HQ vs i7-7700K showing the difference in productivity workloads, and not just for rendering pretty pictures but also for more specific tasks such as compiling software etc.

    I would also like to see some sort of comparison of the new generation to all generations up to 10 years back. I'd like to know how much performance has increased since the age of Nehalem. At least from now on, there should be a single test to display the relative performance increase over the last few generations. The average user doesn't upgrade their PC every year; the average user maybe upgrades every 5 years, and it is really difficult to find out how much of a performance increase one would get with an upgrade.
  • SanX - Sunday, August 13, 2017 - link

    I agree, there must be 5-7 year old processors in the charts
  • SanX - Sunday, August 13, 2017 - link

    Why does one core of an Apple A10 cost $10, but one core of an Intel 7900X cost 10x more?
  • oranos - Sunday, August 13, 2017 - link

    So it's complete dogsh*t for the segment that is driving the PC market right now: gaming. Got it.
  • ballsystemlord - Sunday, August 13, 2017 - link

    Hey Ian, you've been talking about AnandTech's great database where we can see all the cool info. Well, according to your database, the Phenom II X6 1090T is equally powerful compared to the 16-core Threadripper! http://www.anandtech.com/bench/product/1932?vs=146
    With those sorts of numbers, why would anyone plan an upgrade?
    (And there is also only one metric displayed, strange!)
    Not to play the Intel card on you as others do, but this is a serious problem for at least the AMD lineup of processors.
  • jmelgaard - Monday, August 14, 2017 - link

    o.O... I don't know how you derived that conclusion. Do you need a guide on how to read the database?...
  • BurntMyBacon - Monday, August 14, 2017 - link

    For anyone looking for an overall FPS for two-pass encoding, here is your equation (hope my math is correct):
    FPS = 2*FPS1*FPS2/(FPS2+FPS1)

    No, you can't just average the FPS scores from each pass as the processor will spend more time in the slower pass.

    For the x264 encoding test, for example, a few relevant FPS scores end up being:
    i9-7900X: 122.56
    i7-7820X: 114.37
    i7-6900K: 95.26
    i7-7740X: 82.74

    TR-1950X: 118.13
    TR-1950X(g): 117.00
    TR-1920X: 111.74
    R7-1800X: 100.19

    Since two-pass encoding requires both passes to be usable, getting an overall FPS score seems somewhat relevant. Alternatively, using time to completion would present the same information in a different manner, though it would be difficult to extrapolate these results to estimate performance in other encodes without also posting the number of frames encoded.
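    A quick Python sketch of that harmonic-mean formula, using illustrative pass speeds rather than the review's figures:

        def two_pass_fps(fps1: float, fps2: float) -> float:
            """Overall throughput of a two-pass encode of the same frame count.

            Each pass handles N frames, so total time is N/fps1 + N/fps2, and
            the combined rate 2N over that time is the harmonic mean of the
            two pass speeds.
            """
            return 2.0 * fps1 * fps2 / (fps1 + fps2)

        # Illustrative speeds: a fast first (analysis) pass, a slow second pass.
        fast_pass, slow_pass = 200.0, 50.0
        print(two_pass_fps(fast_pass, slow_pass))   # 80.0 -- pulled toward the slow pass
        print((fast_pass + slow_pass) / 2)          # 125.0 -- naive average overstates it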
  • goldgrenade - Thursday, January 4, 2018 - link

    Take all of those Intel FPS numbers and multiply them by 0.7, and you have what their chips actually run at without a major security flaw in them.

    Let's see that would be...

    i9-7900X: 85.792
    i7-7820X: 80.059
    i7-6900K: 66.682
    i7-7740X: 57.918

    And that's at best. It can be up to 50% degradation when rendering or having to do many small file accesses or repeated operations with KAISER.
  • Gastec - Tuesday, August 15, 2017 - link

    I'm having a hard time trying to swallow the "Threadripper is a consumer focused product" line considering the prices to "consume" it: $550 for the MB, $550 for the TR 1900X ($800 or $1000 for the others is just dreaming), then the RAM. The MB (at least the ASUS one) should be $200 less, but I get it, they are trying to squeeze as much as possible from the... consumers. Now don't get me wrong, and I mean no offence to the rich ones among you, but those CPUs are for workstations. WORK, not gamestations. Meaning you would need them to help you make your money, faster.
