Gaming: Ashes Classic (DX12)

Seen as the holy child of DirectX12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many DirectX12 features as it possibly could. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide open shots and concentrated battles. With DirectX12 at the helm, the ability to issue more draw calls per second allows the engine to render substantial unit depth and effects that other RTS titles had to batch draw calls to achieve, a compromise that ultimately made some combined unit structures very rigid.

Stardock clearly understands the importance of an in-game benchmark, and ensured that such a tool was available and capable from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark runs a four-minute, fixed-seed battle environment with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run Ashes Classic: an older version of the game from before the Escalation update. The reason for this is that it is easier to automate, having no splash screen, while still offering strong visual fidelity to test.

AnandTech CPU Gaming 2019 Game List
Game            Genre  Release Date  API   IGP            Low             Med             High
Ashes: Classic  RTS    Mar 2016      DX12  720p Standard  1080p Standard  1440p Standard  4K Standard

Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, Textures, and separate options for the terrain. There are several presets, from Very Low to Extreme: we run our benchmarks at the above settings, and take the frame-time output for our average and percentile numbers.
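As a rough illustration of how the two headline numbers are derived from that frame-time output, the sketch below computes an average FPS and a 95th-percentile FPS from a list of per-frame times. This is not Stardock's tooling or our exact pipeline; the function name and the sample frame times are hypothetical example values in milliseconds.

```python
def summarize(frame_times_ms):
    """Return (average FPS, 95th-percentile frame time expressed as FPS)."""
    n = len(frame_times_ms)
    # Average FPS: total frames over total elapsed time (ms -> s).
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    # 95th percentile of frame time captures the slow outliers;
    # it is then reported as an FPS figure for easy comparison.
    ordered = sorted(frame_times_ms)
    idx = min(n - 1, int(0.95 * n))
    p95_fps = 1000.0 / ordered[idx]
    return avg_fps, p95_fps

# Hypothetical frame times: mostly ~60 FPS with a couple of slow frames.
times = [16.7, 16.9, 17.1, 16.5, 33.3, 16.8, 17.0, 16.6, 16.7, 25.0]
avg, p95 = summarize(times)
print(f"Average: {avg:.1f} FPS, 95th percentile: {p95:.1f} FPS")
```

The percentile figure sits well below the average whenever a run has stutters, which is why we report both rather than the average alone.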

All of our benchmark results can also be found in our benchmark engine, Bench.

[Graphs: Ashes Classic Average FPS and 95th Percentile at IGP, Low, Medium, and High settings]

As a game that was designed from the get-go to punish CPUs and showcase the benefits of DirectX 12-style APIs, Ashes is one of our more CPU-sensitive tests. Above 1080p, results still start running together due to GPU limits, but at or below that resolution we get some useful separation. There we see the 9900K eke out a small advantage, putting it in the lead with the 9700K right behind it.

Notably, the game doesn’t scale much from 1080p down to 720p, which leads me to suspect that we’re looking at a relatively pure CPU bottleneck, a rarity in modern games. That is both good and bad for Intel’s latest CPU: it’s definitely the fastest thing here, but it doesn’t do much to separate itself from the likes of the 8700K, holding just a 4% advantage at 1080p despite its frequency and core count advantages. So assuming this is not in fact a GPU limit, we may be encroaching on another bottleneck (memory bandwidth?), or perhaps the practical frequency gains on the 9900K just aren’t all that much here.

But if nothing else, the 9900K and even the 9700K make a case for themselves here versus the 9600K. Whether it’s the extra cores or the clockspeeds, there’s a 10% advantage for the faster processors at 1080p.

274 Comments

  • GreenReaper - Friday, October 19, 2018 - link

    The answer is "yes, with a but". Certain things scale really well with hyperthreading. Other things can see a severe regression, as it thrashes between one workload and another and/or overheats the CPU, reducing its ability to boost.

    Cache contention can be an issue: the i9-9900K has only 33% more cache than the i7-9700K, not 100% (and even if it did, it wouldn't behave the same unless it was strictly partitioned). Memory bandwidth contention is a thing, too. And within the CPU, some parts cannot be partitioned - it just relies on them running fast enough to supply the parts which can.

    And clearly hyperthreading has an impact on overclocking ability. It might be interesting to see the gaming graphs with the i7-9700K@5.3GHz vs. i9-9900K@5.0GHz (or, if you want to save 50W, i7-9700K@5.0GHz vs. i9-9900K@4.7GHz - basically the i9-9900K's default all-core boost, but 400MHz above the i7-9700K's 4.6GHz all-core default, both for the same power).
  • NaterGator - Friday, October 19, 2018 - link

    Any chance y'all would be willing to run those HT-bound tests with the 9900K's HT disabled in the BIOS?
  • ekidhardt - Friday, October 19, 2018 - link

    Thanks for the review!

    I think far too much emphasis has been placed on 'value'. I simply want the fastest, most powerful CPU that isn't priced absurdly high.

    While the 9900K MSRP is high, it's not in the realm of irrational spending; it's a few hundred dollars more. For a person that upgrades once every 5-6 years, a few hundred extra is not that important to me.

    I'd also like to argue against those protesting pre-order logic. I pre-ordered, and my logic is this: Intel has a CLEAR track record of great CPUs. There haven't been any surprisingly terrible CPUs released. They're consistently reliable.

    Anyway! I'm happy I pre-ordered and don't care that it costs a little bit extra; I've got a fast 8 core 16 thread CPU that should last quite a while.
  • Schmich - Friday, October 19, 2018 - link

    You have the numbers anyway. Not everyone buys the highest end and then waits many years to upgrade. That isn't the smartest choice, because you spend so much money and then after 2-3 years you're just a mid-ranger.

    For those who want high-end they can still get a 2700x today, and then the 3700x next year with most likely better performance than your 9900k due to 7nm, PLUS have money over PLUS a spare 2700x they can sell.

    Same thing for GPU except for this gen. I never understood those who buy the xx80Ti version and then upgrade after 5 years. Your overall experience would be better only getting the xx70 but upgrading more often.
  • Spunjji - Monday, October 22, 2018 - link

    This is what actual logic looks like!
  • Gastec - Sunday, November 4, 2018 - link

    Basically "The more you buy, the more you save" :-\
  • shaolin95 - Friday, October 19, 2018 - link

    Exactly. I think the ones beating the value dead horse are mainly AMD fanboys defending their 2700x purchase
  • eva02langley - Friday, October 19, 2018 - link

    Sorry, value is a huge aspect. The reason why RTX is such an issue. Also, at this price point, I would go HEDT if compute was really that important for me.

    A 10-15% performance increase over a 2700X at 1080p with a damn 1080 Ti does not justify the purchase for me.
  • Arbie - Friday, October 19, 2018 - link

    Gratuitous trolling, drags down thread quality. Do you really still need to be told what AMD has done for this market? Do you even think this product would exist without them - except at maybe twice the already high price? Go pick on someone that deserves your scorn, such as ... Intel.
  • Great_Scott - Friday, October 19, 2018 - link

    What a mess. I guess gaming really doesn't depend on the CPU any more. Those Ryzen machines were running at a 1GHz+ speed deficit and still do decently.

    Intel needs a new core design and AMD needs a new fab.
