Gaming: Ashes Classic (DX12)

Seen as the holy child of DirectX12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many DirectX12 features as it possibly could. Stardock, the developer behind the Nitrous engine that powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX12 at the helm, the ability to issue more draw calls per second lets the engine render substantial unit depth and effects individually, where other RTS titles had to rely on combined draw calls to achieve the same density, ultimately making some combined unit structures very rigid.
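To put rough numbers on the draw-call argument, consider a toy back-of-the-envelope model. The per-call overheads and unit count below are invented purely for illustration, not measurements of any real driver, but they capture why older APIs pushed RTS engines toward combined draw calls:

```python
# Toy model of CPU-side draw-call submission cost. The overhead figures
# below are illustrative assumptions, not measurements of any real driver.
FRAME_BUDGET_US = 16_667           # one frame at 60 FPS, in microseconds
PER_CALL_OVERHEAD_US = {
    "older-API-style": 25.0,       # assumed heavy driver cost per draw call
    "DX12-style": 1.5,             # assumed thin, low-overhead submission
}

units = 10_000                     # units drawn individually, one call each
for api, per_call in PER_CALL_OVERHEAD_US.items():
    cost = units * per_call
    verdict = "fits" if cost <= FRAME_BUDGET_US else "blows"
    print(f"{api:>16}: {cost:>9,.0f} us of submission time, "
          f"{verdict} the 60 FPS budget")
```

With a heavy per-call cost, ten thousand individual draws cannot fit in a 60 FPS frame budget; with DX12-style submission they can, which is what frees the engine from rigid combined-draw structures.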

Stardock clearly understood the importance of an in-game benchmark, making such a tool available and capable from day one; with so many additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark performs a four-minute, fixed-seed battle with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run Ashes Classic: an older version of the game from before the Escalation update. This version is easier to automate, as it has no splash screen, but still offers strong visual fidelity to test.

AnandTech CPU Gaming 2019 Game List

Game            Genre  Release Date  API   IGP            Low             Med             High
Ashes: Classic  RTS    Mar 2016      DX12  720p Standard  1080p Standard  1440p Standard  4K Standard

Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, and Textures, plus separate options for the terrain. There are several presets, from Very Low to Extreme; we run our benchmarks at the settings above and take the frame-time output for our average and percentile numbers.
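As a rough illustration of how those two numbers fall out of a frame-time log, here is a minimal Python sketch. The input data is hypothetical, and the exact format of Ashes' output (and of our own processing pipeline) is not shown:

```python
import numpy as np

def summarize_frametimes(frametimes_ms):
    """Summarize per-frame render times (milliseconds) into the two
    headline numbers: average FPS and 95th-percentile FPS."""
    ft = np.asarray(frametimes_ms, dtype=float)
    avg_fps = 1000.0 * ft.size / ft.sum()   # total frames / total seconds
    p95_ms = np.percentile(ft, 95)          # 5% of frames are slower than this
    return avg_fps, 1000.0 / p95_ms

# Hypothetical run: mostly 10 ms frames with occasional 25 ms stutters
avg, p95 = summarize_frametimes([10.0] * 95 + [25.0] * 5)
print(f"Average: {avg:.1f} FPS, 95th percentile: {p95:.1f} FPS")
```

The percentile figure is the frame time that 95% of frames beat, converted back into FPS, which is why it always sits below the average.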

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: Ashes Classic Average FPS and 95th Percentile at IGP, Low, Medium, and High settings]

As a game designed from the get-go to punish CPUs and showcase the benefits of DirectX 12-style APIs, Ashes is one of our more CPU-sensitive tests. Above 1080p the results run together due to GPU limits, but at 1080p and below we get some useful separation. There, the 9900K ekes out a small advantage, putting it in the lead with the 9700K right behind it.

Notably, the game doesn’t scale much from 1080p down to 720p, which leads me to suspect that we’re looking at a relatively pure CPU bottleneck, a rarity in modern games. That’s both good and bad for Intel’s latest CPU: it’s definitely the fastest thing here, but it doesn’t do much to separate itself from the likes of the 8700K, holding just a 4% advantage at 1080p despite its frequency and core count advantages. So assuming this is not in fact a GPU limit, we may be encroaching on another bottleneck (memory bandwidth?), or perhaps the practical frequency gains on the 9900K just aren’t all that much here.
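That resolution-scaling inference can be boiled down to a simple rule of thumb. The sketch below is just that, a heuristic with an arbitrary 5% tolerance and hypothetical FPS figures, not a formal method:

```python
def likely_bottleneck(fps_low_res, fps_high_res, tolerance=0.05):
    """Rough heuristic: lowering the resolution lightens the GPU load but
    not the CPU load, so if FPS barely moves, the CPU is the likely limit."""
    if fps_high_res >= fps_low_res * (1.0 - tolerance):
        return "likely CPU-bound (resolution change had little effect)"
    return "likely GPU-bound (lower resolution raised the frame rate)"

# Hypothetical numbers in the spirit of the 720p-vs-1080p comparison above
print(likely_bottleneck(fps_low_res=118.0, fps_high_res=115.0))
```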

But if nothing else, the 9900K and even the 9700K do make a case for themselves here versus the 9600K. Whether it’s the extra cores or the clockspeeds, there’s a 10% advantage for the faster processors at 1080p.

274 Comments

  • Ian Cutress - Monday, October 22, 2018 - link

    Emn13: Base code with compiler optimizations only, such as a non-CompSci scientist would use (which was the original intention of the 3DPM test), vs hand-tuned AVX/AVX2/AVX512 code.
  • just4U - Saturday, October 20, 2018 - link

    The only problem I really have with the product is that for the price it should have come with a nice fancy cooler, like the 2700X, which is in its own right a stellar product at close to 60% of the cost. Not sure what Intel's game plan is with this, but it's priced close to a second-gen entry Threadripper, and at that cost you might as well just make the leap for a little more.
  • khanikun - Monday, October 22, 2018 - link

    I'm the other way: I'd much rather they lower the cost and include no cooler. Although Intel doesn't decrease the cost when it leaves the cooler out, which sucks.

    I'm either getting a new waterblock or drilling holes in the waterblock bracket to make it fit. Well, I just upgraded, so I'm not in the market for any of these procs.
  • brunis.dk - Saturday, October 20, 2018 - link

    no prayers for AMD?
  • ingwe - Friday, October 19, 2018 - link

    I don't see the value in it though I understand that this isn't sold as a value proposition--it is sold for performance. Seems to do the job it sets out to do but isn't spectacularly exciting to me.
  • jospoortvliet - Saturday, October 20, 2018 - link

    Given that the quoted prices ignore the fact that right now Intel CPU prices are 30-50% higher than MSRP, yes, nobody thinking about value for money buys these...
  • DanNeely - Friday, October 19, 2018 - link

    Seriously though, I'm wondering about the handful of benchmarks that showed the i7 beating the i9 by significant amounts. 1-2% I assume is sampling noise in cases where the two are tied, but flipping through the article I saw a few where the i7 won by significant margins.
  • Ian Cutress - Friday, October 19, 2018 - link

    Certain benchmarks seem to be core-resource bound. In HT mode, certain elements of the core are statically partitioned, giving each thread half, and if only one thread is there, you still only get half. With no HT, a thread gets the full core to work with.
  • 0ldman79 - Friday, October 19, 2018 - link

    I'd love to see some low level data on the i5 vs i7 on that topic.

    If the i5 is only missing HT, then the i7 without HT should score identically (more or less), with the i5 winning on occasion vs the HT-enabled i7. I always figured there was a significant amount of idle resources (ALU pipelines) in the i5 vs the i7, and HT allowed 100% (or as close as possible) usage of all of the pipelines.

    I wish Intel would release detailed info on that.
  • abufrejoval - Friday, October 19, 2018 - link

    Well, I guess you should be able to measure it if you have the chips. My understanding has always been that i7/i5 differentiation is all about voltage levels, with i5 parts needing too much voltage/power to pass the TDP restrictions, rather than defective logic precluding the use of 'one hyperthread'. I find it hard to imagine managing defects via partitions in the register file or by disabling certain ALUs: if core CPU logic is hit with a defect, it's dead, because you can't isolate and route around the defective part at that granularity. It's the voltage levels on the long wires that determine a CPU's fate, AFAIK.

    It's a free choice at the binning point between a lower clock with HT or a higher clock without HT, and Intel will determine the fate of a chip based on sales opportunities rather than hardware. And it's somewhat similar with the fully enabled lower-power -T parts and the high-frequency -K parts, which are most likely the same (or very similar) top-tier bins, sold at two distinct voltage levels yet rather similar premium prices, because you trade power against clocks and pay a premium for efficiency.

    Real chip defects can only be 'compensated' for by cutting off cache blocks or whole cores, but again I'd tend to think that even that is more driven by voltage considerations than 'hairs in the soup': with all the multi-patterning and multi-masking going on, and the 3D structures they are lovingly creating for every FinFET, their control over the basic structures is so great that it's mainly the layer alignment/conductivity that's challenging yields.
