Gaming: Ashes Classic (DX12)

Widely seen as the poster child of DirectX 12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many DirectX 12 features as it possibly could. Stardock, the developer behind the Nitrous engine that powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second lets the engine render substantial unit depth and effects that other RTS titles have had to combine draw calls to achieve, a workaround that ultimately makes some combined unit structures very rigid.
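
To make the draw-call argument concrete, below is a minimal sketch of the difference between issuing one draw per unit and collapsing identical units into a single instanced draw. It uses a mock command list purely to count submissions (the real interface would be something like ID3D12GraphicsCommandList), and the unit and index counts are illustrative assumptions, not figures from the Nitrous engine.

```cpp
#include <cstdio>

// Mock command list that only counts submissions; a stand-in for a real
// graphics API object such as ID3D12GraphicsCommandList (illustrative only).
struct MockCommandList {
    int drawCalls = 0;
    void DrawIndexedInstanced(int /*indexCount*/, int instanceCount) {
        ++drawCalls;          // one CPU-side submission, regardless of instances
        (void)instanceCount;
    }
};

int main() {
    const int units = 10000;          // hypothetical on-screen unit count
    const int indicesPerUnit = 3000;  // hypothetical mesh complexity

    // Per-unit path: one draw call per unit, each free to carry its own
    // state. A higher draw-call budget (e.g. under DX12) makes this viable.
    MockCommandList perUnit;
    for (int i = 0; i < units; ++i)
        perUnit.DrawIndexedInstanced(indicesPerUnit, 1);

    // Combined path: identical units merged into one instanced draw. Cheap
    // on the CPU, but the merged group must share state, hence the rigidity.
    MockCommandList combined;
    combined.DrawIndexedInstanced(indicesPerUnit, units);

    std::printf("per-unit: %d draw calls, combined: %d draw call\n",
                perUnit.drawCalls, combined.drawCalls);
    return 0;
}
```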

Stardock clearly understands the importance of an in-game benchmark, and made sure a capable tool was available from day one; with so many additional DX12 features in play, being able to characterize how they affected the title was important to the developer. The in-game benchmark performs a four-minute fixed-seed battle with a variety of camera shots, and outputs a vast amount of data to analyze.

For our benchmark, we run Ashes Classic: an older version of the game from before the Escalation expansion. We use this version because it is easier to automate, having no splash screen, while still offering strong visual fidelity to test.

AnandTech CPU Gaming 2019 Game List
Game            Genre   Release Date   API    IGP              Low              Med              High
Ashes: Classic  RTS     Mar 2016       DX12   720p Standard    1080p Standard   1440p Standard   4K Standard

Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, and Textures, plus separate options for the terrain. There are several presets, from Very Low to Extreme; we run our benchmarks at the settings above, and take the frame-time output for our average and percentile numbers.
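
As a minimal sketch of how average and percentile figures can be derived from a frame-time log, consider the following. The sample values are placeholders, and the percentile convention shown (converting the 95th-percentile frame time into an FPS figure) is one common approach, stated here as an assumption rather than a description of our exact tooling.

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // Hypothetical frame times in milliseconds, as a benchmark log might
    // report them (placeholder values, not real results).
    std::vector<double> frameTimesMs = {16.2, 17.1, 15.8, 16.5, 18.0, 16.9, 21.3};

    // Average FPS: total frames divided by total elapsed time.
    double totalMs = std::accumulate(frameTimesMs.begin(), frameTimesMs.end(), 0.0);
    double avgFps = 1000.0 * frameTimesMs.size() / totalMs;

    // 95th-percentile FPS: find the 95th-percentile slowest frame time and
    // convert it to a frame rate, so 95% of frames render at least this fast.
    std::vector<double> sorted = frameTimesMs;
    std::sort(sorted.begin(), sorted.end());
    size_t idx = static_cast<size_t>(0.95 * (sorted.size() - 1));
    double p95Fps = 1000.0 / sorted[idx];

    std::printf("Average FPS: %.1f\n95th percentile FPS: %.1f\n", avgFps, p95Fps);
    return 0;
}
```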


All of our benchmark results can also be found in our benchmark engine, Bench.

[Benchmark result charts: Ashes: Classic — Average FPS and 95th Percentile at the IGP, Low, Medium, and High test settings]


Comments

  • johngardner58 - Monday, February 24, 2020

    Again, it depends on the need. If you need speed, there is no alternative. You can't get it just by running blades, because not everything can be broken apart into independent parallel processes. Our company once ran an analysis that took a very long time; when time is money, this is the only thing that will fill the bill for certain workloads. Having shared high-speed resources (memory and cache) makes the difference. That is why 255 Raspberry Pis in a cluster will not outperform most home desktops unless they are running highly independent parallel processes (see the Amdahl's law sketch after these comments). Actually, the MIPS per watt on such a cluster is probably lower than for individual processors, because of the combined inefficiencies of duplicated support circuitry.
  • SanX - Friday, February 1, 2019

    Every second home has a few 1500W space heaters running in the winter.
  • johngardner58 - Monday, February 24, 2020

    Server side: it depends on the workload. For massively parallel (independent) tasks, yes, a bladed or multiprocessor setup is usually better, but cores can talk to each other much, much faster than blades can, as they share caches and memory. So for less parallel workloads (a single process with multiple threads, e.g. rendering, numerics, and analytics) high core counts can provide far more performance at reduced cost. Probably the best example of the need for core count is GPU-based processing. Intel also had specialized high-core-count Xeon-based accelerator cards with 96 cores at one point. There is a need, even if a limited one.
  • Samus - Thursday, January 31, 2019

    The problem is that in the vast majority of applications, an $1800 CPU from AMD running on a $300 motherboard (an overall platform savings of $2400!) either matches or beats the Intel Xeon. You have to cherry-pick the benchmarks Intel leads in, and yes, it leads by a healthy margin, but they basically come down to 7-Zip, random rendering tasks, and Corona.

    Disaster strikes when you consider there is ZERO headroom for overclocking the Intel Xeon, whereas the AMD Threadripper has some headroom, probably enough to narrow the gap in these few-and-far-between defeats.

    I love Intel but wow what the hell has been going on over there lately...
  • Jimbo2K7 - Wednesday, January 30, 2019

    Baby's on fire? Better throw her in the water!

    Love the Eno reference!
  • repoman27 - Wednesday, January 30, 2019

    Nah, I figure Ian for more of a Die Antwoord fan. Intel’s gone zef style to compete with AMD’s Zen style.
  • Ian Cutress - Wednesday, January 30, 2019

    ^ repoman gets it. I actually listen mostly to melodic/death metal and industrial. Something fast paced to help overclock my brain
  • WasHopingForAnHonestReview - Wednesday, January 30, 2019

    My man
  • IGTrading - Wednesday, January 30, 2019

    Was testing done with mitigation for the specific Windows bug that affects AMD CPUs with more than 16 cores? Or was it done with no attempt to ensure normal processing conditions for Threadripper, despite the known bug?
  • eva02langley - Thursday, January 31, 2019

    Insomnium, Kalmah, Hypocrisy, Dark Tranquillity, Ne Obliviscaris...

    By the way, Saor and Rotting Christ are releasing their albums in two weeks.

    You might want to check out Carpenter Brut - Leather Teeth and Rivers of Nihil - Where Owls Know My Name.
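
As referenced in johngardner58's comment above, the scaling limit being described is essentially Amdahl's law: if only a fraction p of a job can run in parallel, then n processors give at best a speedup of 1 / ((1 - p) + p / n). A minimal sketch of the arithmetic follows, using a hypothetical 90%-parallel workload (the fraction is an assumption for illustration).

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    // Amdahl's law: best-case speedup on n processors when a fraction p
    // of the work is parallelizable. p = 0.90 is a hypothetical example.
    const double p = 0.90;
    for (int n : {4, 64, 255}) {  // 255 echoes the Raspberry Pi cluster above
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("n = %3d processors -> speedup = %5.2fx\n", n, speedup);
    }
    return 0;
}
```

Even at 255 nodes, the speedup for this workload tops out just under 10x, which is exactly the serial-bottleneck effect the comment describes.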
