Gaming: Ashes Classic (DX12)

Seen as the holy child of DirectX12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many DirectX12 features as it possibly could. Stardock, the developer behind the Nitrous engine that powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX12 at the helm, the ability to issue more draw calls per second allows the engine to handle substantial unit depth and effects that other RTS titles could only achieve by combining draw calls, which ultimately made some of those combined unit structures very rigid.
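
To illustrate the trade-off being described, here is a toy Python sketch (not Nitrous engine code) contrasting per-unit draws with combined draws; the renderer callback and unit list are hypothetical, and the only point is the number of draw calls each approach generates.

```python
from collections import defaultdict

# Hypothetical scene: 10,000 units of two mesh types
units = [{"mesh": "frigate" if i % 2 else "tank", "pos": (i, 0)} for i in range(10_000)]

def per_unit_draws(submit):
    """DX12-style: one draw call per unit, so every unit keeps independent state."""
    for u in units:
        submit(u["mesh"], [u["pos"]])

def combined_draws(submit):
    """Draw-call-limited style: units sharing a mesh are merged into one rigid group."""
    groups = defaultdict(list)
    for u in units:
        groups[u["mesh"]].append(u["pos"])
    for mesh, positions in groups.items():
        submit(mesh, positions)

def count_calls(draw_fn):
    calls = []
    draw_fn(lambda mesh, positions: calls.append(len(positions)))
    return len(calls)

print("per-unit draw calls:", count_calls(per_unit_draws))   # 10000
print("combined draw calls:", count_calls(combined_draws))   # 2
```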

Stardock clearly understands the importance of an in-game benchmark, and ensured that such a tool was available and capable from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important to the developer. The in-game benchmark performs a four-minute, fixed-seed battle with a variety of camera shots, and outputs a vast amount of data to analyze.

For our benchmark, we run Ashes Classic: an older version of the game from before the Escalation update. The reason is that this version is easier to automate, as it has no splash screen, yet it still offers strong visual fidelity to test.

Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, and Textures, plus separate options for the terrain. There are several presets, from Very Low to Extreme: we run our benchmarks at the above settings, and take the frame-time output for our average and percentile numbers.
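
As a rough illustration of that last step, here is a minimal Python sketch for reducing a frame-time log to an average FPS and a 95th percentile figure. The file name and the assumption of one frame time in milliseconds per line are hypothetical, not the benchmark's actual output format.

```python
import statistics

def summarize(frame_times_ms):
    # Average FPS: total frames rendered divided by total time taken
    total_seconds = sum(frame_times_ms) / 1000.0
    average_fps = len(frame_times_ms) / total_seconds
    # 95th percentile frame time, expressed as the frame rate sustained by
    # all but the slowest 5% of frames
    p95_ms = statistics.quantiles(frame_times_ms, n=100)[94]
    return average_fps, 1000.0 / p95_ms

if __name__ == "__main__":
    # 'ashes_frametimes.csv' is a placeholder name for the benchmark's output
    with open("ashes_frametimes.csv") as f:
        times = [float(line) for line in f if line.strip()]
    avg, p95 = summarize(times)
    print(f"Average FPS: {avg:.1f}   95th percentile: {p95:.1f} FPS")
```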

All of our benchmark results can also be found in our benchmark engine, Bench.

[Benchmark result charts: Average FPS and 95th Percentile at the IGP, Low, Medium, and High settings]
Comments

  • vortmax2 - Monday, May 18, 2020 - link

    Anyone know why the 3300X is at the top of the Digicortex 1.20 bench?
  • gouthamravee - Monday, May 18, 2020 - link

    I'm guessing here, but the 3300X has all its cores on a single CCX, and if Digicortex is one of those benches that's highly dependent on latency, that could explain why the 3300X is at the top of the list here (see the rough latency sketch after the comments).

    I checked the previous 3300X article and it seems to be the same story there.
  • wolfesteinabhi - Monday, May 18, 2020 - link

    Thanks for a great article, Ian and AT.

    The main problem with mid/lower-range CPU reviews like this Ryzen 3600/X, and even the i5s/i3s, is that they are almost always focused on "gaming" (for some reason everything budget-oriented is just gaming) ... no one talks about AI workloads, MATLAB, TensorFlow, etc. Many people and developers don't want to shell out money for a 2080 Ti and a Ryzen 9 3950X, or even a TR ... they have to make do with lower-end, or say "reasonable", CPUs ... and products like this Ryzen 5 make a sensible choice in this segment for a developer/learner on a budget.

    A lot of people would appreciate it if a few more pages were dedicated to such development workflows (AI, TensorFlow, compilation, etc.), even for such mid-range CPUs.
  • DanNeely - Monday, May 18, 2020 - link

    Ian periodically tweets requests for scriptable benchmarks for those categories and for anyone with connections at commercial vendors in those spaces who can provide evaluation licenses for commercial products. He's gotten minimal uptake on the former and doesn't have time to learn enough about $industry to create a reasonable benchmark from scratch using their FOSS tools. On the commercial side, the various engineering software companies don't care about reviews from sites like this one and their PR contacts can't/won't give out licenses.
  • webdoctors - Monday, May 18, 2020 - link

    Because office tasks don't require any computation, and gaming is the most mainstream workload that actually requires computation.

    Scientific stuff like MATLAB or Folding@Home needs computation, but if that's useful to you, you'd just buy the higher-end parts. The price difference between the 3600X and 3700X (6 vs. 8 cores) is $100: $200 vs. $300 at retail. For someone working, $100 is nothing for improving your commercial or academic output. These are parts you use for 5+ years.

    I agree a TR doesn't make sense if you can get a consumer part like a 3800X much cheaper.
  • Impetuous - Monday, May 18, 2020 - link

    Logged in to second this. I think a lot of students and professionals like me who do research on the side (and are on pretty tight grants/allowances) would appreciate a MATLAB benchmark. This looks like a great option for a grad student workstation!
  • brucethemoose - Monday, May 18, 2020 - link

    I think one MKL TF benchmark is enough, as you'd have to be crazy to buy a 3600 over a cheap GPU for AI training. If money is that tight, you're probably not buying a new system, and/or you're using Google Colab instead.

    +1 for more compilation benchmarking. I'd like a Python benchmark too, if there's any demand for such a thing.
  • PeachNCream - Monday, May 18, 2020 - link

    A lot of people don't have money to throw away at hardware, more so now than ever before, so we are going to make older equipment work for longer or buy less compute at a lower price. It's important to get hardware out of its comfort zone, because these general-purpose processors will be used in all sorts of ways beyond a narrow set of games and unzipping a huge archive file. After all, if you want to play games, buying as much GPU as you can afford and then feeding it enough power solves the problem for the most part. That answer has been the case for years, so we really don't need more text and time spent on telling us that. Say it once for each new generation and then get to reviewing hardware more relevant to how people actually use their computers.
  • jabber - Tuesday, May 19, 2020 - link

    Plus most of us don't upgrade hardware as much as we used to. Back in the day (single-core days) I was upgrading my CPU every 6-8 months. Each upgrade pushed the graphics from 28 FPS to 32 FPS to 36 FPS, which made a difference. Now, with modest setups pushing past 60 FPS... why bother? I upgrade my CPU every 6 years or so now.
  • wolfesteinabhi - Tuesday, May 19, 2020 - link

    As I said in one of the replies below... maybe TF is not a good example, since TF work won't run purely on a CPU, but some benchmark around it, and similar other work/development-related tasks, would help.

    Most of us have to depend on these gaming-only benchmarks to guesstimate how good or bad a CPU will be for dev work. Maybe a CPU with fewer cores but extra cache and extra clocks might have been better, or vice versa... but almost no reviews tell that kind of story for mid/low-range CPUs. Having said that, I don't expect that kind of analysis for dual cores and such CPUs, but higher up there are a lot of CPUs that can be made to do a lot of good work even beyond gaming (even if they need to be paired with some GPU).
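
As referenced in the CCX comment above, here is a rough, hypothetical Python sketch of the kind of core-to-core latency effect being guessed at: it pings a message between two processes pinned to different cores and times the round trips. The core numbers, the iteration count, and the use of sched_setaffinity (Linux-only) are all assumptions, and pipe IPC latency is only a coarse proxy for cache/CCX-to-CCX latency.

```python
import os
import time
from multiprocessing import Pipe, Process

N = 100_000  # round trips per core pair (an assumption)

def pong(conn, core):
    os.sched_setaffinity(0, {core})    # pin the responder process to one core
    for _ in range(N):
        conn.send(conn.recv())         # echo whatever arrives

def round_trip_us(core_a, core_b):
    parent, child = Pipe()
    p = Process(target=pong, args=(child, core_b))
    p.start()
    os.sched_setaffinity(0, {core_a})  # pin this process to the other core
    start = time.perf_counter()
    for _ in range(N):
        parent.send(1)
        parent.recv()
    elapsed = time.perf_counter() - start
    p.join()
    return elapsed / N * 1e6           # mean round-trip time in microseconds

if __name__ == "__main__":
    # Compare a (presumed) same-CCX pair with a (presumed) cross-CCX pair;
    # which logical cores share a CCX depends on the CPU and OS enumeration.
    print("cores 0 and 1:", round(round_trip_us(0, 1), 2), "us")
    print("cores 0 and 6:", round(round_trip_us(0, 6), 2), "us")
```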
