Gaming: Ashes Classic (DX12)

Seen as the poster child of DirectX 12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many DirectX 12 features as it possibly could. Stardock, the developer behind the Nitrous engine that powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second allows the engine to render substantial unit depth and effects individually, where other RTS titles had to rely on combined draw calls to achieve the same result, which ultimately made some combined unit structures very rigid.
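To illustrate the trade-off described above, here is a deliberately simplified, hypothetical model (not the Nitrous engine's actual code): batching collapses every unit that shares a mesh into a single combined draw call, which forces those units to share render state, while per-unit submission keeps each unit independent at the cost of many more calls.

```python
from collections import defaultdict

# Hypothetical unit list: (mesh_id, world_position) pairs.
units = [("tank", (0, 0)), ("tank", (1, 0)), ("drone", (2, 1)), ("tank", (3, 2))]

def draw_calls_batched(units):
    """DX11-style batching: merge all units sharing a mesh into one
    combined call. Few calls, but every unit in a batch must share
    state, which is what makes combined structures rigid."""
    batches = defaultdict(list)
    for mesh, pos in units:
        batches[mesh].append(pos)
    return len(batches)  # one call per distinct mesh

def draw_calls_individual(units):
    """DX12-style submission: one call per unit, relying on the API's
    higher draw-call throughput; per-unit state stays flexible."""
    return len(units)
```

With the sample list above, batching issues one call per mesh type, while individual submission issues one call per unit.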

Stardock clearly understands the importance of an in-game benchmark, and ensured that such a tool was available and capable from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark performs a four-minute fixed-seed battle with a variety of camera shots, and outputs a vast amount of data to analyze.

For our benchmark, we run Ashes Classic: an older version of the game, from before the Escalation update. This version is easier to automate, as it has no splash screen, but it still offers strong visual fidelity to test.

 

Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, and Textures, along with separate options for the terrain. There are several presets, from Very Low to Extreme; we run our benchmarks at the settings above, and take the frame-time output for our average and percentile numbers.
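The conversion from a frame-time log to the reported figures is straightforward. A minimal sketch, assuming the benchmark outputs per-frame times in milliseconds and taking the 95th-percentile frame time as the sustained figure, might look like:

```python
def summarize(frame_times_ms):
    """Reduce a list of per-frame render times (ms) to the two headline
    numbers: average FPS over the run, and the FPS corresponding to the
    95th-percentile (i.e. near-worst-case) frame time."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)

    # The 95th-percentile frame time is the threshold that 95% of
    # frames come in under; invert it to express it as an FPS figure.
    ordered = sorted(frame_times_ms)
    idx = min(n - 1, int(0.95 * n))
    p95_fps = 1000.0 / ordered[idx]
    return avg_fps, p95_fps
```

For example, a run of 95 frames at 10 ms and 5 frames at 20 ms averages roughly 95 FPS, but its 95th-percentile figure is 50 FPS, reflecting the slow frames that the average hides.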

 

All of our benchmark results can also be found in our benchmark engine, Bench.

[Charts: Ashes Classic Average FPS and 95th Percentile, tested at IGP, Low, Medium, and High settings]

  • Darkworld - Wednesday, May 20, 2020 - link

    10500k?
  • Chaitanya - Wednesday, May 20, 2020 - link

    Pointless given R5 3000 family of CPUs.
  • yeeeeman - Wednesday, May 20, 2020 - link

    Yeah right. Except it will beat basically all and lineup in games. Otherwise it is pointless.
  • yeeeeman - Wednesday, May 20, 2020 - link

    All AMD lineup*
  • SKiT_R31 - Wednesday, May 20, 2020 - link

    Yeah with a 2080 Ti the flagship 10 series CPU beats AMD in most titles, generally by a single digit margin. Who is pairing a mid-low end CPU with such a GPU? Also if there were to be a 10500K, you probably don't need to look much further than the 9600K in the charts above.

    This may have been missed on you, but what CPU reviews like the above show is: unless you are running the most top end flagship GPU and are low resolution high fps gaming, AMD is better at every single price point. Just accept it, and move on.
  • Drkrieger01 - Wednesday, May 20, 2020 - link

    It also means that if you have purchased an Intel 6th gen CPU in i5 or i7, there's not much reason to upgrade unless you need more threads. And it will only be faster if you're using those said threads effectively. I'm still running an i5 6600K, granted it's running at 4.6GHz - there's no reason for me to upgrade until either Intel and/or AMD come up with better architecture and frequency combination (IPC + clock speed).
    I'll likely be taking the jump back to AMD for the Ryzen 4000's after a long run since the Sandy Bridge era.

    Anyone needing only 4-6 cores should wait until then as well.
  • Samus - Thursday, May 21, 2020 - link

    That's most people, including me. I'm still riding my Haswell 4C/8T because for my applications the only thing more cores will get me is faster unraring of my porn.
  • Lord of the Bored - Thursday, May 21, 2020 - link

    Hey, that's an important task!
  • Hxx - Wednesday, May 20, 2020 - link

    at 1440p intel still leads in gaming. It may not lead by much or may not lead by enough to warrant buying it over AMD but the person buying this chip is rocking a high end gpu and will likely upgrade to a high end gpu and the performance gap will only widen in intel's favor as the gpu becomes less of a bottleneck. So yeah pairing this with a 2060 makes no sense, go AMD. but pairing this with a 2080ti and a soon to be released 3080TI oh yeah this lineup will be a better choice.
  • DrKlahn - Thursday, May 21, 2020 - link

    By that logic the new games released since the Ryzen 3x000 series debut last year should show a larger gap at 1440+ between Intel and AMD. But they don't. And judging by past trends I doubt they will in the future either. As GPUs advance so does the eye candy in the newer engines, keeping the bottleneck pretty much where it always is at higher resolutions and detail levels, the GPU.
