We’ve been following DirectX 12 for about two years now, watching Microsoft’s next-generation low-level graphics API go from an internal development project to a public release. Though harder to use than earlier high-level APIs like DirectX 11, DirectX 12 gives developers more control than ever before, and those who can tame it can unlock performance and rendering techniques simply not possible with earlier APIs. With the CPU bottlenecks of DirectX 11 coming into full view as single-threaded performance gains have slowed and CPUs have grown their core counts instead, DirectX 12 could not have come at a better time.

Although DirectX 12 was finalized and launched alongside Windows 10 last summer, we’ve continued to keep an eye on the API as the first games are developed against it. As developers need the tools before they can release games, there has been an expected lag period between the launch of Windows 10 and when games using the API are ready for release, and we are finally nearing the end of that period. Consequently, we’re now getting a clearer picture of what to expect from games utilizing DirectX 12 as those games approach their launch.

There are a few games vying for the title of the first major DirectX 12 game, but at this point I think it’s safe to say that the first high-profile game to be released will be Ashes of the Singularity. This is because the developer, Oxide, has specifically crafted an engine and a game meant to exploit the abilities of the API – large numbers of draw calls, asynchronous compute/shading, and explicit multi-GPU – putting it a step beyond adding DX12 rendering paths to games that were originally designed for DX11. As a result, both the GPU vendors and Microsoft itself have used Ashes and earlier builds of its Nitrous engine to demonstrate the capabilities of the API, and this is something we’ve looked at with both Ashes and the Star Swarm technical demo.

Much like a number of other games these days, Ashes of the Singularity has been in a public beta via Steam Early Access, while its full, golden release on March 22nd is fast approaching. To that end, Oxide and publisher Stardock are gearing up to release the second major beta of the game, and the last beta before the game goes gold. At the same time they’ve invited the press to take a look at the beta and its updated benchmark ahead of tomorrow’s early access release, so today we’ll be taking a second and more comprehensive look at the game.

The first time we poked at Ashes was to investigate an early alpha of the game’s explicit multi-GPU functionality. Though only in a limited form at the time, Oxide demonstrated that they had a basic implementation of DX12 multi-GPU up and running, allowing us to not only pair up similar video cards, but dissimilar cards from opposing vendors, making a combined GeForce + Radeon setup a reality. This early version of Ashes showed a lot of promise for DX12 multi-GPU, and after some additional development it is now finally being released to the public as part of this week’s beta.

Since then Oxide has also been at work both cleaning up the code to prepare it for release and implementing even more DX12 functionality. The latest beta adds greatly improved support for another of DX12’s powerhouse features: asynchronous shading/computing. By taking advantage of DX12’s lower-level access, games and applications can directly interface with the various execution queues on a GPU, scheduling work on each queue and having it executed independently. This allows certain tasks to be completed in less time (lower throughput latency) and/or to better utilize a GPU’s massive arrays of shader ALUs.

Between its new functionality, updated graphical effects, and a significant amount of optimization work since the last beta, the latest beta for Ashes gives us quite a bit to take a look at today, so let’s get started.

More on Async Shading, the New Benchmark, & the Test

153 Comments


  • rarson - Wednesday, February 24, 2016 - link

    Yeah, because people who bought a 980 Ti are already looking to replace them...
  • Aspiring Techie - Wednesday, February 24, 2016 - link

    I'm pretty sure that Nvidia's Pascal cards will be optimized for DX12. Still, this gives AMD a slight advantage, which they need pretty badly now.
  • testbug00 - Wednesday, February 24, 2016 - link

    *laughs*
    Pascal is more of the same as Maxwell when it comes to gaming.
  • Mondozai - Thursday, February 25, 2016 - link

    Pascal is heavily compute-oriented, which will affect how the gaming lineup arch will be built. Do your homework.
  • testbug00 - Thursday, February 25, 2016 - link

    Sorry, Maxwell already can support packed FP16 operations at 2x the rate of FP32 with X1.
    The rest of compute will be pretty much exclusive to GP100. Like how Kepler had a gaming line and GK110 for compute.
  • MattKa - Thursday, February 25, 2016 - link

    *laughs*
    I'd like to borrow your crystal ball...

    You lying sack of shit. Stop making things up you retarded ass face.
  • testbug00 - Thursday, February 25, 2016 - link

    What does Pascal have over Maxwell according to Nvidia again? Bolted-on FP64 units?
  • CiccioB - Sunday, February 28, 2016 - link

    I have not read anything about Pascal from nvidia outside of the FP16 capabilities, which are HPC oriented (deep learning).
    Where have you read anything about how Pascal's cores/SMX/cache and memory controller are organized? Are they still using a crossbar, or have they finally moved to a ring bus? Are the caches bigger or faster? What is the ratio of cores/ROPs/TMUs? How much bandwidth per core? How much has the memory compression technology improved? Have the cores doubled their ALUs, or have they made more independent cores? How independent? Is the HW scheduler now able to preempt the graphics thread, or can it still not? How many threads can it support? Is the voxel support better and able to be used heavily in scenes to make a global illumination quality difference?

    I have not read anything about these points. Do you have some more info about them?
    Because what I could see is that at first glance even Maxwell was not really different from Kepler. But in reality the performance was quite different in many ways.

    I think you really do not know what you are talking about.
    You are just expressing your desires and hopes like any other fanboy, as a mirror of the frustration you have suffered all these years with the less capable AMD architecture you have been using up to now. You just hope nvidia has stopped and AMD has finally made a step forward. It may be that you are right. But you can't say that now, nor would I go around saying such a stupid thing as you were saying without anything as a fact.
  • anubis44 - Thursday, February 25, 2016 - link

    I think nVidia's been caught with their pants down, and Pascal doesn't have hardware schedulers to perform async compute, either. It may be that AMD has seriously beaten them this time.
  • anubis44 - Thursday, February 25, 2016 - link

    nVidia wasn't expecting AMD to force Microsoft's hand and release DX12 so soon. I have a feeling Pascal, like Maxwell, doesn't have hardware schedulers, either. It's beginning to look like nVidia's been check-mated by AMD here.
