Early Direct3D 12 Demos

While DirectX 12 is not scheduled for public release until the Holiday 2015 time period, Microsoft tells us that they have already been working on the API for a number of years. So although the API is 18-20 months off from its public release, Microsoft already has a very early version up and running on partner NVIDIA’s hardware.

In their demos Microsoft showed off a couple of different programs. The first was Futuremark’s 3DMark 2011, which, along with being a solid synthetic benchmark for heavy workloads, can easily be dissected to find bottlenecks and otherwise monitor the rendering process.


3DMark 2011 CPU Time: Direct3D 11 vs. Direct3D 12

As part of their presentation Microsoft showed off CPU utilization data comparing the Direct3D 11 and Direct3D 12 versions of 3DMark, which succinctly summarizes the CPU performance gains. By moving the benchmark to Direct3D 12, Microsoft and Futuremark were able to significantly reduce the single-threaded bottlenecking, distributing more of the User Mode Driver workload across multiple threads. Meanwhile the Kernel Mode Driver and the CPU time it consumed were eliminated entirely, as was some time spent within the Windows kernel itself. Finally, the amount of time spent within Direct3D itself was also reduced.

This benchmark likely represents a best case outcome for Direct3D 12, but importantly it shows all of the benefits of a low level API at once: some of the CPU workload has been distributed to other threads, while other aspects of the workload have been eliminated entirely. Yet despite all of this there’s still a clear “master” thread, showing that even a low level graphics API cannot perfectly distribute the workload among CPU threads. So there will still be a potential single-threaded bottleneck under Direct3D 12, though it will be greatly diminished compared to the kinds of bottlenecking that could occur before.
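
The Direct3D 12 API was not public at the time of these demos, so as a reference point only, here is a minimal C++ sketch of the threading model being described, using the names the API eventually shipped with rather than anything from the GDC preview build: each worker thread records its own command list (which is where the User Mode Driver work happens, in parallel), while a single master thread performs the serialized submission. Device and queue creation, and the GPU fencing needed afterwards, are assumed to be handled elsewhere.

    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>

    using Microsoft::WRL::ComPtr;

    // Record command lists on several threads, then submit them all at once
    // from the calling ("master") thread. The per-draw user mode driver work
    // happens during recording, in parallel; only submission is serialized.
    void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue)
    {
        const int kThreads = 4;
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocs(kThreads);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
        std::vector<std::thread> workers;

        for (int i = 0; i < kThreads; ++i)
        {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocs[i]));
            workers.emplace_back([&, i]
            {
                device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                          allocs[i].Get(), nullptr,
                                          IID_PPV_ARGS(&lists[i]));
                // ... record this thread's share of the frame's draw calls ...
                lists[i]->Close();
            });
        }
        for (auto& t : workers)
            t.join();

        // The one remaining serialized step: the master thread submits.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists)
            raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
        // (Fencing to keep allocators alive until the GPU finishes is omitted.)
    }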

Moving on, Microsoft’s other demo was of a game: Forza Motorsport 5 running on a PC. Developer Turn 10 had ported the game from Direct3D 11.X to Direct3D 12, allowing it to be brought over to the PC with relative ease. Powered by a GeForce GTX Titan Black, Microsoft tells us the demo is capable of sustaining 60fps.

First Thoughts

Wrapping things up, it’s probably best to start with a reminder that this is a beginning rather than an end. While Microsoft has finally publicly announced DirectX 12, what we’ve seen thus far are the parts they are ready to show off to the public at large, not what they’re telling developers in private. So although we’ve seen some technical details about the graphics API, it’s very clear that we haven’t seen everything DirectX 12 will bring. Even as far as Direct3D is concerned, it’s a reasonable bet right now that Microsoft will have some additional functionality in the works – quite possibly functionality relating to next-generation GPUs – that will be revealed as the API gets closer to completion.

But even without a complete picture, Microsoft has certainly released enough high level and low level information for us to get a good look at what they have planned; and based on what we’re seeing we have every reason to be excited. A lot of this is admittedly a rehash of what we said several months ago when Mantle was unveiled, but then again, if Direct3D 12 and Mantle are as similar as some developers are hinting, there may not be very many differences to discuss.

The potential for improved performance in PC graphics is clear, as are the potential benefits to multi-platform developers. A strong case has been laid out by AMD, and now Microsoft, NVIDIA, and Intel that we need a low level graphics API to better map to the capabilities of today’s GPUs and CPUs. Direct3D 12 in turn will be the common API needed to bring those benefits to everyone at once, as only a common API can do.

It’s important to be exceedingly clear that, at least for this first phase, the greatest benefits are on the CPU side and not the GPU side – something we’ve already seen in practice with Mantle – so the gains in GPU-bound scenarios will not be as great at first. In the long run, however, this means changing how the GPU itself is fed work and how that work is processed, so through features such as descriptor heaps the door to improved GPU efficiency is at least left open. And since we are facing an increasing gap between GPU performance and single-threaded CPU performance, even the CPU bottlenecking reductions alone can be worth it as developers look to push larger and larger batches.
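
Descriptor heaps themselves are a small API surface. For reference, creating a shader-visible heap in the API as it eventually shipped looks roughly like the following sketch; the preview build shown at GDC may have differed in detail, and device is assumed to be an already-created ID3D12Device:

    // Sketch: one large shader-visible heap for CBV/SRV/UAV descriptors.
    // Descriptors are created into slots of this heap up front, so binding
    // at draw time becomes cheap offset/table updates rather than per-slot
    // binds through the driver.
    D3D12_DESCRIPTOR_HEAP_DESC desc = {};
    desc.Type           = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    desc.NumDescriptors = 4096; // sized up front by the application
    desc.Flags          = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

    Microsoft::WRL::ComPtr<ID3D12DescriptorHeap> heap;
    device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));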

Finally, while I feel it’s a bit too early to say anything definitive, I do want to close with the question of what this means for AMD’s Mantle. For low level PC graphics APIs Mantle will be the only game in town for the next 18-20 months; but after that, then what? If nothing else Mantle is an incredibly important public proving ground for the benefits of low level graphics APIs, so even if Direct3D 12 were to supplant Mantle, Mantle has done its job. But I’m nowhere close to declaring Mantle’s fate yet, as we only have a handful of details on Direct3D 12 and Mantle itself is still in beta. Does Mantle continue alongside Direct3D 12, an easy target for porting since the two APIs are (apparently) so similar? Does Mantle disappear entirely? Or does AMD take Mantle and make it an open API, setting it up against Direct3D 12 in a similar manner as OpenGL sits against Direct3D 11 today? I imagine AMD already has a plan in mind, but that will be a discussion for another day…

Comments

  • ninjaquick - Tuesday, March 25, 2014 - link

    It is not BS at all. Developers have been asking, even crying out, for low level access to GPU hardware on the PC for ages. The Xbox One was the last straw though; currently it is no more programmable than a PC. This caused Crytek a massive headache, as they had budgeted rendering based on 'to the metal' efficiency and were instead met with massive draw overheads, forcing them to severely reduce the quality of their work in 'Ryse'. Other developers have complained about the very same thing. The Xbox 360 is more programmable. The benefit D3D12 has this time around is that the X1 is based on a hybrid WindowsRT/8 x64, meaning D3D12 can be pushed to all Win8-gen devices.
  • rootheday3 - Monday, March 24, 2014 - link

    I am pretty sure that at least one of the demos (3DMark?) was actually run on a Haswell iGPU - meaning Intel is well along on driver development. Some of the announced features (support for order independent transparency) also sound like Intel extensions on DX11 (PixelSync).
  • rootheday3 - Monday, March 24, 2014 - link

    Also - reducing driver cost and reliance on single-threaded perf should help ensure that mobile gaming on laptops and tablets is less likely to be CPU bound due to frequency constraints. It should also allow more of the thermal budget to go to the GPU for better rendering / less throttling.
  • Zak - Tuesday, March 25, 2014 - link

    "Powered by a GeForce GTX Titan Black, Microsoft tells us the demo is capable of sustaining 60fps."

    Titan Black? No kidding. At what resolution?
  • Scali - Tuesday, March 25, 2014 - link

    "To use consoles as an example once again, this is why they are capable of so much with such a (relatively) weak CPU, as they’re better able to utilize their multiple CPU cores than a high level programmed PC can."

    This is patently false.
    Namely, the PS3 with its Cell has only a single regular CPU core. The SPEs are very limited and not suitable for batching up draw calls.
    The Xbox 360 has a more 'regular' CPU, but it has 'only' 3 cores, and the rendering is mostly done on a single core. (The PS4 and Xbox One are too new to draw any conclusions from yet, so 'console efficiency' is what we know of consoles that are not all about multithreading.)

    You are confusing low-level with multithreading. Low-level is about programming the GPU with a very direct path, little abstraction. That is why it is efficient on consoles. There is a much thinner layer between OS, GPU, driver, API and application than on a regular PC.

    Multithreading is another way of speeding up graphics, but it does not necessarily require programming the GPU directly - which is also not what DX12 is going to do. DX12 will still abstract the hardware to a point where vendor and implementation are not relevant. But it will allow better control of batching up calls on threads other than the master rendering thread (D3D11 already has support for multithreading, but it has some limitations - see the sketch after this comment).

    It seems that AMD has done a great job on confusing the general public, with its Mantle propaganda.
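
    For reference, the limited D3D11 multithreading mentioned above works through deferred contexts. A minimal sketch of that model, assuming a device and immediate context already exist; the practical limitation is that on most drivers the expensive work still happens at playback time, on a single thread:

        #include <d3d11.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        // Worker thread: record commands into a deferred context. Recording
        // here is cheap; most drivers defer the real work to playback.
        ComPtr<ID3D11CommandList> RecordOnWorker(ID3D11Device* device)
        {
            ComPtr<ID3D11DeviceContext> deferred;
            device->CreateDeferredContext(0, &deferred);
            // ... issue state changes and draw calls on 'deferred' ...
            ComPtr<ID3D11CommandList> cmdList;
            deferred->FinishCommandList(FALSE, &cmdList);
            return cmdList;
        }

        // Render thread: play the recorded commands back on the immediate
        // context. This is where the driver typically does the heavy lifting,
        // which is why D3D11 multithreading often gains less than hoped.
        void Playback(ID3D11DeviceContext* immediate, ID3D11CommandList* cmdList)
        {
            immediate->ExecuteCommandList(cmdList, TRUE);
        }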
  • Death666Angel - Tuesday, March 25, 2014 - link

    All these articles make me think of John Carmack's QuakeCon 2013 (I think) keynote where he talked about having the same access to the GPU as he does to the CPU and basically programming in machine code. Hope this is coming. :) I need the performance for 120fps/4k Oculus Rift games! :D
  • Scali - Tuesday, March 25, 2014 - link

    The irony is that the original D3D had low-level access to the GPU, with its execute buffer system. This would easily have allowed multiple threads to batch up commands for the GPU in parallel efficiently.
    But Carmack complained that the API was too hard to use.
    It looks like we're going back in time somewhat, getting closer to the original D3D.
  • Ramon Zarat - Tuesday, March 25, 2014 - link

    The only thing I really want to know is this:

    Is the current gen GPU hardware, mainly Kepler/Maxwell and Tahiti/Hawaii, *technically* ABLE to support DX12? With strong emphasis on ABLE, as even if they are able, I seriously doubt AMD or Nvidia will be "charitable" enough to actually do it; instead they'll force us all to upgrade again, thanks to the wonder of artificial market segmentation.

    Will we see modded drivers enabling (at least partially) DX12 features on current hardware? That would be interesting... For example, I'm currently running a modded Intel OROM BIOS on my Z68 board so it can use TRIM under RAID0 with SSDs. With the Z77 and Z87 there's no TRIM problem out of the box, and they are 99.9% identical to the Z68 SATA controller. I've had ZERO problems in 2 1/2 years, so yeah, TRIM works in RAID0 SSD on the Z68; thanks a lot, Intel...for nothing.
  • inighthawki - Wednesday, March 26, 2014 - link

    Considering nvidia made it a huge point that Fermi and above will be supported and represent >50% of the existing market, yeah probably. Why would they announce that then not write drivers for it?
  • TheJian - Wednesday, March 26, 2014 - link

    "For low level PC graphics APIs Mantle will be the only game in town for the next 18-20 months; but after that, then what?"

    Ignorance, or the dumbest comment I've seen this year ;) Either ignorant or just trying to pretend OpenGL hasn't been able to do this for years. You forgot the OpenGL speeches showing 5-30x better draw calls, and that it is ALREADY here, not 2yrs away like DX12.
    http://blogs.nvidia.com/blog/2014/03/20/opengl-gdc...
    Still showing bias here I guess... Feel free to watch the 52min video in that link on DRAW CALLS and how pointless Mantle is, as OpenGL has had this stuff for years. He even shows some code.
    "How to get a crap-ton more draw calls" (a new technical term he coined in the video…LOL) is clearly telling you in the first minute there is no need for Mantle, as it's about 10x the draw calls, right? Mantle=dead. If not by DX12 (but this isn't out for a while - Win9?), then by OpenGL/SteamOS/Android pushing OpenGL. (A sketch of the batching technique in question follows this comment.)

    NV’s speech wasn't a SMALL speech or there wouldn't be a 130 slide doc (at least you guys mentioned it, but that's not enough) explaining what they covered, right (on top of the 52min dev day video covering draw calls + many others)? Considering it is ALREADY working in OpenGL (same stuff as Mantle; even Carmack said a YEAR ago that you can do the same thing already and get as close to the metal as you'd like in OpenGL with extensions), I'm sure the OpenGL speeches had more concrete info than the DX speech at GDC (Valve didn’t say a word about anything BUT OpenGL recently, and how to port DX9 to it). How can you not detail the info on OpenGL and call yourself a hardware site? I know you guys hate CUDA (or you'd test it vs. OpenCL/AMD repeatedly to death), but why the hate for OpenGL, which NV has no control over? It hurts Mantle, is DONE now, AMD pays you for a portal, etc, so it's off limits?

    Mantle's main competitor for 2yrs while we wait for DX12 is ...wait for it...OPENGL, which already does the same crap mantle does... ;) I expected nothing less from Ryan Smith. Where is the big OpenGL speeches coverage? Devs skipped the VR speech to hear about the draw call speech (it was going on right next door and John Mcdonald even says at the end of the vid he'd be in there if he wasn't giving the speech...LOL) and NV was a bit surprised those devs skipped VR. They expected 5 people, got a crowd wanting OpenGL draw call info ;)

    But believe Ryan, Mantle's only competition is DX even though OpenGL (ES) is the only thing really used in games on mobile which will further Valves desktop opengl push too. Steam Dev Days was all about leaving DX and going OpenGL, much of GDC was the same or on ES3.1 mobile info. DX12 won't get far unless you believe MS will take out Android/SteamOS/iOS. They are already stuck in cement being so far ahead with pure unit sales off the charts, game devs' mind share already on ios/android, etc, they can easily push OpenGL together and all 3 want DX/Wintel dead.

    Wintel lost 21% of ALL notebook share last year. This year we have 64bit cpus coming from all ARM soc players and they'll will use those to go further up the chain from crapbooks (chromebooks to me...LOL) to REAL notebooks, and low-end desktops (+some servers) and move up again with the next revs into everything high-end that x86 owns today.

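    For context on the draw-call talk referenced above: the centerpiece of those techniques is indirect multi-draw, in core OpenGL since 4.3, where the application packs thousands of draw commands into a buffer and issues them with a single API call. A rough sketch, assuming a GL 4.3 context with a VAO, index buffer, and GL_DRAW_INDIRECT_BUFFER already bound, and function prototypes provided by your extension loader of choice:

        #include <GL/glcorearb.h> // assumes prototypes come from a loader

        // One record per draw, read by the GPU straight out of the bound
        // GL_DRAW_INDIRECT_BUFFER. Matches the layout defined by OpenGL 4.3.
        typedef struct {
            GLuint count;         // indices per draw
            GLuint instanceCount;
            GLuint firstIndex;
            GLuint baseVertex;
            GLuint baseInstance;  // handy for indexing per-draw data in shaders
        } DrawElementsIndirectCommand;

        // Issue 'drawCount' draws with one API call, instead of one call each.
        void SubmitBatchedDraws(GLsizei drawCount)
        {
            glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                        nullptr,   // read from the bound indirect buffer
                                        drawCount,
                                        0);        // records are tightly packed
        }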
