The Current State of DirectX 12 & WDDM 2.0

Although DirectX 12 is up and running in the latest public release of Windows 10, it and many of its related components are still under development. Windows 10 itself is still feature-incomplete, so what we're looking at here today doesn't even qualify as beta software. As a result, today's preview should be taken as just that: an early preview. There are still bugs, and performance and compatibility are subject to change. But as of now everything is far enough along that we can finally get a reasonable look at what DirectX 12 is capable of.

From a technical perspective, the DirectX 12 API is just one part of a bigger picture. Like Microsoft's last couple of minor DirectX 11 version upgrades, DirectX 12 goes hand-in-hand with a new version of the Windows Display Driver Model, WDDM 2.0. In fact, WDDM 2.0 is the biggest change to WDDM since the driver model was introduced with Windows Vista, and as a result DirectX 12 represents a very large overhaul of the Windows GPU ecosystem.

Top: Radeon R9 290X. Bottom: GeForce GTX 980

Microsoft has not released too many details on WDDM 2.0 so far – more information will come around GDC 2015 – but WDDM 2.0 is built around enabling DirectX 12, adding the necessary features to the kernel and display drivers to support the API above them. Among the features tied to WDDM 2.0 are DX12's explicit memory management and dynamic resource indexing, neither of which would have been nearly as performant under WDDM 1.3. WDDM 2.0 is also responsible for some of the more fundamental CPU efficiency optimizations in DX12, such as changes to how memory residency is handled and how DX12 applications can more explicitly control residency.
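To make the distinction concrete, below is a minimal sketch of what explicit residency control looks like from the application side, based on the MakeResident/Evict calls in the D3D12 headers. The device and heap setup (and all error handling) are assumed to exist elsewhere; the key shift versus WDDM 1.x is that the application, not the OS, decides when a resource needs to be in GPU-accessible memory.

```cpp
// A minimal sketch of DX12-style explicit residency control, assuming a
// device and heap created elsewhere; all error handling is omitted.
#include <d3d12.h>

void StreamHeapInAndOut(ID3D12Device* device, ID3D12Heap* heap)
{
    ID3D12Pageable* pageables[] = { heap };

    // Ask the OS to page the heap into GPU-accessible memory before any
    // command lists referencing it are executed. Under WDDM 1.x this
    // decision belonged to the OS; under WDDM 2.0 the application makes it.
    device->MakeResident(1, pageables);

    // ... record and execute command lists that use the heap ...

    // When the heap won't be needed for a while, hand the memory back so
    // the OS can give it to other applications.
    device->Evict(1, pageables);
}
```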

The WDDM 2.0 overhaul impacts graphics drivers as well as the OS, and like Microsoft, NVIDIA and AMD have been preparing for WDDM 2.0 with updated graphics drivers. These drivers are still a work in progress, and as a result not all hardware support is enabled and not all bugs have been worked out.

DirectX 12 Support Status
GPU                              Current Status   Supported At Launch
AMD GCN 1.2 (285)                Working          Yes
AMD GCN 1.1 (290/260 Series)     Working          Yes
AMD GCN 1.0 (7000/200 Series)    Buggy            Yes
NVIDIA Maxwell 2 (900 Series)    Working          Yes
NVIDIA Maxwell 1 (750 Series)    Working          Yes
NVIDIA Kepler (600/700 Series)   Working          Yes
NVIDIA Fermi (400/500 Series)    Not Active       Yes

In short, AMD's and NVIDIA's latest products are up and running under WDDM 2.0, but not all of their earlier products are. In AMD's case GCN 1.0 cards are supported under their WDDM 2.0 driver, but we are encountering texturing issues in Star Swarm that do not occur with GCN 1.1 and later. Meanwhile in NVIDIA's case, as is common for their beta drivers, they only ship with support enabled for their newer GPUs – Kepler, Maxwell 1, and Maxwell 2 – with Fermi support disabled. Both AMD and NVIDIA have already committed to supporting DirectX 12 (and by extension WDDM 2.0) on GCN 1.0 and later and Fermi and later respectively, so while we can't test these products today, they should be working by the time DirectX 12 ships.
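For developers wanting to see which installed GPUs have working DX12 drivers, the natural probing pattern is to walk the DXGI adapter list and test-create a D3D12 device on each. A rough sketch, borrowing the nullptr-output trick from Microsoft's DX12 samples (error handling omitted; link against d3d12.lib and dxgi.lib):

```cpp
// A rough sketch of probing for DX12-capable adapters: walk the DXGI
// adapter list and test-create a D3D12 device on each. Passing nullptr
// for the output device checks support without creating anything.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

void ListDX12Adapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        HRESULT hr = D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                       __uuidof(ID3D12Device), nullptr);
        wprintf(L"%s: %s\n", desc.Description,
                SUCCEEDED(hr) ? L"DX12 driver active" : L"no DX12 driver");
    }
}
```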

Also absent for the moment is a definition for DirectX 12’s Feature Level 12_0 and DirectX 11’s 11_3. Separate from the low-level API itself, DirectX 12 and its high-level counterpart DirectX 11.3 will introduce new rendering features such as volume tiled resources and conservative rasterization. While all of the above listed video cards will support the DirectX 12 low-level API, only the very newest video cards will support FL 12_0, and consequently be fully DX12 compliant on both a feature and API basis. Like so many other aspects of DirectX 12, Microsoft is saving any discussion of feature levels for GDC, at which time we should find out what the final feature requirements will be and which (if any) current cards will fully support FL 12_0.
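Even with FL 12_0 undefined, the D3D12 API does expose per-feature caps through CheckFeatureSupport, which is presumably how applications will probe for features like conservative rasterization and volume tiled resources regardless of where the feature level line ends up being drawn. A hedged sketch, assuming a valid ID3D12Device; the tier enums here come from later SDK headers, and how they map onto FL 12_0 is exactly what Microsoft has yet to announce:

```cpp
// A sketch of querying the optional rendering features discussed above,
// assuming a valid ID3D12Device. The tier enums are taken from the D3D12
// SDK headers; their relationship to FL 12_0 is not yet public.
#include <d3d12.h>

bool SupportsNewRenderingFeatures(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;

    // Conservative rasterization: any tier above NOT_SUPPORTED will do.
    const bool conservativeRaster =
        options.ConservativeRasterizationTier !=
        D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;

    // Volume (3D) tiled resources arrive at tiled resources Tier 3.
    const bool volumeTiledResources =
        options.TiledResourcesTier >= D3D12_TILED_RESOURCES_TIER_3;

    return conservativeRaster && volumeTiledResources;
}
```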

Finally, with the announcement of their Windows 10 plans last month, Microsoft has also clarified their plans for the deployment of DirectX 12. Because DirectX 12 and WDDM 2.0 are tied at the hip, and by extension tied to Windows 10, DirectX 12 will only be available on Windows 10. Windows 8/8.1 and Windows 7 will not be receiving DirectX 12 support.

DirectX 12 Supported OSes
OS            Will Support DX12?   Required WDDM Version
Windows 10    Yes                  2.0
Windows 8.1   No                   N/A
Windows 8     No                   N/A
Windows 7     No                   N/A

Backporting DirectX 12 to earlier OSes would require backporting WDDM 2.0 as well, which brings with it several issues because WDDM 2.0 is a kernel component. Microsoft would either have to compromise on WDDM 2.0 features to make it work on these older kernels, or more radically overhaul those kernels to accommodate the full WDDM 2.0 feature set; the latter is a major engineering task and carries a significant risk of breaking earlier Windows installations. Microsoft has already tried this once before in backporting parts of Direct3D 11.1 and WDDM 1.2 to Windows 7, only to discover that even that smaller-scale project had compatibility problems. A backport of DirectX 12 would in turn be even more problematic.
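For game developers the practical consequence is that DX12 becomes a runtime decision rather than a build-time one. A minimal sketch of the usual fallback pattern: since d3d12.dll ships only with a WDDM 2.0 OS, probing for it with LoadLibrary (rather than hard-linking against it) lets a single binary serve both Windows 10 and Windows 7/8/8.1. InitD3D12Renderer and InitD3D11Renderer are hypothetical application functions:

```cpp
// A minimal sketch of the runtime fallback pattern, assuming hypothetical
// InitD3D12Renderer/InitD3D11Renderer functions in the application.
#include <windows.h>

void InitD3D12Renderer(HMODULE d3d12Module);  // hypothetical DX12 path
void InitD3D11Renderer();                     // hypothetical DX11 path

void InitRenderer()
{
    // d3d12.dll exists only on Windows 10 / WDDM 2.0, so probing for it
    // (rather than linking against it) keeps the binary loadable on
    // Windows 7/8/8.1.
    HMODULE d3d12 = LoadLibraryW(L"d3d12.dll");
    if (d3d12 != nullptr)
        InitD3D12Renderer(d3d12);  // Windows 10: take the low-level path
    else
        InitD3D11Renderer();       // older Windows: fall back to DX11
}
```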

The bright side of all of this is that with Microsoft's plans to offer Windows 10 as a free upgrade for Windows 7/8/8.1 users, the issue is largely rendered moot. Though DirectX 12 isn't being backported, Windows users will instead be able to jump forward for free, so unlike the Windows 8 transition this will not require spending money on a new OS just to gain access to the latest version of DirectX. This in turn is consistent with Microsoft's overall plan to bring all Windows users up to Windows 10 rather than letting the market fragment among different Windows versions (and risk a repeat of the XP holdout), so the revelation that DirectX 12 will not be backported has largely been expected since Microsoft's Windows 10 announcement.

Meanwhile we won’t dwell on the subject too much, but DirectX 12 being limited to Windows 10 does open up a window of opportunity for Mantle and OpenGL Next. With Mantle already working on Windows 7/8 and OpenGL Next widely expected to be similarly portable, these APIs will be the only low-level APIs available to earlier Windows users.

Comments

  • alaricljs - Friday, February 6, 2015

    It takes time to devise such tests, more time to validate that the test is really doing what you want, and yet more time to DO the testing... and meanwhile I'm pretty sure they're not going to just drop everything else that's in the pipeline.
  • monstercameron - Friday, February 6, 2015

    and AMD knows that well, but maybe NVIDIA should also... maybe?
  • JarredWalton - Friday, February 6, 2015

    As an owner -- an owner that actually BOUGHT my 970, even though I could have asked for one -- of a GTX 970, I can honestly say that the memory segmentation issue isn't much of a concern. The reality is that when you're running settings that are coming close to 3.5GB of VRAM use, you're also coming close to the point where the performance is too low to really matter in most games.

    Case in point: Far Cry 4, or Assassin's Creed Unity, or Dying Light, or Dragon Age: Inquisition, or pretty much any other game I've played/tested, the GTX 980 is consistently coming in around 20-25% faster than the GTX 970. In cases where we actually come close to the 4GB VRAM on those cards (e.g. Assassin's Creed Unity at 4K High or QHD Ultra), both cards struggle to deliver acceptable performance. And there are dozens of other games that won't come near 4GB VRAM that are still providing unacceptable performance with these GPUs at QHD Ultra settings (Metro: Last Light, Crysis 3, Company of Heroes 2 -- though that uses SSAA so it really kills performance at higher quality settings).

    Basically, with current games finding a situation where GTX 980 performs fine but the GTX 970 performance tanks is difficult at best, and in most cases it's a purely artificial scenario. Most games really don't need 4GB of textures to look good, and when you drop texture quality from Ultra to Very High (or even High), the loss in quality is frequently negligible while the performance gains are substantial.

    Finally, I think it's worth noting again that NVIDIA has had memory segmentation on other GPUs, though perhaps not quite at this level. The GTX 660 Ti has a 192-bit memory interface with 2GB VRAM, which means there's 512MB of "slower" VRAM on one of the channels. That's one fourth of the total VRAM and yet no one really found cases where it mattered, and here we're talking about 1/8 of the total VRAM. Perhaps games in the future will make use of precisely 3.75GB of VRAM at some popular settings and show more of an impact, but the solution will still be the same: twiddle a few settings to get back to 80% of the GTX 980 performance rather than worrying about the difference between 10 FPS and 20 FPS, since neither one is playable.
  • shing3232 - Friday, February 6, 2015

    Those people who own two 970s will not agree with you.
  • JarredWalton - Friday, February 6, 2015

    I did get a second one, thanks to Zotac (I didn't pay for that one, though). So sorry to disappoint you. Of course, there are issues at times, but that's just the way of multiple GPUs, whether it be SLI or CrossFire. I knew that going into the second GPU acquisition.

    At present, I can say that Far Cry 4 and Dying Light are not working entirely properly with SLI, and neither are Wasteland 2 or The Talos Principle. Assassin's Creed: Unity seems okay to me, though there is a bit of flicker perhaps on occasion. All the other games I've tried work fine, though by no means have I tried "all" the current games.

    For CrossFire, the list is mostly the same with a few minor additions. Assassin's Creed: Unity, Company of Heroes 2, Dying Light, Far Cry 4, Lords of the Fallen, and Wasteland 2 all have problems, and scaling is pretty poor on at least a couple of other games (Lichdom: Battlemage and Middle-Earth: Shadow of Mordor scale, but more like 25-35% instead of 75% or more).

    Overall, GTX 970 SLI and R9 290X CF are basically tied at both 4K and QHD testing in my results across quite a few games, with NVIDIA taking a slight lead at 1080p and lower. In fact for single GPUs, 290X wins on average by 10% at 4K (but neither card is typically playable except at lower quality settings), while the difference is 1% or less at QHD Ultra.
  • Cryio - Saturday, February 7, 2015

    "Overall, GTX 970 SLI and R9 290X CF are basically tied at both 4K and QHD testing in my results across quite a few games, with NVIDIA taking a slight lead at 1080p and lower."

    Judging by *every* benchmark I've seen on the internets, on literally every game at 4K with maxed out settings, CrossFired 290Xs are faster than both SLI'd 970s *and* 980s.

    At 1080p and 1440p, for all intents and purposes, 290Xs trade blows with 970s and the 980s reign supreme. But at 4K the situation completely shifts and the 290Xs come out on top.
  • JarredWalton - Saturday, February 7, 2015

    Note that my list of games is all relatively recent stuff, so the fact that CF fails completely in a few titles certainly hurts -- and that's reflected in my averages. If we toss out ACU, CoH2, DyLi, FC4, LotF... then yes, it would do better, but then I'm cherry picking results to show the potential rather than the reality of CrossFire.
  • Kjella - Saturday, February 7, 2015

    Owner of 2x 970s here. Reviews show that 2x 780 Ti generally wins current games at 3840x2160 with only 3GB of memory, so it doesn't seem to matter much today; I've seen no non-synthetic benchmarks at playable resolutions/frame rates to indicate otherwise. Nobody knows what future games will bring, but I would have bought them as a "3.5GB" card too, though of course I feel a little cheated that they're worse than the GTX 980 in a way I didn't expect.
  • JarredWalton - Saturday, February 7, 2015

    I don't have 780 Ti (or 780 SLI for that matter), but interestingly GTX 780 just barely ends up ahead of a single GTX 970 at QHD Ultra and 4K High/Ultra settings. There are times where 970 leads, but the times when 780 leads are by slightly higher margins. Effectively, GTX 970 is equal to GTX 780 but at a lower price point and with less power.
  • mapesdhs - Tuesday, February 10, 2015

    That's the best summary I've read on all this IMO, i.e. situations which would demonstrate the 970's RAM issue are where performance isn't good enough anyway, typically 4K gaming, so who cares? Right now, if one wants better performance at that level, then buy one or more 980, 290X, whatever, because two of any lesser card aren't going to be quick enough by definition.

    I bought two 980s, my first all-new GPU purchase since I bought two of EVGA's infamous GTX 460 FTW cards when they first came out. Very pleased with the 980s, they're excellent cards. Bought a 3rd for benchmarking, etc.; the three combined give 8731 for Fire Strike Ultra (result no. 4024577), I believe the highest S1155 result atm, but the fps numbers still aren't really that high.

    Truth is, by the time a significant number of people will be concerned about a typical game using more than 3.5GB RAM, GPU performance needs to be a heck of a lot quicker than a 970. It's a non-issue. None of the NVIDIA-hate I've seen changes the fact that the 970 is a very nice card, and nothing changes how well it performs as shown in initial reviews. I'll probably get one for my brother's bday PC I'm building, to go with a 3930K setup.

    Most of those complaining about all this are people who IMO have chosen to believe that NVIDIA did all of this deliberately, because they want that to be the case, irrespective of what actually happened, and no amount of evidence to the contrary will change their minds. The 1st Rule gets broken again...

    As I posted elsewhere, all those complaining about the specs discrepancy do however seem perfectly happy for AMD (and indeed NVIDIA) to market dual-GPU cards as having double RAM numbers, which is completely wrong, not just misleading. Incredible hypocrisy here.

    Ian.
