With GDC 2014 having drawn to a close, we have finally seen what is easily the most exciting piece of news for PC gamers. As previously teased, Microsoft took to the stage last week to announce the next iteration of DirectX: DirectX 12. And as hinted at by the session description, Microsoft’s session was all about bringing low level graphics programming to Direct3D.

As is often the case with these early announcements, Microsoft has been careful not to release too many technical details at once. But between their presentation and the smaller press releases put together by their GPU partners, we’ve been given our first glimpse at Microsoft’s plans for low level programming in Direct3D.

Preface: Why Low Level Programming?

The subject of low level graphics programming has become a very hot topic very quickly in the PC graphics industry. In the last 6 months we’ve gone from low level programming being a backburner subject, to being a major public initiative for AMD, to now being a major initiative for the PC gaming industry as a whole through Direct3D 12. The sudden surge in interest and development isn’t a mistake – this is a subject that has been brewing for years – but it’s within the last couple of years that all of the pieces have finally come together.

But why are we seeing so much interest in low level graphics programming on the PC? The short answer is performance, and more specifically what can be gained from returning to it.

Something worth pointing out right away is that low level programming is not new or even all that uncommon. Most high performance console games are written in such a manner, thanks to the fact that consoles are fixed platforms and therefore easily allow this style of programming to be used. By working with hardware at such a low level, programmers are able to tease a great deal of performance out of it, which is why console games look and perform as well as they do given the consoles’ underpowered specifications relative to the PC hardware from which they’re derived.

However, with PCs the same cannot be said. PCs, being a flexible platform, have long worked off of high level APIs such as Direct3D and OpenGL. Through the powerful abstraction provided by these high level APIs, PCs have been able to support a wide variety of hardware over a much longer span of time. With low level PC graphics programming having essentially died with DOS and vendor specific APIs, PCs have traded some performance for the convenience and flexibility that abstraction offers.

The nature of that performance tradeoff has shifted over the years though, requiring that it be reevaluated. As we’ve covered in great detail in our look at AMD’s Mantle, these tradeoffs were established at a time when CPUs and GPUs were growing in performance by leaps and bounds year after year. But in the last decade or so that has changed – CPUs are no longer rapidly increasing in performance, especially in the case of single-threaded performance. CPU clockspeeds have reached a point where higher clockspeeds are increasingly power-expensive, and the “low hanging fruit” for improving CPU IPC has long been exhausted. Meanwhile GPUs have roughly continued their incredible pace of growth, owing to the embarrassingly parallel nature of graphics rendering.

The result is that GPU performance growth has greatly outstripped single-threaded CPU performance growth. This in and of itself isn’t necessarily a problem, but it does become one when coupled with the high level APIs used for PC graphics. The bulk of the work these APIs do in preparing data for GPUs is single threaded by its very nature, so the slowdown in CPU performance gains turns that preparation work into a bottleneck. As the gap between CPU and GPU performance continues to widen, so does the potential for bottlenecking; the price of abstraction is the CPU performance required to provide it.
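
To put that single-threaded cost in concrete terms, below is a minimal sketch of the sort of Direct3D 11 render loop this describes. The Renderable structure and its members are hypothetical stand-ins for an engine’s per-object data; the point is that every state change and draw call funnels through the one immediate context, and the runtime and driver pay their per-call CPU cost on the thread making those calls.

#include <d3d11.h>
#include <vector>

// Hypothetical per-object data; a real engine's equivalent will differ.
struct Renderable
{
    ID3D11Buffer*             vertexBuffer;
    ID3D11Buffer*             indexBuffer;
    ID3D11Buffer*             perObjectConstants;
    ID3D11ShaderResourceView* diffuseTexture;
    UINT                      vertexStride;
    UINT                      indexCount;
};

// Classic single-threaded submission: every object's state changes and draw call
// go through the one immediate context, and the runtime/driver validate and
// translate each call on this thread before anything reaches the GPU.
void RenderScene(ID3D11DeviceContext* ctx, const std::vector<Renderable>& objects)
{
    for (const Renderable& obj : objects)
    {
        UINT offset = 0;
        ctx->IASetVertexBuffers(0, 1, &obj.vertexBuffer, &obj.vertexStride, &offset);
        ctx->IASetIndexBuffer(obj.indexBuffer, DXGI_FORMAT_R32_UINT, 0);
        ctx->VSSetConstantBuffers(0, 1, &obj.perObjectConstants);
        ctx->PSSetShaderResources(0, 1, &obj.diffuseTexture);
        ctx->DrawIndexed(obj.indexCount, 0, 0);
    }
}

Direct3D 11’s deferred contexts were an earlier attempt to spread some of this recording work across threads, but in practice much of the runtime and driver work still ended up serialized, which is part of what a lower level API is meant to address.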

Low level programming, in contrast, is more resistant to this type of bottlenecking. There is still the need for a “master” thread and hence the possibility of bottlenecking on that master, but low level programming styles have no need for a CPU-intensive API and runtime to prepare data for GPUs. This makes it much easier to farm out work to multiple CPU cores, protecting against this bottlenecking. To use consoles as an example once again, this is why they are capable of so much with such a (relatively) weak CPU: they are better able to utilize their multiple CPU cores than a PC programmed through a high level API can.
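
In code, this multi-threaded submission model looks roughly like the sketch below. To be clear, this is an illustration of the pattern rather than Microsoft’s actual API: the command list and command queue interfaces mirror the concepts Microsoft describes, while the RecordDrawsForChunk helper, the pre-created lists and allocators, and the omitted pipeline state, resource binding, and synchronization setup are all assumptions made for brevity.

#include <d3d12.h>
#include <thread>
#include <vector>

// Hypothetical helper: each worker thread records its share of the scene into its
// own command list (assumed to already be created in the recording state, each with
// its own allocator). Per-object binding via root signature/descriptors is omitted.
void RecordDrawsForChunk(ID3D12GraphicsCommandList* list, size_t objectCount)
{
    for (size_t i = 0; i < objectCount; ++i)
    {
        // ... bind per-object state here ...
        list->DrawInstanced(36, 1, 0, 0);  // placeholder draw
    }
    list->Close();  // finish recording; the list is now ready for submission
}

// Recording happens on N threads in parallel; only the final, cheap submission is
// funneled through a single thread.
void SubmitSceneMultithreaded(ID3D12CommandQueue* queue,
                              const std::vector<ID3D12GraphicsCommandList*>& lists,
                              size_t objectsPerList)
{
    std::vector<std::thread> workers;
    for (ID3D12GraphicsCommandList* list : lists)
        workers.emplace_back(RecordDrawsForChunk, list, objectsPerList);
    for (std::thread& w : workers)
        w.join();

    std::vector<ID3D12CommandList*> toExecute(lists.begin(), lists.end());
    queue->ExecuteCommandLists(static_cast<UINT>(toExecute.size()), toExecute.data());
}

The expensive part, command recording, scales across however many cores are available, while the one remaining serial step is a comparatively cheap submission call.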

The end result of this situation is that it has become time to seriously reevaluate the place of low level graphics programming in the PC space. Game developers and GPU vendors alike want better performance. Meanwhile, though it’s a bit cynical, there’s a very real threat posed by the latest crop of consoles, putting PC gaming in a tight spot where it needs to adapt to keep pace with the consoles. PCs still hold a massive lead in single-threaded CPU performance, but given the limits we’ve discussed earlier, too much bottlenecking can lead to the PC being the slower platform despite the significant hardware advantage. A PC platform that can process fewer draw calls than a $400 game console is a poor outcome for the industry as a whole.

Comments

  • nathanddrews - Monday, March 24, 2014 - link

    Forza 5 runs 60fps at 1080p on Xbone. I think the point of the Titan Black demonstration was that with only "four months of man-hours of work" they were able to flawlessly port it not only to DX12, but also to PC. It showcases the ease of porting using DX12 and the compatibility of DX12 with Kepler. Given that the Titan Black is 3-4x faster than the GPU in the Xbone, it stands to reason that taking more time with a port or developing side-by-side would yield a much better experience on the PC side.

    I'm sure that somewhere there's an Xfanboy claiming that the Xbone is as powerful as a Titan Black.
  • ninjaquick - Monday, March 24, 2014 - link

    Not just that, but to non-AMD hardware, which means not only does it port over "easily", it works on hardware from all vendors.
  • krumme - Monday, March 24, 2014 - link

    Damn nice article.

    How can Fermi be compatible when it doesn't support bindless textures?
  • SydneyBlue120d - Monday, March 24, 2014 - link

    It seems even funnier that Nvidia Maxwell doesn't fully support DirectX 11.1, yet it seems they're all DirectX 12 compliant :)
  • inighthawki - Monday, March 24, 2014 - link

    Don't confuse the software with the feature set. Maxwell works on DX11.1, it's just not 100% compliant with all features exposed by 11.1. DX12 may also expose hardware features that are incompatible with Maxwell, but it will still run at a lower "11.0" feature level.
  • YazX_ - Monday, March 24, 2014 - link

    I believe this will only benefit the low end CPU user base, and specifically all AMD $hitty CPUs. On high end CPUs, there is no bottleneck so the gain will be very minimal.
  • kyuu - Monday, March 24, 2014 - link

    The only reason there is no bottleneck with high-end CPUs is that game developers design the game within the limitations of the CPU, which, as stated in the article, has not kept pace with GPUs in terms of performance growth. A big limitation that developers lament is the number of draw calls.

    So while you're correct that current games will not see much benefit when run on higher-end CPUs, future games will be able to do more and therefore games will improve for everyone on all hardware. Also, you should consider that a high-end CPU becomes mid-end and then low-end over time -- these DX12 (and Mantle) improvements mean that it becomes less necessary to upgrade your CPU, which saves money that can be put into another part of your system (say, the GPU?).
  • Homeles - Monday, March 24, 2014 - link

    "i believe this will only benefit the low end CPU users base, and specifically all AMD $hitty CPUs. on high end CPUs, there is no bottleneck so the gain will be very minimal. "

    In other words, most computers.
  • ninjaquick - Monday, March 24, 2014 - link

    D3D12 is not a response to Mantle, as you might assume; rather, it is a response to substantial developer feedback/pushback against the massive decrease in low-level access and programmability of the X1 compared to the X360. Microsoft has a unified platform vision that they stubbornly stick to, so the D3D12 development advances made for the X1 are directly portable from the WindowsX1 (RT/8-x64 hybrid) to WindowsRT/WP8/Win8.

    Mantle is a far broader implementation, and is only possible thanks to AMD's hardware scope, as HDMA/hUMA and the massive improvement in GPU DMA are really only possible (as of yet) on AMD's hardware and software packages. D3D12 will not make much of a difference on platforms other than the X1, where developers [should be] getting more DMA for GPU tasks, beyond D3D buffer allocation, etc.
  • jwcalla - Monday, March 24, 2014 - link

    If you want your game to have a mobile presence and be on Steam Machines, you're going to need OpenGL. You can get access to just about all the hardware features and performance you want with OpenGL 4.4.

    Time for devs to give it a second look IMO.
