With GDC 2014 having drawn to a close, we have finally seen what is easily the most exciting piece of news for PC gamers. As previously teased, Microsoft took to the stage last week to announce the next iteration of DirectX: DirectX 12. And as hinted at by the session description, Microsoft’s session was all about bringing low level graphics programming to Direct3D.

As is often the case for these early announcements, Microsoft has been careful not to release too many technical details at once. But from their presentation and the smaller press releases put together by their GPU partners, we’ve been given our first glimpse at Microsoft’s plans for low level programming in Direct3D.

Preface: Why Low Level Programming?

The subject of low level graphics programming has become a very hot topic very quickly in the PC graphics industry. In the last 6 months we’ve gone from low level programming being a backburner subject, to being a major public initiative for AMD, to now being a major initiative for the PC gaming industry as a whole through Direct3D 12. The sudden surge in interest and development isn’t a mistake – this is a subject that has been brewing for years – but it’s within the last couple of years that all of the pieces have finally come together.

But why are we seeing so much interest in low level graphics programming on the PC? The short answer is performance, and more specifically what can be gained from returning to it.

Something worth pointing out right away is that low level programming is not new or even all that uncommon. Most high performance console games are written in such a manner, thanks to the fact that consoles are fixed platforms that easily allow this style of programming to be used. By working with hardware at such a low level, programmers are able to tease a great deal of performance out of it, which is why console games look and perform as well as they do given the consoles’ underpowered specifications relative to the PC hardware from which they’re derived.

However, the same cannot be said of PCs. PCs, being a flexible platform, have long worked off of high level APIs such as Direct3D and OpenGL. Through the powerful abstraction provided by these high level APIs, PCs have been able to support a wide variety of hardware over a much longer span of time. With low level PC graphics programming having essentially died with DOS and vendor specific APIs, PCs have traded some performance for the convenience and flexibility that abstraction offers.

The nature of that performance tradeoff has shifted over the years though, requiring that it be reevaluated. As we’ve covered in great detail in our look at AMD’s Mantle, these tradeoffs were established at a time when CPUs and GPUs were growing in performance by leaps and bounds year after year. But in the last decade or so that has changed – CPUs are no longer rapidly increasing in performance, especially in the case of single-threaded performance. CPU clockspeeds have reached a point where higher clockspeeds are increasingly power-expensive, and the “low hanging fruit” for improving CPU IPC has long been exhausted. Meanwhile GPUs have roughly continued their incredible pace of growth, owing to the embarrassingly parallel nature of graphics rendering.

The result is that GPU performance growth has greatly outstripped single-threaded CPU performance growth. This in and of itself isn’t necessarily a problem, but it becomes one when coupled with the high level APIs used for PC graphics. The bulk of the work these APIs do in preparing data for GPUs is single-threaded by its very nature, so as CPU performance gains slow down, that work becomes a bottleneck. And as the gap between CPU and GPU continues to widen, the potential for bottlenecking grows with it; the price of abstraction is the CPU performance required to provide it.

Low level programming, in contrast, is more resistant to this type of bottlenecking. There is still the need for a “master” thread and hence the possibility of bottlenecking on that master, but low level programming styles have no need for a CPU-intensive API and runtime to prepare data for the GPU. This makes it much easier to farm out work to multiple CPU cores, protecting against this bottlenecking. To use consoles as an example once again, this is why they are capable of so much with such a (relatively) weak CPU: they are better able to utilize their multiple CPU cores than a PC working through a high level API can.
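
To make that contrast concrete, below is a rough conceptual sketch of the multithreaded model that low level APIs enable. This is not Direct3D 12 code (Microsoft has not yet published the API), and the RenderChunk, CommandList, and SubmitToGPU names are purely illustrative stand-ins. The point is simply that the expensive CPU-side work, recording rendering commands, can be spread across cores, leaving only a small amount of serialized submission work for the master thread.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Illustrative stand-ins only -- not real Direct3D types.
struct RenderChunk { /* a slice of the scene: meshes, materials, etc. */ };
struct CommandList { /* recorded draw calls and state changes */ };

// Hypothetical submission call; in a real API this would hand the
// recorded commands to the GPU's command queue.
void SubmitToGPU(const CommandList&) {}

// Each worker records commands for its own slice of the scene.
// Nothing here touches shared driver state, so threads don't contend.
CommandList RecordCommands(const RenderChunk& /*chunk*/)
{
    CommandList list;
    // ... translate the chunk into draw calls, state changes, etc. ...
    return list;
}

void RenderFrame(const std::vector<RenderChunk>& chunks)
{
    std::vector<CommandList> lists(chunks.size());
    std::vector<std::thread> workers;

    // Farm command recording out across CPU cores, one thread per chunk.
    for (std::size_t i = 0; i < chunks.size(); ++i)
        workers.emplace_back([&, i] { lists[i] = RecordCommands(chunks[i]); });
    for (auto& w : workers)
        w.join();

    // Only final submission is serialized on the "master" thread -- a far
    // smaller job than building every draw call on a single thread.
    for (const auto& list : lists)
        SubmitToGPU(list);
}

int main()
{
    RenderFrame(std::vector<RenderChunk>(8)); // e.g. split the scene into 8 chunks
}
```

Under a high level API, by contrast, most of that recording and validation work happens inside the runtime and driver, effectively on a single thread.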

The end result of this situation is that it has become time to seriously reevaluate the place of low level graphics programming in the PC space. Game developers and GPU vendors alike want better performance. Meanwhile, though it’s a bit cynical, there’s a very real threat posed by the latest crop of consoles, putting PC gaming in a tight spot where it needs to adapt to keep pace with the consoles. PCs still hold a massive lead in single-threaded CPU performance, but given the limits we’ve discussed earlier, too much bottlenecking can lead to the PC being the slower platform despite the significant hardware advantage. A PC platform that can process fewer draw calls than a $400 game console is a poor outcome for the industry as a whole.

Comments

  • ninjaquick - Tuesday, March 25, 2014 - link

    And the second look will wind up the same way. Independents who can starve a little longer will probably make sure to release on the Steam Machines, but larger developers, with larger codebases and way more stuff on their minds, can't just jump ship without spending way too much time re-engineering much of their code.
  • martixy - Monday, March 24, 2014 - link

    I see a bright future for the gaming industry...
    On that note, does anyone happen to have a time machine? Or a ship that goes really really fast?
  • Rezurecta - Monday, March 24, 2014 - link

    What piqued my interest is the fact that even MS uses Chrome. ;)

    Seriously though, posted the same on Overclock.net. Given the expected time to launch, it seems that this was only thought about because of AMD and Mantle. It is a shame that AMD paved the way yet Mantle may not end up being a widely supported API.

    Hopefully, Nvidia and Intel accept AMD's open offer to join Mantle and we can put control in the hands of the IHVs instead of the OS maker.
  • errorr - Monday, March 24, 2014 - link

    MS has a lot of work to do if they want to be relevant for mobile. OpenGL ES has been largely optimized for tile-based solutions and takes into account the numerous benefits and flaws compared to desktop GPUs. Just about everything in the mobile space is created to limit memory access which is slow, narrow, and power intensive. The entire paradigm is completely different. Adreno is also VLIW which means any low-level api stuff is bound to be very hard to implement. At least it will work on Nvidia chips I guess but that is still only 10% of the market at best.
  • errorr - Monday, March 24, 2014 - link

    On another note, there was some desire for a better understanding of mobile GPU chips in the PowerVR article, and the ARM Mali blog at least did the math on publicly available statements and outlined the capabilities of each "shader core".

    Each Mali has 1-16 shader cores (usually 4-8). Each shader core has 1-4 arithmetic pipes (SIMD). Each pipe has 128-bit quad-word registers. The registers can be flexibly accessed as either 2 x FP64, 4 x FP32, 8 x FP16, 2 x int64, 4 x int32, 8 x int16, or 16 x int8. There is a speed penalty for FP64 and a speed bump for FP16 etc. relative to the 17 FP32 FLOPS per pipeline per clock. So at the maximum, 16 shader cores with 4 pipes per core @ 600MHz gives a theoretical compute of 652 FP32 GFLOPS, although a 16/2 design (T-760) at 326 FP32 GFLOPS seems the more likely configuration.
    There is also a load/store pipeline and a texture pipeline (1 texel per clock, or 1/2 texel with trilinear filtering).
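
    For reference, a quick back-of-the-envelope check of the figures quoted above (assuming the stated 17 FP32 FLOPS per pipe per clock and a 600MHz clock; variable names are just for illustration):

    ```cpp
    #include <cstdio>

    int main()
    {
        // Back-of-the-envelope peak FP32 throughput for the quoted Mali figures.
        const double flopsPerPipePerClock = 17.0; // FP32 FLOPS per arithmetic pipe per clock (as stated)
        const double clockGHz = 0.6;              // 600 MHz

        // 16 shader cores, 4 arithmetic pipes each (maximum configuration)
        std::printf("16 cores x 4 pipes: %.1f FP32 GFLOPS\n", 16 * 4 * flopsPerPipePerClock * clockGHz); // ~652.8
        // 16 shader cores, 2 arithmetic pipes each (e.g. a 16/2 T-760 style design)
        std::printf("16 cores x 2 pipes: %.1f FP32 GFLOPS\n", 16 * 2 * flopsPerPipePerClock * clockGHz); // ~326.4
        return 0;
    }
    ```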

    Wasn't sure where to put this but they have been sharing/implying a bunch of info on their cores publicly for a while.
  • lightyears - Monday, March 24, 2014 - link

    Please give your opinion on the following question:
    What about notebooks with nVidia Optimus? I have a notebook with GTX 680M dedicated graphics combined with Ivy Bridge integrated graphics. So the 680M will support DirectX 12, but the Ivy Bridge integrated graphics probably won't.
    Unfortunately those two are connected by nVidia Optimus technology, a technology that it seems is impossible to turn off. I already looked in my BIOS but I can't get rid of it. Whether I like it or not, I am forced to have Optimus.

    So will Optimus automatically select the 680M for DX12 applications?

    Or won't it work at all? Will the game refuse to install because my stupid integrated graphics card doesn't support it?

    The last option would be a true shame and I would really be frustrated, given that I spent a lot of money on a high end notebook and paid a lot to have a heavy (DX12 capable) 680M in it. And I still wouldn't be able to do DX12 although I have a DX12 capable card...
  • Ryan Smith - Tuesday, March 25, 2014 - link

    "What about notebooks with nVidia Optimus?"

    There is no reason that I'm aware of that this shouldn't work on Optimus. The Optimus shim should redirect any flagged game to the dGPU, where it will detect a D3D12 capable device and be able to use that API.
  • ninjaquick - Tuesday, March 25, 2014 - link

    Awesome use of the word shim.
  • lightyears - Tuesday, March 25, 2014 - link

    I looked on the internet and it seems it won't be a real problem after all. Back in 2011 the same situation existed with DX11. Some Optimus notebooks had a Sandy Bridge CPU (DX 10.1 capable) and a GTX 555 (DX 11 capable). For some people Optimus didn't automatically detect the DX 11 capable device and they had some problems, but after some changes in the settings they managed to get DX 11 going with the GTX 555 on those Optimus notebooks, even though the Sandy Bridge was not DX 11 capable.
    So I suppose Optimus also won't be a problem this time with DX12. Good news.
    Although I truly hate Optimus. It already prevented me from using stereoscopic 3D on a supported 3DTV.
  • ericore - Monday, March 24, 2014 - link

    "But why are we seeing so much interest in low level graphics programming on the PC? The short answer is performance, and more specifically what can be gained from returning to it."

    That's absolute BS.
    The reason is threefold: 1. For the Xbox One. 2. To prevent a surge of Linux gaming. 3. To fulfill an alliance/pact with Intel and Nvidia.
