Starting with Windows Vista, Microsoft began the first steps of what was to be a long campaign to change how Windows would interact with GPUs. XP, itself based on Windows 2000, used a driver model that predated the term “GPU” itself. While graphics rendering was near and dear to the Windows kernel for performance reasons, Windows still treated the video card as more of a peripheral than a processing device. And as time went on that peripheral model became increasingly bogged down as GPUs became more advanced in features, and more important altogether.

With Vista the GPU became a second-class device, behind only the CPU itself. Windows made significant use of the GPU from the moment you turned it on due to the GPU acceleration of Aero, and under the hood things were even more complex. At the API level Microsoft added Direct3D 10, a major shift in the graphics API that greatly simplified the process of handing work off to the GPU and at the same time exposed the programmability of GPUs like never before. Finally, at the lowest levels of the operating system Microsoft completely overhauled how Windows interacts with GPUs by implementing the Windows Display Driver Model (WDDM) 1.0, which is still the basis of how Windows interacts with modern GPUs.

One of the big goals of WDDM was that it would be extensible, so that Microsoft and GPU vendors could add features over time in a reasonable way. WDDM 1.0 brought sweeping changes that among other things took most GPU management away from games and put the OS in charge of it, greatly improving support for and the performance of running multiple 3D applications at once. In 2009, Windows 7 brought WDDM 1.1, which focused on reducing system memory usage by removing redundant data, and added support for heterogeneous GPU configurations, a change that paved the way for modern iGPU + dGPU technologies such as NVIDIA’s Optimus. Finally, with Windows 8, Microsoft will be introducing the next iteration of WDDM, WDDM 1.2.

So what does WDDM 1.2 bring to the table? Besides underlying support for Direct3D 11.1 (more on that in a bit), it brings several changes that for the sake of brevity we’ll reduce to three major features. The first is power management, through a driver feature Microsoft calls DirectFlip. DirectFlip is a change in the Aero composition model that reduces the amount of memory bandwidth used when playing videos back in full screen, thereby reducing memory power consumption, which has become a larger piece of total system power consumption in the age of GPU video decoders. At the same time WDDM 1.2 will also introduce a new overarching GPU power management model that will see video drivers work with the operating system to better utilize F-states and P-states to keep the GPU asleep more often.
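To put the DirectFlip savings in rough perspective, here is a back-of-the-envelope estimate of the memory traffic a full-screen composition copy costs at 1080p. The figures below are our own illustrative assumptions, not numbers from Microsoft:

```python
# Rough estimate of the memory traffic avoided when the compositor can
# flip a full-screen video frame directly instead of copying it into
# the back buffer. All figures are illustrative assumptions.

width, height = 1920, 1080      # 1080p video
bytes_per_pixel = 4             # assuming a 32-bit BGRA surface
fps = 60                        # assuming composition at the 60Hz refresh rate

frame_bytes = width * height * bytes_per_pixel
# A composition pass reads the video surface and writes the back buffer,
# so each avoided copy saves roughly one read plus one write per frame.
saved_per_second = frame_bytes * 2 * fps

print(f"~{saved_per_second / 1e9:.2f} GB/s of memory traffic avoided")
```

Even this crude estimate lands around 1 GB/s of DRAM traffic, which is why keeping the memory controller idle during video playback matters for battery life.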

The second major feature of WDDM 1.2 is GPU preemption. As of WDDM 1.1, applications effectively use a cooperative multitasking model to share the GPU; this model makes sharing the GPU entirely reliant on well-behaved applications, and it can break down in the face of complex GPU computing uses. With WDDM 1.2, Windows will be introducing a new preemptive multitasking model, which will have Windows preemptively switching out GPU tasks in order to ensure that every application gets its fair share of execution time and that the amount of time any application spends waiting for GPU access (access latency) is kept low. The latter is particularly important for a touch environment, where high access latency can render a device unresponsive. Overall this is a shift very similar to how Windows itself evolved from Windows 3.1 to Windows 95, when Microsoft moved from cooperative multitasking to preemptive multitasking for scheduling applications on the CPU.
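The latency difference between the two scheduling models can be sketched with a toy simulation. This models scheduling policy only, not real GPU behavior, and the task names and lengths are made up for illustration:

```python
# Toy simulation of why preemption lowers GPU access latency.
# Task lengths are in arbitrary "ms" units; names are illustrative.

def cooperative_latency(tasks, watched):
    """Each task runs to completion; the watched task must wait for
    everything queued ahead of it to finish."""
    wait = 0
    for name, length in tasks:
        if name == watched:
            return wait
        wait += length

def preemptive_latency(tasks, watched, quantum=2):
    """Round-robin with a fixed time slice: the watched task gets its
    first slice after at most one quantum per task ahead of it."""
    wait = 0
    for name, length in tasks:
        if name == watched:
            return wait
        wait += min(length, quantum)

# A long compute kernel queued ahead of a tiny UI redraw:
tasks = [("compute_kernel", 100), ("ui_redraw", 1)]
print(cooperative_latency(tasks, "ui_redraw"))  # 100: stuck behind the kernel
print(preemptive_latency(tasks, "ui_redraw"))   # 2: one quantum at most
```

The cooperative case is exactly the "unresponsive touch device" scenario: a single long-running compute kernel starves everything behind it, while the preemptive scheduler bounds the wait at one time slice.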

The final major feature of WDDM 1.2 is improved fault tolerance, which goes hand in hand with GPU preemption. With WDDM 1.0 Microsoft introduced the GPU Timeout Detection and Recovery (TDR) mechanism, which catches a hung GPU and resets it, thereby providing a basic framework to keep GPU hangs from bringing down the entire system. TDR itself isn’t perfect, however: the reset mechanism requires resetting the whole GPU, and given the use of cooperative multitasking, TDR cannot tell the difference between a hung application and one that is not yet ready to yield. To solve the former, Microsoft will be breaking down GPUs on a logical level – MS calls these GPU engines – with WDDM 1.2 being able to do a per-engine reset to fix the affected engine, rather than needing to reset the entire GPU. As for unyielding programs, this is largely solved as a consequence of preemption: unyielding programs can choose to opt out of TDR so long as they make themselves capable of being quickly preempted, which will allow those programs full access to the GPU while not preventing the OS and other applications from using the GPU for their own needs. All of these features will be available for GPUs implementing WDDM 1.2.

And what will be implementing WDDM 1.2? While it’s still unclear at this time where SoC GPUs will stand, so far all Direct3D 11 compliant GPUs will be implementing WDDM 1.2 support; this means the GeForce 400 series and better, the Radeon HD 5000 series and better, and the forthcoming Intel HD Graphics 4000 that will debut with Ivy Bridge later this year. This is consistent with how WDDM has been developed, which has been to target features that were added in previous generations of GPUs in order to let a large hardware base build up before the software begins using it. WDDM 1.0 and 1.1 drivers and GPUs will still continue to work in Windows 8; they just won't support the new features in WDDM 1.2.

Direct3D 11.1

Now that we’ve had a chance to take a look at the underpinnings of Windows 8’s graphical stack, how will things be changing at the API layer? As many of our readers are well aware, Windows 8 will be introducing the next version of Direct3D, Direct3D 11.1. As the name implies, D3D 11.1 is a relatively minor update to Direct3D similar in scope to Direct3D 10.1 in 2008, and will focus on adding a few features to Direct3D rather than bringing in any kind of sweeping change.

So what can we look forward to in Direct3D 11.1? The biggest end user feature is going to be the formalization of Stereo 3D support into the D3D API. Currently S3D is achieved either by partially going around D3D to present a quad buffer to games and applications that directly support S3D, or, in the case of driver/middleware enhancement, by manipulating the rendering process itself to get the desired results. Formalizing S3D won’t remove the need for middleware to enable S3D in games that choose not to implement it, but for games that do directly implement it, such as Deus Ex, it will now be possible to do so through Direct3D, and to do so more easily.


AMD’s Radeon HD 7970: The First Direct3D 11.1 Compliant Video Card

The rest of the D3D 11.1 feature set otherwise isn’t going to be nearly as visible, but it will still be important for various uses. Interoperability between graphics, video, and compute is going to be greatly improved, allowing video via Media Foundation to be sent through pixel and compute shaders, among other things. Meanwhile Target Independent Rasterization will provide high performance, high quality GPU-based anti-aliasing for Direct2D, allowing rasterization to move from the CPU to the GPU. Elsewhere developers will be getting some new tools: some new buffer commands should give developers a few more tricks to work with, shader tracing will enable developers to better trace shader performance through Direct3D itself, and double precision (FP64) support will be coming to pixel shaders on hardware that has FP64 support, allowing developers to use higher precision shaders.

Many of these features should be available on existing Direct3D 11 compliant GPUs in some manner, particularly S3D support. The only thing we’re aware of that absolutely requires new hardware support is Target Independent Rasterization; for that you will need the latest generation of GPUs, such as the Radeon HD 7000 series or, as widely expected, the Kepler generation of GeForces.

286 Comments

  • aguilpa1 - Friday, March 9, 2012 - link

    I understand the use of multi-monitors where windows knows you have more than one monitor but how does it handle support when you have multiple monitors aka Nvidia Vision Surround or Eyefinity? In these situations you have multiple monitors being reported as a single for example 5760x1080 (3 monitors) or higher resolution screen? Will it be up to Nvidia and ATI to provide support to allow the manipulation of taskbar or icons on the monitor areas that you would like to have?
  • Andrew.a.cunningham - Friday, March 9, 2012 - link

    Short answer: if the OS just sees one monitor, it will treat the system as it would any single monitor system, which I believe would mean Metro stretched across a 5760x1080 screen. :-)
  • silverblue - Friday, March 9, 2012 - link

    I'm imagining multi-monitor touchscreen goodness right about now...
  • mcnabney - Friday, March 9, 2012 - link

    No.
    It sticks Metro in one and the desktop in the other. It looks completely bizarre to me and essentially eliminates the cohesiveness until Metro is turned off.
  • Andrew.a.cunningham - Friday, March 9, 2012 - link

    That's the behavior with a standard multi-monitor setup - is that also true of an Eyefinity setup where multiple monitors are combined to form one continuous display? I believe that's what the OP was asking.
  • PopinFRESH007 - Sunday, April 15, 2012 - link

    No, As you suspected the graphics card basically "glues" the screens together in the driver, so to windows it's a single massively wide monitor. It results in a very wide bright colored stretched out backdrop with tiles on the far left hand side and a whole bunch of wasted space on the right.
  • theangryintern - Friday, March 9, 2012 - link

    I've currently got the Customer Preview running on a Dell D630 that was retired from my company (so I was able to take it home and keep for personal use) We got our D630s with the nVidia Quadro cards and 4GB of RAM. Seems to be running pretty good so far, but I really haven't had a chance to do any serious testing with it.
  • mevensen - Friday, March 9, 2012 - link

    None of the test systems had SSD caching (that I noticed), is there any brave soul that's tried on their system with an SSD cache setup?

    I'm not foolhardy enough to convert my main system (with SSD caching) to the Win8 preview, but I'm curious how well they play together.

    On another note, I've put the Win8 preview on my MacBook Air using Parallels with some pretty decent results, making a nice hybrid with good (multi)touchpad functionality. Still playing with it, and have no idea of what higher performance needs will bring (i.e. gaming), but there are definitely some things to like.

    I hope they find a way to better integrate add-ons (in particular Flash) into the Metro version of IE, as it is particularly jarring to dump to the desktop just to access Flash content.
  • Andrew.a.cunningham - Friday, March 9, 2012 - link

    Not sure about SSD caching, but Metro IE does not and apparently will never support plug-ins: http://www.anandtech.com/show/4816/metro-ie10-to-b...
  • cjm14 - Friday, March 9, 2012 - link

    "There are basic categories for games, social apps, music apps, and a few others, but there doesn't appear to be any sort of search functionality"

    You can search the Store by bringing up the Search charm while the Store is up. In fact, all of the charms (except Start) are app-context sensitive though apps can choose not to implement some of them.
