Starting with Windows Vista, Microsoft began the first steps of what was to be a long campaign to change how Windows interacts with GPUs. Windows XP, itself based on Windows 2000, used a driver model that predated the term "GPU". While graphics rendering was near and dear to the Windows kernel for performance reasons, Windows still treated the video card as more of a peripheral than a processing device. And as time went on that peripheral model became increasingly bogged down as GPUs grew more advanced in features and more important overall.

With Vista the GPU became a first-class device, second in importance only to the CPU itself. Windows made significant use of the GPU from the moment you turned the machine on thanks to the GPU acceleration of Aero, and under the hood things were even more complex. At the API level Microsoft added Direct3D 10, a major shift in the graphics API that greatly simplified the process of handing work off to the GPU while at the same time exposing the programmability of GPUs like never before. Finally, at the lowest levels of the operating system Microsoft completely overhauled how Windows interacts with GPUs by implementing the Windows Display Driver Model (WDDM) 1.0, which is still the basis of how Windows interacts with modern GPUs.

One of the big goals of WDDM was that it would be extensible, so that Microsoft and GPU vendors could add features over time in a reasonable way. WDDM 1.0 brought sweeping changes that among other things took most GPU management away from games and put the OS in charge of it, greatly improving support for (and the performance of) running multiple 3D applications at once. In 2009, Windows 7 brought WDDM 1.1, which focused on reducing system memory usage by removing redundant data, and added support for heterogeneous GPU configurations, a change that paved the way for modern iGPU + dGPU technologies such as NVIDIA's Optimus. Finally, with Windows 8, Microsoft will be introducing the next iteration of WDDM, WDDM 1.2.

So what does WDDM 1.2 bring to the table? Besides underlying support for Direct3D 11.1 (more on that in a bit), it has several additions that for the sake of brevity we'll reduce to three major features. The first is power management, through a driver feature Microsoft calls DirectFlip. DirectFlip is a change in the Aero composition model that reduces the amount of memory bandwidth used when playing back full-screen video, thereby reducing memory power consumption; in the age of dedicated GPU video decoders, memory traffic has become a larger piece of total system power consumption. At the same time, WDDM 1.2 will also introduce a new overarching GPU power management model that will see video drivers work with the operating system to better utilize F-states and P-states, keeping the GPU asleep more often.
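To get a feel for why skipping the composition copy saves power, here is a back-of-the-envelope sketch of the memory traffic involved in full-screen 1080p playback. All of the figures (32bpp surfaces, 60Hz, one read and one write for composition) are illustrative assumptions rather than measurements of any real driver path.

```python
# Back-of-the-envelope comparison of memory traffic for full-screen
# 1080p video playback with and without a DirectFlip-style path.
# All figures are illustrative assumptions, not measured values.

WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4          # assume a 32bpp surface
FPS = 60                     # assume a 60Hz refresh rate

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL

# Composited path (sketch): the decoded frame is read by the
# compositor, written into the desktop surface, then scanned out.
composited = frame_bytes * (1 + 1 + 1) * FPS   # read + write + scanout

# Direct-flip path (sketch): the decoded frame is scanned out
# directly, skipping the composition read and write.
direct_flip = frame_bytes * 1 * FPS            # scanout only

print(f"composited : {composited / 1e9:.2f} GB/s")
print(f"direct flip: {direct_flip / 1e9:.2f} GB/s")
```

Under these assumptions the direct path moves a third of the data of the composited path every second, and every byte not moved is memory power not spent.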

The second major feature of WDDM 1.2 is GPU preemption. As of WDDM 1.1, applications effectively use a cooperative multitasking model to share the GPU; this model makes sharing the GPU entirely reliant on well-behaved applications, and can break down in the face of complex GPU computing uses. With WDDM 1.2, Windows will be introducing a new preemptive multitasking model, which will have Windows preemptively switching out GPU tasks in order to ensure that every application gets its fair share of execution time, and that the amount of time any application spends waiting for GPU access (access latency) is kept low. The latter is particularly important for a touch environment, where high access latency can render a device unresponsive. Overall this is a shift very similar to how Windows itself evolved from Windows 3.1 to Windows 95, when Microsoft moved from cooperative multitasking to preemptive multitasking for scheduling applications on the CPU.
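The latency difference between the two models can be sketched with a toy calculation. The workload sizes and the 2ms quantum below are invented for illustration; they are not figures from Microsoft's scheduler.

```python
# Toy comparison of cooperative vs. preemptive GPU sharing between a
# long compute kernel and a latency-sensitive UI task. Time units and
# workloads are invented for illustration.

def cooperative_latency(compute_ms, ui_arrival_ms):
    """UI work submitted mid-kernel must wait for the kernel to finish,
    because a cooperative model cannot interrupt a running task."""
    return compute_ms - ui_arrival_ms

def preemptive_latency(quantum_ms, ui_arrival_ms):
    """UI work waits at most until the end of the current time slice,
    at which point the scheduler can switch it in."""
    return quantum_ms - (ui_arrival_ms % quantum_ms)

# A 500 ms compute kernel; the UI needs the GPU 10 ms into it.
print(cooperative_latency(500, 10))  # 490 ms of waiting: visible stutter
print(preemptive_latency(2, 10))     # at most one 2 ms quantum
```

The cooperative case is exactly the scenario that makes a touch UI feel dead while a GPU compute job runs; preemption caps the wait at one scheduling quantum regardless of how long the kernel is.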

The final major feature of WDDM 1.2 is improved fault tolerance, which goes hand in hand with GPU preemption. With WDDM 1.0 Microsoft introduced the GPU Timeout Detection and Recovery (TDR) mechanism, which caught the GPU if it hung and reset it, thereby providing a basic framework to keep GPU hangs from bringing down the entire system. TDR itself isn't perfect, however: the reset mechanism requires resetting the whole GPU, and given the use of cooperative multitasking, TDR cannot tell the difference between a hung application and one that is not yet ready to yield. To solve the former, Microsoft will be breaking GPUs down on a logical level into what it calls GPU engines, with WDDM 1.2 able to do a per-engine reset that recovers only the affected engine rather than resetting the entire GPU. As for unyielding programs, this is largely solved as a consequence of preemption: such programs can opt out of TDR so long as they make themselves capable of being quickly preempted, which allows them full access to the GPU without preventing the OS and other applications from using the GPU for their own needs. All of these features will be available on GPUs implementing WDDM 1.2.
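The value of per-engine reset is easiest to see in a small model. The engine names below are illustrative only; the actual partitioning of a GPU into engines is defined by the hardware and its driver, not fixed by WDDM.

```python
# Sketch of per-engine reset vs. whole-GPU reset. Engine names are
# illustrative; real engine partitioning is hardware/driver-defined.

class GPU:
    def __init__(self):
        # A logical view of the GPU as independent engines.
        self.engines = {"3d": "running",
                        "video_decode": "running",
                        "copy": "running"}

    def reset_engine(self, name):
        """WDDM 1.2-style TDR: recycle only the hung engine."""
        self.engines[name] = "reset"

    def reset_all(self):
        """WDDM 1.0/1.1-style TDR: the whole GPU goes down,
        taking every running workload with it."""
        for name in self.engines:
            self.engines[name] = "reset"

gpu = GPU()
gpu.reset_engine("3d")              # only the hung 3D engine is recycled
print(gpu.engines["video_decode"])  # prints "running": playback survives
```

Under the old model a hung 3D workload would have killed the video decode session too; with a per-engine reset, unaffected engines carry on untouched.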

And what will be implementing WDDM 1.2? While it's still unclear at this time where SoC GPUs will stand, so far all Direct3D 11 compliant GPUs will be implementing WDDM 1.2 support: that means the GeForce 400 series and newer, the Radeon HD 5000 series and newer, and the forthcoming Intel HD Graphics 4000 that will debut with Ivy Bridge later this year. This is consistent with how WDDM has been developed, which has been to target features that were added in previous generations of GPUs in order to let a large hardware base build up before the software begins using it. WDDM 1.0 and 1.1 drivers and GPUs will still continue to work in Windows 8; they just won't support the new features in WDDM 1.2.

Direct3D 11.1

Now that we’ve had a chance to take a look at the underpinnings of Windows 8’s graphical stack, how will things be changing at the API layer? As many of our readers are well aware, Windows 8 will be introducing the next version of Direct3D, Direct3D 11.1. As the name implies, D3D 11.1 is a relatively minor update to Direct3D similar in scope to Direct3D 10.1 in 2008, and will focus on adding a few features to Direct3D rather than bringing in any kind of sweeping change.

So what can we look forward to in Direct3D 11.1? The biggest end-user feature is going to be the formalization of Stereo 3D (S3D) support in the D3D API. Currently S3D is achieved either by partially going around D3D to present a quad buffer to games and applications that directly support S3D, or, in the case of driver/middleware enhancement, by manipulating the rendering process itself to get the desired results. Formalizing S3D won't remove the need for middleware to enable S3D in games that choose not to implement it, but games that do choose to implement it directly, such as Deus Ex, will now be able to do so through Direct3D, and more easily than before.


AMD’s Radeon HD 7970: The First Direct3D 11.1 Compliant Video Card
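The quad buffer mentioned above is simple to model: a front and back buffer per eye, with both eyes flipping together. This is a conceptual sketch of the general quad-buffering idea, not the Direct3D 11.1 stereo API itself.

```python
# Minimal model of the quad buffer behind stereo 3D: front/back
# buffers for each eye, swapped together so the eyes stay in sync.
# A conceptual sketch only, not the actual D3D 11.1 stereo interface.

class QuadBuffer:
    def __init__(self):
        self.front = {"left": None, "right": None}  # being displayed
        self.back = {"left": None, "right": None}   # being rendered

    def render(self, left_frame, right_frame):
        # Both eye views of the same scene land in the back buffers.
        self.back["left"] = left_frame
        self.back["right"] = right_frame

    def present(self):
        # Both eyes flip atomically; letting one eye lag a frame
        # behind the other would break the stereo effect.
        self.front, self.back = self.back, self.front

qb = QuadBuffer()
qb.render("L0", "R0")
qb.present()
print(qb.front)  # prints {'left': 'L0', 'right': 'R0'}
```

The point of formalizing this in the API is that the atomic both-eyes flip becomes Direct3D's job rather than something each vendor bolts on around it.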

The rest of the D3D 11.1 feature set otherwise isn't going to be nearly as visible, but it will still be important for various uses. Interoperability between graphics, video, and compute is going to be greatly improved, allowing video delivered via Media Foundation to be sent through pixel and compute shaders, among other things. Meanwhile Target Independent Rasterization will provide high-performance, high-quality GPU-based anti-aliasing for Direct2D, allowing rasterization to move from the CPU to the GPU. Elsewhere developers will be getting some new tools: some new buffer commands should give developers a few more tricks to work with, shader tracing will let developers trace shader performance through Direct3D itself, and double precision (FP64) support will be coming to pixel shaders on hardware that supports FP64, allowing developers to use higher-precision shaders.
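As a quick illustration of why FP64 in shaders matters, single precision simply cannot represent increments that double precision holds onto. The sketch below rounds a Python float (which is double precision) through a 32-bit float using only the standard library.

```python
# Why FP64 shaders matter: single precision loses small increments
# that double precision keeps. Python floats are doubles; struct
# round-trips a value through a 32-bit float representation.
import struct

def to_fp32(x):
    """Round a double to the nearest representable 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

x = 1.0 + 1e-10           # a tiny offset a shader might accumulate
print(to_fp32(x) == 1.0)  # prints True: the offset vanishes in FP32
print(x == 1.0)           # prints False: FP64 preserves it
```

An offset of 1e-10 is far below FP32's precision near 1.0 (about 1.2e-7), so it rounds away entirely in single precision while surviving in double, which is exactly the kind of accumulated-error scenario higher-precision shaders address.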

Many of these features should be available on existing Direct3D 11 compliant GPUs in some manner, particularly S3D support. The only feature we're aware of that absolutely requires new hardware support is Target Independent Rasterization; for that you will need the latest generation of GPUs, such as the Radeon HD 7000 series or, as widely expected, the Kepler generation of GeForces.
