In-Depth with the Windows 8 Consumer Preview
by Andrew Cunningham, Ryan Smith, Kristian Vättö & Jarred Walton on March 9, 2012 10:30 AM EST - Posted in
- Microsoft
- Operating Systems
- Windows
- Windows 8
Starting with Windows Vista, Microsoft took the first steps of what was to be a long campaign to change how Windows would interact with GPUs. XP, itself based on Windows 2000, used a driver model that predated the term “GPU” itself. While graphics rendering was near and dear to the Windows kernel for performance reasons, Windows still treated the video card as more of a peripheral than a processing device. And as time went on that peripheral model became increasingly bogged down as GPUs grew more advanced and more important to the system as a whole.
With Vista the GPU became a first-class device, second only to the CPU itself. Windows made significant use of the GPU from the moment you turned the machine on thanks to the GPU acceleration of Aero, and under the hood things were even more complex. At the API level Microsoft added Direct3D 10, a major shift in the graphics API that greatly simplified the process of handing work off to the GPU while at the same time exposing the programmability of GPUs like never before. Finally, at the lowest levels of the operating system Microsoft completely overhauled how Windows interacts with GPUs by implementing the Windows Display Driver Model (WDDM) 1.0, which is still the basis of how Windows interacts with modern GPUs.
One of the big goals of WDDM was that it would be extensible, so that Microsoft and GPU vendors could add features over time in a reasonable way. WDDM 1.0 brought sweeping changes that among other things took most GPU management away from games and put the OS in charge of it, greatly improving support for, and the performance of, running multiple 3D applications at once. In 2009, Windows 7 brought WDDM 1.1, which focused on reducing system memory usage by removing redundant data, and on adding support for heterogeneous GPU configurations, a change that paved the way for modern iGPU + dGPU technologies such as NVIDIA’s Optimus. Finally, with Windows 8, Microsoft will be introducing the next iteration of WDDM, WDDM 1.2.
So what does WDDM 1.2 bring to the table? Besides underlying support for Direct3D 11.1 (more on that in a bit), it has several additions that for the sake of brevity we’ll reduce to three major features. The first is power management, through a driver feature Microsoft calls DirectFlip. DirectFlip is a change to the Aero composition model that reduces the amount of memory bandwidth used when playing back video in full screen, in turn reducing memory power consumption, which has become a larger piece of total system power consumption in the age of GPU video decoders. At the same time WDDM 1.2 will also introduce a new overarching GPU power management model that will see video drivers work with the operating system to better utilize F-states and P-states to keep the GPU asleep more often.
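To make this concrete, here is a minimal sketch of how a Windows 8 application opts into the new flip presentation model that optimizations like DirectFlip build on. This is an illustration rather than anything Microsoft prescribes for DirectFlip specifically: the device and window are assumed to exist already, error handling is omitted, and whether the OS actually takes a direct flip path at presentation time remains its decision.

```cpp
// Sketch: creating a DXGI 1.2 flip-model swap chain (Windows 8).
// The flip model lets the OS hand a full-screen buffer directly to the
// display instead of copying it through the composition engine.
#include <d3d11.h>
#include <dxgi1_2.h>

IDXGISwapChain1* CreateFlipSwapChain(ID3D11Device* device, HWND hwnd)
{
    IDXGIDevice1* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice1), (void**)&dxgiDevice);

    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);

    IDXGIFactory2* factory = nullptr;
    adapter->GetParent(__uuidof(IDXGIFactory2), (void**)&factory);

    DXGI_SWAP_CHAIN_DESC1 desc = {};               // width/height of 0 = size to the window
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;                     // the flip model disallows MSAA swap chains
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;                          // the flip model requires at least two buffers
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // new in DXGI 1.2

    IDXGISwapChain1* swapChain = nullptr;
    factory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapChain);

    factory->Release();
    adapter->Release();
    dxgiDevice->Release();
    return swapChain;
}
```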
The second major feature of WDDM 1.2 is GPU preemption. As of WDDM 1.1, applications effectively use a cooperative multitasking model to share the GPU; this model makes sharing the GPU entirely reliant on well-behaved applications, and it can break down in the face of complex GPU computing uses. With WDDM 1.2, Windows will be introducing a new preemptive multitasking model, in which Windows preemptively switches out GPU tasks in order to ensure that every application gets its fair share of execution time and that the amount of time any application spends waiting for GPU access (access latency) is kept low. The latter is particularly important for a touch environment, where high access latency can render a device unresponsive. Overall this is a shift very similar to how Windows itself evolved from Windows 3.1 to Windows 95, when Microsoft moved from cooperative to preemptive multitasking for scheduling applications on the CPU.
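To illustrate what "well-behaved" means in practice, here is a minimal sketch of the kind of voluntary yielding a GPU compute application has to do under the cooperative model; the shader and the chunk size are hypothetical, and feeding each chunk its offset (normally done through a constant buffer) is omitted for brevity. Under WDDM 1.2 the OS no longer has to rely on this sort of cooperation, since it can preempt the queue itself.

```cpp
// Sketch: voluntarily yielding the GPU under WDDM 1.1's cooperative model.
// One giant Dispatch() could monopolize the GPU until it finishes; many
// small dispatches give the scheduler natural points to run other work.
#include <d3d11.h>

void RunLongComputeJob(ID3D11DeviceContext* ctx,
                       ID3D11ComputeShader* processChunkCS, // hypothetical shader
                       UINT totalGroups)
{
    ctx->CSSetShader(processChunkCS, nullptr, 0);

    const UINT kGroupsPerChunk = 256; // sized to finish well under the TDR timeout
    for (UINT done = 0; done < totalGroups; done += kGroupsPerChunk)
    {
        UINT remaining = totalGroups - done;
        UINT groups = remaining < kGroupsPerChunk ? remaining : kGroupsPerChunk;
        ctx->Dispatch(groups, 1, 1); // one modest slice of the full workload
        ctx->Flush();                // submit now, letting other apps get GPU time
    }
}
```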
The final major feature of WDDM 1.2 is improved fault tolerance, which goes hand in hand with GPU preemption. With WDDM 1.0 Microsoft introduced the GPU Timeout Detection and Recovery (TDR) mechanism, which caught the GPU if it hung and reset it, thereby providing a basic framework to keep GPU hangs from bringing down the entire system. TDR itself isn’t perfect however; the reset mechanism requires resetting the whole GPU, and given the use of cooperative multitasking, TDR cannot tell the difference between a hung application and one that is not yet ready to yield. To solve the former, Microsoft will be breaking down GPUs on a logical level – MS calls these GPU engines – with WDDM 1.2 being able to do a per-engine reset to fix the affected engine, rather than needing to reset the entire GPU. As for unyielding programs, this is largely solved as a consequence of preemption: unyielding programs can choose to opt out of TDR so long as they make themselves capable of being quickly preempted, which will allow those programs full access to the GPU while not preventing the OS and other applications from using the GPU for their own needs. All of these features will be available for GPUs implementing WDDM 1.2.
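From an application's perspective, a TDR event surfaces in Direct3D as a "device removed" error that has to be detected and recovered from. A minimal sketch of the detection side, assuming the caller owns device and swap chain recreation:

```cpp
// Sketch: detecting a TDR-triggered GPU reset in a D3D11 application.
// After the OS resets a hung GPU (or, with WDDM 1.2, just the hung engine),
// the app's device is invalidated and must be torn down and recreated.
#include <d3d11.h>
#include <dxgi.h>

// Returns false when the device was lost and the caller must rebuild it.
bool PresentAndCheckForReset(IDXGISwapChain* swapChain, ID3D11Device* device)
{
    HRESULT hr = swapChain->Present(1, 0);
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        // Distinguishes a driver hang (a TDR) from driver bugs or real removal.
        HRESULT reason = device->GetDeviceRemovedReason();
        (void)reason; // e.g. DXGI_ERROR_DEVICE_HUNG after a timeout-and-reset
        return false;
    }
    return true;
}
```

For debugging long-running GPU work, the documented TdrDelay/TdrLevel registry values under HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers can relax the timeout, though shipping software shouldn't depend on that.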
And what will be implementing WDDM 1.2? While it’s still unclear at this time where SoC GPUs will stand, so far all Direct3D 11 compliant GPUs will be implementing WDDM 1.2 support; this means the GeForce 400 series and better, the Radeon HD 5000 series and better, and the forthcoming Intel HD Graphics 4000 that will debut with Ivy Bridge later this year. This is consistent with how WDDM has been developed, which has been to target features that were added in previous generations of GPUs in order to let a large hardware base build up before the software begins using it. WDDM 1.0 and 1.1 drivers and GPUs will still continue to work in Windows 8; they just won't support the new features in WDDM 1.2.
Direct3D 11.1
Now that we’ve had a chance to take a look at the underpinnings of Windows 8’s graphical stack, how will things be changing at the API layer? As many of our readers are well aware, Windows 8 will be introducing the next version of Direct3D, Direct3D 11.1. As the name implies, D3D 11.1 is a relatively minor update to Direct3D similar in scope to Direct3D 10.1 in 2008, and will focus on adding a few features to Direct3D rather than bringing in any kind of sweeping change.
So what can we look forward to in Direct3D 11.1? The biggest end-user feature is going to be the formalization of Stereo 3D support in the D3D API. Currently S3D is achieved either by partially going around D3D to present a quad buffer to games and applications that directly support S3D, or, in the case of driver/middleware enhancement, by manipulating the rendering process itself to get the desired results. Formalizing S3D won’t remove the need for middleware to enable S3D on games that choose not to implement it, but for games that do choose to directly implement it, such as Deus Ex, it will now be possible to do so through Direct3D, and to do so more easily.
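On the developer side, the formalized S3D path is exposed through DXGI 1.2. As a rough sketch of what opting in could look like (the factory, device, and window are assumed to exist already, and error handling is omitted):

```cpp
// Sketch: detecting stereo support and creating a stereo swap chain (DXGI 1.2).
#include <d3d11.h>
#include <dxgi1_2.h>

bool CreateStereoSwapChain(IDXGIFactory2* factory, ID3D11Device* device,
                           HWND hwnd, IDXGISwapChain1** outSwapChain)
{
    if (!factory->IsWindowedStereoEnabled())
        return false; // no S3D-capable display/driver present: fall back to mono

    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.Stereo = TRUE;                           // back buffer becomes a 2-slice array
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // stereo requires the flip model

    return SUCCEEDED(factory->CreateSwapChainForHwnd(device, hwnd, &desc,
                                                     nullptr, nullptr, outSwapChain));
}
```

The application then renders the left and right eye views to the two array slices of the back buffer and presents as usual.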
AMD’s Radeon HD 7970: The First Direct3D 11.1 Compliant Video Card
The rest of the D3D 11.1 feature set otherwise isn’t going to be nearly as visible, but it will still be important for various uses. Interoperability between graphics, video, and compute is going to be greatly improved, allowing video via Media Foundation to be sent through pixel and compute shaders, among other things. Meanwhile Target Independent Rasterization will provide high-performance, high-quality GPU-based anti-aliasing for Direct2D, allowing rasterization to move from the CPU to the GPU. Elsewhere developers will be getting some new tools: new buffer commands should give developers a few more tricks to work with, shader tracing will enable developers to better trace shader performance through Direct3D itself, and double precision (FP64) support will be coming to pixel shaders on hardware that has FP64 support, allowing developers to use higher-precision shaders.
Many of these features should be available on existing Direct3D 11 compliant GPUs in some manner, particularly S3D support. The only thing we’re aware of that absolutely requires new hardware support is Target Independent Rasterization; for that you will need the latest generation of GPUs such as the Radeon HD 7000 series or, as widely expected, the Kepler generation of GeForces.
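Accordingly, applications will need to probe for these capabilities at runtime rather than assume them. A short sketch of how that query looks with the Windows 8 SDK's D3D11.1 headers; the exact cap bits an app cares about will vary:

```cpp
// Sketch: probing for the optional D3D11.1 features discussed above.
#include <d3d11_1.h>
#include <cstdio>

void PrintD3D11_1Caps(ID3D11Device* device)
{
    // Target Independent Rasterization requires feature level 11_1 hardware,
    // i.e. the latest GPU generations.
    bool tirCapable = device->GetFeatureLevel() >= D3D_FEATURE_LEVEL_11_1;

    // Other 11.1 options can be exposed on older (FL 11_0) hardware as well.
    D3D11_FEATURE_DATA_D3D11_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS, &opts, sizeof(opts));

    std::printf("FL 11_1 (TIR-capable) hardware: %d\n", tirCapable);
    std::printf("Extended FP64 shader support:   %d\n",
                opts.ExtendedDoublesShaderInstructions);
    std::printf("SAD4 shader instructions:       %d\n", opts.SAD4ShaderInstructions);
}
```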
286 Comments
yannigr - Thursday, March 15, 2012
May I say something here? Sorry for my English in advance.
I don't know if your work at AnandTech is a full-time job or more like occasional work. When you see a site like AnandTech, you think of it as more like a big company with full-time employees, not a site with people that come and go just to write an article or a review in their spare time, with hardware that they buy themselves or, if they are lucky, get from the big companies as a gift for a presentation/review.
So when you think of AnandTech as a big company (and this is maybe where we misjudge you), you don't expect to read the kind of stuff you'd read from a 16-year-old kid in a small forum with 2-5-10 thousand members about his last review. I cannot accept an excuse like the one you gave. If you are at the BIGGEST and MOST RESPECTED hardware review site on the internet, and I don't think I am wrong here, you buy hardware even if you DO NOT LIKE it or it is not good enough for YOU. Why? Because that is your job and/or because you are writing for ANANDTECH, not YannigrTech.
When you have the time to quickly test 8 machines, you try to find an AMD system, and even, if one exists, a system with VIA hardware. I know, I must be joking with the last one about VIA. Well, I am not. I do think that if there was a VIA system in there, many would be posting about how surprised they were by that. Even if they were laughing at its performance, it would have been a plus for the review.
Imagine a review many pages long about the next 3DMark with only AMD GPUs, because the reviewer doesn't find NVIDIA GPUs good enough. Many NVIDIA fans would have been disappointed, to put it mildly.
Anyway, the first post was written just for fun, because I know that Intel doesn't only have the better hardware but also the biggest influence, not only at hardware sites but in people's minds too. Between two equal systems, most just choose Intel because it is an Intel.
This post was written only because I was not expecting someone who writes for AnandTech to say that:
I only have Intel, I am not buying AMD because it is just not good enough for me.
Last: thanks for the review. No joking here. It was interesting and useful.
Andrew.a.cunningham - Monday, March 19, 2012
First, thanks for reading! I'm glad you found the review useful. Second, I want to try to answer some of your questions as to how AnandTech (and most news outlets on the Internet) work.
Most writers who get paid are not working full-time positions. This is true of independently owned websites like AnandTech, corporate-owned sites like IGN, and even big-time traditional publications like the New York Times. Most sites will contract freelancers rather than full-time workers both because of cost (freelancers are almost universally paid less than salaried employees and get no benefits) and for administrative reasons (full-time employees mean that you've got to start paying attention to things like benefits and payroll taxes, necessitating a larger administrative staff to handle things like accounting).
Different outlets handle things in different ways - at AnandTech, the pay is OK for contractors, and most of us can bother Anand himself if we have questions about a story we're working on. On other sites (to cherry-pick an extreme example, let's call out the Huffington Post), freelancers are sometimes paid nothing, and are rather compensated with "exposure" and clips that they could in theory use to land a paying gig later on. I think what HuffPo (and, really, any profitable publication that doesn't pay its writers) does is a scam and I've got some strong feelings about it, but that's not my main point - my point is that much of what you read on the Internet is being written by people who don't write on the Internet full time. At AnandTech, even the senior editors are contracted freelancers rather than full-time employees.
Different people write for different reasons, but my goal is to make a living at it - I'm doing it because I love it, sure, but I'm also doing it because there are bills to pay. To do that, I cannot and will not spend $500 on hardware to use in a review that will earn me quite a bit less than $500. As anyone can tell you, that math doesn't add up, and since this is a review of the beta version of an x86-compatible Windows product - a product that looks and acts the same on any hardware that meets the minimum requirements - it's frankly not as important as a few of you seem to think it is. And that's all I have to say about it.
yannigr - Monday, March 19, 2012
I still believe that you should buy an AMD system. Not today or tomorrow, but the next time you would need an extra machine. But that's me. Thanks for answering my post :-)
Andrew.a.cunningham - Monday, March 19, 2012
I'll look into it for sure. Trinity has my interest piqued. :-)
TC2 - Sunday, March 11, 2012
AMD? This isn't the point! Andrew Cunningham has done nothing wrong here. I want to ask, what is the problem here? The recent Intel CPUs are far superior to the AMD CPUs! And if you want to see the best sides of W8... AMD just isn't the first choice... :)))
silverblue - Monday, March 12, 2012
At the time Andrew got those machines, the best option across the board likely would've been Intel. The Atom build is thoroughly outclassed by Brazos, but Brazos simply wasn't available at the time. It's only really in the past twelve to fifteen months that AMD has actually had a viable range of mobile processors for netbooks and larger.
medi01 - Monday, March 12, 2012
Name something "far superior" to the AMD A8-3850 that has a comparable cost.
TC2 - Wednesday, March 14, 2012
Oops to daisies :) It would bring a god to tears! You and all the AMD fans are very funny!
When the conversation is about cores - "AMD has twice as many as Intel" ?!
When the conversation is about performance - "the cost isn't comparable" ?!
When the conversation is about CPUs - "AMD's APU is bla-bla..." ?!
When the conversation is about benchmarks - "look, look, the BD is almost like Nehalem (btw, 2 generations older)" ?!
All of those are UNTRUE!!! And remember well - I and many, many people don't give a shit about AMD's green presentations, cores and so on... We need a fast CPU in ST as well as MT, and a fast GPU! And believe me, especially in the professional segment, AMD has nothing significant :)))
chucky2 - Friday, March 9, 2012
I'd like for you to do an article on feature support of DirectX 9 cards under, say, Windows XP SP3 vs. Windows 8. I know AMD dropped support for their DirectX 9 based cards before their 10.2 (Feb 2010) driver set, and then later belatedly added 10.2 as the last supported driver. My interest is in whether they've dropped proper support for their cards in Vista/7/and now 8 rather than putting in the (very likely minimal) work to properly support them.
Thanks for the article!
Andrew.a.cunningham - Friday, March 9, 2012
The 10.2 driver was only supported under Vista, but in my experience it works fine for Windows 7, which means it should work OK in Windows 8. One of the iMacs I tested on used a Radeon X1600 Mobility card - I installed the Vista-certified driver off of a Snow Leopard DVD and didn't see any crashes or instability, but your mileage may vary.