As we saw in part 1 of this series, large applications and games under Windows are getting incredibly close to hitting the 2GB barrier, the limit on virtual address space a traditional Win32 (32-bit) application can access. Once applications begin to hit this barrier, many of them will start acting up and/or crashing in unpredictable ways, which makes resolving the problem even harder. Developers can work around these issues, but none of the workarounds, short of building less resource-intensive games or switching to 64-bit, properly solves the problem without creating other serious issues.
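To see why the barrier sits at 2GB rather than 4GB, recall that a 32-bit process's 4GB of addresses is split by default into 2GB for the kernel and 2GB for the application. The toy model below (illustrative only: these are made-up allocation sizes, not real Win32 calls) sketches what happens when a game's mappings approach that user-mode cap:

```python
MB, GB = 2**20, 2**30

USER_SPACE = 2 * GB  # default user-mode share of a 32-bit process's 4GB


def allocate(allocations, size):
    """Grant an allocation only if it still fits inside the 2GB user space."""
    if sum(allocations) + size > USER_SPACE:
        raise MemoryError(
            f"only {(USER_SPACE - sum(allocations)) // MB}MB of address "
            f"space left, cannot map {size // MB}MB"
        )
    allocations.append(size)


allocs = []
allocate(allocs, 1200 * MB)  # textures and level data staged in memory
allocate(allocs, 600 * MB)   # executable image, DLLs, heaps, stacks
try:
    allocate(allocs, 300 * MB)  # one more large mapping pushes past 2GB...
except MemoryError as err:
    print(err)  # ...and in a real game this is where things start breaking
```

Note that the failure depends on the running total, not any single allocation, which is why real applications fail at unpredictable points as they near the cap.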

Furthermore, as we saw in part 2, games consume greater amounts of address space under Windows Vista than under Windows XP. This makes Vista less suitable for games at precisely the time when it will be the version of Windows that sees the computing industry through the transition to 64-bit operating systems as the new standard. Microsoft knew about the problem, but until now we were unable to get further details on what was going on and why. As of today, that has changed.

Microsoft has published knowledge base article 940105 on the matter, and with it has finalized a patch to reduce the high virtual address space usage of games under Vista. From this and our own developer sources, we can piece together the problem that was causing the high virtual address space issues under Vista.

As it turns out, our initial guess that the issue was related to memory allocations being limited to the 2GB of user space for security reasons was wrong; the issue is simpler than that. One of the features of the Windows Vista Display Driver Model (WDDM) is that video memory is no longer a limited-sharing resource that applications often take complete sovereign control of. Instead, the WDDM virtualizes video memory so that every application can use what it thinks is video memory without needing to care about what else is using it, in effect removing much of the work of video memory management from the application. From both a developer's and a user's perspective this is great, as it makes game/application development easier and multiple 3D-accelerated applications get along better, but it came with a cost.

All of that virtualization requires address space to work with; Vista draws on an application's 2GB user allocation of virtual address space for this purpose, scaling the amount of address space consumed by the WDDM with the amount of video memory actually used. This feature is ahead of its time, however: games and applications written to the DirectX 9 and earlier standards didn't have the WDDM to take care of their memory management, so they did it themselves. This required the application to also allocate some virtual address space for its management tasks, which is fine under XP.

However, under Vista this results in the application and the WDDM effectively playing a game of chicken: both are consuming virtual address space out of the same 2GB pool, and neither is aware of the other doing the exact same thing. Amusingly, given a big enough card (such as a 1GB Radeon HD 2900 XT), it's theoretically possible to consume all 2GB of virtual address space under Vista with just the WDDM and the application each trying to manage the video memory, which would leave no further virtual address space for anything else the application needs to do. In practice, the virtual address space allocations for both the WDDM and the application's video memory manager grow as needed, ultimately crashing the application as each passes 500MB+ of allocated virtual address space.
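The arithmetic behind that worst case is simple. In this back-of-the-envelope sketch (illustrative numbers, assuming both managers end up mirroring the card's full 1GB of video memory in address space), the two managers alone exhaust the entire 2GB pool:

```python
MB, GB = 2**20, 2**30

USER_SPACE = 2 * GB           # a 32-bit process's user-mode address space
video_memory_in_use = 1 * GB  # e.g. a fully loaded 1GB card

wddm_mappings = video_memory_in_use  # Vista's WDDM virtualization mappings
app_manager = video_memory_in_use    # the DX9 app managing the same memory

remaining = USER_SPACE - wddm_mappings - app_manager
print(f"{remaining // MB}MB left for everything else the game needs")
```

With both sides duplicating the accounting for the same 1GB of video memory, nothing remains for the executable, heaps, or stacks, which is exactly the game of chicken described above.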

This obviously needed to be fixed, and for a multitude of reasons (such as Vista and XP application compatibility) such a fix needed to be handled by the operating system. That fix is KB940105, which changes how the WDDM handles its video memory management. The WDDM will no longer default to using its full memory management capabilities, and more importantly it will not consume virtual address space unless specifically told to by the application. This will significantly reduce the virtual address space usage of an application when video memory is the culprit, but at best it will only bring Vista down to the kind of virtual address space usage seen under XP.

Testing the KB940105 Hotfix

37 Comments


  • initialised - Thursday, February 14, 2008 - link

    After installing Vista SP1 on a Maximus/QX9650/4GB system, it showed a full complement of 4GB under Vista Ultimate 32-bit.
  • Ichinisan - Tuesday, August 14, 2007 - link

    Can we take a look at how SLI is affected by this? With two 1GB video cards, could you hit the addressing limit in XP (or two 768MB GTXs or 640MB GTSs, for that matter)? I remember that you had trouble doing that.
  • leexgx - Tuesday, August 14, 2007 - link

    SLI systems have the same working space as one card; a 7900 GTX 512 in SLI does not become a 1GB video system, as the frame buffer is still 512MB (last time I checked). Both cards need the same textures because each renders half of the frame, which is why it was daft to call the 7950 GX2 a card with 1GB of video RAM (it's basically two 7900 GT cards running off one PCI-E slot) when it can only use 512MB of it.
  • Blacklash - Tuesday, August 14, 2007 - link

    MS was gutless and decided not to push it. Instead of trying to create demand by instilling desire, which can easily be done through effective marketing, they took the lazy "there's no demand" approach. What makes me angry is that they have the money to create a hell of a marketing blitz if they so desire, and could even have eaten the possible initial loss of forcing a move to x64 if they had to. Get a spine, MS. Go for it.
  • MadAd - Tuesday, August 14, 2007 - link

    Of course they didn't push it. The natural time to swap to x64 was the recent Vista launch, but A) Intel were still playing catch-up with x64 on C2D, and B) after all that time in development they had to get some return, so they fobbed us off with 32-bit as the mainstream product, which is really at the end of its life now.


    So what next? Well, M$ know damn well that half the rest of the world are waiting for SP1 before touching Vista with a bargepole, so that'll be hurried out the door and dressed up in more BS marketing... and then what?

    Filling the marketing void between SP1 and whatever comes next will be the 'transition to 64' era. Yes, yes, lots of chubbly money to be made giving people what they want, not when they need it (like now), but when it's best to make the most money from it.
  • BikeDude - Thursday, August 16, 2007 - link

    They recently announced that Windows Server 2008 is the last server product to support 32-bit CPUs. It is not known at this time if the next client version of Windows will also drop 32-bit CPUs.

    But the writing is on the wall. Sort of.

    However... a lot of users will not benefit from a 64-bit OS, but can still use 32-bit Vista just fine. Many will see the increased memory usage of 64-bit Vista as a performance problem (despite cheap memory prices). For the vast majority of users, it makes little sense. Some of them may even be using some 16-bit Windows software, for all we know... Why force them into problems they don't need? 32-bit Vista is a great stepping stone to 64-bit Vista. There is a choice, and I think most of us need it.

    If you look at the forums (and the article we are commenting on), many punters advocate sticking with 32-bit Vista for the foreseeable future. It is a cowardly stance in my opinion, but they present some valid concerns.
  • poohbear - Tuesday, August 14, 2007 - link

    Very informative article! Don't mean to be nitpicky, but you guys use words in some weird contexts: a program taking "sovereign control" of memory? "Assumingly"? lol, interesting usage. Cheers for a great article nonetheless!
  • Tristesse27 - Tuesday, August 14, 2007 - link

    Ryan, with all respect, you seriously need to learn how to use commas.
  • Larso - Tuesday, August 14, 2007 - link

    As long as you are running 32-bit, with 32-bit addressing, the ultimate barrier will always be 4GB. So isn't this 2GB barrier problem a bit academic, as we are only one bit short of spending all 32 bits of addressing anyway? We are already hitting the roof, and a factor of two is not significant in the long run.

    But hitting roofs seems to make people paranoid, which is understandable given the otherwise seemingly unlimited resources of a modern PC. But everybody seems to have forgotten how it was in the old days, when hardware limitations forced developers to be creative and ingenious, with great results. You don't see that today; it seems more like developers are acting like spoiled kids.

    Perhaps it's healthy to face a hard resource limitation again, so developers will be forced to make efficient use of resources. It's not that 2GB is a tiny amount of memory; it's actually a huge amount. And when there is a justified case for using more, there is always 64-bit.
  • MadBoris - Tuesday, August 14, 2007 - link

    quote:

    Perhaps its healthy to face a hard resource limitation again, so developers will be forced to make efficient use of the resources. Its not that 2 GB is a tiny amount of memory, it's actually a huge amount. And when there is a justified case for using more, there is always 64-bit.


    It's by no means huge for games, which have always pushed hardware to its fullest.

    The problem is that those who set the limits are the most guilty of waste (MS). Try using managed .NET and watch RAM usage climb exorbitantly in applications. Vista itself claims extensive amounts of RAM as applications open on the PC (with SuperFetch off). Vista also uses more CPU cycles than XP for a game (maybe only certain games, maybe not). And I didn't ask MS to commit 20+ percent of a CPU core to sound, as they now expect in Vista with software audio.

    As we get more hardware resources, MS is right there to "waste" them, not really using them efficiently or gaining performance from more HW resources, which should be the obvious result. Cutting out hardware sound best suited their needs (I really think it had more to do with pushing XACT and moving devs to 360 consoles). Furthermore, tighter control over GPU makers with a restrictive WDDM helped their need for a prettier desktop to compete with the Mac (maybe WDDM 2 will bring more to the table). The DX10+ APIs also now further limit the individual 3D features that GPU companies can expose in their hardware: they have to have API support first to expose features, and WDDM support to expose how resources are managed.

    I'm not into MS bashing or conspiracy theories, but there comes a point where "sovereign" control becomes better for the king and worse for the people; it inhibits creativity. Maybe it will take an "upheaval", but that sure won't come from any outspoken hardware MFRs or IHVs, because they have to maintain good relations regardless of how poorly, and yet how dominantly, MS steers the ship.

    Devs will have to become smarter with resources, but as this article clearly shows, that needs to begin with Microsoft first; they are the most guilty, in the OS and in dev languages and runtimes. So I am not exactly filled with confidence when they remove hardware sound and take tighter control of the future of 3D features with their API support, and of hardware functionality with their driver models.

    It's not an MS rant, but if MS is going to take tighter control of all the reins, and everyone else has to adapt to them, then they should be the ones to answer for things. Since they made it their responsibility, they should be held accountable, not companies like Creative and NVIDIA that have to struggle with the difficulty of adapting to the stringent choices MS makes (for their own competitive reasons).
