NVIDIA has announced plans to cease developing drivers for 32-bit operating systems on any GPU architecture in the near future. All NVIDIA GPU drivers published after release 390 (due in early 2018) will support only x86-64 OSes. The decision marks a milestone in the PC industry's transition to 64-bit computing, which has been ongoing for over a decade, but it may upset users who still run older machines or who need drivers for embedded systems.

NVIDIA driver version 390 will be the final release from the company to support 32-bit Windows 7/8/8.1/10, Linux, and FreeBSD. Whatever version comes after will run only on 64-bit OSes. The company will continue to release 32-bit drivers containing security fixes until January 2019, but it has no plans to improve performance or add features to those releases.

The PC industry's transition from 32-bit to 64-bit has taken a very long time, but it seems to be in its final stages, at least for consumer machines. AMD released the first x86-64 processors for PCs in late 2003, and Microsoft followed with Windows XP Professional x64 Edition for client computers in mid-2005, kicking off the industry's move to 64-bit computing. By now, all contemporary x86 processors are 64-bit capable, and the vast majority of personal computers in all form factors ship with four or more gigabytes of RAM (4 GB being the maximum memory a 32-bit OS can address), so nearly all new PCs today run a 64-bit OS. The last remnants of 32-bit machines are often long-standing holdovers, such as the machine George R.R. Martin writes his Game of Thrones novels on (although that one still runs DOS).

The other reason that springs to mind for running a 32-bit OS is embedded systems, such as those driving point-of-sale terminals, display walls, gambling machines, or monitoring equipment. Although these systems are typically deployed for the very long term, at some point between now and January 19, 2038 they will have to be replaced due to the Unix epoch time-rollover bug, which affects any system that stores time as a signed 32-bit value (bug? feature? oversight! Surely no one will still have a 32-bit system in 2038, right?).
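A quick sketch of where that date comes from: a signed 32-bit time_t counts seconds since the Unix epoch and tops out at 2^31 - 1 seconds, which lands on exactly that day.

```python
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds from the Unix epoch
# (1970-01-01 00:00:00 UTC) and maxes out at 2**31 - 1 seconds.
last_valid = 2**31 - 1
rollover = datetime.fromtimestamp(last_valid, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```

One second later, a 32-bit counter wraps to a large negative value and the clock jumps back to 1901.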

According to the Steam Hardware Survey, 2.19% of users continue to run 32-bit Windows. These are legacy machines and PCs that have to run old programs or hardware natively. Owners of such computers are hardly interested in the latest GPUs (modern graphics cards can carry more memory than a 32-bit OS can address) or driver features, so the end of support will likely go unnoticed by the vast majority of involved parties. Still, 2%–2.2% of the roughly one billion PCs in use worldwide is 20–22 million systems (and the actual number will be higher, because not all PCs run Steam). So there will certainly be some disgruntled owners of 32-bit PCs with entry-level GPUs released in recent years who are now left without updated driver support.
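For what it's worth, the back-of-the-envelope arithmetic behind those figures (the one-billion installed base is the article's rough estimate, not a measured number):

```python
# Steam's reported 32-bit Windows share applied to the worldwide installed base
share = 0.0219                    # 2.19% of surveyed Steam users
installed_base = 1_000_000_000    # rough estimate of PCs in use worldwide
print(round(share * installed_base))  # about 21.9 million systems
```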


Source: NVIDIA



Comments

  • HStewart - Thursday, December 28, 2017 - link

    As a developer, limiting to 64-bit is the smart thing now - they can compile the code for 64-bit only and optimize it globally to increase performance, without worrying about breaking 32-bit systems.

    The 32->64 debate has long been over, and 64-bit has become pretty much the standard. It can, however, lead to sloppiness in development in some cases - by assuming that one has tons of memory.

    The 32->64-bit jump was not as significant as the original 16->32-bit jump - yes, 64-bit allows for much more memory, but it is basically not much different from 32-bit; 64-bit is a natural evolution of 32-bit. Not sure we will ever need 128-bit or higher - but then again, that is what was once said about 64-bit, or 32-bit, or even about needing more than 64K of memory in the early days of PCs.

    I would not doubt that someday in the next couple of years we will see a 64-bit-only Windows version.
  • ddriver - Thursday, December 28, 2017 - link

    If they have a logistical problem maintaining two build targets, that most likely has to do with their code base being plagued by non-portable, outdated legacy garbage. The target platform is generally not a concern in development. The same C and C++ code I write runs perfectly fine on x86, x64, armv7, and arm64 platforms, with zero modifications necessary.

    Problematic transitions are only present when your code is crap. Shortsighted and stiff architecture design, poor API layer separation, heavy reliance on non-portable APIs across the entire code base rather than a platform integration abstraction layer. If you designed it poorly, it becomes a maintenance nightmare - an issue that can only be overcome by a complete rewrite on top of a better design foundation.

    I don't see computers moving beyond 64-bit anytime soon, if ever. 64 bits allow for over 18 exabytes of addressing space. That is over 18 MILLION TERABYTES. In fact, modern processors do not even touch the 64-bit addressing limit; they only use 48 of the 64 bits.

    Keep in mind that 32-bit to 64-bit might seem like a two-fold increase, but in reality it is over 2 billion fold increase of actual addressing space. 16 to 32 bit was only a 65,536-fold increase, and 8 to 16 bit was merely a 256-fold increase. 32 to 64 goes from a fairly modest to a tremendously ample addressing space; it is no longer necessary to double the addressing-space representation every few years as was done in the past.
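The address-space figures in the comment above check out; a quick sketch using decimal exabytes and the 48-bit virtual addressing that current x86-64 parts implement:

```python
# Full 64-bit flat address space, in bytes
full_64 = 2**64
print(full_64 / 10**18)  # ~18.45 exabytes, i.e. over 18 million terabytes

# Current x86-64 CPUs only decode 48 of those bits
impl_48 = 2**48
print(impl_48 // 2**40)  # 256 TiB of virtual address space
```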
  • Santoval - Thursday, December 28, 2017 - link

    Memory addressing aside, would it make any sense to go 128-bit for performance reasons? FPU, SIMD, and vector blocks have long since gone 128-bit, then 256-bit, and now even 512-bit (AVX-512). Would the integer (or MIMD) blocks see any performance gains from going wider, or would it negatively impact power efficiency? I am not referring to today; I mean perhaps in 10 to 12 years, potentially after Moore's "law" - and MOSFET tech by and large - have completely run out of steam and nothing viable has replaced them.
  • ddriver - Thursday, December 28, 2017 - link

    At this point the ALU/scalar unit is ample as it only serves to drive program flow. It doesn't need to go wider than the pointer size. Traditionally, and with very few exceptions, the platform native integer width has been the same size as the pointer.

    For vector (SIMD) execution there is really no limit, as long as you have the workload to throw at it. Power efficiency will be good as long as the hardware is properly utilized.

    CPUs don't really have MIMD; that's more of a GPU thing, although the Xeon Phi has it too - remember, it came from Intel's failed attempt at making a high-end GPU. The same principle applies as with SIMD: as long as you have the bandwidth and the data to throw at it, an indefinite increase in architecture width is feasible. The first brick wall is die size limitations, but that is already being addressed by fragmenting chip design and departing from monolithic dies.

    Moore's law is not about performance or transistor density; it is about the number of transistors in an integrated circuit. We are about to hit a process brick wall in the next decade or so, and so far there is no replacement for that. However, the departure from monolithic dies will help keep boosting the number of transistors in an integrated circuit; the circuit will just be spread wider and be a tad less integrated relative to monolithic dies. So I guess that will keep going for as long as the constraints of form factor allow it.
  • extide - Thursday, December 28, 2017 - link

    You can use 128-bit or even bigger values with 32-bit code. When software is called 32-bit or 64-bit, that refers to the address space and the length of pointers. You can use 64-bit ints with 32-bit code, for example.

    There is a slight advantage in that x64 code has access to more registers, but that is just because the architecture is newer, not necessarily because it is 64-bit.
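A small illustration of the first point above, using Python's ctypes/struct as a stand-in for the C fixed-width types: a 64-bit integer type is 8 bytes on any build, while "32-bit vs. 64-bit" describes the pointer width.

```python
import ctypes
import struct

# Fixed-width integer types have the same size on any build...
print(ctypes.sizeof(ctypes.c_int64))  # 8 bytes, even in a 32-bit process
print(ctypes.sizeof(ctypes.c_int32))  # 4 bytes everywhere

# ...while "32-bit" / "64-bit" refers to the pointer width
print(struct.calcsize("P") * 8)       # 64 on a 64-bit interpreter, 32 on a 32-bit one
```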
  • ddriver - Friday, December 29, 2017 - link

    x64 does increase the number of registers, but in direct comparison that doesn't result in any tangible performance gains. The extra registers are not really a necessity but more of a "why not while we are at it"; at best it saves an L1 cache access every once in a while.
  • lmcd - Friday, December 29, 2017 - link

    This post almost "got it" in my opinion -- 32-bit limitations aren't merely the change from 32-bit to 64-bit, but the expected CPU features as well. Switching to 64-bit moves that baseline to a more realistic and modern position.

    I wouldn't be surprised a few years down the road if CPUs prior to Sandy Bridge are cut off due to lacking instructions.
  • llukas11 - Thursday, December 28, 2017 - link

    > If they have a logistical problem with maintenance of two build targets that most likely has to do with their code base being plagued by non-portable outdated legacy garbage. Target platform is generally not a concern in development.

    If you do not test what you ship, it doesn't work, by definition. I don't think QA/testing is free.
  • ddriver - Thursday, December 28, 2017 - link

    It is done via unit tests. You run the same tests on every platform. The only additional cost is the time the additional tests take, which is not a lot, relatively speaking. I highly doubt that's the reason they are dropping 32-bit support; it is the development and maintenance costs that are the real burden.
  • ddriver - Thursday, December 28, 2017 - link

    "but in reality it is over 2 billion fold increase"

    but in reality it is over FOUR billion fold increase
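The corrected ratio, and the earlier ones, fall straight out of the powers of two: each doubling of the pointer width squares the size of the address space.

```python
# Growth factor at each historical doubling of the address width
print(2**16 // 2**8)   # 8 -> 16 bit:  256-fold
print(2**32 // 2**16)  # 16 -> 32 bit: 65,536-fold
print(2**64 // 2**32)  # 32 -> 64 bit: 4,294,967,296-fold, i.e. over four billion
```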
