All good things must come to an end, and for NVIDIA’s Direct3D 10 generation GPUs that end is just about here. NVIDIA has posted a document to their support website this week announcing their plans to drop driver support for their D3D10 GPUs, and how future support for those products will work.

With the forthcoming Release 340 driver set NVIDIA will be moving their D3D10 GPUs to legacy status, which will make R340 the final driver branch to support these products. The branch after R340, R343, will drop support for D3D10 GPUs, leaving Fermi, Kepler, and the new Maxwell as the only GPU families supported in newer driver releases.

As for R340 and the D3D10 GPUs themselves, while the move to legacy status means these GPUs will no longer receive performance optimizations or new driver features, NVIDIA does leave the door open to additional bug fixes in later R340 releases. Officially R340 will support these GPUs until April 1, 2016, so NVIDIA is planning on continuing to support these products in some fashion until then. It’s worth noting, however, that legacy products aren’t on a planned driver update schedule the way current generation products are, so bug fix drivers are issued on an as-needed basis. For comparison, NVIDIA’s D3D9 GPUs, which went legacy back in October of 2012, most recently received a driver update in February of 2013.

Meanwhile it’s worth pointing out that this move to legacy status applies to all of NVIDIA’s product lines using these GPUs: GeForce, Quadro, and Tesla. So along with GeForce cards ranging from the GeForce 8800 GTX to the GeForce 405, the move to legacy will also impact NVIDIA’s first-generation Tesla 1000 compute parts, the Quadro FX series, and more. The full list of affected products can be found in NVIDIA’s announcement.


Farewell 8800GT

On a tangential note, compared to NVIDIA’s legacy D3D9 products it looks like NVIDIA’s D3D10 products will have received roughly the same length of mainstream support. With the GeForce 8800 GTX turning 8 this year and the GeForce 405 turning 4, NVIDIA’s D3D10 products have been supported for between 4 and 8 years. This is about a year less than D3D9 on the low end and roughly the same length of time on the high end. Unsurprisingly, the products from earlier in the generation have received the longest period of support, since NVIDIA retires products whole generations at a time.

Wrapping things up, today’s retirement announcement means that D3D10 GPUs as a class are just about at the end of their lives. With AMD having moved their D3D10 GPUs to legacy status back in 2012 and now NVIDIA in 2014, both of the major dGPU vendors have put their pre-D3D11 products out to pasture. This leaves Intel as the only vendor with D3D10 parts still receiving mainstream support, as Intel did not gain support for D3D11 until Ivy Bridge in 2012.

As such it will be interesting to see how forthcoming game development is impacted by this. With D3D10 GPUs now existing as legacy products, and with the current-generation game consoles being D3D11 based, will developers as a whole continue to support D3D10 GPUs after 2014? We will have to see what the future brings.
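
For those curious how that support decision plays out in practice, D3D11-era games typically probe the Direct3D feature level at startup: D3D10-class GPUs such as the GeForce 8800 GTX top out at feature level 10_0 (10_1 for the later parts), while Fermi and newer report 11_0 or above. The snippet below is a minimal, purely illustrative sketch of such a check using the standard D3D11CreateDevice call; it is not taken from NVIDIA’s announcement.

    // Minimal illustrative sketch (not from the article): detect whether the
    // installed GPU only reaches Direct3D feature level 10_x, i.e. whether a
    // legacy D3D10-class render path would be needed.
    #include <windows.h>
    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main()
    {
        // Request feature levels from best to worst; the runtime returns the
        // highest level the hardware and driver actually support.
        const D3D_FEATURE_LEVEL requested[] = {
            D3D_FEATURE_LEVEL_11_0,
            D3D_FEATURE_LEVEL_10_1,
            D3D_FEATURE_LEVEL_10_0,
        };

        ID3D11Device*        device   = nullptr;
        ID3D11DeviceContext* context  = nullptr;
        D3D_FEATURE_LEVEL    achieved = D3D_FEATURE_LEVEL_10_0;

        HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            requested, ARRAYSIZE(requested), D3D11_SDK_VERSION,
            &device, &achieved, &context);

        if (FAILED(hr)) {
            std::printf("No D3D10-class or better hardware device found.\n");
            return 1;
        }

        if (achieved < D3D_FEATURE_LEVEL_11_0) {
            // GeForce 8/9/200/300-era parts land here.
            std::printf("Feature level 10_x only; a legacy render path is required.\n");
        } else {
            std::printf("Feature level 11_0 or better is available.\n");
        }

        context->Release();
        device->Release();
        return 0;
    }

Keeping that 10_x fallback path alive is precisely the extra work developers will have to weigh once these GPUs stop receiving driver fixes.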

Source: NVIDIA (via SH SOTN)

Comments

  • kyuu - Wednesday, March 12, 2014 - link

    Uh, why would AMD be supporting an Intel iGPU...?
  • Novaguy - Wednesday, March 12, 2014 - link

    He's clearly talking about the AMD HD 4000 series. I had the HD 4650, for example.
  • tipoo - Wednesday, March 12, 2014 - link

    ಠ_ಠ

    The HD 4000 series by AMD... Come on now, I was talking about it with AMD as the context. And the HD Graphics 4000 by Intel is a single GPU, not a series.
  • tipoo - Wednesday, March 12, 2014 - link

    Actually I guess that's a series too, but HD 4000 = Radeon, HD Graphics 4000 = Intel.
  • Kevin G - Wednesday, March 12, 2014 - link

    This is kinda weird on the AMD side as not much changed between the HD4000 and HD5000 series in terms of architecture. The main thing that the HD5000 series added was Eyefinity.
  • blzd - Wednesday, March 12, 2014 - link

    More changed from the 4000 to the 5000 series than from the 5000 to the 6000 series. Maybe you have those mixed up.

    5000 series brought us twice the performance at the same cost almost across the board.
  • Kevin G - Wednesday, March 12, 2014 - link

    The transition from the Radeon 4000 to the 5000 series coincided with a die shrink from 55 nm to 40 nm. This allowed AMD to increase the number of functional units on a die, but the actual design of those units was the same. Case in point, the Radeon 4870 and 5770 differ in core specs mainly in memory bus width (256-bit vs 128-bit) and clock speeds. Sure, thanks to the shrink AMD was able to offer more performance for the same price, but the underlying architecture was the same. This is what makes AMD dropping Radeon 4000 support from their drivers odd.

    Things were different with the 6000 series. AMD couldn't do a die shrink, so they had to increase die size as well as the efficiency of their designs. The Radeon 6900 line was indeed a big change, as those cards used a VLIW4 design instead of VLIW5. The Radeon 6800 and 6900 lines also got enhanced TMUs and ROPs for more throughput. Though the lower end of the 6000 series had some rebrands: the Radeon 6770 was the same as the 5770, for example.
  • lmcd - Wednesday, March 12, 2014 - link

    I think there was a UVD generation jump, and obviously the tessellation engine changed. Wasn't Eyefinity introduced with the AMD HD 5k series?

    I think the combination of these changes affects the front-end a lot more than the VLIW5->4 shift. So yes, I agree in premise but not on the driver maintenance point.
  • Kevin G - Thursday, March 13, 2014 - link

    I think UVD may have changed, and I did mention Eyefinity above. The changes to UVD would take some rewriting but are ultimately a minor part of the drivers. The tessellation engine was upgraded but wasn't entirely new either (DX10.1 had it as an optional feature).

    The VLIW5 to VLIW4 change did require a fair amount of compiler tuning. There were a few edge cases where the Radeon 6970 at launch would lose in compute to a Radeon 5870 due to unoptimized compilers.
  • MrSpadge - Wednesday, March 12, 2014 - link

    "This leaves Intel as the only vendor with D3D10 parts still receiving mainstream support"

    It may be Intel's mainstream support treatment, but it's not really any better than the legacy support from the green and red teams.
