Meet The GeForce GTX 980 Ti

Like the rest of NVIDIA’s high-end cards in this generation, the reference GeForce GTX 980 Ti is launching with NVIDIA’s standard metal cooler. This design has served NVIDIA well since the launch of the GTX Titan in 2013 and continues to be the blower design to beat at the high end, easily handling the 250W TDP of NVIDIA’s high-end cards without generating a ton of noise in the process.

As with so many other aspects of the GTX 980 Ti, the GTX 980 Ti’s cooler and build are a near-copy of the GTX Titan X’s. The only difference in the cooler is the paint job; GTX Titan X got a unique black paint job, while GTX 980 Ti gets the more standard bare aluminum finish with black lettering and a black-tinted polycarbonate window.

Otherwise there’s very little to be said about the GTX 980 Ti’s design that hasn’t been said before, so we’ll just recap what we said about the cooler design from our review of the GTX Titan X.

For GTX 980 Ti, NVIDIA has opted to leave well enough alone, having made virtually no changes to the shroud or cooling apparatus. And truth be told it’s hard to fault NVIDIA right now, as this design remains the gold (well, aluminum) standard for a blower. Looks aside, after years of blowers that rattled, or were too loud, or didn’t cool discrete components very well, NVIDIA is sitting on a very solid design that I’m not sure anyone could top (but I’d love to see them try).

In any case, our favorite metal shroud is back once again. Composed of a cast aluminum housing and held together using a combination of rivets and screws, it’s as physically solid a shroud as we’ve ever seen. The card measures 10.5” long overall, which at this point is NVIDIA’s standard size for high-end GTX cards.

Drilling down we have the card’s primary cooling apparatus, composed of a nickel-tipped wedge-shaped heatsink and ringed radial fan. The heatsink itself is attached to the GPU via a copper vapor chamber, something that has been exclusive to NVIDIA’s 250W cards and provides the best possible heat transfer between the GPU and heatsink. Meanwhile the rest of the card is covered with a black aluminum baseplate, providing basic heatsink functionality for the VRMs and other components while also protecting them.

Finally at the bottom of the stack we have the card itself, complete with the GM200 GPU, VRAM chips, and various discrete components. The GM200 PCB places the GPU and VRAM chips towards the front of the card, while the VRMs and other discrete components occupy the back. As with the GTX Titan X, GTX 980 Ti features NVIDIA’s reworked component placement to improve airflow to the discrete components and reduce temperatures, along with employing molded inductors.

NVIDIA once again employs a 6+2 phase VRM design, with 6 phases for the GPU and another 2 for the VRAM. This means that GTX 980 Ti has a bit of power delivery headroom – NVIDIA allows the power limit to be increased by 10% to 275W – but hardcore overclockers will find that there isn’t an extreme amount of additional headroom to play with. Based on our sample the actual shipping voltage at the max boost clock is a bit higher than GTX Titan X, coming in at 1.187v, so in non-TDP constrained scenarios there is some additional headroom through overvolting, up to 1.23v in the case of our sample.

In terms of overall design, unlike the GTX Titan X with its 24 VRAM chips, the GTX 980 Ti needs only 12 VRAM chips for its 6GB of VRAM, so all of the VRAM is located on the front of the card. Halving the VRAM capacity simplifies the card a bit – there are now no critical components on the back – and it brings down total VRAM power consumption slightly. However, despite this, NVIDIA has not brought back the backplate from the GTX 980, having removed it on the GTX Titan X due to the VRAM chips that card placed on the rear.

Moving on, in accordance with GTX 980 Ti’s 250W TDP and the reuse of the metal cooler, power delivery for the GTX 980 Ti is identical to its predecessors. This means a 6-pin and an 8-pin power connector at the top of the card, to provide up to 225W, with the final 75W coming from the PCIe slot.
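
As a quick sanity check on these figures, the arithmetic can be tallied up from the standard PCIe per-source power limits (a back-of-the-envelope sketch; the numbers below are the spec limits, not measurements from this card):

```python
# Back-of-the-envelope PCIe power budget for the GTX 980 Ti's
# connector configuration, using the spec's per-source limits.
PCIE_SLOT_W = 75     # PCIe x16 slot
SIX_PIN_W = 75       # 6-pin auxiliary connector
EIGHT_PIN_W = 150    # 8-pin auxiliary connector

aux_power = SIX_PIN_W + EIGHT_PIN_W       # 225W from the two plugs
total_budget = aux_power + PCIE_SLOT_W    # 300W in-spec ceiling

TDP_W = 250
max_power_limit = TDP_W * 1.10            # NVIDIA's +10% slider -> 275W

headroom = total_budget - max_power_limit # spec margin beyond the max limit

print(aux_power, total_budget, max_power_limit, headroom)
```

In other words, even with the power limit slider maxed at 275W, the card stays comfortably inside the 300W its connector configuration can deliver in-spec.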

Meanwhile display I/O follows the same configuration we’ve seen on the rest of the high-end GTX 900 series. This is 1x DL-DVI-I, 3x DisplayPort 1.2, and 1x HDMI 2.0, with a total limit of 4 displays. In the case of GTX 980 Ti the DVI port is somewhat antiquated at this point – the card is generally overpowered for the relatively low maximum resolutions of DL-DVI – but on the other hand the HDMI 2.0 port is actually going to be of some value here since it means GTX 980 Ti can drive a 4K TV. Meanwhile if you have money to spare and need to drive more than a single 4K display, GTX 980 Ti also features a pair of SLI connectors for even more power.
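
To see why HDMI 2.0 specifically matters for 4K TVs, a rough bandwidth estimate helps (my own sketch; blanking overhead is ignored, so real requirements run somewhat higher). 4K60 with 24-bit color fits within HDMI 2.0’s ~14.4 Gbps of effective TMDS throughput, but not HDMI 1.4’s ~8.16 Gbps, which is why older ports top out at 4K30:

```python
# Rough check: does 4K60 with 24-bit color fit in HDMI 1.4 vs. HDMI 2.0?
# Blanking intervals are ignored, so real requirements are somewhat higher.
def pixel_data_rate_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

uhd_4k60 = pixel_data_rate_gbps(3840, 2160, 60)  # ~11.94 Gbps

# Effective (post 8b/10b TMDS encoding) throughput of each HDMI revision
HDMI_1_4_GBPS = 10.2 * 0.8   # ~8.16 Gbps
HDMI_2_0_GBPS = 18.0 * 0.8   # ~14.4 Gbps

print(f"4K60 needs ~{uhd_4k60:.2f} Gbps")
print("Fits HDMI 1.4:", uhd_4k60 <= HDMI_1_4_GBPS)  # False
print("Fits HDMI 2.0:", uhd_4k60 <= HDMI_2_0_GBPS)  # True
```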

Finally, taking a look at the long term, I wanted to quickly hit upon the subject of the VRAM capacity difference between the GTX 980 Ti and the GTX Titan X. That 12GB of VRAM is essentially NVIDIA’s only remaining selling point for the GTX Titan X, and the Titan will remain their only 12GB card for some time to come. For NVIDIA this means that they can pitch the GTX Titan X as a more future-proof card than the GTX 980 Ti, as it would be hard-pressed to run out of VRAM.

The question for the moment then is whether 12GB is worth a premium at all, let alone the GTX Titan X’s $350 premium. The original GTX Titan, by comparison, was fortunate enough to come out with 6GB right before the current-generation consoles launched, and with them their 8GB memory configurations. This led to a rather sudden jump in VRAM requirements in games that the GTX Titan was well positioned to handle, whereas the GTX 780 Ti and its 3GB of VRAM can struggle in the very latest games at 4K resolutions. Much like 6GB in 2013, 12GB is overkill in 2015, while 6GB is a more practical amount for a 384-bit card at this time.

But to answer the question at hand, unlike the original GTX Titan, I suspect 12GB will remain overkill for a much longer period of time, especially without a significant technology bump like the consoles to drive up VRAM requirements. And consequently I don’t expect GTX 980 Ti to have any real issues with VRAM capacity in games over the next couple of years, making it better off than the GTX 780 Ti, relatively speaking.

290 Comments

  • Kosiostin - Monday, June 1, 2015 - link

    I beg to differ. 4K at monitor viewing distance is not overkill, it's actually quite pleasantly sharp. Phones, tablets and laptops are already pushing for 2K+ displays which is phenomenally sharp and out of the league for normal FHD monitors. Gaming at 4K is still not coming but when it comes it will blow our minds, I am sure.
  • Oxford Guy - Monday, June 1, 2015 - link

    People who care so much for immersion should be using 1440 with HDTV screen sizes, not sitting way up close with small monitors.

    Too bad HDTVs have so much input lag, though.
  • Kutark - Monday, June 1, 2015 - link

    Basically at a 5' viewing distance, you would have to have a 40" monitor before 4k would start to become noticeable.

    Even at 30" monitor you would have to be sitting roughly 3.5' or closer to your monitor to be able to begin to tell the difference.

    We also have to keep in mind we're talking about severely diminishing returns. 1440p is about perfect for normal seating distances with a computer on a 27" monitor. At 30" some arguments can be made for 4K, but it's a minor difference. It's not like we're going from 480p to 1080p or something; 1440p is still very good at "normal" computer seating distances.
  • mapesdhs - Wednesday, June 3, 2015 - link

    Human vision varies as to who can discern what at a particular distance. There's no fixed cutoffs for this. Personally, when wandering around a TV store back in January (without knowing what type of screen I was looking at), for visual clarity the only displays that looked properly impressive turned out to be 4Ks. However, they're still a bit too pricey atm for a good one, with the cheaper models employing too many compromises such as reduced chroma sampling to bring down the pricing, or much lower refresh rates, etc. (notice how stores use lots of static imagery to advertise their cheaper 4K TVs?)

    Btw, here’s a wonderful irony for you: recent research, mentioned in New Scientist, suggests that long exposure by gamers to high-refresh displays makes them more able to tell the difference between standard displays and high-refresh models, i.e. simply using a 144Hz monitor can make one less tolerant of standard 60Hz displays in the long term. :D It’s like a self-reinforcing quality tolerance level. Quite funny IMO. No surprise to me though, years working in VR & suchlike resulted in my being able to tell the difference in refresh rates much more than I was able to beforehand.

    Anyway, I'm leaving 4K until cheaper models are better quality, etc. In the meantime I bought a decent (but not high-end) 48" Samsung which works pretty well. Certainly looks good for Elite Dangerous running off a 980, and Crysis looks awesome.
  • Laststop311 - Monday, June 1, 2015 - link

    Why would most people be using DVI? DVI is big and clunky and just sucks. Everyone that gets new stuff nowadays uses DisplayPort; it has the easiest to use plug.
  • Crest - Sunday, May 31, 2015 - link

    Thank you for including the GTX580. I'm still living and working on a pair of 580's and it's nice to know where they stand in these new releases.
  • TocaHack - Monday, June 1, 2015 - link

    I upgraded from SLI'd 580s to a 980 at the start of April. Now I'm wishing I'd waited for the Ti! It wasn't meant to launch this soon! :-/
  • mapesdhs - Wednesday, June 3, 2015 - link

    Indeed, one of the few sites to include 580 numbers, though it's a shame it's missing in some of the graphs (people forget there are lots of 3GB 580s around now, I bought ten last month).

    If it's of any use, I've done a lot of 580 SLI vs. 980 (SLI) testing, PM for a link to the results. I tested with 832MHz 3GB 580s, though I sold the reference 783MHz 3GB models I was already using for a nice profit to a movie company (excellent cards for CUDA, two of them beat a Titan), reducing the initial 980 upgrade to a mere +150.

    Overall, a 980 easily beats 580 SLI, and often comes very close to 3-way 580 SLI. The heavier the load, the bigger the difference, eg. for Firestrike Ultra, one 980 was between 50% and 80% faster than two 3GB 580s. I also tested 2/3-way 980 SLI, so if you'd like the numbers, just PM me or Google "SGI Ian" to find my site, contact page and Yahoo email adr.

    I've been looking for a newer test. I gather GTA V has a built-in benchmark, so finally I may have found something suitable, need to look into that.

    Only one complaint about the review though, why no CUDA test??? I'd really like to know how the range of NV cards stacks up now, and whether AE yet supports MW CUDA V2. I've tested 980s with Arion and Blender, it came close to two 580s, but not quite. Would be great to see how the 980 Ti compares to the 980 for this. Still plenty of people using CUDA with pro apps, especially AE.

    Ian.
  • mapesdhs - Wednesday, June 3, 2015 - link

    Btw Crest, which model 580s are you using? I do have some 1.5GB 580s as well, but I've not really done much yet to expose where VRAM issues kick in, though it does show up in Unigine pretty well at 1440p.

    For reference, I do most testing with a 5GHz 2700K and a 4.8GHz 3930K, though I've also tested three 980s on a P55 with an i7 870 (currently the fastest P55 system on 3DMark for various tests).
  • Mikemk - Sunday, May 31, 2015 - link

    Since it has 2 SMM's disabled, does it have the memory issue of the 970? (Haven't read full article yet, sorry if answered in article)
