Meet The GeForce GTX 980 Ti

Like the rest of NVIDIA’s high-end cards in this generation, the reference GeForce GTX 980 Ti is launching with NVIDIA’s standard metal cooler. This design has served NVIDIA well since the launch of the GTX Titan in 2013 and continues to be the blower design to beat at the high end, easily handling the 250W TDP of NVIDIA’s high-end cards without generating a ton of noise in the process.

As with so many other aspects of the card, the GTX 980 Ti’s cooler and build are a near-copy of the GTX Titan X’s. The only difference in the cooler is the paint job; the GTX Titan X got a unique black paint job, while the GTX 980 Ti gets the more standard bare aluminum finish with black lettering and a black-tinted polycarbonate window.

Otherwise there’s very little to be said about the GTX 980 Ti’s design that hasn’t been said before, so we’ll just recap what we said about the cooler design from our review of the GTX Titan X.

For GTX 980 Ti, NVIDIA has opted to leave well enough alone, having made virtually no changes to the shroud or cooling apparatus. And truth be told it’s hard to fault NVIDIA right now, as this design remains the gold (well, aluminum) standard for a blower. Looks aside, after years of blowers that rattled, or were too loud, or didn’t cool discrete components very well, NVIDIA is sitting on a very solid design that I’m not sure anyone could top (but I’d love to see them try).

In any case, our favorite metal shroud is back once again. Composed of a cast aluminum housing and held together using a combination of rivets and screws, it’s as physically solid a shroud as we’ve ever seen. The card measures 10.5” long overall, which at this point is NVIDIA’s standard size for high-end GTX cards.

Drilling down we have the card’s primary cooling apparatus, composed of a nickel-tipped wedge-shaped heatsink and ringed radial fan. The heatsink itself is attached to the GPU via a copper vapor chamber, something that has been exclusive to NVIDIA’s 250W cards and provides the best possible heat transfer between the GPU and heatsink. Meanwhile the rest of the card is covered with a black aluminum baseplate, providing basic heatsink functionality for the VRMs and other components while also protecting them.

Finally at the bottom of the stack we have the card itself, complete with the GM200 GPU, VRAM chips, and various discrete components. The GM200 PCB places the GPU and VRAM chips towards the front of the card, while the VRMs and other discrete components occupy the back. As with the GTX Titan X, GTX 980 Ti features NVIDIA’s reworked component placement to improve airflow to the discrete components and reduce temperatures, along with employing molded inductors.

NVIDIA once again employs a 6+2 phase VRM design, with 6 phases for the GPU and another 2 for the VRAM. This means that the GTX 980 Ti has a bit of power delivery headroom – NVIDIA allows the power limit to be increased by 10%, to 275W – but hardcore overclockers will find that there isn’t an extreme amount of additional headroom to play with. Based on our sample, the actual shipping voltage at the max boost clock is a bit higher than on the GTX Titan X, coming in at 1.187v, so in non-TDP-constrained scenarios there is some additional headroom through overvolting, up to 1.23v in the case of our sample.
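To put those numbers in context, here’s a quick back-of-the-envelope sketch of the headroom involved. The figures are simply the TDP, power limit, and sample voltages quoted above; nothing here is measured beyond what we’ve already stated.

```python
# Back-of-the-envelope headroom math for the GTX 980 Ti, using the
# figures quoted above (250W TDP, +10% power limit, and the stock/max
# voltages observed on our sample).

base_tdp_w = 250.0
power_limit_increase = 0.10                  # NVIDIA's allowed +10%

max_power_w = base_tdp_w * (1 + power_limit_increase)
print(f"Maximum power limit: {max_power_w:.0f}W")                    # 275W

stock_voltage = 1.187                        # at max boost clock (our sample)
max_voltage = 1.23                           # max overvolt (our sample)
print(f"Overvolting headroom: {max_voltage - stock_voltage:.3f}v")   # 0.043v
```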

In terms of overall design, unlike the GTX Titan X and its 24 VRAM chips, the GTX 980 Ti only needs 12 VRAM chips to reach its 6GB of VRAM, so all of the VRAM is located at the front of the card. Halving the VRAM chip count simplifies the card a bit – there are now no critical components on the back – and it brings down total VRAM power consumption slightly. Despite this, however, NVIDIA has not brought back the backplate from the GTX 980, having removed it on the GTX Titan X due to the VRAM chips placed on the rear.
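For those keeping score, the chip count follows directly from GM200’s memory bus. A quick sanity check, assuming the standard 32-bit GDDR5 interface per chip and the 4Gb (512MB) densities in use here:

```python
# Sanity-checking the VRAM layout described above. GM200's 384-bit bus
# is split among 32-bit GDDR5 channels; GTX 980 Ti populates each
# channel with a single 4Gb (512MB) chip, while GTX Titan X runs two
# chips per channel (clamshell mode) for double the capacity.

bus_width_bits = 384
channel_width_bits = 32                      # per GDDR5 chip
chip_capacity_gb = 0.5                       # 4Gb = 512MB

channels = bus_width_bits // channel_width_bits      # 12 channels
gtx_980_ti_gb = channels * 1 * chip_capacity_gb      # 12 chips -> 6GB
gtx_titan_x_gb = channels * 2 * chip_capacity_gb     # 24 chips -> 12GB
print(gtx_980_ti_gb, gtx_titan_x_gb)                 # 6.0 12.0
```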

Moving on, in accordance with the GTX 980 Ti’s 250W TDP and the reuse of the metal cooler, power delivery for the GTX 980 Ti is identical to that of its predecessors. This means a 6-pin and an 8-pin power connector at the top of the card, providing up to 225W, with the final 75W coming from the PCIe slot.
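Summing up the sources per the PCIe specification limits, that configuration leaves a comfortable margin even over the raised power limit:

```python
# Total board power available to the reference GTX 980 Ti, using the
# PCIe specification limits for each power source.

sources_w = {"PCIe slot": 75, "6-pin connector": 75, "8-pin connector": 150}

available_w = sum(sources_w.values())        # 300W in total
tdp_w = 250                                  # stock power limit
raised_limit_w = 275                         # with the +10% power limit

print(f"Available: {available_w}W")
print(f"Margin over raised limit: {available_w - raised_limit_w}W")   # 25W
```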

Meanwhile display I/O follows the same configuration we’ve seen on the rest of the high-end GTX 900 series. This is 1x DL-DVI-I, 3x DisplayPort 1.2, and 1x HDMI 2.0, with a total limit of 4 displays. In the case of GTX 980 Ti the DVI port is somewhat antiquated at this point – the card is generally overpowered for the relatively low maximum resolutions of DL-DVI – but on the other hand the HDMI 2.0 port is actually going to be of some value here since it means GTX 980 Ti can drive a 4K TV. Meanwhile if you have money to spare and need to drive more than a single 4K display, GTX 980 Ti also features a pair of SLI connectors for even more power.
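As for why DL-DVI is the odd port out, a rough pixel-rate comparison tells the story. The math below ignores blanking overhead, but the gap is clear enough: DL-DVI’s ~330MHz maximum pixel clock is comfortable at 2560x1600@60Hz, while 4K@60Hz needs the kind of bandwidth only HDMI 2.0 and DisplayPort 1.2 provide.

```python
# Approximate raw pixel rates for common resolutions (blanking overhead
# ignored). DL-DVI tops out at a ~330MHz pixel clock; HDMI 2.0 allows a
# TMDS clock of up to 600MHz.

def pixel_rate_mhz(width, height, refresh_hz):
    return width * height * refresh_hz / 1e6

print(pixel_rate_mhz(2560, 1600, 60))   # ~246 MHz: fine for DL-DVI
print(pixel_rate_mhz(3840, 2160, 60))   # ~498 MHz: HDMI 2.0 / DP 1.2 territory
```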

Finally, taking a look at the long term, I wanted to quickly hit upon the subject of the VRAM capacity difference between the GTX 980 Ti and the GTX Titan X. The extra 6GB is essentially NVIDIA’s only remaining selling point for the GTX Titan X, and the Titan will remain their only 12GB card for some time to come. For NVIDIA this means that they can pitch the GTX Titan X as a more future-proof card than the GTX 980 Ti, as it would be hard-pressed to run out of VRAM.

The question for the moment then is whether 12GB is worth any premium, let alone the GTX Titan X’s $350 premium. The original GTX Titan, by comparison, was fortunate enough to come out with 6GB right before the current-generation consoles launched, and with them their 8GB memory configurations. This led to a rather sudden jump in VRAM requirements in games that the GTX Titan was well positioned to handle, whereas the GTX 780 Ti and its 3GB of VRAM can struggle in the very latest games at 4K resolutions. Much like 6GB in 2013, 12GB is overkill in 2015, while 6GB is the more practical amount for a 384-bit card at this time.

But to answer the question at hand, unlike the original GTX Titan, I suspect 12GB will remain overkill for a much longer period of time, especially without a significant technology bump like the consoles to drive up VRAM requirements. And consequently I don’t expect GTX 980 Ti to have any real issues with VRAM capacity in games over the next couple of years, making it better off than the GTX 780 Ti, relatively speaking.

Comments

  • chizow - Monday, June 1, 2015 - link

    Yes, it's unprecedented to launch a full stack of rebrands with just 1 new ASIC, as AMD has done not once, not twice, not even three times, but four times with GCN (7000 to Boost/GE, 8000 OEM, R9 200, and now R9 300). Generally it is only the low-end, or a gap product to fill a niche. The G92/b isn't even close to this, as it was rebranded numerous times over a short 9 month span (Nov 2007 to July 2008), while we are bracing ourselves for AMD rebrands going back to 2011 and Pitcairn.
  • Gigaplex - Monday, June 1, 2015 - link

    If it's the 4th time as you claim, then by definition, it's most definitely not unprecedented.
  • chizow - Monday, June 1, 2015 - link

    The first 3 rebrands were still technically within that same product cycle/generation. This rebrand certainly isn't, so rebranding an entire stack with last-gen parts is certainly unprecedented. At least, relative to Nvidia's full next-gen product stack. Hard to say though given AMD just calls everything GCN 1.x, like inbred siblings they have some similarities, but certainly aren't the same "family" of chips.
  • Refuge - Monday, June 1, 2015 - link

    Thanks Gigaplex, you beat me to it... lol
  • chizow - Monday, June 1, 2015 - link

    Cool maybe you can beat each other and show us the precedent where a GPU maker went to market with a full stack of rebrands against the competition's next generation line-up. :)
  • FlushedBubblyJock - Wednesday, June 10, 2015 - link

    Nothing like total fanboy denial
  • Kevin G - Monday, June 1, 2015 - link

    The G92 got its last rebrand in 2009 and was formally replaced in 2010 by the GTX 460. It had a full three year life span on the market.

    The GTS/GTX 200 series was mostly rebranded. There was the GT200 chip on the high end that was used for the GTX 260 and up. The low end silently got the GT218 for the GeForce 210 a year after the GTX 260/280 launch. At this time, AMD was busy launching the Radeon 4000 series, which brought a range of new chips to market as a new generation.

    Pitcairn came out in 2012, not 2011. This would mimic the life span of the G92 as well as the number of rebrands. (It never had a vanilla edition; it started with the GHz edition as the 7870.)
  • chizow - Monday, June 1, 2015 - link

    @Kevin G, nice try at revisionist history, but that's not quite how it went down. G92 was rebranded numerous times over the course of a year or so, but it did actually get a refresh from 65nm to 55nm. Indeed, G92 was even more advanced than the newer GT200 in some ways, with more advanced hardware encoding/decoding that was on-die, rather than on a complementary ASIC like G80/GT200.

    Also, prices were much more compressed at the time due to the economic recession, so the high end was really just a glorified performance mid-range thanks to the price wars started by the 4870 and the economics of the time.

    Nvidia found it was easier to simply manipulate the cores on their big chip than to come out with a number of different ASICs, which is how we ended up with GTX 260 core 192, core 216 and the GTX 275:

    Low End: GT205, 210, GT 220, GT 230
    Mid-range: GT 240, GTS 250
    High-end: GTX 260, GTX 275
    Enthusiast: GTX 280, GTX 285, GTX 295

    The only rebranded chip in that entire stack is the G92, so again, certainly not the precedent for AMD's entire stack of Rebrandeon chips.
  • Kevin G - Wednesday, June 3, 2015 - link

    @chizow
    Out of that list of GTS/GTX 200 series cards, the only new chips were the GT200 in 2008 and the GT218, introduced over a year later in late 2009. For its first 9 months on the market, the three chips used in the 200 series were rebrands of the G94, rebrands of the G92, and the new GT200. The ultra low end at this time was filled in by cards still carrying the 9000 series branding.

    The G92 did have a very long life, as it was introduced as the 8800 GTS 512MB in late 2007. In 2008 it was rebranded the 9800 GTX, roughly six months after it was first introduced. A year later in 2009, the G92 got a die shrink and was rebranded as both the GTS 150 for OEMs and the GTS 250 for consumers.

    So yeah, AMD's R9 300 series launch really does mimic what nVidia did with the GTS/GTX 200 series.
  • FlushedBubblyJock - Wednesday, June 10, 2015 - link

    G80 was not G92, nor G92b, nor G94, Mr. Kevin G.
