Meet the GeForce GTX 980

For the physical design of the reference GeForce GTX 980, NVIDIA is clearly iterating on previous designs rather than starting from scratch. That said, the idiom of “if it ain’t broke, don’t fix it” has served NVIDIA well over the last year and a half, ever since the launch of the GTX Titan and its high-end cooler. The GTX Titan’s cooler set a new bar in build quality and performance for a blower design that is to this day unmatched, and for that reason NVIDIA has reused the design for the GTX 780, GTX 780 Ti, GTX Titan Black, and now the GTX 980. What this means for the GTX 980 is that its design comes from a very high pedigree, one that we believe will serve it well.

At a high level, GTX 980 recycles the basic cooler design and aesthetics of GTX 780 Ti and GTX Titan Black. This means we’re looking at a high-performance blower design that is intended to offer the full heat-exhaustion benefits of a blower, but without the usual tradeoff in acoustics. The shroud of the card is composed of a cast aluminum housing held together using a combination of rivets and screws. NVIDIA has also kept the black accenting first introduced by its predecessors, giving the card distinct black lettering and a black tinted polycarbonate window. The card measures 10.5” long overall, which again is the same length as the past high-end GTX cards.

Cracking open the card and removing the shroud exposes the card’s fan and heatsink assembly. Once again NVIDIA is lining the entire card with an aluminum baseplate, which provides heatsinking capabilities for the VRMs and other discrete components below it, along with providing additional protection for the board. The primary GPU heatsink is fundamentally the same as before, retaining the same wedged shape and angled fins.

However, in one of the few major deviations from the earlier GTX Titan cooler, at the base NVIDIA has dropped the vapor chamber in favor of a simpler (and admittedly less effective) design that uses a trio of heatpipes to transfer heat from the GPU to the heatsink. In the case of the GTX Titan and other GK110 cards a vapor chamber was deemed necessary due to the GPU’s 250W TDP, but with GM204’s much lower 165W TDP, the added performance of a vapor chamber should not be necessary. We would of course still like to see one here, but we can’t fault NVIDIA for going without it on such a low TDP part.

Drilling down to the PCB, we find a PCB design not all that far removed from NVIDIA’s GK110 PCBs. At the heart of the card is of course the new GM204 GPU, which although some 150mm² smaller than GK110, is still a hefty GPU in its own right. Paired with this GPU are the eight 4Gb 7GHz Samsung GDDR5 modules that surround it, composing the 4GB of VRAM and the 256-bit memory bus that GM204 interfaces with.
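For readers who want to check the math, a quick back-of-the-envelope sketch using only the figures quoted above (eight 4Gb modules, a 256-bit bus, and a 7GHz effective GDDR5 data rate):

```python
# VRAM capacity and peak memory bandwidth from the figures in the article.
modules = 8
density_gbit = 4            # 4Gb per GDDR5 module
bus_width_bits = 256
effective_rate_gbps = 7.0   # 7GHz effective data rate per pin

capacity_gb = modules * density_gbit / 8                 # gigabits -> gigabytes
bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8  # bits -> bytes per second

print(capacity_gb)    # 4.0 GB of VRAM
print(bandwidth_gb_s) # 224.0 GB/s of peak memory bandwidth
```

That 224GB/s figure is the theoretical peak; real-world throughput will of course be lower.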

Towards the far side of the PCB we find the card’s power delivery circuitry, which is a 4+1 phase design. Here NVIDIA is using 4 power phases for the GPU itself, and another phase for the GDDR5. Like the 5+1 phase design on GK110 cards, this configuration is more than enough for stock operation and mild overclocking; however, hardcore overclockers will probably end up gravitating towards custom designs with more heavily overbuilt power delivery systems. It is interesting to note that NVIDIA’s design has open pads for 2 more power phases, meaning there is some headroom left in the PCB design. Meanwhile, feeding the power delivery system is a pair of 6-pin PCIe sockets, giving the card a combined power delivery ceiling of 225W, which is still well above the maximum TDP NVIDIA allows for this card.
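The 225W ceiling comes straight out of the PCIe specification’s per-source limits, which a minimal sketch makes explicit:

```python
# Power delivery ceiling for a card with two 6-pin PCIe connectors,
# per the PCIe specification's per-source limits.
slot_w = 75      # PCIe x16 slot provides up to 75W
six_pin_w = 75   # each 6-pin connector provides up to 75W
connectors = 2

ceiling_w = slot_w + connectors * six_pin_w
print(ceiling_w)  # 225 W, comfortably above the card's 165W TDP
```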

What may be the most interesting – or at least most novel – aspect of GTX 980 isn’t even found on the front side of the card, but rather it’s what’s found on the back side. Back after a long absence is a backplate for the card, which runs the entire length of the card and completely covers the back side of the PCB, leaving no element exposed except for the SLI connectors above it and the PCIe connector below it.

Generally speaking backplates are nice to have on video cards. Though they don’t provide any kind of meaningful mechanical/thermal benefit, they do serve to protect a card by reducing how much of the relatively vulnerable PCB is exposed, and similarly protect the user by keeping them from getting jabbed by the soldered tips of discrete components. However backplates typically come with one very big drawback, which is that the 2mm or so of space they occupy is not really their space, and encroaches on anything above it. For a single video card this is not a concern, but when pairing up video cards for SLI, if the cards are directly next to each other this extra 2mm makes all the difference in the world for cooling, blocking valuable space for airflow and otherwise suffocating the card unlucky enough to get blocked.

In more recent years motherboard manufacturers have done a better job of designing their boards by avoiding placing the best PCIe x16 slots next to each other, but there are still cases where these cards must be packed tightly together, such as in micro-ATX cases and when utilizing tri/quad-SLI. As a result of this clash between the benefits and drawbacks of the backplate, for GTX 980 NVIDIA has engineered a solution that allows them to include a backplate and simultaneously not impede the airflow of closely packed cards, and that is a partially removable backplate.

For GTX 980 a segment of the backplate towards the top back corner is detachable from the rest, and removing it exposes the PCB underneath. Based on studying the airflow of video cards with and without a backplate, NVIDIA tells us that they have been able to identify what portion of the backplate is responsible for impeding most of the airflow in an SLI configuration, and that they in turn have made this segment removable so as to be able to offer the full benefits of a backplate while also mitigating the airflow problems. Interestingly this segment is actually quite small – it’s only 34mm tall – making it much shorter than the radial fan on the front of the card, but NVIDIA tells us that this is all that needs to be removed to let a blocked card breathe. In our follow-up to the GTX 980 next week we will be looking at SLI performance, and this will include measuring the cooling impact of the removable backplate segment.

Moving on, beginning with GTX 980 NVIDIA’s standard I/O configuration has dramatically changed, and so for that matter has the design of their I/O shield. First introduced on the GTX Titan Z and now present on the GTX 980, the redesigned bracket maximizes the amount of airflow available through the I/O shield by replacing standard rectangular vents with triangular vents across the whole card. The result is that pretty much every square centimeter of the card not occupied by an I/O port has venting through it, leaving very little of the card actually blocked by the I/O shield.

Meanwhile starting with GTX 980, NVIDIA is introducing their new standard I/O configuration. NVIDIA has finally dropped the second DL-DVI port, and in its place they have installed a pair of full size DisplayPorts. This brings the total I/O configuration up to 1x DL-DVI-I, 3x DisplayPort 1.2, and 1x HDMI 2.0. The inclusion of more DisplayPorts has been a long time coming and I’m glad to see that NVIDIA has finally gone this route. DisplayPort offers more functionality than any other type of port and can easily be converted to HDMI or SL-DVI as necessary. More importantly for NVIDIA, with 3 DisplayPorts NVIDIA can now drive 3 G-Sync monitors off of a single card, making G-Sync Surround viable for the first time.

Speaking of I/O, we’ll briefly note that NVIDIA’s SLI connectors are still present, with the pair of connectors allowing up to quad-SLI. However we’d also note that this also means that for anyone hoping that NVIDIA would have an all-PCIe multi-GPU solution analogous to AMD’s XDMA engine, Maxwell 2 will not be such a product. Physical bridges are still necessary for SLI, with NVIDIA splitting up the workload over SLI and PCIe in the case of very high resolutions such as 4K.

Wrapping up our look at the physical build quality of the GTX 980, NVIDIA has done a good job iterating on what was already an excellent design with the GTX Titan and its cooler. The backplate, though not a remarkable difference, does give the card that last bit of elegance that GTX Titan and its GK110 siblings never had, as the card is now clad in metal from top to bottom. As silly as it sounds, other than the PCIe connector the GTX 980 may as well be a complete consumer electronic product of its own, as it’s certainly built like one.

Finally, along with the hardware we also want to quickly summarize the GPU Boost 2.0 limits NVIDIA has chosen for the GTX 980, to better illustrate what the card is capable of. Like the other high-end NVIDIA cards before it, NVIDIA has opted to set the GTX 980’s temperature target at 80C, with a maximum target of 91C and an absolute thermal threshold of 95C. Meanwhile the card’s 165W TDP limit can be increased by as much as 25% to 206W, or 41W over its reference limit.
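The power limit arithmetic works out as a simple percentage over the base TDP:

```python
# GPU Boost 2.0 power limit math for the GTX 980, from the figures above.
tdp_w = 165
max_power_target = 1.25  # the power limit slider allows up to +25%

max_board_power = tdp_w * max_power_target
print(round(max_board_power))           # 206 W maximum board power
print(round(max_board_power) - tdp_w)   # 41 W of headroom over the reference limit
```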

It’s interesting to note that despite the fact that the GTX Titan cooler was designed for a 250W card, GTX 980 will still see some temperature throttling under heavy, sustained loads. NVIDIA seems to have invested most of their cooling gains into acoustics, which has produced a card with amazing acoustic performance given the combination of the blower and the chart-topping performance, but it has also produced a card that is still going to throttle from time to time.

  • garadante - Sunday, September 21, 2014 - link

    What might be interesting is doing a comparison of video cards for a specific framerate target to (ideally, perhaps it wouldn't actually work like this?) standardize the CPU usage and thus CPU power usage across greatly differing cards. And then measure the power consumed by each card. In this way, couldn't you get a better example of
  • garadante - Sunday, September 21, 2014 - link

    Whoops, hit tab twice and it somehow posted my comment. Continued:

    couldn't you get a better example of the power efficiency for a particular card and then meaningful comparisons between different cards? I see lots of people mentioning how the 980 seems to be drawing far more watts than its rated TDP (and I'd really like someone credible to come in and state how heat dissipated and energy consumed are related. I swear they're the exact same number as any energy consumed by transistors would, after everything, be released as heat, but many people disagree here in the comments and I'd like a final say). Nvidia can slap whatever TDP they want on it and it can be justified by some marketing mumbo jumbo. Intel uses their SDPs, Nvidia using a 165 watt TDP seems highly suspect. And please, please use a nonreference 290X in your reviews, at least for a comparison standpoint. Hasn't it been proven that having cooling that isn't garbage and runs the GPU closer to high 60s/low 70s can lower power consumption (due to leakage?) something on the order of 20+ watts with the 290X? Yes there's justification in using reference products but let's face it, the only people who buy reference 290s/290Xs were either launch buyers or people who don't know better (there's the blower argument but really, better case exhaust fans and nonreference cooling destroys that argument).

    So basically I want to see real, meaningful comparisons of efficiencies for different cards at some specific framerate target to standardize CPU usage. Perhaps even monitoring CPU usage over the course of the test and reporting average, minimum, peak usage? Even using monitoring software to measure CPU power consumption in watts (as I'm fairly sure there are reasonably accurate ways of doing this already, as I know CoreTemp reports it as it's probably just voltage*amperage, but correct me if I'm wrong) and reporting again average, minimum, peak usage would be handy. It would be nice to see if Maxwell is really twice as energy efficient as GCN1.1 or if it's actually much closer. If it's much closer all these naysayers prophesying AMD's doom are in for a rude awakening. I wouldn't put it past Nvidia to use marketing language to portray artificially low TDPs.
  • silverblue - Sunday, September 21, 2014 - link

    Apparently, compute tasks push the power usage way up; stick with gaming and it shouldn't.
  • fm123 - Friday, September 26, 2014 - link

    Don't confuse TDP with power consumption, they are not the same thing. TDP is for designing the thermal solution to maintain the chip temperature. If there is more headroom in the chip temperature, then the system can operate faster, consuming more power.

    "Intel defines TDP as follows: The upper point of the thermal profile consists of the Thermal Design Power (TDP) and the associated Tcase value. Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE"

    https://www.google.com/url?sa=t&source=web&...
  • NeatOman - Sunday, September 21, 2014 - link

    I just realized that the GTX 980 has a TDP of 165 watts, my Corsair CX430 PSU is almost overkill!, that's nuts. That's even enough room to give the whole system a very good stable overclock. Right now I have a pair of HD 7850's @ stock speed and an FX-8320 @ 4.5GHz, good thing the Corsair puts out over 430 watts perfectly clean :)
  • Nfarce - Sunday, September 21, 2014 - link

    While a good power supply, you are leaving yourself little headroom with 430W. I'm surprised you are getting away with it with two 7850s and not experiencing system crashes.
  • ET - Sunday, September 21, 2014 - link

    The 980 is an impressive feat of engineering. Fewer transistors, fewer compute units, less power and better performance... NVIDIA has done a good job here. I hope that AMD has some good improvements of its own under its sleeve.
  • garadante - Sunday, September 21, 2014 - link

    One thing to remember is they probably save a -ton- of die area/transistors by giving it only what, 1/32 double precision rate? I wonder how competitive in terms of transistors/area an AMD GPU would be if they gutted double precision compute and went for a narrower, faster memory controller.
  • Farwalker2u - Sunday, September 21, 2014 - link

    I am looking forward to your review of the GTX 970 once you have a compatible sample in hand.
    I would like to see the results of the Folding @Home benchmarks. It seems that this site is the only one that consistently use that benchmark in its reviews.

    As a "Folder" I'd like to see any indication that the GTX 970, at a cost of $330 and drawing less watts than a GTX 780; may out produce both the 780 ($420 - $470) and the 780Ti ($600). I will be studying the Folding @ Home: Explicit, Single Precision chart which contains the test results of the GTX 970.
  • Wolfpup - Monday, September 22, 2014 - link

    Wow, this is impressive stuff. 10% more performance from 2/3 the power? That'll be great for desktops, but of course even better for notebooks. Very impressed they could pull off that kind of leap on the same process!

    They've already managed to significantly bump up the top end mobile part from GTX 680 -> 880, but within a year or so I bet they can go quite a bit higher still.

    Oh well, it was nice having a top of the line mobile GPU for a while LOL

    If 28nm hit in 2012 though, doesn't that make 2015 its third year? At least 28nm seems to be a really good process, vs all the issues with 90/65nm, etc., since we're stuck on it so long.

    Isn't this Moore's Law hitting the constraints of physical reality though? We're taking longer and longer to get to progressively smaller shrinks in die size, it seems like...

    Oh well, 22nm's been great with Intel and 28's been great with everyone else!
