Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

GTX 770 ends up being an interesting case study in all three factors, due to the fact that NVIDIA is pushing the GK104 GPU so hard. Though the old version of GPU Boost muddles comparisons with GTX 680 some, there's no denying that higher clockspeeds, coupled with the higher voltages needed to reach them, have a notable impact on power consumption. This makes it very hard for NVIDIA to stick to their efficiency curve, since raising voltages and clockspeeds offers diminishing returns relative to the increase in power consumption.

GeForce GTX 770 Voltages
GTX 770 Max Boost: 1.2v
GTX 680 Max Boost: 1.175v
GTX 770 Idle: 0.862v

As we can see, NVIDIA has pushed up their voltage from 1.175v on GTX 680 to 1.2v on GTX 770. This buys them the increased clockspeeds they need, but it will drive up power consumption. At the same time GPU Boost 2.0 helps to counter this some, as it will keep leakage from being overwhelming by keeping GPU temperatures at or below 80C.

GeForce GTX 770 Average Clockspeeds
Max Boost Clock      1136MHz
DiRT:S               1136MHz
Shogun 2             1136MHz
Hitman               1136MHz
Sleeping Dogs        1102MHz
Crysis               1136MHz
Far Cry 3            1136MHz
Battlefield 3        1136MHz
Civilization V       1136MHz
Bioshock Infinite    1128MHz
Crysis 3             1136MHz

Speaking of clockspeeds, we also took the average clockspeeds for GTX 770 in our games. In short, GTX 770 is almost always at its maximum boost bin of 1136MHz; the oversized Titan cooler keeps temperatures just below the thermal throttle point, and there's enough TDP headroom left that the card doesn't need to pull back to stay under its power limit. This is one of the reasons why GTX 770's performance advantage over GTX 680 is greater than the clockspeed increase alone would suggest.
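GPU Boost 2.0's behavior here can be approximated as a simple control loop: climb toward the maximum boost bin while below the temperature target and TDP limit, shed bins when over either. A minimal sketch, with the caveat that NVIDIA's actual algorithm is proprietary; the 13MHz bin step and clock values are Kepler's published figures, but the decision logic here is hypothetical:

```python
# Illustrative sketch of GPU Boost 2.0-style bin stepping.
# NVIDIA's real control loop is unpublished; this only models the
# observable behavior (80C temperature target, TDP cap, 13MHz bins).

BASE_CLOCK = 1046          # MHz, GTX 770 base clock
BIN_STEP = 13              # MHz per Kepler boost bin
MAX_BOOST = 1136           # MHz, GTX 770 max boost bin
TEMP_TARGET = 80.0         # deg C, GPU Boost 2.0 temperature target

def next_boost_clock(current_mhz, gpu_temp_c, tdp_utilization):
    """Pick the next clock bin: back off one bin when over the
    temperature target or TDP limit, otherwise climb toward max boost."""
    if gpu_temp_c > TEMP_TARGET or tdp_utilization >= 1.0:
        return max(BASE_CLOCK, current_mhz - BIN_STEP)   # shed one bin
    return min(MAX_BOOST, current_mhz + BIN_STEP)        # add one bin

# A cool card with TDP headroom climbs to its max boost bin and stays there...
clk = BASE_CLOCK
for _ in range(10):
    clk = next_boost_clock(clk, gpu_temp_c=75.0, tdp_utilization=0.9)
print(clk)  # 1136

# ...while a card sitting over 80C sheds bins instead.
print(next_boost_clock(1136, gpu_temp_c=82.0, tdp_utilization=0.9))  # 1123
```

This matches what the table above shows: with the Titan cooler holding the GPU below 80C and power below the TDP cap, nothing ever forces the card off its top bin except in the hottest workloads.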

We don’t normally publish this data, but GTX 770 has one other interesting attribute: its idle clockspeed is lower than that of other Kepler parts. GTX 680 and GTX 780 both idle at 324MHz, but GTX 770 idles at 135MHz. Even 324MHz has proven low enough to keep Kepler’s idle power in check in the past, so it’s not entirely clear just what NVIDIA is expecting here. We’re seeing 1W less at the wall, but by this point the rest of our testbed is drowning out the video card.

Moving on to BF3 power consumption, we can see the power cost of GTX 770’s performance. 374W at the wall is only 18W more than GTX 680, thanks in part to the fact that GTX 770 isn’t hitting its TDP limit here. At the same time compared to the outgoing GTX 670, this is a 44W difference. This makes it very clear that GTX 770 is not a drop-in replacement for GTX 670 as far as power and cooling go. On the other hand GTX 770 and GTX 570 are very close, even if GTX 770’s TDP is technically a bit higher than GTX 570’s.

Despite this runup, GTX 770 still stays a hair under 7970GE, even with the slightly higher CPU power consumption driven by GTX 770’s better performance in this benchmark. It’s only a 6W difference at the wall, but it showcases that NVIDIA didn’t have to completely blow their efficiency curve to get a GK104 card back up to 7970GE performance levels.

In our TDP constrained scenario we can see the gaps between our cards grow. 78W separates the GTX 770 from GTX 670, and even GTX 680 draws 41W less, almost exactly what we’d expect from their published TDPs. On the flip side of the coin 383W is still less than both 7970 cards, reflecting the fact that GTX 770 is geared for 230W while AMD’s best is geared for 250W.
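As a sanity check on those deltas, the published TDPs translate to at-the-wall differences reasonably well once PSU efficiency is factored in. A back-of-the-envelope sketch, assuming roughly 85% PSU efficiency at these loads (an assumption; the real figure varies with the specific PSU and load point):

```python
# Back-of-the-envelope: translating TDP deltas into expected wall-power
# deltas. The 85% PSU efficiency figure is an assumption for illustration.

TDP = {"GTX 770": 230, "GTX 680": 195, "GTX 670": 170}  # watts (published)
PSU_EFFICIENCY = 0.85

def expected_wall_delta(card_a, card_b, efficiency=PSU_EFFICIENCY):
    """Estimate the at-the-wall power difference between two cards
    both running at their full TDPs."""
    return (TDP[card_a] - TDP[card_b]) / efficiency

print(round(expected_wall_delta("GTX 770", "GTX 680")))  # 41
print(round(expected_wall_delta("GTX 770", "GTX 670")))  # 71
```

The 41W estimate lines up almost exactly with the measured GTX 680 gap, and the ~71W estimate is within a few watts of the 78W measured against GTX 670.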

This is also a reminder, however, that with a mid-generation product extra performance does not come for free. On the same process and the same architecture, performance increases require power increases. This won’t significantly change until we see 20nm cards next year.

Moving on to temperatures, these are going to be a walk in the park for the reference GTX 770 thanks to the Titan cooler. At idle we see it hit 31C, which is actually 1C warmer than GTX 780, but this really just comes down to uncontrollable variations in our tests.

As a GPU Boost 2.0 card, GTX 770’s temperatures will top out at 80C in games, and that’s exactly what happens here. Interestingly, GTX 770 is only just hitting 80C, as evidenced by our clockspeed data earlier; if it were running any hotter, it would have needed to drop to lower clockspeeds.

Of course it doesn’t hold a candle here to 7970GE, but that’s the difference between a blower and an open air cooler in action. The blower based 7970 is much closer, as we’d expect.

Under FurMark the temperature situation is largely the same. The GTX 770 comes up to 82C here (favoring TDP throttling over temperature throttling), but the relative rankings are consistent.

With Titan’s cooler in tow, idle noise looks very good on GTX 770.

Our noise results under Battlefield 3 are a big part of the reason we’ve been calling the Titan cooler oversized for GTX 770. When was the last time we saw a blower on a 230W card that only hit 46.7dB? The short answer is never. GTX 770’s fan simply doesn’t have to rev up very much to handle the lesser heat output. In fact it’s damn near competitive with the open air cooled 7970GE; there’s still a difference, but it’s under 2dB. More importantly, despite being a more powerful and more power-hungry card than the GTX 680, the GTX 770 is over 5dB quieter, and this is despite the fact that the GTX 680 is already a solid card on its own. Titan’s cooler is certainly expensive, but it gets results.
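For context on what those decibel gaps mean, the scale is logarithmic: a rough conversion from a dB difference to a ratio of acoustic power (ignoring perceptual weighting, which makes subjective loudness a different matter) looks like this:

```python
# Decibel differences to acoustic power ratios. This is the standard
# dB relation; perceived loudness follows a different (roughly 10dB
# per doubling) rule of thumb.

def sound_power_ratio(delta_db):
    """Convert a decibel difference to a ratio of acoustic power."""
    return 10 ** (delta_db / 10)

print(round(sound_power_ratio(5.0), 2))  # 3.16 -> 5dB is over 3x the power
print(round(sound_power_ratio(2.0), 2))  # 1.58 -> 2dB is closer to 1.6x
```

So the GTX 770's 5dB advantage over GTX 680 corresponds to roughly a third of the radiated acoustic power, while the sub-2dB gap to the 7970GE is a much smaller real-world difference.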

Of course this is why it’s all the more a shame that none of NVIDIA’s partners are releasing retail cards with this cooler. There are some blowers in the pipeline, so it will be interesting to see if they can maintain Titan’s performance while giving up the metal.

With FurMark pushing our GTX 770 to its full TDP, our noise results are still good, but not as impressive as they were under BF3. 50.3dB is still over a dB quieter than GTX 680, though obviously much closer than before. On the other hand the GTX 770 ever so slightly edges out the 7970GE and its open air cooler. Part of this comes down to the TDP difference of course, but beating an open air cooler like that is still quite the feat.

Wrapping things up here, it will be interesting to see where NVIDIA’s partners go with their custom designs. GTX 770, despite being a higher TDP part than both GTX 670 and GTX 680, ends up looking very impressive when it comes to noise, and it would be great to see NVIDIA’s partners match that. At the same time the increased power consumption and heat generation relative to the GeForce 600 series is unfortunate, but not unexpected. But for buyers coming from the GeForce 400 and GeForce 500 series, GTX 770 is in-line with what those previous generation cards were already pulling.


117 Comments


  • khanov - Friday, May 31, 2013 - link

    *sigh*

    You failed again.
    Reply
  • khanov - Friday, May 31, 2013 - link

    Sorry dude, that wasn't aimed at you. Anand your comments system has a mind of its own.
    If I reply to xyz I sort of expect my reply to be below xyz's comment and not inserted randomly in to the comments list.
    Reply
  • chizow - Thursday, May 30, 2013 - link

    Once again, a year late, but still a nice card. The updated cooler and higher memory clocks are impressive, but the max Boost clock was achievable on "FTW" type binned GTX 680s in the past.

    I guess this is Nvidia's "Gigahertz Edition", basically an overclocked SKU to bring parity in the performance midrange market.
    Reply
  • Homeles - Thursday, May 30, 2013 - link

    How in the world is this card a year late? Nvidia was still winning at this time, one year ago. Now they have not one, not two, but three single GPU cards that are at parity with or faster than the 7970 GE. Nvidia is in a far better position than they were with their GTX 500 series.
    Reply
  • chizow - Thursday, May 30, 2013 - link

    Full GK104 should've been GTX 670 and below from the outset, as Nvidia initially planned. That's why it's a year late, at this price point anyways.

    Also, AMD reached parity with Nvidia's GTX 680 last year with the 7970GE launch in June/July, which then distanced itself by 5-10% with the Never Settle Drivers in Sept/Oct last year.

    Now that the GTX 770 has launched and is ~10% faster than the 680, it again, reaches parity with the 7970GE.
    Reply
  • JPForums - Thursday, May 30, 2013 - link

    I thought the 104/114 series was historically reserved for the x60, while the 100/110 series was meant for the x70/x80 chips. Thus this new high-end GK104 model should have been a 760 Ti. GK110 should have maxed out at the 780 and the 770 should have been the pared down model. If they really had to have a Titan, it should have been a DPFP-uncapped 780 (so they got that almost right).
    Of course the prices should have been the usual high-end price points and not the massive price jumps they are currently pushing. Sure you can justify the price with the current performance relative to the previous generation, but if we always did that, the high-end cards would get perpetually more expensive as the performance of each new generation of cards would justify a price hike over the previous generation. In reality, these prices are the unfortunate result of a lack of competition.

    Of course not all companies handle lack of competition the same way. nVidia has shown that, when uncontested, they will jack introductory prices into the stratosphere (8800 Ultra - $800-1000, GTX 280 - $650, Titan/GTX 780 - $1000/$650). Under normal competitive conditions, the top single GPU card from either nVidia or AMD/ATi of each generation comes in at $500. In similarly uncontested situations AMD/ATi has proven to be much less abusive to their customers (7970 - $550, 5870 - $400). Granted, the relatively low price of the dual-GPU GTX 295 probably kept the 5870's price in check until the GTX 400 series launched, but at that point there was a significant difference in stability between single and dual GPU cards.

    Now I must mention, lest anyone gets the wrong idea, that AMD/ATi was probably only taking this route because marketshare/mindshare was more important to them than profit margins. Nonetheless, the facts remain.
    Reply
  • chizow - Thursday, May 30, 2013 - link

    I agree with virtually everything you said, although I never really had a problem with Nvidia jumping GK104 up a SKU to the x70 range. The performance was certainly there especially relative to last-gen performance and full GK104 also beat AMD's best offering at the time.

    The problem I had was Nvidia's decision to turn this 2nd tier ASIC into their flagship and subsequently, hold off on launching their true flagship ASIC a full year AND charge $1000 (and later, $650) for it.

    All events predicated on the fact AMD launched 7970 at flagship prices when it really didn't deserve the asking price. Tahiti launch set the stage for Nvidia to not only undercut AMD pricing but to beat them in performance as well with only their 2nd tier chip.
    Reply
  • JPForums - Thursday, May 30, 2013 - link

    True, the 7970 could definitely be considered overpriced when it launched, but it was the undisputed performance champ until nVidia finally launched the GTX680 to bring back competition. Though, this begs the question, was the 7970 really this underperforming, or was the GK104 simply larger and faster (relatively speaking) than midrange chips in the past. Given that the GK104 die size is smaller than the GTS250, GTX460, GTX555 die sizes, I'd say larger is out. That said, they removed a lot of compute resources to get the gaming performance they were targeting, so faster might hold some weight.

    The 7000 series' sudden proficiency in compute, combined with the equally sudden removal of compute focus in the GTX 600 series, meant the 7970 would need to be far larger to maintain equivalent performance. Given the fact that Tahiti XT (352mm²) was much closer in size to GK104 (294mm²) than to GK110 (561mm²), the 7970 should probably be considered a mid-weight. That is to say, I can conclude that Tahiti XT was underperforming (in games) AND GK104 was an overachiever. So the question becomes: are compute capabilities important enough to sacrifice gaming performance that a year ago likely would have clocked in closer to the GTX 780 (GTX 775 class?) for compute performance that in many cases exceeds Titan, but gaming performance roughly on par with a GTX 680?
    Reply
  • JlHADJOE - Friday, May 31, 2013 - link

    IMO AMD's initial, higher price on the 7970 was justified. People forget that it was a much bigger chip than the 6970, with a 384-bit bus instead of 256. Any 384-bit part is effectively big, IMO. Same size as the 580, and now the Titan and 780.

    The fault here IMO goes right back to AMD's marketing division. If they hadn't stupidly gone from 5870 to 6970, then people might have noticed that Tahiti was in fact a bigger part than its two immediate predecessors, and properly deserving of the 7900-series naming.
    Reply
  • EJS1980 - Thursday, May 30, 2013 - link

    Pretty much this /I\
    I
    I
    Reply
