Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

GTX 780 comes into this phase of our testing with a very distinct advantage. Being based on an already exceptionally solid card in GTX Titan, it's guaranteed to do at least as well as Titan here. At the same time, because its practical power consumption will be a bit lower due to the fewer enabled SMXes and fewer RAM chips, it effectively pairs Titan's cooler with a lower TDP, which can be a silent (but deadly) combination.

GeForce GTX 780 Voltages
GTX 780 Max Boost: 1.1625v
GTX 780 Base:      1.025v
GTX 780 Idle:      0.875v

Unsurprisingly, voltages are unchanged from Titan. GK110’s max safe load voltage is 1.1625v, with 1.2v being the maximum overvoltage allowed by NVIDIA. Meanwhile idle remains at 0.875v, and as we’ll see idle power consumption is equal too.

Meanwhile we also took the liberty of capturing the average clockspeeds of the GTX 780 in all of the games in our benchmark suite. In short, although the GTX 780 has a higher base clock than Titan (863MHz versus 837MHz), the fact that it only goes one boost bin higher (1006MHz versus 993MHz) means that the GTX 780 doesn't usually clock much higher than GTX Titan under load; for one reason or another it typically settles at the same boost bin as the GTX Titan on tests that offer consistent workloads. This means that in practice the GTX 780 is closer to a straight-up harvested GTX Titan, with no practical clockspeed differences.

GeForce GTX 780 Average Clockspeeds
                  GTX 780    GTX Titan
Max Boost Clock   1006MHz    992MHz
Shogun 2
Sleeping Dogs
Far Cry 3
Battlefield 3
Civilization V
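The one-bin gap between the two cards follows from GPU Boost's discrete clock bins. As a rough sketch, assuming a 13MHz bin size (which is what the 1006MHz vs 993MHz max boost figures above imply, but treat it as an assumption):

```python
# Illustrative sketch: enumerate GPU Boost bins, assuming a 13MHz bin size.
BIN_MHZ = 13

def boost_bins(base_mhz, max_boost_mhz, bin_mhz=BIN_MHZ):
    """List every boost bin from the base clock up to the max boost bin."""
    bins = []
    clock = base_mhz
    while clock <= max_boost_mhz:
        bins.append(clock)
        clock += bin_mhz
    return bins

gtx_780 = boost_bins(863, 1006)   # base 863MHz, max boost 1006MHz
gtx_titan = boost_bins(837, 993)  # base 837MHz, max boost 993MHz (per the text)
print(len(gtx_780), gtx_780[-1])  # -> 12 1006
```

Despite the GTX 780's higher base clock, the two cards' bin ladders top out one bin apart, which is why their sustained clocks end up so similar.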

Idle power consumption is by the book. With the GTX 780 equipped, our test system sees 110W at the wall, a mere 1W difference from GTX Titan, and tied with the 7970GE. Idle power consumption of video cards is getting low enough that there’s not a great deal of difference between the latest generation cards, and what’s left is essentially lost as noise.

Moving on to power consumption under Battlefield 3, we get our first real confirmation of our earlier theories on power consumption. Between the slightly lower load placed on the CPU from the lower framerate, and the lower power consumption of the card itself, GTX 780 draws 24W less at the wall. Interestingly, this is exactly what our system draws with the GTX 580 too, which after accounting for lower CPU power consumption means that video card power consumption on the GTX 780 is down compared to the GTX 580. GTX 780 being a harvested part helps a bit with that, but it still means we're looking at quite the boost in performance relative to the GTX 580 alongside a simultaneous decrease in video card power consumption.
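The wall-power reasoning above can be sketched numerically. Note that both the PSU efficiency and the CPU's share of the delta below are illustrative assumptions, not measured values:

```python
# Rough sketch of the wall-power arithmetic. PSU efficiency and the
# CPU-side share of the delta are assumed values for illustration only.
PSU_EFFICIENCY = 0.88  # assumed PSU efficiency at this load level

def card_power_delta(wall_delta_w, cpu_delta_w, efficiency=PSU_EFFICIENCY):
    """Estimate the card-only (DC) power difference implied by a measured
    wall-power delta, after removing the CPU's share of that delta."""
    return (wall_delta_w - cpu_delta_w) * efficiency

# 24W less at the wall; suppose ~5W of that comes from reduced CPU load
print(round(card_power_delta(24, 5), 1))  # -> 16.7
```

The point is simply that a wall-power delta overstates (by the PSU's losses) and conflates (with the CPU) the card-only difference, so the card's true savings can't be read directly off the wall meter.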

Moving along, we see that power consumption at the wall is higher than both the GTX 680 and 7970GE. The former is self-explanatory: the GTX 780 features a bigger GPU and more RAM, but is made on the same 28nm process as the GTX 680. So for a tangible performance improvement within the same generation, there's nowhere for power consumption to go but up. Meanwhile as compared to the 7970GE, we are likely seeing a combination of CPU power consumption differences and at least some difference in video card power consumption, though it's not possible to isolate how much each contributes.

Switching to FurMark and its more pure GPU load, our results become compressed somewhat as the GTX 780 moves slightly ahead of the 7970GE. Power consumption relative to Titan is lower than what we expected it to be considering both cards are hitting their TDP limits, though compared to GTX 680 it’s roughly where it should be. At the same time this reflects a somewhat unexpected advantage for NVIDIA; despite the fact that GK110 is a bigger and logically more power hungry GPU than AMD’s Tahiti, the power consumption of the resulting cards isn’t all that different. Somehow NVIDIA has a slight efficiency advantage here.

Moving on to idle temperatures, we see that GTX 780 hits the same 30C mark as GTX Titan and 7970GE.

With GPU Boost 2.0, load temperatures are kept tightly in check when gaming. The GTX 780’s default throttle point is 80C, and that’s exactly what happens here, with GTX 780 bouncing around that number while shifting between its two highest boost bins. Note that like Titan however this means it’s quite a bit warmer than the open air cooled 7970GE, so it will be interesting to see if semi-custom GTX 780 cards change this picture at all.
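The bouncing-between-bins behavior described above can be modeled with a toy controller. This is a simplified sketch for illustration, not NVIDIA's actual GPU Boost 2.0 algorithm; the 13MHz bin size and clock limits are assumptions taken from the figures earlier in this article:

```python
# Toy model of a temperature-target throttle: step down one boost bin when
# over the 80C target, climb back one bin when under it. Not NVIDIA's
# actual algorithm; bin size and clock limits are assumed.
TARGET_C = 80
BIN_MHZ = 13

def adjust_clock(clock_mhz, temp_c, base_mhz=863, max_boost_mhz=1006):
    """Return the next clock, moving one bin toward the temperature target."""
    if temp_c > TARGET_C and clock_mhz - BIN_MHZ >= base_mhz:
        return clock_mhz - BIN_MHZ
    if temp_c < TARGET_C and clock_mhz + BIN_MHZ <= max_boost_mhz:
        return clock_mhz + BIN_MHZ
    return clock_mhz

print(adjust_clock(1006, 82))  # over target: drop a bin -> 993
print(adjust_clock(993, 78))   # under target: climb back -> 1006
```

Oscillating around the target like this is exactly why the card is seen shifting between its two highest boost bins while holding 80C.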

Whereas GPU Boost 2.0 keeps a lid on things when gaming, it’s apparently a bit more flexible on FurMark, likely because the video card is already heavily TDP throttled.

Last but not least we have our look at idle noise. At 38dB GTX 780 is essentially tied with GTX Titan, which again comes as no great surprise. At least in our testing environment one would be hard pressed to tell the difference between GTX 680, GTX 780, and GTX Titan at idle. They're essentially as quiet as a card can get without being silent.

Under BF3 we see the payoff of NVIDIA's fan modifications, along with the slightly lower effective TDP of GTX 780. Because it was built on the same platform as GTX Titan but with a lower power target, there was nowhere for load noise to go but down. As a result we have a 250W blower based card hitting 48.1dB under load, which is simply unheard of. At nearly a 4dB improvement over both GTX 680 and GTX 690 it's a small but significant improvement over NVIDIA's previous generation cards, and even Titan has the right to be embarrassed. Silent it is not, but this is incredibly impressive for a blower. The only way to beat something like this is with an open air card, as evidenced by the 7970GE, though that does come with the usual tradeoffs for using such a cooler.
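For context on what a ~4dB gap means, decibels are logarithmic, so even small deltas translate to meaningful ratios. A back-of-the-envelope conversion (sound power ratio, not perceived loudness):

```python
# Convert a decibel delta to a sound-power ratio: ratio = 10^(dB/10).
def power_ratio(delta_db):
    """Sound-power ratio corresponding to a given dB difference."""
    return 10 ** (delta_db / 10)

# The ~4dB (here 3.9dB) improvement cited above, as a power ratio:
print(round(power_ratio(3.9), 2))  # roughly 2.45x less radiated sound power
```

So a "nearly 4dB" improvement corresponds to the louder card radiating close to two and a half times the sound power of the quieter one, which is why the gap is audible despite looking small on paper.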

Because of the slightly elevated FurMark temperatures we saw previously, GTX 780 ends up being a bit louder than GTX Titan under FurMark. This isn’t something that we expect to see under any non-pathological workload, and I tend to favor BF3 over FurMark here anyhow, but it does point to there being some kind of minor difference in throttling mechanisms between the two cards. At the same time this means that GTX 780 is still a bit louder than our open air cooled 7970GE, though not by as large a difference as we saw with BF3.

Overall the GTX 780 generally meets or exceeds the GTX Titan in our power, temperature, and noise tests, just as we'd expect for a card almost identical to Titan itself. The end result is that it maintains every bit of Titan's luxury and stellar performance, and if anything improves on it slightly when it comes to the all-important aspect of load noise. It's a shame that coolers such as the 780's are not a common fixture on cheaper cards, as this one is essentially unparalleled as far as blower based coolers are concerned.

At the same time this sets up an interesting challenge for NVIDIA's partners. To pass Greenlight they need to produce cards with coolers that perform as well as or better than the reference GTX 780 in NVIDIA's test environment. This is by no means impossible, but it's not going to be an easy task. So it will be interesting to see what partners cook up, especially with the obligatory dual fan open air cooled models.

Comments

  • EJS1980 - Tuesday, May 28, 2013 - link

    The 680 was $500 at launch, and was the main reason why AMD received so much flak for their 7970 pricing. At the time it launched, the 680 blew the 7970 away in terms of gaming performance, which was the reason AMD had to respond with across-the-board price drops on the 7950/70, even though it took them a few months.
  • just4U - Thursday, May 23, 2013 - link

    I love the fact that they're using the cooler they used for the Titan. While I plan to wait (no need to upgrade right now) I'd like to see more of that. It's a feature I'd pay for from both NVIDIA and AMD.
  • HalloweenJack - Thursday, May 23, 2013 - link

    No compute with the GTX 780 - the DP is similar to a GTX 480 and way, way down on a 7970. No folding on these then.
  • BiffaZ - Friday, May 24, 2013 - link

    Folding doesn't use DP currently, it's SP; same for most @home type compute apps, the main exclusion being Milkyway@Home which needs DP a lot.
  • boe - Thursday, May 23, 2013 - link

    Bring on the DirectCU version and I'll order 2 today!
  • slickr - Thursday, May 23, 2013 - link

    At $650 it's way too expensive. Two years ago this card would have been $500 at launch, and within 4-5 months it would have been $400, with the slower cut-down version at $300 and mid-range cards at $200.

    I hope people aren't stupid enough to buy this overpriced card that only brings about 5fps more than AMD's top-end single card.
  • chizow - Thursday, May 23, 2013 - link

    I think if it had launched last year, its price would have been more justified, but NVIDIA sat on it for a year while they propped up the mid-range GK104 as a flagship. Very disappointing.

    Measured on its own merits, GTX 780 is very impressive and probably worth the increase over previous flagship price points. For example, it's generally 80% faster than the GTX 580 and almost 100% faster than the GTX 480, its predecessors. In the past the increase might only be ~60-75%, improving somewhat with driver gains. It also adds some bling and improvements with the cooler.

    It's just too late imo for NVIDIA to ask those kinds of prices, especially after lying to their fanbase about GK104 always being slotted as the Kepler flagship.
  • JPForums - Thursday, May 23, 2013 - link

    I love what you are doing with frame time deltas. Some sites don't quite seem to understand that you can maintain low maximum frame times while still introducing stutter (especially in the simulation time counter) by having large deltas between frames. In the worst case, your simulation time can slow down (or speed up) while your frame time moves back in the opposite direction exaggerating the result.

    Admittedly I may be misunderstanding your method as I'm much more accustomed to seeing algebraic equations describing the method, but assuming I get it, I'd like to suggest a further modification to your method to deal with performance swings that occur expectedly (transition to/from cut-scenes, arrival/departure of graphically intense elements, etc.). Rather than compare the average of the delta between frames against an average frame time across the entire run, you could compare instantaneous frame time against a sliding window average. The window could be large for games with consistent performance and smaller for games with mood swings.

    Using percentages when comparing against the average frame times for the entire run can result in situations where two graphics solutions with the exact same deltas would show the one with better performance having worse deltas. As an example, take any video card's frame time graph, subtract 5ms from each frame time, and compare the two resulting delta percentages. A sliding window accounts for natural performance deviations while still giving a baseline to compare frame time swings against. If you are dead set on percentages, you can take them from there, as the delta percentages from local frame time averages are more relevant than the delta percentage from the run's overall average.

    Given my love of number manipulation, though, I'd still prefer to see the absolute frame time difference from the sliding window average. It would make it much easier for me to see whether the difference from the windowed average is large (let's say >15ms) or small (say <4ms). Of course, while I'm being demanding, it would be nice to get an xls, csv, or some other format of file with the absolute frame times so I can run whatever graph I want to see myself. I won't hold my breath. Take some of my suggestions, all of them, or none of them. I'm just happy to see where things are going.
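    The sliding-window comparison described above can be sketched in a few lines; the window size and the sample frame times below are arbitrary illustrations:

```python
# Sketch of the commenter's suggestion: measure each frame time against
# a sliding-window local average instead of the run-wide average.
# Window size and sample data are arbitrary illustrations.
def windowed_deltas(frame_times_ms, window=5):
    """Absolute delta of each frame time from the average of the
    surrounding window of frames (clamped at the run's edges)."""
    half = window // 2
    deltas = []
    for i, ft in enumerate(frame_times_ms):
        lo = max(0, i - half)
        hi = min(len(frame_times_ms), i + half + 1)
        local_avg = sum(frame_times_ms[lo:hi]) / (hi - lo)
        deltas.append(abs(ft - local_avg))
    return deltas

frames = [16.7, 16.9, 33.1, 16.5, 16.8, 17.0]  # one spike at frame index 2
deltas = windowed_deltas(frames)
print(round(max(deltas), 1))  # the 33.1ms spike stands out -> 13.1
```

    Because each frame is judged against its local neighborhood, an expected performance shift (say, a cut-scene) moves the baseline with it, while a one-frame spike still produces a large delta.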
  • Arnulf - Thursday, May 23, 2013 - link

    The correct metric for this comparison would be die size (area) and complexity of manufacturing rather than the number of transistors.

    RAM modules contain far more transistors (at least a couple of transistors per bit; a common 4 GB = 32 Gb stick has 64+ billion transistors and sells for less than $30 on Newegg), yet they cost peanuts compared to this overpriced abomination that is the 780.
  • marc1000 - Thursday, May 23, 2013 - link

    and GTX 760 ??? what will it be? will it be $200??

    or maybe the 660 will be rebranded as 750 and go to $150??
