Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

GTX 780 comes into this phase of our testing with a very distinct advantage. Being based on an already exceptionally solid card in GTX Titan, it's guaranteed to do at least as well as Titan here. At the same time, because its practical power consumption will be a bit lower thanks to fewer enabled SMXes and fewer RAM chips, the GTX 780 effectively pairs Titan's cooler with a slightly lower heat load, which can be a silent (but deadly) combination.

GeForce GTX 780 Voltages
GTX 780 Max Boost: 1.1625v
GTX 780 Base:      1.025v
GTX 780 Idle:      0.875v

Unsurprisingly, voltages are unchanged from Titan. GK110’s max safe load voltage is 1.1625v, with 1.2v being the maximum overvoltage allowed by NVIDIA. Meanwhile idle remains at 0.875v, and as we’ll see idle power consumption is equal too.

Meanwhile we also took the liberty of capturing the average clockspeeds of the GTX 780 in all of the games in our benchmark suite. In short, although the GTX 780 has a higher base clock than Titan (863MHz versus 837MHz), the fact that it only goes one boost bin higher (1006MHz versus 992MHz) means that the GTX 780 doesn't usually clock much higher than GTX Titan under load; for one reason or another it typically settles at the same boost bin as the GTX Titan on tests that offer consistent workloads. In practice this makes the GTX 780 closer to a straight-up harvested GTX Titan, with no practical clockspeed differences.

GeForce GTX 780 Average Clockspeeds
                  GTX 780   GTX Titan
Max Boost Clock   1006MHz   992MHz
DiRT:S            1006MHz   992MHz
Shogun 2           966MHz   966MHz
Hitman             992MHz   992MHz
Sleeping Dogs      969MHz   966MHz
Crysis             992MHz   992MHz
Far Cry 3          979MHz   979MHz
Battlefield 3      992MHz   992MHz
Civilization V    1006MHz   979MHz

Idle power consumption is by the book. With the GTX 780 equipped, our test system sees 110W at the wall, a mere 1W difference from GTX Titan, and tied with the 7970GE. Idle power consumption of video cards is getting low enough that there’s not a great deal of difference between the latest generation cards, and what’s left is essentially lost as noise.

Moving on to power consumption under Battlefield 3, we get our first real confirmation of our earlier theories on power consumption. Between the slightly lower load placed on the CPU by the lower framerate and the lower power consumption of the card itself, the GTX 780 draws 24W less at the wall than Titan. Interestingly, this is exactly what our system draws with the GTX 580 too; accounting for the lower CPU power consumption with the slower GTX 580, this means that video card power consumption on the GTX 780 is down compared to the GTX 580. GTX 780 being a harvested part helps a bit with that, but it still means we're looking at quite the boost in performance relative to the GTX 580 alongside a simultaneous decrease in video card power consumption.

Moving along, we see that power consumption at the wall is higher than both the GTX 680 and 7970GE. The former is self-explanatory: the GTX 780 features a bigger GPU and more RAM, but is made on the same 28nm process as the GTX 680. So for a tangible performance improvement within the same generation, there's nowhere for power consumption to go but up. Meanwhile compared to the 7970GE, we are likely seeing a combination of CPU power consumption differences and at least some difference in video card power consumption, though it's not possible to say how much comes from each.
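Wall measurements like these can only bound the card's own draw. A back-of-the-envelope split is sketched below; the PSU efficiency and all of the sample numbers are illustrative assumptions, not measured values from this review:

```python
# Hypothetical split of wall power into card and CPU contributions.
# PSU_EFFICIENCY and the sample readings below are assumptions for
# illustration only.
PSU_EFFICIENCY = 0.88  # assumed efficiency of a quality PSU at this load

def card_power_estimate(wall_load_w, wall_idle_w, cpu_delta_w=0.0):
    """Rough DC power drawn by the video card under load.

    wall_load_w - AC draw at the wall while gaming
    wall_idle_w - AC draw at the wall at idle
    cpu_delta_w - estimated extra CPU package power under load (assumption)
    """
    dc_delta = (wall_load_w - wall_idle_w) * PSU_EFFICIENCY
    return dc_delta - cpu_delta_w

# Example with made-up numbers: 380W at the wall gaming, 110W idle,
# and a guessed 40W of extra CPU power under load.
print(round(card_power_estimate(380, 110, 40), 1))
```

This is why the article can say the 7970GE gap is "a combination" of CPU and card differences without quantifying it: the CPU delta term is unknown from wall readings alone.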

Switching to FurMark and its purer GPU load, our results become compressed somewhat as the GTX 780 moves slightly ahead of the 7970GE. Power consumption relative to Titan is lower than we expected considering both cards are hitting their TDP limits, though compared to GTX 680 it's roughly where it should be. At the same time this reflects a somewhat unexpected advantage for NVIDIA: despite the fact that GK110 is a bigger and logically more power hungry GPU than AMD's Tahiti, the power consumption of the resulting cards isn't all that different. Somehow NVIDIA has a slight efficiency advantage here.

Moving on to idle temperatures, we see that GTX 780 hits the same 30C mark as GTX Titan and 7970GE.

With GPU Boost 2.0, load temperatures are kept tightly in check when gaming. The GTX 780’s default throttle point is 80C, and that’s exactly what happens here, with GTX 780 bouncing around that number while shifting between its two highest boost bins. Note that like Titan however this means it’s quite a bit warmer than the open air cooled 7970GE, so it will be interesting to see if semi-custom GTX 780 cards change this picture at all.
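The bouncing between boost bins around the 80C throttle point can be sketched as a simple temperature-target governor. This is a conceptual illustration only; the bin values and the control rule are assumptions, and NVIDIA's real GPU Boost 2.0 algorithm also weighs TDP, voltage, and fan speed:

```python
# Conceptual temperature-target governor, loosely in the spirit of the
# behavior described above. Bin values and the control rule are assumptions.
TEMP_TARGET_C = 80

def next_bin(current_idx, temp_c, bins):
    """Step down one boost bin when over target, step up when under."""
    if temp_c > TEMP_TARGET_C and current_idx > 0:
        return current_idx - 1
    if temp_c < TEMP_TARGET_C and current_idx < len(bins) - 1:
        return current_idx + 1
    return current_idx

bins = [979, 992, 1006]  # MHz, illustrative top bins from the table above
idx = 2                  # start at the max boost bin
idx = next_bin(idx, 82, bins)  # over 80C, so drop one bin
print(bins[idx])               # 992
```

The net effect is exactly what the measurements show: the card oscillates between its two highest bins while temperature hovers at the target.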

Whereas GPU Boost 2.0 keeps a lid on things when gaming, it’s apparently a bit more flexible on FurMark, likely because the video card is already heavily TDP throttled.

Last but not least we have our look at idle noise. At 38dB GTX 780 is essentially tied with GTX Titan, which again comes as no great surprise. At least in our testing environment one would be hard pressed to tell the difference between GTX 680, GTX 780, and GTX Titan at idle. They're essentially as quiet as a card can get without being silent.

Under BF3 we see the payoff of NVIDIA's fan modifications, along with the slightly lower effective TDP of GTX 780. Because it was built on the same platform as GTX Titan but with a lower heat load, there's nowhere for load noise to go but down. As a result we have a 250W blower based card hitting 48.1dB under load, which is simply unheard of. At nearly 4dB quieter than both GTX 680 and GTX 690, it's a small but significant improvement over NVIDIA's previous generation cards, and even Titan has the right to be embarrassed. Silent it is not, but this is incredibly impressive for a blower. The only way to beat something like this is with an open air card, as evidenced by the 7970GE, though that does come with the usual tradeoffs for using such a cooler.
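For a sense of scale on that roughly 4dB gap: sound power ratios follow 10^(Δ/10). This is standard acoustics rather than data from our testing, and perceived loudness is a separate (rougher) matter, conventionally doubling per ~10dB:

```python
# Convert a dB difference into a radiated sound power ratio: 10^(delta/10).
# Standard acoustics; perceived loudness roughly doubles only per ~10dB.
def power_ratio(delta_db):
    return 10 ** (delta_db / 10)

print(round(power_ratio(4), 2))   # ~2.51x the sound power
print(round(power_ratio(10), 1))  # 10x power, roughly "twice as loud"
```

So a 4dB drop is about 2.5x less radiated sound power: clearly audible side by side, though well short of sounding half as loud.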

Because of the slightly elevated FurMark temperatures we saw previously, GTX 780 ends up being a bit louder than GTX Titan under FurMark. This isn’t something that we expect to see under any non-pathological workload, and I tend to favor BF3 over FurMark here anyhow, but it does point to there being some kind of minor difference in throttling mechanisms between the two cards. At the same time this means that GTX 780 is still a bit louder than our open air cooled 7970GE, though not by as large a difference as we saw with BF3.

Overall the GTX 780 generally meets or exceeds the GTX Titan in our power, temperature, and noise tests, just as we'd expect for a card almost identical to Titan itself. The end result is that it maintains every bit of Titan's luxury and stellar performance, and if anything improves on it slightly when we're talking about the all-important aspect of load noise. It's a shame that coolers such as the 780's are not a common fixture on cheaper cards, as this is essentially unparalleled as far as blower based coolers are concerned.

At the same time this sets up an interesting challenge for NVIDIA's partners. To pass Greenlight they need to produce cards with coolers that function as well as or better than the reference GTX 780 in NVIDIA's test environment. This is by no means impossible, but it's not going to be an easy task. So it will be interesting to see what partners cook up, especially with the obligatory dual fan open air cooled models.


155 Comments


  • ambientblue - Thursday, August 08, 2013 - link

    You are a sucker if you are willing to pay so much for twice the VRAM and 10% performance over the 780... if you got your Titans before the 780 was released then sure, it's a massive performance boost over 680s, but that's because the 680 should have been cheaper and named the 660, and Titan should have cost what the 680 was going for. You won't be so satisfied when the GTX 880 comes out and obliterates your Titan at half the cost. Then again, with that kind of money you'll probably just buy 3 of those.
  • B3an - Thursday, May 23, 2013 - link

    I'd REALLY like to see more than just 3GB on high end cards. It's not acceptable. With the upcoming consoles having 8GB (with at least 5GB+ usable for games), even by the end of this year we may start seeing current high-end PC GPUs struggling due to lack of graphics RAM. These console games will have super high res textures, and when ported to PC, 3GB of graphics RAM will not cut it at high res. I also have 2560x1600 monitors, and there's no way X1/PS4 games are going to run at this res with just 3GB. Yet the whole point of a high-end card is this type of res, as it's wasted on 1080p crap.

    Not enough graphics RAM was also a problem years ago on high-end GPUs. I remember having a 7950 GX2 with only 512MB (1GB total, but 512MB for each GPU) and it would get completely crippled (single digit FPS) running games at 2560x1600 or even 1080p. Once you hit the RAM limit things literally become a slideshow. I can see this happening again just a few months from now, but to pretty much EVERYONE who doesn't have a Titan with 6GB.

    So I'm basically warning people thinking of buying a high-end card at this point - you seriously need to keep in mind that just a few months from now it could be struggling due to lack of graphics RAM. Either way, don't expect your purchase to last long; the RAM issue will definitely be a problem in the not too distant future (give it 18 months max).
  • Vayra - Thursday, May 23, 2013 - link

    How can you be worried about the console developments, especially when it comes to VRAM of all things, when even the next-gen consoles are now looking to be no more than 'on-par' with today's PC performance in games? I mean, the PS4 is just a glorified midrange GPU in all respects, and so is the X1, even though it treats things a bit differently, not using GDDR5. Even the 'awesome' Killzone and CoD Ghosts trailers show dozens of super-low-res textures and areas completely greyed out so as not to consume performance. All we get with the new consoles is that finally, 2011's 'current-gen' DX11 tech is coming to the console @ 1080p. But both machines will be running on that 8GB as their TOTAL RAM, and will be using it for all their tasks. Do you really think any game is going to eat up 5 gigs of VRAM at 1080p? Even Crysis 3 on PC does not do that on its highest settings (it just peaks at/over 3GB I believe?) at 1440p.

    Currently the only reason to own a GPU or system with over 2GB of VRAM is because you play at ultra settings at a res over 1080p. For 1080p, which is what 'this-gen' consoles are being built for (sadly...), 2GB is still sufficient and 3GB is providing headroom.

    Hey, and last but not least, Nvidia has to give us at least ONE reason to still buy those hopelessly priced Titans off them, right?

    Also, aftermarket versions of the 780 will of course be able to feature more VRAM as we have seen with previous generations on both camps. I'm 100% certain we will be seeing 4 GB versions soon.
  • B3an - Friday, May 24, 2013 - link

    The power of a console's GPU has nothing to do with it. Obviously these consoles will not match a high-end PC, but why would they have to in order to use more VRAM?! Nothing is stopping a mid-range or even a low-end PC GPU from using 4GB of VRAM if it wanted to. Same with consoles. And obviously they will not use all 8GB for games (as I pointed out), but we're probably looking at at least 4-5GB going towards games. The Xbox One for example is meant to use up to 3GB for the OS and other stuff; the remaining 5GB is totally available to games (or it's looking like that). Both the X1 and PS4 also have unified memory, meaning the GPU can use as much as it wants that isn't being used by the OS.

    Crysis 3 is a bad example because this game is designed with ancient 8 year old console hardware in mind, so it's crippled from the start even if it looks better on PC. When we start seeing X1/PS4 ports to PC the VRAM usage will definitely jump up, because textures WILL be higher res and other things WILL be more complex (level design, physics, enemy A.I. and so on). In fact the original Crysis actually has bigger open areas and better physics (explosions, mowing down trees) than Crysis 3, because it was totally designed for PC at the time. This stuff was removed in Crysis 3 because they had to make it play exactly the same across all platforms.

    I really think games will eat up 4+GB of VRAM within the next 18 months, especially at 2560x1600 and higher, and at least use over 3GB at 1080p. The consoles have been holding PCs back for a very, very long time. Even console ports made for ancient console hardware with 512MB of VRAM can already use over 2GB on the PC version with enough AA + AF at 2560x1600. So that's just 1GB of VRAM left on a 3GB card, and 1GB is easily gone by just doubling texture resolution.
  • Akrovah - Thursday, May 23, 2013 - link

    You're forgetting that on these new consoles that 8GB is TOTAL system memory, not just the video RAM. While on a PC you have the 3GB of VRAM here plus the main system memory (around 8 gigs being pretty standard at this point).

    I can guarantee you the consoles are not using that entire amount, or even the 5+GB available for games, as VRAM. And this part is just me talking out of my bum, but I doubt many games on these consoles will use more than 2GB of the unified memory for VRAM.

    Also I don't think the res has much to do with the video memory any more. Doing some quick math, even if the game is triple buffering, a resolution of 2560x1600 only needs about 35 megs of storage. Unless my math is wrong:
    2560x1600 = 4,096,000 pixels at 24 bits each = 98,304,000 bits to store a single completed frame.
    Divide by 8 = 12,288,000 bytes / 1024 = 12,000 KiB / 1024 = 11.72 MiB per frame.

    Somehow I don't think a modern graphics card's video memory has anything to do with screen resolution; it's mostly used by the texture data.
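Akrovah's per-frame arithmetic holds up; a quick script to reproduce it, assuming tightly packed color buffers with no depth/stencil surfaces or driver padding (real allocations will be larger):

```python
# Check of the per-frame framebuffer math in the comment above.
# Assumes tightly packed color buffers only - no depth/stencil, no padding.
WIDTH, HEIGHT = 2560, 1600

def frame_mib(bits_per_pixel, buffers=1):
    """Size in MiB of `buffers` packed color buffers at the given depth."""
    bytes_total = WIDTH * HEIGHT * (bits_per_pixel // 8) * buffers
    return bytes_total / 2**20

print(round(frame_mib(24), 2))     # 11.72 MiB for one 24bpp frame
print(round(frame_mib(24, 3), 2))  # 35.16 MiB triple buffered at 24bpp
print(round(frame_mib(32, 3), 2))  # ~47 MiB triple buffered at 32bpp
```

Even the largest case is a rounding error next to a 3GB card, which supports the comment's conclusion that textures, not the display resolution itself, dominate VRAM usage.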
  • inighthawki - Thursday, May 23, 2013 - link

    Most back buffer swap chains are created with 32-bit formats, and even if they are not, chances are the hardware would convert this internally to a 32-bit format for performance to account for texture swizzling and alignment costs. Even so, a 2560x1600x32bpp back buffer would be 16MB, so you're looking at 32 or 48MB for double and triple buffering, respectively.

    But you are right, the vast majority of video memory usage will come from high resolution textures. A typical HD texture is already larger than a back buffer (2048x2048 is slightly larger than 2560x1600) and depending on the game engine may have a number of mip levels also loaded, so you can increase the costs by about 33%. (I say all of this assuming we are not using any form of texture compression just for the sake of example).

    I also hope anyone who buys a video card with large amounts of ram is also running 64-bit Windows :), otherwise their games can't maximize the use of the card's video memory.
    Reply
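inighthawki's roughly 33% figure for a full mip chain falls out of a geometric series, since each mip level is a quarter the size of the one above it. A sketch, assuming an uncompressed square power-of-two texture:

```python
# Why a full mip chain adds about 33%: levels shrink by 4x each step,
# so the chain sums to ~4/3 of the base level. Assumes an uncompressed
# square power-of-two texture at 4 bytes per texel.
def texture_bytes(size, bytes_per_texel=4, mips=True):
    """Total bytes for a square texture, optionally with its full mip chain."""
    total, s = 0, size
    while s >= 1:
        total += s * s * bytes_per_texel
        if not mips:
            break
        s //= 2
    return total

base = texture_bytes(2048, mips=False)  # base level: 16 MiB
chain = texture_bytes(2048)             # with full mip chain: ~21.33 MiB
print(round(chain / base, 4))           # ~1.3333, i.e. the ~33% overhead
```

So a single uncompressed 2048x2048 texture with mips is already larger than a 2560x1600 back buffer, which is the comment's point about where the memory really goes.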
  • Akrovah - Friday, May 24, 2013 - link

    I was under the impression that on a 32-bit rendering pipeline the upper 8 bits were used for transparency calculation, but that it was then filtered down to 24 bits when actually written to the buffer, because that's how displays take information.

    But then I just made that up in my own mind, because I don't actually know how or when the 32-bit render to 24-bit display conversion takes place.

    But assuming I was wrong and what you say is correct (a likely scenario in this case) my previous point still stands.
  • jonjonjonj - Thursday, May 23, 2013 - link

    I wouldn't be worried. The low-end CPU and APU in the consoles won't be pushing anything. The consoles are already outdated and they haven't even launched. The consoles have 8GB of TOTAL memory, not 8GB of VRAM.
  • B3an - Friday, May 24, 2013 - link

    Again, the power of these consoles has absolutely nothing to do with how much VRAM they can use. If a low-end PC GPU existed with 4GB of VRAM, it could easily use all of that 4GB if it wanted to.

    And it's unified memory in these consoles. It all acts as VRAM. All of the 8GB that isn't used by the OS is available to the GPU and games (apparently 3GB is reserved on the Xbox One for the OS and other tasks, leaving 5GB for games).
  • Akrovah - Friday, May 24, 2013 - link

    No, it doesn't all act as VRAM. You still have your data storage objects, like all your variables (of which a game can have thousands), AI objects, pathfinding data, and all the coordinates for everything in the current level/map/whatever. Basically the entire state of the game that is operating behind the scenes. This is not insignificant.

    All the non-OS RAM is available to the games, yes, but games are storing a hell of a lot more data than what is typically stored in video RAM. Hence PC games that need 2GB of RAM may only require 512 megs of VRAM.
