Power, Temperature, & Noise

Last but certainly not least, we have our obligatory look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason to ignore the noise.

It’s for that reason that GPU manufacturers also seek to keep power usage down, and under normal circumstances there’s a pretty clear relationship between power consumption, heat generated, and the amount of noise the fans will generate to remove that heat. At the same time, however, this is an area NVIDIA is focusing on for Titan: as a premium product, Titan can use premium materials, going above and beyond what more traditional plastic cards can do for noise dampening.

GeForce GTX Titan Voltages
Max Boost    Base       Idle
1.1625v      1.012v     0.875v

Stopping quickly to take a look at voltages, Titan’s peak stock voltage is 1.1625v, which correlates to its highest speed bin of 992MHz. As the clockspeeds go farther down the voltages drop with them, down to a load low of 0.95v at 744MHz. This ends up being a bit less than the GTX 680 and most other desktop Kepler cards, which go up just a bit higher to 1.175v. Since NVIDIA is classifying 1.175v as an “overvoltage” on Titan, it looks like GK110 isn’t going to be quite as tolerant of high voltages as GK104 was.
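For the curious, here’s a minimal Python sketch of that voltage/clockspeed relationship. Only the two endpoints (1.1625v at 992MHz, 0.95v at 744MHz) and the 13MHz bin spacing come from our measurements; the linear interpolation between bins is purely an illustrative assumption on our part, not NVIDIA’s actual voltage table.

```python
# Hypothetical reconstruction of Titan's voltage/clockspeed curve. Only the
# endpoints (1.1625v @ 992MHz, 0.95v @ 744MHz) and the 13MHz bin spacing are
# from our measurements; the linear interpolation is an assumption.

BIN_MHZ = 13                    # one boost bin
V_MAX, CLK_MAX = 1.1625, 992    # highest speed bin we observed
V_MIN, CLK_MIN = 0.9500, 744    # lowest load bin we observed

def voltage_for_clock(clock_mhz: int) -> float:
    """Estimate core voltage at a given clockspeed by linear interpolation."""
    t = (clock_mhz - CLK_MIN) / (CLK_MAX - CLK_MIN)
    return round(V_MIN + t * (V_MAX - V_MIN), 4)

clk = CLK_MAX
while clk >= CLK_MIN:
    print(f"{clk}MHz -> ~{voltage_for_clock(clk)}v")
    clk -= BIN_MHZ
```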

GeForce GTX Titan Average Clockspeeds
Max Boost Clock 992MHz
DiRT:S 992MHz
Shogun 2 966MHz
Hitman 992MHz
Sleeping Dogs 966MHz
Crysis 992MHz
Far Cry 3 979MHz
Battlefield 3 992MHz
Civilization V 979MHz

One thing we quickly notice about Titan is that, thanks to GPU Boost 2.0 and the shift from a primarily power based boost system to a temperature based boost system, Titan hits its maximum speed bin far more often and sustains it for longer, especially since there’s no longer a concept of a power target with Titan, and any power limits are based entirely on TDP. Half of our games have an average clockspeed of 992MHz, or in other words never triggered a power or thermal condition that would require Titan to scale back its clockspeed. For the rest of our tests the worst average clockspeed was all of 2 bins (26MHz) lower at 966MHz, with this being a mix of hitting both thermal and power limits.
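To make the mechanism concrete, below is a toy sketch of how a temperature-target boost scheme along these lines can step between bins. The 80C target, 250W TDP, 13MHz bins, and 992MHz top bin come from our results, and 837MHz is Titan’s official base clock; the control loop itself is our own simplification for illustration, not NVIDIA’s actual algorithm.

```python
# A toy model of a temperature-target boost scheme in the vein of GPU Boost
# 2.0. The 80C target, 250W TDP limit, 13MHz bins, and 992MHz top bin come
# from our results; 837MHz is Titan's official base clock. The control loop
# is our own simplification, not NVIDIA's actual algorithm.

TEMP_TARGET_C = 80
TDP_LIMIT_W = 250
BIN_MHZ = 13
MAX_BOOST_MHZ = 992
BASE_CLOCK_MHZ = 837

def next_clock(clock_mhz: int, gpu_temp_c: float, board_power_w: float) -> int:
    """Step the boost clock one bin up or down based on temp/power headroom."""
    if gpu_temp_c > TEMP_TARGET_C or board_power_w > TDP_LIMIT_W:
        return max(BASE_CLOCK_MHZ, clock_mhz - BIN_MHZ)  # over a limit: back off
    if gpu_temp_c < TEMP_TARGET_C and board_power_w < TDP_LIMIT_W:
        return min(MAX_BOOST_MHZ, clock_mhz + BIN_MHZ)   # headroom: boost
    return clock_mhz                                     # at equilibrium: hold

# A card sitting right at its 80C target with power to spare holds its bin:
print(next_clock(992, 80.0, 230.0))  # -> 992
```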

On a side note, it’s worth pointing out that these are well in excess of NVIDIA’s official boost clock for Titan. With Titan boost bins being based almost entirely on temperature, the average boost speed for Titan is going to be more dependent on environment (intake) temperatures than GTX 680 was, so our numbers are almost certainly a bit higher than what one would see in a hotter environment.

Starting as always with a look at power, there’s nothing particularly out of the ordinary here. AMD and NVIDIA have become very good at managing idle power through power gating and other techniques, and as a result idle power has come down by leaps and bounds over the years. At this point we still typically see some correlation between die size and idle power, but that’s a few watts at best. So at 111W at the wall, Titan is up there with the best cards.

Moving on to our first load power measurement, as we’ve dropped Metro 2033 from our benchmark suite we’ve replaced it with Battlefield 3 as our game of choice for measuring peak gaming power consumption. BF3 is a demanding game to run, but overall it presents a rather typical power profile, which of all the games in our benchmark suite makes it one of the best representatives.

In any case, as we can see Titan’s power consumption comes in below all of our multi-GPU configurations, but higher than any other single-GPU card. Titan’s 250W TDP is 55W higher than GTX 680’s 195W TDP, and with a 73W difference at the wall this isn’t too far off. A bit more surprising is that Titan is drawing nearly 50W more than our 7970GE at the wall, given that we know the 7970GE usually gets close to its TDP of 250W. At the same time, since this is a live game benchmark there are more factors than just the GPU in play. Generally speaking, the higher a card’s performance here, the harder the rest of the system has to work to keep said card fed, which further increases power consumption at the wall.
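As a quick sanity check on those wall numbers, power measured at the wall overstates the DC-side difference by the PSU’s conversion losses. The sketch below runs our measured deltas through an assumed 90% PSU efficiency; that efficiency figure is a round-number assumption for illustration, not a measured property of our testbed.

```python
# Rough conversion of wall-power deltas to DC-side deltas. The 73W and ~50W
# deltas are our measurements; the 90% PSU efficiency is an assumed round
# number, so the outputs are illustrative estimates only.

PSU_EFFICIENCY = 0.90  # assumption, not a measured property of our testbed

def dc_delta_w(wall_delta_w: float) -> float:
    """Estimate the DC-side power difference behind a measured wall delta."""
    return wall_delta_w * PSU_EFFICIENCY

print(dc_delta_w(73))  # Titan vs. GTX 680: ~66W DC vs. the 55W TDP gap
print(dc_delta_w(50))  # Titan vs. 7970GE: ~45W DC despite similar TDPs
```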

Moving to Furmark our results keep the same order, but the gap between the GTX 680 and Titan widens, while the gap between Titan and the 7970GE narrows. Titan and the 7970GE shouldn’t be too far apart from each other in most situations due to their similar TDPs (even if NVIDIA and AMD TDPs aren’t calculated in quite the same way), so in a pure GPU power consumption scenario this is what we would expect to see.

Titan for its part is the traditional big NVIDIA GPU, and while NVIDIA does what they can to keep it in check, at the end of the day it’s still going to be among the more power hungry cards in our collection. Power consumption itself isn’t generally a problem with these high end cards, so long as the system has the means to cool the card without generating too much noise in doing so.

Moving on to temperatures, for a single card idle temperatures should be under 40C for anything with at least a decent cooler. Titan for its part is among the coolest at 30C; its large heatsink combined with its relatively low idle power consumption makes it easy to cool here.

Because Titan’s boost mechanisms are now temperature based, Titan’s temperatures are going to naturally gravitate towards its default temperature target of 80C as the card raises and lowers clockspeeds to maximize performance while keeping temperatures at or under that level. As a result just about any heavy load is going to see Titan within a couple of degrees of 80C, which makes for some very predictable results.

Looking at our other cards, while the various NVIDIA cards are still closely grouped, the 7970GE ends up being quite a bit cooler thanks to its open air cooler. This is typical of what we see with good open air coolers, though with NVIDIA’s temperature based boost system I’m left wondering if perhaps those days are numbered. So long as 80C is a safe temperature, there’s little reason not to gravitate towards it with a system like NVIDIA’s, regardless of the cooler used.

Load GPU Temperature - FurMark

With Furmark we see everything pull closer together as Titan holds fast at 80C while most of the other cards, especially the Radeons, rise in temperature. At this point Titan is clearly cooler than a GTX 680 SLI, 2C warmer than a single GTX 680, and still a good 10C warmer than our 7970GE.

Idle Noise Levels

Just as with the GTX 690, one of the things NVIDIA focused on was construction choices and materials to reduce noise generated. So long as you can keep noise down, then for the most part power consumption and temperatures don’t matter.

Simply looking at idle shows that NVIDIA is capable of delivering on their claims. At 37.8dB, Titan is the quietest actively cooled high-end card we’ve measured yet, besting even the luxury GTX 690 and the also well-constructed GTX 680. Though really, with the loudest setup being all of 40.5dB, none of these setups is anywhere near loud at idle.

It’s with load noise that we finally see the full payoff of Titan’s build quality. At 51dB it’s only marginally quieter than the GTX 680, but as we recall from our earlier power data, Titan is drawing nearly 70W more than GTX 680 at the wall. In other words, despite drawing significantly more power than GTX 680, Titan is still as quiet as or quieter than that card. This, coupled with Titan’s already high performance, is Titan’s true strength in NVIDIA’s eyes: it’s not just fast, but despite its speed and despite its TDP it’s as quiet as any other blower based card out there, allowing NVIDIA to get away with things such as Tiki and tri-SLI systems with reasonable noise levels.
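The tri-SLI claim is more plausible than it may sound, since decibel levels don’t add linearly. As a back-of-the-envelope sketch, combining N similar incoherent noise sources raises the level by only 10·log10(N) dB, so three cards at our 51dB load figure would land near 55.8dB under idealized free-field assumptions; real chassis acoustics will of course differ.

```python
import math

# Idealized free-field math for combining similar, incoherent noise sources:
# N sources add 10*log10(N) dB. The 51dB figure is our measured Titan load
# number; the tri-SLI total is a back-of-the-envelope estimate, not a
# measurement of an actual system.

def combined_spl_db(levels_db):
    """Sum incoherent sound sources expressed in dB SPL."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

print(round(combined_spl_db([51, 51, 51]), 1))  # -> 55.8
```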

Much like what we saw with temperatures under Furmark, noise under Furmark has our single-GPU cards bunching up. Titan goes up just enough to tie GTX 680 in our pathological scenario, meanwhile our multi-GPU cards start shooting up well past Titan, while the 7970GE jumps up to just shy of Titan. This is a worst case scenario, but it’s a good example of how GPU Boost 2.0’s temperature functionality means that Titan quite literally keeps its cool and thereby keeps its noise in check.

Of course we would be remiss not to point out that in all these scenarios the open air cooled 7970GE is still quieter, and in our gaming scenario by quite a bit. Not that Titan is loud, but it doesn’t compare to the 7970GE. Ultimately we get to the age old debate between blowers and open air coolers; open air coolers are generally quieter, but blowers allow for more flexibility with products, and are more forgiving of cases with poor airflow.

Ultimately Titan uses a blower so that NVIDIA can do concept PCs like Tiki, something an open air cooler would never be suitable for. For DIY builders the benefits may not be as pronounced, but this is also why NVIDIA is focusing so heavily on boutique systems, where the space difference really matters. Realistically speaking, AMD’s best blower-capable card is the vanilla 7970, a less power hungry but also much less powerful card.

Comments

  • Ryan Smith - Thursday, February 21, 2013 - link

    PCI\VEN_10DE&DEV_1005&SUBSYS_103510DE

    I have no idea what a Tesla card's device ID would be, though.
  • alpha754293 - Thursday, February 21, 2013 - link

    I don't suppose you would know how to tell the computer/OS that the card has a different PCI DevID other than what it actually is, would you?

    NVIDIA Tesla C2075 PCI\VEN_10DE&DEV_1096
  • Hydropower - Friday, February 22, 2013 - link

    PCI\VEN_10DE&DEV_1022&SUBSYS_098210DE&REV_A1

    For the K20c.
  • brucethemoose - Thursday, February 21, 2013 - link

    "This TDP limit is 106% of Titan’s base TDP of 250W, or 265W. No matter what you throw at Titan or how you cool it, it will not let itself pull more than 265W sustained."

    The value of the Titan isn't THAT bad at stock, but 106%? Is that a joke!?

    Throw in an OC for OC comparison, and this card is absolutely ridiculous. Take the 7970 GE... 1250MHz is a good, reasonable 250MHz OC on air, a nice 20%-25% boost in performance.

    The Titan review sample is probably the best case scenario and can go 27MHz past turbo speed, 115MHz past base speed, so maybe 6%-10%. That $500 performance gap starts shrinking really, really fast once you OC, and for god sakes, if you're the kind of person who's buying a $1000 GPU, you shouldn't intend to leave it at stock speeds.

    I hope someone can voltmod this card and actually make use of a waterblock, but there's another issue... Nvidia is obviously setting a precedent. Unless they change this OC policy, they won't be seeing any of my money anytime soon.
  • JarredWalton - Thursday, February 21, 2013 - link

    As someone with a 7970GE, I can tell you unequivocally that 1250MHz on air is not at all a given. My card can handle many games at 1150MHz, but with other titles and applications (say, running some compute stuff) I'm lucky to get stability for more than a day at 1050MHz. Perhaps with enough effort playing with voltage mods and such I could improve the situation, but I'm happier living with a card for a couple years that doesn't crap out because of excessively high voltages.
  • CeriseCogburn - Saturday, February 23, 2013 - link

    " After a few hours of trial and error, we settled on a base of the boost curve of 9,80 MHz, resulting in a peak boost clock of a mighty 1,123MHz; a 12 per cent increase over the maximum boost clock of the card at stock.

    Despite the 3GB of GDDR5 fitted on the PCB's rear lacking any active cooling it too proved more than agreeable to a little tweaking and we soon had it running at 1,652MHz (6.6GHz effective), a healthy ten per cent increase over stock.

    With these 12-10 per cent increases in clock speed our in-game performance responded accordingly."

    http://www.bit-tech.net/hardware/2013/02/21/nvidia...

    Oh well, 12 is 6 if it's nVidia bash time, good job mr know it all.
  • Hrel - Thursday, February 21, 2013 - link

    YES! 1920x1080 has FINALLY arrived. It only took 6 years from when it became mainstream but it's FINALLY here! FINALLY! I get not doing it on this card, but can you guys PLEASE test graphics cards, especially laptop ones, at 1600x900 and 1280x720. A lot of the time when on a budget playing games at a lower resolution is a compromise you're more than willing to make in order to get decent quality settings. PLEASE do this for me, PLEASE!
  • JarredWalton - Thursday, February 21, 2013 - link

    Um... we've been testing 1366x768, 1600x900, and 1920x1080 as our graphics standards for laptops for a few years now. We don't do 1280x720 because virtually no laptops have that as their native resolution, and stretching 720p to 768p actually isn't a pleasant result (a 6.7% increase in resolution means the blurring is far more noticeable). For desktop cards, I don't see much point in testing most below 1080p -- who has a desktop not running at least 1080p native these days? The only reason for 720p or 900p on desktops is if your hardware is too old/slow, which is fine, but then you're probably not reading AnandTech for the latest news on GPU performance.
  • colonelclaw - Thursday, February 21, 2013 - link

    I must admit I'm a little bit confused by Titan. Reading this review gives me the impression it isn't a lot more than the annual update to the top-of-the-line GPU from Nvidia.
    What would be really useful to visualise would be a graph plotting the FPS rates of the 480, 580, 680 and Titan along with their release dates. From this I think we would get a better idea of whether or not it's a new stand out product, or merely this year's '780' being sold for over double the price.
    Right now I genuinely don't know if i should be holding Nvidia in awe or calling them rip-off merchants.
  • chizow - Friday, February 22, 2013 - link

    From Anandtech's 7970 Review, you can see relative GPU die sizes:

    http://images.anandtech.com/doci/5261/DieSize.png

    You'll also see the prices of these previous flagships has been mostly consistent, in the $500-650 range (except for a few outliers like the GTX 285 which came in hard economic times and the 8800Ultra, which was Nvidia's last ultra-premium card).

    You can check some sites that use easy performance rating charts, like computerbase.de, to get a quick idea of relative performance increases between generations, but you can quickly see that going from a new generation (not half-node) like G80 > GT200 > GF100 > GK100/110 should offer a 50%+ increase, generally closer to the 80% range over the predecessor flagship.

    Titan would probably come a bit closer to 100%, so it does outperform expectations (all of Kepler line did though), but it certainly does not justify the 2x increase in sticker price. Nvidia is trying to create a new Ultra-premium market without giving even a premium alternative. This all stems from the fact they're selling their mid-range part, GK104, as their flagship, which only occurred due to AMD's ridiculous pricing of the 7970.
