Power, Temperature, & Noise

Last but certainly not least, we have our obligatory look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason to put up with it.

It’s for that reason that GPU manufacturers also seek to keep power usage down, and under normal circumstances there’s a pretty clear relationship between power consumption, heat generated, and the amount of noise the fans will generate to remove that heat. At the same time, however, this is an area NVIDIA is focusing on for Titan: a premium product means they can use premium materials, going above and beyond what more traditional plastic-shrouded cards can do for noise dampening.

GeForce GTX Titan Voltages
Titan Max Boost Titan Base Titan Idle
1.1625v 1.012v 0.875v

Stopping quickly to take a look at voltages, Titan’s peak stock voltage is 1.1625v, which correlates to its highest speed bin of 992MHz. As the clockspeeds go down these voltages drop as well, to a load low of 0.95v at 744MHz. This ends up being a bit less than the GTX 680 and most other desktop Kepler cards, which go up just a bit higher to 1.175v. Since NVIDIA classifies 1.175v as an “overvoltage” on Titan, it looks like GK110 isn’t going to be quite as tolerant of higher voltages as GK104 was.

GeForce GTX Titan Average Clockspeeds
Max Boost Clock 992MHz
DiRT:S 992MHz
Shogun 2 966MHz
Hitman 992MHz
Sleeping Dogs 966MHz
Crysis 992MHz
Far Cry 3 979MHz
Battlefield 3 992MHz
Civilization V 979MHz

One thing we quickly notice about Titan is that, thanks to GPU Boost 2.0 and the shift from a primarily power-based boost system to a temperature-based one, Titan hits its maximum speed bin far more often and sustains it more consistently, especially since there’s no longer a separate power target; any power limits are now based entirely on TDP. Half of our games have an average clockspeed of 992MHz, or in other words never triggered a power or thermal condition that would require Titan to scale back its clockspeed. For the rest of our tests the worst average clockspeed was all of 2 bins (26MHz) lower at 966MHz, the result of hitting a mix of thermal and power limits.

On a side note, it’s worth pointing out that these are well in excess of NVIDIA’s official boost clock for Titan. With Titan’s boost bins being based almost entirely on temperature, the average boost speed for Titan is going to be more dependent on environment (intake) temperatures than GTX 680 was, so our numbers are almost certainly a bit higher than what one would see in a hotter environment.
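To make the behavior we’re describing a bit more concrete, here is a simplified sketch of boost bin selection as a feedback loop. This is our own illustration, not NVIDIA’s actual (proprietary) GPU Boost 2.0 algorithm; the 13MHz bin size and 992MHz top bin come from our observations above, while the 837MHz base clock, 80C temperature target, and 250W TDP are NVIDIA’s official figures.

```python
# Simplified illustration of temperature/TDP-driven boost bin selection.
# Not NVIDIA's actual GPU Boost 2.0 algorithm; constants as noted above.
BIN_SIZE_MHZ = 13
MAX_BOOST_MHZ = 992     # highest speed bin observed on our sample
BASE_CLOCK_MHZ = 837    # Titan's official base clock
TEMP_TARGET_C = 80
TDP_W = 250

def next_clock(current_mhz, gpu_temp_c, board_power_w):
    """Drop one bin when over the temperature target or TDP,
    otherwise climb one bin back toward the maximum boost bin."""
    if gpu_temp_c > TEMP_TARGET_C or board_power_w > TDP_W:
        return max(current_mhz - BIN_SIZE_MHZ, BASE_CLOCK_MHZ)
    return min(current_mhz + BIN_SIZE_MHZ, MAX_BOOST_MHZ)

print(next_clock(992, 81, 230))  # 979: one bin down after crossing 80C
print(next_clock(979, 81, 230))  # 966: a second violation costs another bin
```

Run against a sustained load, a loop like this oscillates within a bin or two of the highest sustainable clockspeed, which is exactly the behavior the averages above reflect.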

Starting as always with a look at power, there’s nothing particularly out of the ordinary here. AMD and NVIDIA have become very good at managing idle power through power gating and other techniques, and as a result idle power has come down by leaps and bounds over the years. At this point we still typically see some correlation between die size and idle power, but that’s a few watts at best. So at 111W at the wall, Titan is up there with the best cards.

Moving on to our first load power measurement, as we’ve dropped Metro 2033 from our benchmark suite we’ve replaced it with Battlefield 3 as our game of choice for measuring peak gaming power consumption. BF3 is a demanding game to run, but it presents a rather typical power profile, which of all the games in our benchmark suite makes it one of the best representatives.

In any case, as we can see Titan’s power consumption comes in below all of our multi-GPU configurations, but higher than any other single-GPU card. Titan’s 250W TDP is 55W higher than GTX 680’s 195W TDP, and with a 73W difference at the wall this isn’t too far off. A bit more surprising is that it’s drawing nearly 50W more than our 7970GE at the wall, given that we know the 7970GE usually gets close to its 250W TDP. At the same time, since this is a live game benchmark, there are more factors than just the GPU in play. Generally speaking, the higher a card’s performance here, the harder the rest of the system will have to work to keep said card fed, which further increases power consumption at the wall.
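As a quick sanity check on those numbers, the arithmetic below converts the wall measurement back to a rough DC-side figure. The PSU efficiency value is purely an assumption on our part (roughly what a good 80 Plus unit manages at these loads), not something we measured.

```python
# Back-of-the-envelope accounting for the Titan vs. GTX 680 gap under BF3.
# Only the 73W wall delta and the TDPs come from our data; the PSU
# efficiency is an assumed, illustrative value.
wall_delta_w = 73
psu_efficiency = 0.88          # assumption: ~80 Plus-class efficiency at this load
tdp_delta_w = 250 - 195        # 55W difference in official TDPs

dc_delta_w = wall_delta_w * psu_efficiency     # ~64W more DC-side draw
leftover_w = dc_delta_w - tdp_delta_w          # ~9W not explained by the GPUs alone

print(f"~{dc_delta_w:.0f}W at the DC side, ~{leftover_w:.0f}W beyond the TDP gap")
```

That leftover handful of watts is consistent with the CPU and the rest of the platform working harder to keep the faster card fed.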

Moving to FurMark, our results keep the same order, but the gap between the GTX 680 and Titan widens, while the gap between Titan and the 7970GE narrows. Titan and the 7970GE shouldn’t be too far apart from each other in most situations due to their similar TDPs (even if NVIDIA and AMD TDPs aren’t calculated in quite the same way), so in a pure GPU power consumption scenario this is what we would expect to see.

Titan for its part is the traditional big NVIDIA GPU, and while NVIDIA does what they can to keep it in check, at the end of the day it’s still going to be among the more power hungry cards in our collection. Power consumption itself isn’t generally a problem with these high end cards, so long as the system has the means to cool the card and doesn’t generate too much noise in doing so.

Moving on to temperatures, for a single card idle temperatures should be under 40C for anything with at least a decent cooler. Titan for its part is among the coolest at 30C; its large heatsink combined with its relatively low idle power consumption makes it easy to cool here.

Because Titan’s boost mechanisms are now temperature based, Titan’s temperatures are going to naturally gravitate towards its default temperature target of 80C as the card raises and lowers clockspeeds to maximize performance while keeping temperatures at or under that level. As a result just about any heavy load is going to see Titan within a couple of degrees of 80C, which makes for some very predictable results.
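If you’d like to watch this equilibrium-seeking behavior on your own card, polling the driver once a second is enough to see the clocks and temperature converge on the target. The sketch below assumes a single GPU and a driver recent enough to expose these fields through nvidia-smi’s query interface.

```python
import subprocess
import time

# Poll GPU temperature, SM clock, and board power once per second via nvidia-smi.
# Assumes one GPU and a driver that supports these query fields.
QUERY = "temperature.gpu,clocks.sm,power.draw"

while True:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip().splitlines()[0]
    temp_c, clock_mhz, power_w = (v.strip() for v in out.split(","))
    print(f"{temp_c}C  {clock_mhz}MHz  {power_w}W")
    time.sleep(1)
```

Under a sustained gaming load you should see the temperature settle within a degree or two of the 80C target while the clock hovers a bin or two below the top boost bin.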

Looking at our other cards, while the various NVIDIA cards are still close in performance, the 7970GE ends up being quite a bit cooler thanks to its open air cooler. This is typical of what we see with good open air coolers, though with NVIDIA’s temperature based boost system I’m left wondering if perhaps those days are numbered. So long as 80C is a safe temperature, there’s little reason not to gravitate towards it with a system like NVIDIA’s, regardless of the cooler used.

Load GPU Temperature - FurMark

With FurMark we see everything pull closer together as Titan holds fast at 80C while most of the other cards, especially the Radeons, rise in temperature. At this point Titan is clearly cooler than a GTX 680 SLI, 2C warmer than a single GTX 680, and still a good 10C warmer than our 7970GE.

Idle Noise Levels

Just as with the GTX 690, one of the things NVIDIA focused on was construction choices and materials that reduce the noise generated. So long as you can keep noise down, then for the most part power consumption and temperatures don’t matter.

Simply looking at idle shows that NVIDIA is capable of delivering on their claims. At 37.8dB, Titan is the quietest actively cooled high-end card we’ve measured yet, besting even the luxury GTX 690 and the also well-constructed GTX 680. Though really, with the loudest setup being all of 40.5dB, none of these setups is anywhere near loud at idle.

It’s with load noise that we finally see the full payoff of Titan’s build quality. At 51dB it’s only marginally quieter than the GTX 680, but as we recall from our earlier power data, Titan is drawing nearly 70W more than GTX 680 at the wall. In other words, despite the fact that Titan is drawing significantly more power than GTX 680, it’s still as quiet as or quieter than the aforementioned card. This, coupled with Titan’s already high performance, is Titan’s true power in NVIDIA’s eyes; it’s not just fast, but despite its speed and despite its TDP it’s as quiet as any other blower based card out there, allowing them to get away with things such as Tiki and tri-SLI systems at reasonable noise levels.

Much like what we saw with temperatures under FurMark, noise under FurMark has our single-GPU cards bunching up. Titan goes up just enough to tie the GTX 680 in our pathological scenario, while our multi-GPU cards shoot up well past Titan and the 7970GE jumps to just shy of Titan. This is a worst case scenario, but it’s a good example of how GPU Boost 2.0’s temperature functionality means that Titan quite literally keeps its cool and thereby keeps its noise in check.

Of course we would be remiss not to point out that in all these scenarios the open air cooled 7970GE is still quieter, and in our gaming scenario by quite a bit. Not that Titan is loud, but it doesn’t compare to the 7970GE. Ultimately we get to the age old debate between blowers and open air coolers; open air coolers are generally quieter, but blowers allow for more flexibility in product design, and are more lenient with cases that have poor airflow.

Ultimately Titan uses a blower so that NVIDIA can do concept PCs like Tiki, something an open air cooler would never be suitable for. For DIY builders the benefits may not be as pronounced, but this is also why NVIDIA is focusing so heavily on boutique systems where the space difference really matters. Realistically speaking, AMD’s best blower-equipped card is the vanilla 7970, a less power hungry but also much less powerful card.

Comments (337)

  • JeBarr - Thursday, February 21, 2013 - link

    I would guess because as time goes by the reviewers here (and elsewhere) think they need to bench at settings used by the "majority". Even when that majority doesn't frequent, or even know the existence of, Anandtech.com. Go figure.

    I don't like it any more than you do...but for different reasons.

    I for one was happy to have a review site still benching at 16:10...which is what the long-time hardware enthusiasts/gamers prefer, that is, when they can't find a good CRT monitor ;)

    Just think of this review as the new bench standard going forward. A new starting point, if you will.
  • Ryan Smith - Monday, February 25, 2013 - link

    Bench 2013 will be going live soon. The backend is done (it's what I used to store and generate the charts here), but the frontend is part of a larger project...

    As for why the settings change, when we refresh our suite we sometimes change our settings to match what the latest generation of cards can do. When Titan sets the high bar for example, running 2560 at Ultra with 4xMSAA is actually practical.
  • TheJian - Thursday, February 21, 2013 - link

    NO Borderlands 2 (~6 million copies sold rated 89! not counting the addons rated high also)
    No Diablo3 (I hate the DRM but 10million+ sold of course rated high, but not by users)
    No Guild Wars 2 (MMO with 3million copies sold, rated 90!), even WOW Mists of Pandaria has 3million or so now and 11 million playing the game's total content. I don't play WOW but it's still got a TON of users.
    No Assassin's Creed 3 (brings 680/7970 to low 30's 2560x1600)
    Crysis 3, warhead needs to die, and this needs to replace it (at the very LEAST). As shown below NOBODY is playing warhead. Wasted page space, and time spent benching it.

    Instead we get Crysis warhead...ROFL Well what can we expect Ryan still loves AMD.
    http://www.gametracker.com/search/warhead/
    Notice all the empty servers? Go ahead list them by players only 3 had over 10!..Most are ZERO players...LOL...Why even waste your time benchmarking this ignored game? Just to show NV weakness?
    Dirt Showdown - Raise your hand if you play this...Nope, you're all playing Dirt3 (wisely, or F1 etc anything that rates better than showdown)
    User ratings on metacritic of 70/4.7 (out of TEN not 5) and best summarized by gamespy (rated it a 40/100 on the frontpage of the metacritic site: http://www.metacritic.com/game/pc/dirt-showdown
    "DiRT: Showdown delivers bargain-basement entertainment value for the high, high price of $50. With its neutered physics, limited driving venues, clunky multiplayer, and diminished off-road racing options, discerning arcade racing fans should just write this one off as an unanticipated pothole in Codemaster's trailblazing DiRT series. "
    If you're going to use a racing game, at least make it a good one, not just the one AMD wins in. Why not F1 2012 (scored 80 at metacritic/6.8 from users)? AMD wins in warhead which is also why crysis warhead is chosen even though nobody plays it (it's from 2008!). Again check the server list, who are you testing this for? What does it represent today? What other game is based on its engine? It's representing nothing, correct? Nobody plays showdown either.

    How about adding some games people actually PLAY. I thought the whole point of benchmarking is to show us how games WE PLAY will run, is that not true at anandtech?

    Also no discussion of the frame delay ala Techreport:
    http://techreport.com/review/24381/nvidia-geforce-...
    No discussion of the frame latency issues that AMD is working on game by game. Their current beta I think just fixed the skyrim/borderland/guild wars2 issues which were awful.
    http://techreport.com/review/24218/a-driver-update...
    This has been an ongoing problem Anandtech (ryan?) seems to just ignore. AMD is just getting to fixing this stuff in Jan...LOL. You can read more about it in the rematch of the 660TI/7950 here:
    http://techreport.com/review/23981/radeon-hd-7950-...
    Of course you can start at the beginning but this is where they recommend the 660TI and why (dec 2012 article).
    "The FPS average suggests near-parity performance between the 7950 and the GTX 660 Ti, with a tiny edge to the GeForce. The 99th percentile frame time, though, captures the impact of the Radeon's frame latency issues and suggests the GTX 660 Ti is easily the superior performer."
    More:
    "Instead, we have a crystal clear recommendation of the GeForce GTX 660 Ti over the Radeon HD 7950 for this winter's crop of blockbuster games. Perhaps AMD will smooth out some of the rough patches in later driver releases, but the games we've tested are already on the market—and Nvidia undeniably delivers the better experience in them, overall. "
    Even Tomshardware reports on delays now (albeit the wrong metric...LOL). Read the comments at techreport for why they're using the wrong one.

    No wonder they left out the xmas blockbusters and diablo3 (which will still sell probably 15million over its life even though I would never buy it). I can name other games that are hot and new also:
    Dishonored, Deadspace 3, max payne 3, all highly rated. Max 3 barely hits 50's on top cards at 2560x1600 (7970ghz, 680 even lower), excellent test game and those are NOT the minimums (which can bring you to 20's/teens on lower cards). Witcher 2 (witcher 3 is coming), with uber sampling ENABLED is a taxer also.

    Dragon Age 2 at 2560x1600 will bring 7970/680 to teens/20's at minimums also, barely hits 40's avg (why use ONLY AVG at techspot I don't know, but better than maxes).
    http://www.techspot.com/review/603-best-graphics-c...

    START reporting MIN FPS for every game benched! There should be more discussion of the fact that in a lot of these games you hit teens for even $500 cards at 2560x1600 maxed out. Max fps means NOTHING. IF you hit 10-20fps a lot in a game your max means nothing. You won't want to play at that res, so what have you shown me? NOTHING. You should ALWAYS report MIN FPS as that dictates our gameplay experience and if it isn't always above 30 life sucks usually. Farcry 3 hits below 30 on both 680/7970 at 2560x1600.
    http://www.hardocp.com/article/2013/02/21/nvidia_g...
    And they don't have them on ULTRA, only titan is and none on 4xmsaa. At least they're giving max details/res you can expect to play and what it's min will be (better, you at least have USEFUL info after reading their benchmarks).

    From your article:
    "This is enough to get Titan to 74fps at 2560 with 4xMSAA, which is just fast enough to make BF3 playable at those settings with a single GPU."
    Why didn't you just report the minimums so we can see when ALL cards hit 30fps or less in all resolutions tested? If the game doesn't give a way to do this use fraps while running it (again, for ALL games). So it takes 74fps to get playable in BF3? It's easier to just give the minimums so people can see, otherwise are we supposed to attempt to extrapolate every one of your games without MINS listed? You did it for us in this sentence, but for ONE card and even then it's just a comment, not a number we can work with. It's YOU extrapolating your own guess that it would be playable given 74fps. What kind of benchmarking is this? I won't even get into your other comments throughout the articles on titan, It's more important to me to key on what you totally ignore that is VERY important to anyone picking ANY gpu. SMOOTHNESS of gameplay (latency testing) and MIN FPS so we know where we have no prayer of playing or what to expect playable on a given gpu. This is why Hardocp actually points to you guys as why your benchmarks suck. It's linked in most of their articles...LOL. FIX IT.
    http://www.hardocp.com/article/2008/02/11/benchmar...
    They have that in nearly every gpu article including the titan article. It's a valid point. But if you're not going to use IN GAME play, at least give min fps for canned etc. That link is in the test setup page of nearly every article on hardocp, you'd think you'd fix this so they'd stop. Your benchmarks represent something that doesn't reflect gameplay in most cases. The maxfps doesn't dictate fun factor. MIN does.

    One comment on Titan, I'd think about it at $800-850. Compute isn't important today at home for me, and won't be until more games use it like civ5 (they're just scratching surface here). At that point this card could become a monster compared to 690 without heat, noise etc. One day it may be worth $1000 to me, but for now it's not worth more than $800 (to me, no SFF needed, no compute needed). I don't like any dual chips or running multiple cards (see microstutter, latency delays etc), so once cheaper this would be tops on my list, but I don't usually spend over $360 on a card anyway...LOL. Most of the first run will go to boutique shops (20K first run I think). Maybe they'll drop it after that.

    LOL at anyone thinking the price sucks. Clearly you are NOT the target market. If your product sells out at a given price, you priced it right. That's good business, and actually you probably should have asked more if it's gone in hours. You can still run an SLI of titan in SFF, what other card can do that? You always pay a premium for the TOP card. Intel's extreme chips are $1000 too...No surprise. Same thing on the pro side is $2500 and not much different. It's 20% slower than 690, but 690 can't go into SFF for the most part and certainly not as quiet or controllable. Also blows away 690 in compute if someone is after that. Though they need APPS that test this, not some home made anandtech benchmark. How about testing something I can actually USE and is relevant (no I don't count folding@home or bitcoin mining either, they don't make me money-a few coins?...LOL).
  • JeBarr - Thursday, February 21, 2013 - link

    I'm pretty sure Ryan has mentioned the benches you want are forthcoming. Maybe they haven't figured it all out yet...i dunno....but like you, I've been waiting what seems like a year or more for Anandtech to catch up with reality in GPU benching.
  • CeriseCogburn - Tuesday, February 26, 2013 - link

    Yes, well I've found Frame Rate Target to be an absolute GEM in this area:

    " START reporting MIN FPS for every game benched! There should be more discussion of the fact that in a lot of these games you hit teens for even $500 cards at 2560x1600 maxed out. Max fps means NOTHING. IF you hit 10-20fps a lot in a game your max means nothing. "

    If you crank to max settings then have frame drop issues, FRAME RATE TARGET by nVidia of course, is excellent for minimizing and eliminating that issue.
    It really is a great and usable feature, and of course it is for the most part now already completely ignored.

    It was ported back to at least the top 500 series cards, I don't remember exactly which ones right now, but that feature should have an entire article dedicated to it at every review site. It is AWESOME, and directly impacts minimum frame rates, lofting nVidia to absolutely playable vs amd.

    I really think the bias won't ever be overcome. We used to hear nothing but eyefinity, yet now with nvidia cards capable of 4 monitors out of the box, it has suddenly become very unpopular for reviewers to mention eyefinity, surround, and surround plus ONE MORE in the nVidia case, without the need for any special adapters in many of nVidia's partners' card releases.

    So, it's really a sick situation.
  • Urbanos - Friday, February 22, 2013 - link

    he went through all the trouble of benchmarking in order to show that entry points for budget conscious users can get through Titan, but it doesn't actually prove that Titan is even worth the money without comparing it to at least 1 of its bigger competitors in the GPGPU market. Can you please consider adding that or having a new review based on the compute only.
  • codedivine - Friday, February 22, 2013 - link

    I am certainly interested in looking at the Xeon Phi if I can find the time and if we can arrange the resources to do so.

    My performance expectation (based on Intel white papers) is about 1700-1800 GFLOPS for SGEMM and 800-900 GFLOPS for DGEMM on the Xeon Phi 5110P. However, there are also a few benchmarks where I am expecting them to win as well thanks to the large cache on the Phi. Stay tuned.
  • Ryan Smith - Monday, February 25, 2013 - link

    This is really a consumer/prosumer level review, so the cards we're going to judge it against need to be comparable in price and intended audience. Not only can we not get some of those parts, but all of them cost many times more than Titan.

    If we were ever able to review K20, then they would be exactly the kinds of parts we'd try to include though.
  • kivig - Friday, February 22, 2013 - link

    There is a whole community of 3D people interested.
    Or when will it get added to the bench table?
  • etriky - Saturday, February 23, 2013 - link

    +1
    Since this card at this price point is pointless for gaming, I figured the article would be heavy on compute applications in order to give us a reason for its existence.

    But then, nothing. No SmallLuxGpu or Cycles. Not even any commercial packages like Octane, or any of the Adobe products. I know LuxGPU and Blender used to be in the test suite. What happened?
