Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

The GTX Titan X represents a very interesting intersection for NVIDIA, crossing Maxwell's unparalleled power efficiency with the GTX Titan's flagship-level performance goals and similarly high power allowance. The end result is a chance to see how well Maxwell holds up when pushed to the limit, in the form of a 601mm2 GPU with a 250W TDP.

GeForce GTX Titan X Voltages
GTX Titan X Boost Voltage: 1.162v
GTX 980 Boost Voltage: 1.225v
GTX Titan X Idle Voltage: 0.849v

Starting off with voltages, based on our samples we find that NVIDIA has been rather conservative in their voltage allowance, presumably to keep power consumption down. With the highest stock boost bin hitting a voltage of just 1.162v, GTX Titan X operates notably lower on the voltage curve than the GTX 980. This goes hand-in-hand with GTX Titan X's stock clockspeeds, which are around 100MHz lower than the GTX 980's.

GeForce GTX Titan X Average Clockspeeds
Game                   GTX Titan X    GTX 980
Max Boost Clock        1215MHz        1252MHz
Battlefield 4          1088MHz        1227MHz
Crysis 3               1113MHz        1177MHz
Mordor                 1126MHz        1164MHz
Civilization: BE       1088MHz        1215MHz
Dragon Age             1189MHz        1215MHz
Talos Principle        1126MHz        1215MHz
Far Cry 4              1101MHz        1164MHz
Total War: Attila      1088MHz        1177MHz
GRID Autosport         1151MHz        1190MHz

Speaking of clockspeeds, taking a look at our average clockspeeds for the GTX Titan X and GTX 980 showcases just why the 50% larger GM200 GPU leads to an average performance advantage of only 35% for the GTX Titan X. While the max boost bins of both cards are over 1.2GHz, the GTX Titan X has to back off far more often to stay within its power and thermal limits. The final clockspeed difference between the two cards depends on the game in question, but overall we're looking at a real-world clockspeed deficit of roughly 50-100MHz for the GTX Titan X.
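
For readers who want to see how their own card behaves, this sort of average clockspeed figure is easy to approximate by polling NVML while a game or benchmark is running. The snippet below is a minimal sketch using the pynvml Python bindings (an assumption for illustration; it is not the tooling used for the table above): it samples the graphics clock once per second and reports the mean and the highest boost bin observed.

```python
# Minimal sketch: sample the GPU graphics clock once per second while a game
# or benchmark runs, then report the average. Assumes the nvidia-ml-py
# package (pynvml bindings) and an NVIDIA driver are installed.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

samples = []
duration_s = 120  # adjust to the length of the benchmark run
try:
    for _ in range(duration_s):
        samples.append(pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS))
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()

print(f"Average graphics clock: {sum(samples) / len(samples):.0f} MHz "
      f"(highest boost bin observed: {max(samples)} MHz)")
```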

Idle Power Consumption

Starting off with idle power consumption, the GTX Titan X comes out strong as expected. Even at 8 billion transistors, NVIDIA is able to keep power consumption at idle very low, with all of our recent single-GPU NVIDIA cards coming in at 73-74W at the wall.

Load Power Consumption - Crysis 3

Load Power Consumption - FurMark

Meanwhile load power consumption for GTX Titan X is more or less exactly what we’d expect. With NVIDIA having nailed down their throttling mechanisms for Kepler and Maxwell, the GTX Titan X has a load power profile almost identical to the GTX 780 Ti, the closest equivalent GK110 card. Under Crysis 3 this manifests itself as a 20W increase in power consumption at the wall – generally attributable to the greater CPU load from GTX Titan X’s better GPU performance – while under FurMark the two cards are within 2W of each other.

Compared to the GTX 980 on the other hand, this is of course a sizable increase in power consumption. With a TDP difference on paper of 85W, the difference at the wall is an almost perfect match. GTX Titan X still offers Maxwell’s overall energy efficiency, delivering greatly superior performance for the power consumption, but this is a 250W card and it shows. Meanwhile the GTX Titan X’s power consumption also ends up being very close to the unrestricted R9 290X Uber, which in light of the Titan’s 44% 4K performance advantage further drives home the point about NVIDIA’s power efficiency lead at this time.

Idle GPU Temperature

With the same Titan cooler and same idle power consumption, it should come as no surprise that the GTX Titan X offers the same idle temperatures as its GK110 predecessors: a relatively cool 32C.

Load GPU Temperature - Crysis 3

Load GPU Temperature - FurMark

Moving on to load temperatures, the GTX Titan X has a stock temperature limit of 83C, just like the GTX 780 Ti. Consequently this is exactly where we see the card top out under both FurMark and Crysis 3. Hitting 83C does lead to temperature throttling in most cases, though as we saw in our look at average clockspeeds, the resulting clockspeed drop is generally not a large one.
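
The same NVML polling approach can be used to watch this behavior on your own card. The sketch below (again assuming the pynvml bindings, and hard-coding the 83C stock target discussed above, which would need adjusting for other cards) logs temperature, graphics clock, and board power once per second, flagging samples where the card has reached its temperature target.

```python
# Rough sketch: watch GPU temperature against the stock 83C target and see
# whether the graphics clock backs off as the limit is reached.
# Assumes the nvidia-ml-py package (pynvml bindings) is installed.
import time
import pynvml

TEMP_TARGET_C = 83  # stock temperature target for GTX Titan X / GTX 780 Ti

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    for _ in range(60):  # one minute of once-per-second samples
        temp_c = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
        clock_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        power_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0  # board power, not wall power
        note = "  <-- at temperature target" if temp_c >= TEMP_TARGET_C else ""
        print(f"{temp_c}C  {clock_mhz}MHz  {power_w:.0f}W{note}")
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```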

Idle Noise Levels

Last but not least we have our noise results. With the Titan cooler backing it, the GTX Titan X has no problem keeping quiet at idle. At 37.0dB(A) it's technically the quietest card among our entire collection of high-end cards, and from a practical perspective is close to silent.

Load Noise Levels - Crysis 3

Load Noise Levels - FurMark

Much like its power profile, the GTX Titan X's noise profile almost perfectly mirrors that of the GTX 780 Ti. With the card hitting 51.3dB(A) under Crysis 3 and 52.4dB(A) under FurMark, it is respectively only 0.4dB and 0.1dB off from the GTX 780 Ti. From a practical perspective, what this means is that the GTX Titan X isn't quite the hushed card that the GTX 980 was – nor with a 250W TDP would we expect it to be – but for its chart-topping gaming performance it delivers some very impressive acoustics. The Titan cooler continues to serve NVIDIA well, allowing them to dissipate 250W in a blower without making a lot of noise in the process.

Overall then, from a power/temp/noise perspective the GTX Titan X is every bit as impressive as the original GTX Titan and its GTX 780 Ti sibling. Thanks to the Maxwell architecture and the Titan cooler, NVIDIA has been able to deliver a 50% increase in gaming performance over the GTX 780 Ti without an increase in power consumption or noise, once again giving them a flagship video card that can top the performance charts without an unnecessary sacrifice in power or acoustics.

Comments

  • Kevin G - Wednesday, March 18, 2015 - link

    There was indeed a bigger chip due closer to the GK104/GTX 680's launch: the GK100. However it was cancelled due to bugs in the design. A fixed revision eventually became the GK110 which was ultimately released as the Titan/GTX 780.

    After that there have been two more revisions. The GK110B is a quick respin from which all fully enabled dies stem (Titan Black/GTX 780 Ti). Then late last year nVidia surprised everyone with the GK210, which has a handful of minor architectural improvements (larger register files, etc.).

    The moral of the story is that building large dies is hard and takes lots of time to get right.
  • chizow - Monday, March 23, 2015 - link

    We don't know what happened to GK100. It is certainly possible, as I've guessed aloud numerous times, that AMD's 7970 and its overall lackluster pricing/performance afforded Nvidia the opportunity to scrap GK100 and respin it as GK110 while trotting GK104 out as its flagship, because GK104 was close enough to AMD's best and GK100 may have had problems as you described. All of that led to considerable doubt over whether we would see a big Kepler at all, a sentiment that was even dishonestly echoed by some Nvidia employees I got into it with on their forums.

    Only in October 2012 did we see signs of Big Kepler in the Titan supercomputer with K20X, but still no sign of a GeForce card. There's no doubt that a big die takes time, but Nvidia had always led with their big chip first since G80, and this was the first time they deviated from that strategy while parading what was clearly their 2nd-best, mid-range performance ASIC as a flagship.

    Titan X sheds all that nonsense and goes back to their gaming roots. It is their best effort, up front, no BS. 8Bn transistors Inspired by Gamers and Made by Nvidia. So as someone who buys GeForce for gaming first and foremost, I'm going to reward them for those efforts so they keep rewarding me with future cards of this kind. :)
  • Railgun - Wednesday, March 18, 2015 - link

    With regards to the price, 12GB of RAM isn't justification enough for it. Memory isn't THAT expensive in the grand scheme of things. What the Titan was originally isn't what the Titan X is now. They can't be seen as the same lineage. If you want to say memory is the key, the original Titan with its 6GB could be seen as still more than relevant today. Crysis is 45% faster in 4K with the X than the original. Is that the chip itself or the memory helping? I vote the former, given the 690 is 30% faster in 4K in the same game than the original Titan, with only 4GB of total memory. VRAM isn't really going to be relevant for a while, other than for those who are running stupidly large spans. It's a shame, as Ryan touches on VRAM usage in Middle Earth but doesn't actually indicate what's being used. There too, the 780Ti beats the original Titan sans huge VRAM reserves. Granted, barely, but the point is that VRAM isn't the reason. This won't be relevant for a bit, I think.

    You can't compare an aftermarket price to how an OEM prices their products. The top-tier card other than the TiX is the 980, and as has been mentioned ad nauseam, the TiX is NOT worth 80% more given its performance. If EVGA wants to OC a card out of their shop and charge 45% more than a stock-clocked card, then buyer beware if it's not a 45% gain in performance. I for one don't see the benefit of a card like that. The convenience isn't there given the tools and community support for OCing something oneself.

    I too game on 25x14 and there've been zero issues regarding VRAM, or the lack thereof.
  • chizow - Monday, March 23, 2015 - link

    I didn't say VRAM was the only reason, I said it was one of the reasons. The bigger reason for me is that it is the FULL BOAT GM200 front and center. No waiting. No cut cores. No cut SMs for compute. No cut-down part because of TDP. It's 100% of it up front, 100% of it for gaming. I'm sold and onboard until Pascal. That really is the key factor: who wants to wait on unknown commodities and timelines when you know this will set you within +/-10% of the next fastest part's performance, and you can guarantee you get it today for maybe a 25-30% premium? I guess it really depends on how much you value your current and near-future gaming experience. I knew from the day I got my ROG Swift (with 2x670 SLI) that I would need more to drive it. The 980 was a bit of a sidegrade in absolute performance and I still knew I needed more perf, and now I have it with Titan X.

    As for VRAM, 12GB is certainly overkill today, but I'd say 6GB isn't going to be enough soon enough. Games are already pushing 4GB (SoM, FC4, AC:U) and that's still with last-gen type textures. Once you start getting console ports with PC texture packs I could see 6 and 8GB being pushed quite easily, as that is the target framebuffer for consoles (2+6). So yes, while 12GB may be too much, 6GB probably isn't enough, especially once you start looking at 4K and Surround.

    Again, if you don't think the price is worth it over a 980 that's fine and fair, but the reality of it is, if you want better single-GPU performance there is no alternative. A 2nd 980 for SLI is certainly an option, but for my purposes and my resolution, I would prefer to stick to a single-card solution if possible, which is why I went with a Titan X and will be selling my 980 instead of picking up a 2nd one as I originally intended.

    Best part about Titan X is it gives another choice and a target level of performance for everyone else!
  • Frenetic Pony - Tuesday, March 17, 2015 - link

    They could've halved the RAM, dropped the price by $200, and done a lot better without much if any performance hit.
  • Denithor - Wednesday, March 18, 2015 - link

    LOL.

    You just described the GTX 980 Ti, which will likely launch within a few months to answer the 390X.
  • chizow - Wednesday, March 18, 2015 - link

    @Frenetic Pony, maybe now, but what about once DX12 drops and games are pushing over 6GB? We already see games saturating 4GB, and we still haven't seen next-gen engine games like UE4. Why compromise for a few hundred less? You haven't seen all the complaints from 780Ti users about how 3GB isn't enough anymore? Shouldn't be a problem for this card, which is just one less thing to worry about.
  • LukaP - Thursday, March 19, 2015 - link

    Games don't push 4GB... Check the LTT ultrawide video, where he barely got Shadow of Mordor on ultra to go past 4GB on 3 ultrawide 1440p screens.

    And as a game dev I can tell you, with proper optimisations, more than 4GB on a GPU is insane, unless you just load stuff in with a predictive algorithm to avoid PCIe bottlenecks.

    And please do show me where a 780Ti user isn't happy with his card's performance at 1080-1600p. The card does, and will continue to, perform great at those resolutions, since games won't really advance, due to consoles limiting things again.
  • LukaP - Thursday, March 19, 2015 - link

    Also, DX12 won't make games magically use more VRAM. All it really does is make the CPU and GPU communicate better. It won't magically make games run or look better. Both of those are up to the devs, and the 'look better' part is certainly not about textures or polycounts. It's merely the number of draw calls per frame going up, meaning more UNIQUE objects (as opposed to simply more objects, which can be achieved easily through instancing in any modern engine, though Ubisoft haven't learned that yet).
  • chizow - Monday, March 23, 2015 - link

    DX12 raises the bar for all games by enabling better visuals; you're going to get better top-end visuals across the board. Certainly you don't think UE4, when it debuts, will have the same reqs as DX11-based games on UE3?

    Even if you keep the same size textures as before (2K or 4K assets, as is common now), the fact that you are drawing more polygons, enabled by DX12's lower overhead and higher draw call/poly capabilities, means they all need to be textured, meaning a higher VRAM requirement unless you are using the same textures over and over again.

    Also, since you are a game dev, you would know devs are moving more and more towards bindless textures or megatextures, which specifically make great use of textures staying resident in local VRAM for faster access, rather than having to optimize and cache/load/discard them.
