Overclocking

Finally, no review of a GTX Titan card would be complete without a look at overclocking performance.

From a design standpoint, GTX Titan X already ships close to its power limits. NVIDIA’s 250W TDP can only be raised another 10% – to 275W – meaning that in TDP-limited scenarios there’s not much headroom to play with. On the other hand, with the stock voltage being so low, in clockspeed-limited scenarios there’s a lot of room for pushing the performance envelope through overvolting. And neither of these options addresses the most potent aspect of overclocking, which is pushing the entire clockspeed curve higher at the same voltages by increasing the clockspeed offsets.

GTX 980 ended up being a very capable overclocker, and as we’ll see it’s much the same story for the GTX Titan X.

GeForce GTX Titan X Overclocking
                  Stock      Overclocked
Core Clock        1002MHz    1202MHz
Boost Clock       1076MHz    1276MHz
Max Boost Clock   1215MHz    1452MHz
Memory Clock      7GHz       7.8GHz
Max Voltage       1.162v     1.218v

Even when packing 8B transistors into a 601mm2 die, the GM200 GPU backing the GTX Titan X continues to offer the same kind of excellent overclocking headroom that we’ve come to see from the other Maxwell GPUs. Overall we were able to increase our GPU clockspeed by 200MHz (20%) and the memory clockspeed by 800MHz (11%). At its peak this leads to the GTX Titan X pushing a maximum boost clock of 1.45GHz, and while TDP restrictions mean it can’t sustain this under most workloads, it’s still an impressive outcome for overclocking such a large GPU.
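To put those percentages in context, a quick back-of-the-envelope sketch (not from the review's own test tooling) shows how the clock gains translate into memory bandwidth on the card's 384-bit bus:

```python
# Illustrative arithmetic for the overclock figures above: percentage
# gains, plus the memory bandwidth implied by the 7GHz -> 7.8GHz
# effective data rate on GTX Titan X's 384-bit memory bus.

def pct_gain(stock, oc):
    """Percentage increase from stock to overclocked."""
    return (oc - stock) / stock * 100

core_gain = pct_gain(1002, 1202)   # base core clock, MHz
mem_gain = pct_gain(7.0, 7.8)      # effective memory data rate, Gbps per pin

# Bandwidth = data rate (Gbps per pin) * bus width (bits) / 8 bits-per-byte
bus_width_bits = 384
bw_stock = 7.0 * bus_width_bits / 8   # GB/s at stock
bw_oc = 7.8 * bus_width_bits / 8      # GB/s overclocked

print(f"core: +{core_gain:.0f}%, memory: +{mem_gain:.0f}%")
print(f"bandwidth: {bw_stock:.0f} GB/s -> {bw_oc:.0f} GB/s")
```

The asymmetry is worth noting: the core overclock (20%) is nearly twice the memory overclock (11%), which is why the benchmark results below say more about compute limits than bandwidth limits.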

OC: Battlefield 4 - 3840x2160 - Ultra Quality - 0x MSAA

OC: Crysis 3 - 3840x2160 - High Quality + FXAA

OC: Shadow of Mordor - 3840x2160 - Ultra Quality

OC: The Talos Principle - 3840x2160 - Ultra Quality

OC: Total War: Attila - 3840x2160 - Max Quality + Perf Shadows

The performance gains from this overclock are a very consistent 16-19% across all 5 of our sample games at 4K, indicating that we're almost entirely GPU-bound as opposed to memory-bound. Though not quite enough to push the GTX Titan X above 60fps in Shadow of Mordor or Crysis 3, this puts the card even closer to that mark than it was at stock. Meanwhile we do crack 60fps in Battlefield 4 and The Talos Principle.
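The "GPU-bound" inference above follows from comparing the FPS gain to the clock gain: a purely core-clock-bound title would scale 1:1 with the 20% core overclock, and 16-19% sits close to that ceiling. A minimal sketch of that reasoning, using hypothetical round-number framerates rather than the review's measured data:

```python
# If a game were purely core-clock-bound, a 20% core overclock would
# yield ~20% more FPS. Scaling efficiency measures how much of the
# clock gain is realized as an FPS gain.

def scaling_efficiency(fps_stock, fps_oc, clock_gain_pct):
    """Fraction of the clockspeed increase realized as an FPS increase."""
    fps_gain_pct = (fps_oc - fps_stock) / fps_stock * 100
    return fps_gain_pct / clock_gain_pct

# Hypothetical example: a title going from 50 to 59 fps (+18%)
# against a 20% core overclock.
eff = scaling_efficiency(50, 59, 20)
print(f"{eff:.0%} of the clock gain realized as FPS")
```

Efficiencies near 100% point to a core-clock bottleneck; substantially lower figures would suggest a memory bandwidth, CPU, or TDP limit is capping the gains instead.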

OC: Load Power Consumption - Crysis 3

OC: Load Power Consumption - FurMark

OC: Load GPU Temperature - Crysis 3

OC: Load GPU Temperature - FurMark

OC: Load Noise Levels - Crysis 3

OC: Load Noise Levels - FurMark

The tradeoff for this overclock is of course power and noise, both of which see significant increases. In fact the jump in power consumption under Crysis 3 is a bit unexpected – further research shows that the GTX Titan X shifts from being temperature-limited to TDP-limited as a result of our overclocking efforts – while FurMark is in line with the 25W increase in TDP. The 55dB noise levels that result, though not extreme, also mean that GTX Titan X is drifting farther away from being a quiet card. Ultimately it’s a pretty straightforward tradeoff for a further 16%+ increase in performance, but a tradeoff nonetheless.
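The 25W figure mentioned above falls directly out of the power-target math from the start of this section. A trivial sketch of that arithmetic (board power only; wall measurements would additionally include PSU conversion losses):

```python
# TDP headroom on GTX Titan X: a 250W stock limit with NVIDIA's
# maximum +10% power-target offset yields the 25W increase the
# FurMark result lines up with.

stock_tdp_w = 250
max_offset = 0.10  # maximum power-target slider setting

max_tdp_w = stock_tdp_w * (1 + max_offset)
headroom_w = max_tdp_w - stock_tdp_w

print(f"max board power: {max_tdp_w:.0f}W (+{headroom_w:.0f}W)")
```

This is also why the Crysis 3 behavior changes character: once the overclocked card consistently hits this 275W ceiling, boost clocks become governed by the power limit rather than the 83C temperature target.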


  • Dug - Thursday, March 19, 2015 - link

    Thank you for pointing this out.
  • chizow - Monday, March 23, 2015 - link

    Uh, they absolutely do push 4GB, it's not all for the framebuffer but they use it as a texture cache that absolutely leads to a smoother gaming experience. I've seen SoM, FC4, AC:Unity all use the entire 4GB on my 980 at 1440p Ultra settings (textures most important ofc) even without MSAA.

    You can optimize as much as you like but if you can keep textures buffered locally it is going to result in a better gaming experience.

    And for 780Ti owners not being happy, believe what you like, but these are the folks jumping to upgrade even to 980 because that 3GB has crippled the card, especially at higher resolutions like 4K. 780Ti beats 290X in everything and every resolution, until 4K.

    https://www.google.com/?gws_rd=ssl#q=780+ti+3gb+no...
  • FlushedBubblyJock - Thursday, April 2, 2015 - link

    Funny how 3.5GB was just recently a kick to the insufficient groin, a gigantic and terrible lie, and worth a lawsuit due to performance issues... as 4GB was sorely needed, now 4GB isn't used....

    Yes 4GB isn't needed. It was just 970 seconds ago, but not now!
  • DominionSeraph - Tuesday, March 17, 2015 - link

    You always pay extra for the privilege of owning a halo product.
    Nvidia already rewrote the pricing structure in the consumer's favor when they released the GTX 970 -- a card with $650 performance -- at $329. You can't complain too much that they don't give you the GTX 980 for $400. If you want above the 970 you're going to pay for it. And Nvidia has hit it out of the ballpark with the Titan X. If Nvidia brought the high end of Maxwell down in price AMD would pretty much be out of business considering they'd have to sell housefire Hawaii at $150 instead of being able to find a trickle of pity buyers at $250.
  • MapRef41N93W - Tuesday, March 17, 2015 - link

    Maxwell architecture is not designed for FP64. Even the Quadro doesn't have it. It's one of the ways NVIDIA saved so much power on the same node.
  • shing3232 - Tuesday, March 17, 2015 - link

    I believe they could put FP64 into it if they want, but power efficiency is a good way to make ads.
  • MapRef41N93W - Tuesday, March 17, 2015 - link

    Would have required a 650mm^2 die which would have been at the limits of what can be done on TSMC 28nm node. Would have also meant a $1200 card.
  • MapRef41N93W - Tuesday, March 17, 2015 - link

    And the Quadro a $4000 card doesn't have it, so why would a $999 gaming card have it.
  • testbug00 - Tuesday, March 17, 2015 - link

    would it have? No. They could have given it FP64. Could they have given it FP64 without pushing the power and heat up a lot? Nope.

    The 390X silicon will be capable of over 3TFLOPS FP64 (the 390X probably locked to 1/8 performance, however) and will be a smaller chip than this. The price to pay will be heat and power. How much? Good question.
  • dragonsqrrl - Tuesday, March 17, 2015 - link

    Yes, it would've required a lot more transistors and die area with Maxwell's architecture, which relies on separate fp64 and fp32 cores. Comparing the costs associated with double precision performance directly to GCN is inaccurate.
