The Final Word On Overclocking

Before we jump into our performance breakdown, I want to take a few minutes to follow up on our overclocking coverage from Tuesday. Since we couldn't reveal performance numbers at the time (and quite honestly we hadn't even finished evaluating Titan), we couldn't give you the complete story. So some clarification is in order.

On Tuesday we discussed how Titan reintroduces overvolting for NVIDIA products. With additional details from NVIDIA and our own performance data now in hand, we have the complete picture, and overclockers will want to pay close attention: NVIDIA may be reintroducing overvolting, but it may not be quite what many of us first expected.

First and foremost, Titan still has a hard TDP limit, just like GTX 680 cards. Titan cannot and will not cross this limit, as it’s built into the firmware of the card and essentially enforced by NVIDIA through their agreements with their partners. This TDP limit is 106% of Titan’s base TDP of 250W, or 265W. No matter what you throw at Titan or how you cool it, it will not let itself pull more than 265W sustained.

Compared to the GTX 680 this is both good news and bad news. The good news is that with NVIDIA having done away with the pesky distinction between target power and TDP, the entire process is much simpler; the power target tells you exactly how much the card will pull, on a percentage basis, with no separate target power figure to keep track of. Furthermore, with the ability to focus solely on TDP, NVIDIA didn't set Titan's power limits nearly as conservatively as they did the GTX 680's.

The bad news is that while GTX 680 shipped with a max power target of 132%, Titan is again only 106%. Once you do hit that TDP limit you only have 6% (15W) more to go, and that’s it. Titan essentially has more headroom out of the box, but it will have less headroom for making adjustments. So hardcore overclockers dreaming of slamming 400W through Titan will come away disappointed, though it goes without saying that Titan’s power delivery system was never designed for that in the first place. All indications are that NVIDIA built Titan’s power delivery system for around 265W, and that’s exactly what buyers will get.
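To put those percentages in concrete terms, here's a minimal sketch of the power target arithmetic. Titan's figures (250W base TDP, 106% cap) come straight from NVIDIA; the GTX 680's 170W default power target is our own recalled figure and should be treated as an assumption.

```python
def power_cap_watts(base_watts: float, target_pct: float) -> float:
    """Absolute sustained power cap implied by a power-target percentage."""
    return base_watts * target_pct / 100.0

# Titan: 250W base TDP, 106% maximum power target (figures from this article)
titan_cap = power_cap_watts(250, 106)   # 265.0W
titan_headroom = titan_cap - 250        # 15.0W of room above stock

# GTX 680 for comparison: 132% maximum power target (per this article),
# applied to an assumed 170W default power target
gtx680_cap = power_cap_watts(170, 132)  # 224.4W

print(f"Titan sustained cap:   {titan_cap:.0f}W (+{titan_headroom:.0f}W over stock)")
print(f"GTX 680 sustained cap: {gtx680_cap:.0f}W")
```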

Second, let's talk about overvolting. What we didn't realize on Tuesday, but do now, is that overvolting as implemented on Titan is not overvolting in the traditional sense; practically speaking, I doubt many hardcore overclockers will even recognize it as overvolting. It is not implemented as a direct voltage control, as it was on past-generation cards or on the NVIDIA-nixed cards like the MSI Lightning and EVGA Classified.

Overvolting is instead a set of two additional turbo clock bins above and beyond Titan's default top bin. On our sample the top bin is 1.1625v, which corresponds to a 992MHz core clock. Overvolting Titan to 1.2v means unlocking two more bins: 1006MHz @ 1.175v and 1019MHz @ 1.2v. Put another way, overvolting on Titan unlocks only another 27MHz of clockspeed.

These two bins are overvolting in the strictest sense (NVIDIA doesn't believe voltages over 1.1625v will meet their longevity standards for Titan, so using them is still very much going to reduce the lifespan of the card), but it's probably not the kind of direct-control overvolting hardcore overclockers were expecting. The end result is that with Titan there's simply no option to slap on another 0.05v-0.1v in order to squeeze out another 100MHz or so. You can trade longevity for the potential to gain another 27MHz, but that's it.

Ultimately, this means that overvolting as implemented on Titan cannot be used to improve the clockspeeds attainable through NVIDIA's clock offset functionality. Our sample peters out at a +115MHz offset without overvolting, and it peters out at the same +115MHz offset with overvolting. The only difference is that we gain access to a further 27MHz when we have the thermal and power headroom needed to hit the top bins (see the sketch after the bin table below).

GeForce GTX Titan Clockspeed Bins

Clockspeed   Voltage
1019MHz      1.2v       (overvolted bin)
1006MHz      1.175v     (overvolted bin)
992MHz       1.1625v    (default top bin)
979MHz       1.15v
966MHz       1.137v
953MHz       1.125v
940MHz       1.112v
927MHz       1.1v
914MHz       1.087v
901MHz       1.075v
888MHz       1.062v
875MHz       1.05v
862MHz       1.037v
849MHz       1.025v
836MHz       1.012v
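To tie the bins, the clock offset, and overvolting together, here is a minimal sketch that models the table above. The bin values are from our sample card; the selection logic is our simplified illustration of the behavior we observed, not NVIDIA's actual GPU Boost 2.0 implementation.

```python
# (voltage, MHz) bins from our sample Titan, lowest to highest
BINS = [
    (1.012, 836), (1.025, 849), (1.037, 862), (1.050, 875),
    (1.062, 888), (1.075, 901), (1.087, 914), (1.100, 927),
    (1.112, 940), (1.125, 953), (1.137, 966), (1.150, 979),
    (1.1625, 992),                 # default top bin
    (1.175, 1006), (1.200, 1019),  # the two overvolted bins
]

DEFAULT_VMAX = 1.1625  # NVIDIA's longevity-qualified voltage ceiling
OVERVOLT_VMAX = 1.200  # ceiling once overvolting is enabled

def max_boost_clock(offset_mhz: int, overvolt: bool = False) -> int:
    """Highest clock reachable when there is headroom for the top bin.

    The offset shifts every bin's clockspeed; overvolting merely unlocks
    the two bins above 1.1625v. It does not raise the stable offset.
    """
    vmax = OVERVOLT_VMAX if overvolt else DEFAULT_VMAX
    top_bin_mhz = max(mhz for volts, mhz in BINS if volts <= vmax)
    return top_bin_mhz + offset_mhz

# Our sample was stable at a +115MHz offset in both configurations:
print(max_boost_clock(115))                 # 1107 (no overvolting)
print(max_boost_clock(115, overvolt=True))  # 1134 (the extra 27MHz)
```

The last two lines are the whole story: enabling overvolting moves the attainable ceiling by exactly the 27MHz separating the 992MHz and 1019MHz bins, and nothing more.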

Finally, as with the GTX 680 and GTX 690, NVIDIA will be keeping tight control over what Asus, EVGA, and their other partners release. Those partners will have the option to release Titan cards with factory overclocks and Titan cards with different coolers (e.g. water blocks), but they won't be able to expose direct voltage control or ship parts with higher voltages. Nor, for that matter, will they be able to create Titan cards with significantly different designs (e.g. more VRM phases); every Titan card will be a variant of the reference design.

This is essentially no different from how the GTX 690 was handled, but it's something that's important to note before anyone with dreams of big overclocks throws down $999 on a Titan card. To be clear, GPU Boost 2.0 is a significant improvement over GPU Boost 1.0 in the entire power/thermal management process, and this kind of control means that no one needs to worry about blowing up their video card (accidentally or otherwise). But it's a system that comes with gains and losses, so overclockers will want to pay close attention to what they're getting into with GPU Boost 2.0 and Titan, and what they can and cannot do with the card.

Comments

  • CeriseCogburn - Tuesday, February 26, 2013 - link

    I really don't understand that mentality you have. I'm surrounded by thousands of dollars of computer parts and I certainly don't consider myself some sort of hardware enthusiast or addicted overclocker, or insane gamer.

    Yet this card is easily a consideration, since several other systems have far more than a thousand dollars in them on just the basics. It's very easy to spend a couple thousand even being careful.

    I don't get what the big deal is. The current crop of top end cards before this are starkly inadequate at common monitor resolutions.
    One must nearly ALWAYS turn down features in the popular benched games to be able to play.

    People just don't seem to understand that I guess. I have untold thousands of dollars in many computers and the only thing that will make them really gaming capable at cheap monitor resolutions is a card like this.

    Cripes my smartphone cost a lot more than the former top two cards just below Titan.

    This is the one area that comes to mind ( the only one that exists as far as I can tell) where the user is left with "my modern computer can't do it" - and that means, take any current taxing game (lots of those - let's say 50% of those reviewed as a rough thumb) and you're stuck unable to crank it up.

    Now 120hz monitors are becoming common, so this issue is increased.
    As you may have noticed, another poster exclaimed:
    " Finally ! 1920x1080 a card that can do it ! "

    There's the flat out closest to the truth, and I agree with that entirely, at least for this moment, as I stated here before the 7970 didn't do it when it was released and doesn't now and won't ever. (neither does the 680)

    I'm trying to deny it, but really it is already clear that the Titan doesn't cut it for everything at the above rez either, not really, and not at higher refresh rates.

    More is still needed, and this is the spot that is lacking for gamers, the video card.

    This card is the card to have, and it's not about bragging, it's about firing up your games and not being confronted with the depressing "turn off the eyecandy" and check the performance again... see if that is playable...

    I mean ****, that apparently does not bother any of you, and I do not know why.
    Everything else in your system is capable...
    This is an IMPORTANT PART that actually completes the package, where the end user isn't compromising.
  • HighTech4US - Thursday, February 21, 2013 - link

    If it does, could we see a new story on performance using NVENC across the entire Kepler line, along with any FREEware/PAYware software that utilizes it? I have an older Intel Q8300 that is used as my HTPC/living room gaming system, and encoding videos takes a long time using just the CPU cores.

    If getting a Kepler GPU and using NVENC can speed up encoding significantly I would like to know. As that would be the lowest cost upgrade along with getting a Gaming card upgrade.

    Thanks
  • Ryan Smith - Thursday, February 21, 2013 - link

    Yes, NVEnc is present.
  • lkuzmanov - Thursday, February 21, 2013 - link

    excellent! now make it 30-40% cheaper and I'm on board.
  • Zink - Thursday, February 21, 2013 - link

    Rahul Garg picked the lowest HD 7970 scores in both cases from the Matsumoto et al. paper. The other higher GFLOPS scores represent performance using alternate kernels performing the same calculation on the same hardware as far as I can tell. Rahul needs to justify choosing only the lowest HD 7970 numbers in his report or I can only assume he is tilting the numbers in favor of Titan.
  • JarredWalton - Thursday, February 21, 2013 - link

    Picking the highest scoring results that are using optimized cores and running on different hardware in the first place (e.g. not the standard test bed) would be tilting the results very far in AMD's favor. A default run is basically what Titan gets to do, so the same for 7970 would make sense.
  • codedivine - Thursday, February 21, 2013 - link

    The different algorithms are actually not performing the exact same calculation. There are differences in matrix layouts and memory allocations. We chose the ones that are closest to the layouts and allocations we were testing on the Titan.

    In the future, we intend to test with AMD's official OpenCL BLAS. While Matsumoto's numbers are good for illustrative purposes, we would prefer running our own benchmarks on our own testbeds, and on real-world code, which will typically use AMD's BLAS for AMD cards. AMD's OpenCL BLAS performance is actually a little lower than Matsumoto's numbers, so I don't think we tilted the numbers in AMD's favour. If anything, we gave AMD a bit of benefit of the doubt here.

    In the same vein, faster results than Nvidia's CUBLAS have been demonstrated on Nvidia hardware. However, we chose to test only using CUBLAS as all production code will typically use CUBLAS due to its reliability and support from Nvidia.

    AMD's OpenCL BLAS is a bit complicated to set up correctly, and in my research I have had stability problems with it on Windows. Thus, we avoided it in this particular review, but we will likely look at it in the future.
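For readers who want to reproduce a rough GEMM throughput figure of the kind discussed above, a minimal sketch follows. It uses CuPy purely as an illustration (not what the review used); CuPy's dense float32 matrix multiply dispatches to cuBLAS on NVIDIA hardware, so it approximates the CUBLAS path described in the comment.

```python
import time
import cupy as cp  # assumption: CuPy installed against a working CUDA toolkit

n = 4096
a = cp.random.rand(n, n, dtype=cp.float32)
b = cp.random.rand(n, n, dtype=cp.float32)

cp.matmul(a, b)                 # warm-up run so setup costs aren't timed
cp.cuda.Device().synchronize()

start = time.perf_counter()
c = cp.matmul(a, b)             # dispatches to a cuBLAS SGEMM under the hood
cp.cuda.Device().synchronize()  # wait for the GPU before stopping the clock
elapsed = time.perf_counter() - start

flops = 2 * n ** 3              # multiplies + adds in an n-by-n GEMM
print(f"SGEMM: {flops / elapsed / 1e9:.1f} GFLOPS")
```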
  • Zink - Thursday, February 21, 2013 - link

    Thanks, shouldn't have doubted you :)
  • Nfarce - Thursday, February 21, 2013 - link

    ...about my 680 purchase last April (nearly a year ago already, wow). Was so worried I made the wrong decision replacing two 570s knowing the Kepler was less than a year away. The news on this card has firmed up my decision to lock in with a second 680 now for moving up to a 2560x1440 monitor.

    Very *very* disappointing, Nvidia.
  • CeriseCogburn - Thursday, February 21, 2013 - link

    The new top card has been near the same as two of the former cards FOREVER.

    You people are nothing short of stupid nut jobs.

    There are not enough tampons at Johnson and Johnson warehouses for this thread.

    THE VERY SAME RATIO has occurred every time for all the prior launches.
