The Final Word On Overclocking

Before we jump into our performance breakdown, I want to take a few minutes to follow up on our overclocking coverage from Tuesday. Since we couldn’t reveal performance numbers at the time – and quite honestly we hadn’t even finished evaluating Titan – we couldn’t give you the complete story. So some clarification is in order.

On Tuesday we discussed how Titan reintroduces overvolting for NVIDIA products. Now, with additional details from NVIDIA along with our own performance data, we have the complete picture, and overclockers will want to pay close attention. NVIDIA may be reintroducing overvolting, but it’s not quite what many of us were first expecting.

First and foremost, Titan still has a hard TDP limit, just like GTX 680 cards. Titan cannot and will not cross this limit, as it’s built into the firmware of the card and essentially enforced by NVIDIA through their agreements with their partners. This TDP limit is 106% of Titan’s base TDP of 250W, or 265W. No matter what you throw at Titan or how you cool it, it will not let itself pull more than 265W sustained.

Compared to the GTX 680 this is both good news and bad news. The good news is that with NVIDIA having done away with the pesky concept of target power versus TDP, the entire process is much simpler; the power target will tell you exactly what the card will pull up to on a percentage basis, with no need to keep track of a separate power target and TDP or how they relate. Furthermore, with the ability to focus on just TDP, NVIDIA didn’t set their power limits on Titan nearly as conservatively as they did on the GTX 680.

The bad news is that while GTX 680 shipped with a max power target of 132%, Titan’s is again only 106%. Once you hit Titan’s base TDP you only have 6% (15W) more to go, and that’s it. Titan essentially has more headroom out of the box, but less headroom for making adjustments. So hardcore overclockers dreaming of slamming 400W through Titan will come away disappointed, though it goes without saying that Titan’s power delivery system was never designed for that in the first place. All indications are that NVIDIA built Titan’s power delivery system for around 265W, and that’s exactly what buyers will get.
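To put those power numbers in concrete terms, here’s a minimal sketch of the arithmetic (in Python; the figures are from this article, while the function itself is purely illustrative):

    # Titan's power management, reduced to arithmetic.
    BASE_TDP_W = 250          # Titan's base TDP
    MAX_POWER_TARGET = 106    # maximum power target, in percent

    def sustained_power_cap_w(power_target_pct):
        # Sustained board power the card will allow at a given power target.
        return BASE_TDP_W * power_target_pct / 100.0

    print(sustained_power_cap_w(100))               # 250.0 W at the default target
    print(sustained_power_cap_w(MAX_POWER_TARGET))  # 265.0 W at the hard cap
    # Headroom for adjustments: 265.0 - 250.0 = 15.0 W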

Second, let’s talk about overvolting. What we didn’t realize on Tuesday but do now is that overvolting as implemented on Titan is not overvolting in the traditional sense, and practically speaking I doubt many hardcore overclockers will even recognize it as such. What we mean by this is that overvolting is not implemented as a direct voltage control, as it was on past-generation cards or on NVIDIA-nixed designs like the MSI Lightning or EVGA Classified.

Overvolting is instead a set of two additional turbo clock bins, above and beyond Titan’s default top bin. On our sample the top bin is 1.1625v, which corresponds to a 992MHz core clock. Overvolting Titan to 1.2v means unlocking two more bins: 1006MHz @ 1.175v, and 1019MHz @ 1.2v. Put another way, overvolting on Titan unlocks only another 27MHz of performance.

These two bins are in the strictest sense overvolting – NVIDIA doesn’t believe voltages over 1.1625v on Titan will meet their longevity standards, so using them is still very much going to reduce the lifespan of a Titan card – but it’s probably not the kind of direct control overvolting hardcore overclockers were expecting. The end result is that with Titan there’s simply no option to slap on another 0.05v – 0.1v in order to squeak out another 100MHz or so. You can trade longevity for the potential to get another 27MHz, but that’s it.

Ultimately, this means that overvolting as implemented on Titan cannot be used to improve the clockspeeds attainable through the use of the offset clock functionality NVIDIA provides. In the case of our sample it peters out after +115MHz offset without overvolting, and it peters out after +115MHz offset with overvolting. The only difference is that we gain access to a further 27MHz when we have the thermal and power headroom available to hit the necessary bins.

GeForce GTX Titan Clockspeed Bins

Clockspeed    Voltage
1019MHz       1.2v
1006MHz       1.175v
992MHz        1.1625v
979MHz        1.15v
966MHz        1.137v
953MHz        1.125v
940MHz        1.112v
927MHz        1.1v
914MHz        1.087v
901MHz        1.075v
888MHz        1.062v
875MHz        1.05v
862MHz        1.037v
849MHz        1.025v
836MHz        1.012v
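For the curious, the bin behavior is simple enough to model. The sketch below (Python, using the bin table above) is our simplified approximation of the selection logic, not NVIDIA’s actual algorithm; it shows how raising the voltage ceiling from 1.1625v to 1.2v exposes exactly two more bins, and how the clock offset shifts whatever bin the card reaches rather than unlocking new ones:

    # Bin table from above: (core clock in MHz, voltage in volts), top bin first.
    BINS = [
        (1019, 1.2),   (1006, 1.175), (992, 1.1625), (979, 1.15),
        (966, 1.137),  (953, 1.125),  (940, 1.112),  (927, 1.1),
        (914, 1.087),  (901, 1.075),  (888, 1.062),  (875, 1.05),
        (862, 1.037),  (849, 1.025),  (836, 1.012),
    ]

    def top_bin_mhz(overvolted):
        # Highest bin the card may use under the current voltage ceiling.
        vmax = 1.2 if overvolted else 1.1625
        return next(clock for clock, volts in BINS if volts <= vmax)

    def peak_clock_mhz(offset, overvolted):
        # The offset shifts the whole clock curve; it doesn't add bins.
        # (Simplified: assumes the thermal/power headroom to reach the top bin.)
        return top_bin_mhz(overvolted) + offset

    print(peak_clock_mhz(115, overvolted=False))  # 1107 MHz with our +115MHz offset
    print(peak_clock_mhz(115, overvolted=True))   # 1134 MHz -- just 27MHz more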

Finally, as with the GTX 680 and GTX 690, NVIDIA will be keeping tight control over what Asus, EVGA, and their other partners release. Those partners will have the option to release Titan cards with factory overclocks and Titan cards with different coolers (i.e. water blocks), but they won’t be able to expose direct voltage control or ship parts with higher voltages. Nor for that matter will they be able to create Titan cards with significantly different designs (i.e. more VRM phases); every Titan card will be a variant on the reference design.

This is essentially no different than how the GTX 690 was handled, but I think it’s something that’s important to note before anyone with dreams of big overclocks throws down $999 on a Titan card. To be clear, GPU Boost 2.0 is a significant improvement in the entire power/thermal management process compared to GPU Boost 1.0, and this kind of control means that no one needs to be concerned with blowing up their video card (accidentally or otherwise), but it’s a system that comes with gains and losses. So overclockers will want to pay close attention to what they’re getting into with GPU Boost 2.0 and Titan, and what they can and cannot do with the card.

Comments

  • ronin22 - Thursday, February 21, 2013 - link

    That's the point, it's not a gamerz card
  • Finally - Thursday, February 21, 2013 - link

    "Titan delivers the kind of awe-inspiring performance we have come to expect from NVIDIA’s most powerful video cards."
    If you hear unfiltered Nvidia marketing speak like this, you know that AT isn't fooling around when it comes to earning their PR dollars. Well done!
  • Scritty - Thursday, February 21, 2013 - link

    Paper launch? Fine. I get that. But I suspect stock levels will be seriously limited. Rumour has it that only 10,000 of these will be made - which seems very odd, as even with a substantial profit margin, the development costs are going to be hard to recoup with potential sales as low as that.

    I'm looking to buy a couple of these as soon as they are available for SLI - maybe 3 for a triple setup if possible, but I can see there being real issues with stock. A decent solution for 3 screens at 2560x1440 for sure - if you can get hold of them anywhere.
  • Ryan Smith - Thursday, February 21, 2013 - link

    Note that NVIDIA specifically shot down the 10K card rumor. As far as we've been advised and as best as we can tell, card availability will be similar to what we saw with the GTX 690. Which is to say tight at first, but generally available and will continue to be available.
  • Egg - Thursday, February 21, 2013 - link

    The chart on page 1 is missing a 'GB' under GTX Titan's VRAM listing. There aren't any 5760*1200 non-GE 7970 benchmarks. Also, on the Power, Temperature, and Noise page, "temperate" should be "temperature" just before the first chart.

    Additionally, the voltage issue HollyDOL and the strange Crysis Warhead 1080p E Shader/G Quality issue silverblue mentioned should be clarified as well. (I'm just repeating them here so they have a higher chance of being seen.)

    Also, Wolfram|Alpha interprets "gigaflops" as "billion floating point operations per second" by default, while offering an alternative interpretation that doesn't have the seconds unit. Wikipedia also defines flops as already having the time unit. By their standards, "flops/s" is technically incorrect. I'm not a scientist, and I actually didn't notice this until I typed gigaflops into Wolfram|Alpha, so take this for what little it's worth.

    It's silly to suggest that this card needs a voltmod and a waterblock. Very few people doing scientific work are going to have time for that. This card isn't intended to be a gaming card. Yes, there will undoubtedly be people on hwbot who would love to do such a thing, but relative to the population of scientists living on meager grants, they're a small group.

    It's also silly to say that Titan is a bad card because it isn't as efficient as other cards at password hashing or bitcoin mining. These embarrassingly parallel workloads aren't representative of scientific workloads. Besides, the most dedicated people have custom FPGAs or ASICs for those jobs.

    Saying that it shows Nvidia jacking up prices on its flagship is misleading. Yes, it's technically true. But I saw someone say that the GTX 680 was only a "midrange" card. The GTX 680 still competes with the Radeon 7970 GE. It isn't outright winning anymore - in certain games, it loses - and it's often substantially more expensive. But it's still reasonably competitive. Why did anyone expect Titan to push down GTX 680 prices? If anything, it might push down Tesla 20X prices, but I'm not holding my breath.
    Would anyone have complained about Nvidia being outrageously greedy if Titan didn't exist in the consumer space at all?

    (Moreover, the GTX 580 had FP64 performance at 1/8 of its FP32 performance, not Titan's 1/3: http://www.anandtech.com/show/4008/nvidias-geforce...)

    Simply looking at the specs partially explains why the card is so damn expensive. It's 7.1 billion transistors, compared to the GTX 690's 2x3.5 billion transistors (page 1 of this article). Going purely by transistor count, Titan is underpriced, because it's just as expensive as the GTX 690. Looking at die area instead is less forgiving, but don't forget that squeezing 7 billion transistors onto a single die is more difficult than having two 3.5 billion transistor dies. Titan also has 2 extra gigabytes of GDDR5.

    The only valid criticism I've seen is that Titan can be outperformed by two 7970 GEs in certain, mostly FP32 compute workloads, which are a cheaper solution, especially for scientists who probably aren't as concerned with heat production as those working on the Titan supercomputer. After all, you can fit bigger fans in an EATX case than in most racks. 689 Gflops is still greater than 50% of 1309 Gflops; it's 53%. When you can find the cheapest 7970 GEs at a bit over $400, two 7970s will be about $200 cheaper.
    But figure in power: http://www.wolframalpha.com/input/?i=200+W+*+1+yea... . After a year of continuous usage (or two years of 50% utilization), and assuming that two 7970 GEs will use 200 more watts than a Titan - a fairly reasonable estimate in my opinion - Wolfram|Alpha informs us that we'll have saved $216.
    Not to mention the fact that two 7970s occupy around twice as much space as a Titan. That means you need more individual systems if you're looking to scale beyond a single workstation.
    And finally, anyone who needs as much memory per GPU as they can get will need Titan.
    It's hard to draw any real conclusions right now, though, with DirectCompute dubious and OpenCL broken. Great work on Nvidia's part, getting the drivers working...
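    Spelling out that power savings estimate in Python (the ~$0.12/kWh electricity rate is my assumption; it's what makes the $216 figure work out):

        HOURS_PER_YEAR = 365.25 * 24       # 8766 hours
        extra_watts = 200                  # two 7970 GEs over one Titan, per the estimate above
        rate_per_kwh = 0.1232              # assumed electricity price in $/kWh
        extra_kwh = extra_watts * HOURS_PER_YEAR / 1000  # ~1753 kWh
        print(extra_kwh * rate_per_kwh)    # ~$216 over a year of continuous use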

    There's also the fact that Nvidia is marketing this as a gaming card, which is disingenuous and poor practice. But come on, we all read Anandtech for a reason. Overhyped marketing is nothing new in technology.

    So in conclusion - treat the GTX 680 as the flagship single-GPU consumer card. (They did call it a 680; see the GTX 580, 480, and 280.) It's roughly in the 7970 GE's ballpark when it comes to price and performance. For gamers, Titan can effectively be ignored.
    If you need FP32 compute performance, consider multiple 7970 GEs as well as Titan.
    If you need FP64 compute performance, Titan is unparalleled, assuming you run it for a decent amount of time.
    And if you're trying to set a world record, well, I guess you can pay through the nose for Titan too.
  • Insomniator - Thursday, February 21, 2013 - link

    Thank you, so many here just sound like butthurt kids who don't understand these concepts or maybe didn't even read the article. Few of them would buy it even at the $700 they cry about wanting it to be.

    This card is not just for gamers, and even if it were, performance-wise it crushes the next closest single-GPU competitor. Remember when Intel Extreme Edition chips were $1k? The best always costs extra... and in this case the card isn't even being marketed solely for gamers anyway.

    Until AMD puts out a new card that can beat it for less, this will remain a $1k card. In the meantime, the 680, 670, and 660 are all competitive products.
  • CeriseCogburn - Tuesday, February 26, 2013 - link

    Don't expect the crybaby fools to respond. They'd prefer to pretend your post isn't here.

    If they do say anything, it will just be another repetitious pile of tinfoil hat lies Charlie D will be proud of.
  • Olaf van der Spek - Thursday, February 21, 2013 - link

    Still only average framerates? :(
    I had hoped you'd move to minimum framerate / max frametime based benchmarking. Averages are (and were) kinda meaningless.
  • Ryan Smith - Thursday, February 21, 2013 - link

    Actually we have some FRAPS data for a few of our games as a trial of some ideas. Unfortunately you won't see it for this article as there simply wasn't enough time to put that together on top of everything else. But keep your eyes peeled.
  • GiantPandaMan - Thursday, February 21, 2013 - link

    The Titan was a compute part, first and foremost. Gamers have much better alternatives in the 7970/680 route.

    Personally I think it's a pretty impressive piece of hardware, though there's no way in hell I'd ever buy it. That's because I'm a value oriented buyer and I don't have that much disposable income.

    I just don't get all the indignation and outrage. It's not like nVidia screwed you over in some way. They had an expensive piece of hardware designed for compute and said to themselves, what the hell, why not release it for gamers?
