Final Thoughts

Bringing things to a close, most of what we’ve seen with Titan has been a long time coming. Since the introduction of GK110 back at GTC 2012, we’ve had a solid idea of how NVIDIA’s grandest GPU would be configured; the only real questions were when it would make its way into consumer hands, and at what clockspeeds and prices.

The end result is that with the largest Kepler GPU now in our hands, the performance situation closely resembles the Fermi and GT200 generations: so long as you have a solid foundation to work from, he who builds the biggest GPU builds the most powerful GPU. And at 551mm², NVIDIA is once more alone in building massive GPUs.
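
To put that size advantage in rough numbers, here is a back-of-the-envelope sketch using the published die sizes and SMX counts for GK110 and GK104. Treat the figures as approximations; not all die area goes to shader hardware, and GK110 spends a disproportionate share on FP64 units and larger register files:

```python
# Rough comparison of GK110 (GTX Titan) versus GK104 (GTX 680).
# Die sizes and unit counts are the published figures; one of GK110's
# 15 SMXes is disabled on Titan, so 14 are active.

gk104 = {"die_mm2": 294, "smx_total": 8,  "smx_enabled": 8}   # GTX 680
gk110 = {"die_mm2": 551, "smx_total": 15, "smx_enabled": 14}  # GTX Titan

CORES_PER_SMX = 192  # CUDA cores per Kepler SMX

area_ratio = gk110["die_mm2"] / gk104["die_mm2"]
core_ratio = (gk110["smx_enabled"] * CORES_PER_SMX) / \
             (gk104["smx_enabled"] * CORES_PER_SMX)

print(f"Die area ratio:  {area_ratio:.2f}x")   # ~1.87x the silicon
print(f"CUDA core ratio: {core_ratio:.2f}x")   # ~1.75x the cores (14 vs 8 SMXes)
```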

No one should be surprised, then, when we proclaim that the GeForce GTX Titan has unquestionably reclaimed the single-GPU performance crown for NVIDIA. It’s simply in a league of its own right now, reaching levels of performance no other single-GPU card can touch. At its very best, AMD’s Radeon HD 7970 GHz Edition can just match Titan, which is quite an accomplishment for AMD; but at Titan’s best, Titan is nearly a generation ahead of the 7970GE. Like its predecessors, Titan delivers the kind of awe-inspiring performance we have come to expect from NVIDIA’s most powerful video cards.

With that in mind, as our benchmark data has shown, Titan’s performance isn’t quite enough to unseat this generation’s multi-GPU cards like the GTX 690 or Radeon HD 7990. But with that said, this isn’t a new situation for us, and our editorial stance has not changed: we still suggest single-GPU cards over multi-GPU cards when performance allows for it. Multi-GPU technology is a great way to improve performance beyond what a single GPU can do, but because it’s always beholden to the need for game profiles and to the inherent drawbacks of AFR (alternate frame rendering), we don’t believe it’s desirable in situations such as Titan versus the GTX 690. The GTX 690 may be faster, but Titan is going to deliver a more consistent experience, just not quite at the same framerates as the GTX 690.
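
Consistency is something you can quantify from frame times rather than average framerates. As a minimal sketch (the frame-time lists below are invented for illustration, not measured data from our testbed), comparing a high-percentile frame time against the average is one common way to expose the microstutter AFR can introduce:

```python
import statistics

def summarize(label, frame_times_ms):
    """Report average FPS alongside the 99th-percentile frame time.

    Two setups can post similar average framerates while one delivers
    far less even frame pacing -- the usual AFR complaint.
    """
    avg_ms = statistics.mean(frame_times_ms)
    # quantiles(n=100) yields the 1st..99th percentile cut points.
    p99_ms = statistics.quantiles(frame_times_ms, n=100,
                                  method="inclusive")[98]
    print(f"{label}: {1000 / avg_ms:5.1f} avg FPS, "
          f"99th percentile frame time {p99_ms:.1f} ms")

# Invented numbers: the single GPU renders evenly, while the AFR pair
# alternates fast and slow frames around a lower average frame time.
single_gpu = [16.5, 16.8, 16.2, 16.7, 16.4, 16.6, 16.3, 16.9]
afr_pair   = [ 9.0, 21.0,  8.5, 22.5,  9.5, 20.0,  8.0, 23.0]

summarize("Single GPU", single_gpu)  # ~60 FPS, ~17 ms p99
summarize("AFR pair  ", afr_pair)    # ~66 FPS, ~23 ms p99
```

The AFR pair “wins” on average FPS while delivering a markedly worse 99th-percentile frame time, which is exactly the gap between the GTX 690’s numbers and Titan’s experience.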

Meanwhile, in the world of GPGPU computing, Titan stands alone. Unfortunately we’re not able to run a complete cross-platform comparison due to Titan’s unresolved OpenCL issue, but from what we have been able to run, Titan is not just flat-out powerful; NVIDIA has seemingly delivered on their compute efficiency goals, giving us a Kepler family part that gets far closer to its theoretical efficiency than the GTX 680, and closer than any other GPU before it. We’ll of course take a further look at Titan in comparison to other GPUs once the OpenCL situation is resolved, in order to come to a better understanding of its relative strengths and weaknesses, but for the first wave of Titan buyers I’m not sure that’s going to matter. If you’re doing GPU computing, are invested in CUDA, and need a fast compute card, then Titan is the compute card CUDA developers and researchers have been dreaming of.
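
For context on what “theoretical efficiency” is measured against, here is a quick sketch of the rated peak throughput of both cards, computed from base clocks and the published FP64 rates. Note that Titan’s actual FP64 clock is reduced when full-speed double precision is enabled, so treat these as upper bounds:

```python
# Peak theoretical throughput: cores * clock * 2 FLOPs/clock (FMA).
# Base clocks are used throughout; boost clocks would raise both figures.

def peak_tflops(cores, clock_ghz, fp64_rate):
    sp = 2 * cores * clock_ghz / 1000.0  # single precision, TFLOPS
    dp = sp * fp64_rate                  # double precision, TFLOPS
    return sp, dp

# GTX 680 (GK104): 1536 cores @ 1006 MHz, FP64 at 1/24 rate.
# GTX Titan (GK110): 2688 cores @ 837 MHz, FP64 at 1/3 rate.
for name, cores, ghz, rate in [("GTX 680  ", 1536, 1.006, 1 / 24),
                               ("GTX Titan", 2688, 0.837, 1 / 3)]:
    sp, dp = peak_tflops(cores, ghz, rate)
    print(f"{name}: {sp:.2f} TFLOPS FP32, {dp:.2f} TFLOPS FP64")
```

The FP64 gap (roughly 0.13 versus 1.5 TFLOPS on paper) is why Titan reads as a compute card first; the FP32 gap alone wouldn’t justify the positioning.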

Back in the land of consumer gaming, though, we have to contend with the fact that unlike any big-GPU card before it, Titan is deliberately removed from the price/performance curve. NVIDIA has long wanted to ape Intel’s ability to put an extreme/luxury product at the very top of the consumer product stack, and with Titan they’re finally doing just that.

The end result is that Titan is targeted at a different demographic than the GTX 580 or other such cards: a demographic that has the means and the desire to purchase such a product. Being used to seeing the best video cards go for less, we won’t call this a great development for the competitive landscape, but ultimately this is far from the first luxury-level computer part, so there’s not much else to say other than that this is a product for a limited audience. What that limited audience is getting, however, is nothing short of an amazing card.

As with the GTX 690, NVIDIA has once again set the gold standard for GPU construction, this time for a single-GPU card. The GTX 680 was a well-built card, but next to Titan it suddenly looks outdated. For example, despite Titan’s significantly higher TDP it’s no louder than the GTX 680, and the GTX 680 was already a quiet card. Next to price/performance, the most important metric is noise, and by focusing on build quality NVIDIA has unquestionably set a new standard for high-end, high-TDP video cards.

On a final note, normally I’m not one for video card gimmicks, but after having seen both of NVIDIA’s Titan concept systems I have to say NVIDIA has taken an interesting route in justifying Titan’s luxury status. With the Radeon HD 7970 GHz Edition only available with open-air or exotic cooling, Titan is now the ultimate blower card by a wide margin. The end result is that in scenarios where blowers are preferred and/or required, such as small form factor (SFF) PCs or tri-SLI, Titan is even more of an improvement over the competition than it is in traditional desktops. Or as Anand so eloquently put it in his look at Falcon Northwest’s Tiki, when it comes to Titan, “The days of a high end gaming rig being obnoxiously loud are thankfully over.”

Wrapping things up, on Monday we’ll be taking a look at the final piece of the puzzle: Origin’s tri-SLI full tower Genesis PC. The Genesis has been an interesting beast for its use of water cooling with Titan, and with the Titan launch behind us we can now focus on what it takes to feed 3 Titan video cards and why it’s an impeccable machine for multi-monitor/surround gaming. So until then, stay tuned.


337 Comments


  • chizow - Friday, February 22, 2013 - link

    Idiot... has the top-end card cost 2x as much every time? Of course not!!! Or we'd be paying $100K for GPUs!!!
  • CeriseCogburn - Saturday, February 23, 2013 - link

    Stop being an IDIOT.

    What is the cost of the 7970 now vs. what I paid for it at release, you insane gasbag?
    You seem to have a brainfart embedded in your cranium, maybe you should go propose to Charlie D.
  • chizow - Saturday, February 23, 2013 - link

    It's even cheaper than it was at launch, $380 vs. $550, which is the natural progression... parts at a certain performance level get CHEAPER as new parts are introduced to the market. That's called progress. Otherwise there would be NO INCENTIVE to *upgrade* (look this word up please, it has meaning).

    You will not pay the same money for the same performance unless the part breaks down, and semiconductors under normal usage have proven to be extremely durable components. People expect progress, *more* performance at the same price points. People will not pay increasing prices for things that are not essential to life (unlike gas, food, and shelter); this is called the price elasticity of demand.

    This is a basic lesson in business, marketing, and economics applied to the semiconductor/electronics industry. You obviously have no formal training in any of the above disciplines, so please stop commenting like a ranting and raving idiot about concepts you clearly do not understand.
  • CeriseCogburn - Saturday, February 23, 2013 - link

    They're ALREADY SOLD OUT STUPID IDIOT THEORIST.

    LOL

    The true loser, an idiot fool, wrong before he's done typing, the "education" is his brainwashed fried gourd Charlie D OWNZ.
  • chizow - Sunday, February 24, 2013 - link

    And? There's going to be some demand for this card just as there was demand for the 690, it's just going to be much lower based on the price tag than previous high-end cards. I never claimed anything otherwise.

    I outlined the expectations, economics, and buying decisions for the tech industry, and in general they hold true. Just look around and you'll get plenty of confirmation: people (like me) who previously bought 1, 2, or 3 of these $500-650 GPUs are opting to pass on a single Titanic at $1000.

    Nvidia's introduction of an "ultra-premium" range is an unsustainable business model because it assumes Nvidia will be able to sustain this massive performance lead over AMD. Not to mention they will have a harder time justifying the price if their own next-gen offering isn't convincingly faster.
  • CeriseCogburn - Tuesday, February 26, 2013 - link

    You're not the nVidia CEO nor their bean counter, you whacked out fool.

    You're the IDIOT that babbles out stupid concepts with words like "justifying", as you purport to be an nVidia marketing hired expert.

    You're not. You're a disgruntled indoctrinated crybaby who can't move on with the times, living in a false past, and waiting for a future not here yet.
  • Oxford Guy - Thursday, February 21, 2013 - link

    The article's first page has the word luxury appearing five times. The blurb, which I read prior to the article's first page, has luxury appearing twice.

    That is 7 uses of the word in just a bit over one page.

    Let me guess... it's a luxury product?
  • CeriseCogburn - Tuesday, February 26, 2013 - link

    It's stupid if you ask me. But that's this place, not very nVidia friendly ever since their little "didn't get the new 98xx" fiasco, just like Tom's.

    A lot of these top-tier cards are a luxury, not just the Titan, as one can get by with far less. The problem is, the $500 cards often fail at 1920x resolution, and this one can perhaps be said to have conquered just that. So here we have a "luxury product" that really can't entirely do its job, or let's just say barely, maybe, as 1920x is not a luxury resolution.
    Turn OFF and down SOME in-game features, and that's the general case, not just the extreme one.

    People are fools though, almost all the time. Thus we have this crazed "reviews" outlook distortion, and certainly no such thing as Never Settle.
    We're ALWAYS settling when it comes to video card power.
  • araczynski - Thursday, February 21, 2013 - link

    too bad there's not a single game benchmark in that whole article that I give 2 squirts about. throw in some RPGs please, like Witcher/Skyrim.
  • Ryan Smith - Thursday, February 21, 2013 - link

    We did test Skyrim only to ultimately pass on it for a benchmark. The problem with Skyrim (and RPGs in general) is that they're typically CPU limited. In this case our charts would be nothing but bar after bar at roughly 90fps, which wouldn't tell us anything meaningful about the GPU.
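
As a side note on Ryan's point: when a game is CPU limited, the delivered framerate is roughly the minimum of what the CPU and GPU stages can each sustain. A minimal sketch with invented throughput numbers (not measurements from our testbed) shows why every sufficiently fast GPU collapses to the same bar:

```python
# A frame is gated by the slower of the CPU and GPU stages, so the
# delivered framerate is roughly min(cpu_fps, gpu_fps). The numbers
# below are invented for illustration.

CPU_LIMIT_FPS = 90  # what the CPU can feed the GPU in a CPU-bound RPG

gpu_only_fps = {
    "Midrange GPU":  75,
    "High-end GPU": 120,
    "Flagship GPU": 160,
}

for card, gpu_fps in gpu_only_fps.items():
    delivered = min(CPU_LIMIT_FPS, gpu_fps)
    note = "(CPU limited)" if gpu_fps > CPU_LIMIT_FPS else ""
    print(f"{card}: {delivered} fps {note}")
```

Only the midrange card would differentiate itself in such a chart; the high-end and flagship cards both flatten at the CPU's 90fps ceiling, which is exactly the "bar after bar" problem described above.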
