Final Thoughts

Bringing things to a close, most of what we’ve seen with Titan has been a long time coming. Since the introduction of GK110 back at GTC 2012, we’ve had a solid idea of how NVIDIA’s grandest GPU would be configured; it was mostly a question of when it would make its way into consumer hands, and at what clock speeds and prices.

The end result is that with the largest Kepler GPU now in our hands, the performance situation closely resembles the Fermi and GT200 generations. Which is to say that so long as you have a solid foundation to work from, he who builds the biggest GPU builds the most powerful GPU. And at 551mm², once more NVIDIA is alone in building massive GPUs.

No one should be surprised then when we proclaim that GeForce GTX Titan has unquestionably reclaimed the single-GPU performance crown for NVIDIA. It’s simply in a league of its own right now, reaching levels of performance no other single-GPU card can touch. At its very best, AMD’s Radeon HD 7970GE can just match Titan, which is quite an accomplishment for AMD; at its own best, however, Titan is nearly a generation ahead of the 7970GE. Like its predecessors, Titan delivers the kind of awe-inspiring performance we have come to expect from NVIDIA’s most powerful video cards.

With that in mind, as our benchmark data has shown, Titan’s performance isn’t quite enough to unseat this generation’s multi-GPU cards like the GTX 690 or Radeon HD 7990. That said, this isn’t a new situation for us, and our editorial stance has not changed: we still suggest single-GPU cards over multi-GPU cards when performance allows for it. Multi-GPU technology is a great way to improve performance beyond what a single GPU can do, but because it’s always beholden to the need for profiles and the inherent drawbacks of AFR rendering, we don’t believe it’s desirable in situations such as Titan versus the GTX 690. The GTX 690 may be faster, but Titan is going to deliver a more consistent experience, just not quite at the same framerates as the GTX 690.

Meanwhile, in the world of GPGPU computing, Titan stands alone. Unfortunately we’re not able to run a complete cross-platform comparison due to Titan’s outstanding OpenCL issue, but from what we have been able to run, Titan is not only flat-out powerful, but NVIDIA has also seemingly delivered on their compute efficiency goals, giving us a Kepler family part that gets far closer to its theoretical efficiency than the GTX 680, and closer than any other GPU before it. We’ll of course take a further look at Titan in comparison to other GPUs once the OpenCL situation is resolved, in order to come to a better understanding of its relative strengths and weaknesses, but for the first wave of Titan buyers I’m not sure that’s going to matter. If you’re doing GPU computing, are invested in CUDA, and need a fast compute card, then Titan is the compute card CUDA developers and researchers have been dreaming of.

Back in the land of consumer gaming though, we have to contend with the fact that unlike any big-GPU card before it, Titan is purposely removed from the price/performance curve. NVIDIA has long wanted to ape Intel’s ability to have an extreme/luxury product at the very top end of the consumer product stack, and with Titan they’re going ahead with that.

The end result is that Titan is targeted at a different demographic than the GTX 580 or other such cards: a demographic that has both the means and the desire to purchase such a product. Having grown used to seeing the best video cards go for less, we won’t call this a great development for the competitive landscape, but ultimately this is far from the first luxury-level computer part, so there’s not much else to say other than that this is a product for a limited audience. What that limited audience is getting, however, is nothing short of an amazing card.

Like the GTX 690 before it, NVIDIA has once again set the gold standard for GPU construction, this time for a single-GPU card. The GTX 680 was a well-built card, but next to Titan it suddenly looks outdated. For example, despite Titan’s significantly higher TDP, it’s no louder than the GTX 680, and the GTX 680 was already a quiet card. Next to price/performance, the most important metric is noise, and by focusing on build quality NVIDIA has unquestionably set a new standard for high-end, high-TDP video cards.

On a final note, normally I’m not one for video card gimmicks, but after having seen both of NVIDIA’s Titan concept systems I have to say NVIDIA has taken an interesting route in justifying the luxury status of Titan. With the Radeon HD 7970 GHz Edition only available with open air or exotic cooling, Titan has been put into a position where it’s the ultimate blower card by a wide margin. The end result is that in scenarios where blowers are preferred and/or required, such as SFF PCs or tri-SLI, Titan is even more of an improvement over the competition than it is for traditional desktop computers. Or as Anand has so eloquently put it with his look at Falcon Northwest’s Tiki, when it comes to Titan “The days of a high end gaming rig being obnoxiously loud are thankfully over.”

Wrapping things up, on Monday we’ll be taking a look at the final piece of the puzzle: Origin’s tri-SLI full tower Genesis PC. The Genesis has been an interesting beast for its use of water cooling with Titan, and with the Titan launch behind us we can now focus on what it takes to feed 3 Titan video cards and why it’s an impeccable machine for multi-monitor/surround gaming. So until then, stay tuned.


  • etriky - Sunday, February 24, 2013 - link

    OK, after a little digging I guess I shouldn't be too upset about not having Blender benchmarks in this review. Tesla K20 and GeForce GTX TITAN support was only added to Blender on 2/21, and it requires a custom build (it's not in the main release). See http://www.miikahweb.com/en/blender/svn-logs/commi... for more info
  • Ryan Smith - Monday, February 25, 2013 - link

    As noted elsewhere, OpenCL was broken in the Titan launch drivers, greatly limiting what we could run. We have more planned including SLG's LuxMark, which we will publish an update for once the driver situation is resolved.
  • kukreknecmi - Friday, February 22, 2013 - link

    If you look at Azui's PDF, using a different type of kernel, the results for the 7970 are:

    SGEMM: 2646 GFLOPS
    DGEMM: 848 GFLOPS

    Why did you take the lowest numbers for the 7970?
  • codedivine - Friday, February 22, 2013 - link

    This was answered above. See one of my earlier comments.
  • gwolfman - Friday, February 22, 2013 - link

    ASUS: http://www.newegg.com/Product/Product.aspx?Item=N8...
    OR
    Titan gfx card category (only one shows up for now): http://www.newegg.com/Product/ProductList.aspx?Sub...

    Anand and staff, post this in your news feed please! ;)
  • extide - Friday, February 22, 2013 - link

    PLEASE start including Folding@home benchmarks!!!
  • TheJian - Sunday, February 24, 2013 - link

    Why? It can't make me any money and isn't a professional app. It tells us nothing. I'd rather see Photoshop, Premiere, some finite element analysis app, 3D Studio Max, some audio or content creation app, or anything that can be used to actually MAKE money. They should be testing some apps that are actually used by those this card is aimed at (gamers who also make money on their PC but don't want to spend $2500-3500 on a full-fledged pro card).

    What does any card prove by winning Folding@home (same with the Bitcoin crap; botnets get all that now anyway)? If I cure cancer, is someone going to pay me for running up my electric bill? NOPE. Only a fool would spend a grand to donate electricity (CPU/GPU cycles) to someone else's next billion-dollar profit machine (insert pill name here). I don't care if I get cancer; I won't be donating any of my CPU time to crap like this. Benchmarking this proves nothing on a home card. It's like testing to see how fast I can spin my car tires while the wheels are off the ground. There is no point in winning that contest vs. some other car.

    "If we better understand protein misfolding we can design drugs and therapies to combat these illnesses."
    Straight from their site... Great, I'll make them a billionaire drug and get nothing for my trouble or my bill. FAH has to be the biggest sucker pitch I've ever seen. Drug companies already rip me off every time I buy a bottle of their pills. They get huge tax breaks on my dime too; no need to help them, or for me to find out how fast I can help them... LOL. No point in telling me about synthetics either. They prove nothing other than that your stuff is operating correctly and the drivers are set up right. Their performance has no bearing on REAL use of products, as they are NOT a product, thus not REAL world. Every time I see the words synthetic and benchmark in the same sentence it makes me want to vomit. If they are limited on time (reviewers usually are), I want to see something benchmarked that I can actually USE for real.

    I feel the same way about max fps. Who cares? You can include it, but leaving out the MIN is just dumb. I need to know when a game hits 30fps or less, as that means I don't have a good enough card to get the job done and either need to spend more or turn things down when using card X or Y.
  • Ryan Smith - Monday, February 25, 2013 - link

    As noted elsewhere, FAHBench is in our plans. However, we cannot do anything further until NVIDIA fixes OpenCL support.
  • vanwazltoff - Friday, February 22, 2013 - link

    The 690, 680, and 7970 have had almost a year to brew and improve with driver updates. I suspect that after a few driver releases and an overclock, Titan will creep up on the 690, and it will probably see a price reduction after a few months. Don't count it out yet; just think what this could mean for the 700 and 800 series cards. It's obvious NVIDIA can deliver.
  • TheJian - Sunday, February 24, 2013 - link

    It already runs at 1150+ everywhere. Most people hit around 1175 max stable OC on Titan. Of course this may improve with aftermarket cooling solutions, but it looks like people hit 1175 or so around the world. And that does reach GTX 690 performance, and in some cases it wins. In compute it's already a winner.

    If there is no die shrink on the next generation from either company, I don't expect much. You can only do so much with 250-300W before needing a shrink to really see improvements. I really wish they'd just wait until 20nm or something to give us a real gain. Otherwise we'll end up with an Ivy Bridge/Haswell deal, where you don't get much (5-15%). Intel won't wow again until 14nm. Graphics won't wow again until the next shrink either (a full shrink, not the half-steps they're talking about now).
