Final Thoughts

Bringing things to a close, most of what we’ve seen with Titan has been a long time coming. Since the introduction of GK110 back at GTC 2012, we’ve had a solid idea of how NVIDIA’s grandest GPU would be configured, and it was mostly a question of when it would make its way to consumer hands, and at what clockspeeds and prices.

The end result is that with the largest Kepler GPU now in our hands, the performance situation closely resembles the Fermi and GT200 generations. Which is to say that so long as you have a solid foundation to work from, he who builds the biggest GPU builds the most powerful GPU. And at 551mm², once more NVIDIA is alone in building massive GPUs.

No one should be surprised then when we proclaim that GeForce GTX Titan has unquestionably reclaimed the single-GPU performance crown for NVIDIA. It’s simply in a league of its own right now, reaching levels of performance no other single-GPU card can touch. At its very best, AMD’s Radeon HD 7970GE can just match Titan, which is quite an accomplishment for AMD, but at Titan’s best it’s nearly a generation ahead of the 7970GE. Like its predecessors, Titan delivers the kind of awe-inspiring performance we have come to expect from NVIDIA’s most powerful video cards.

With that in mind, as our benchmark data has shown, Titan’s performance isn’t quite enough to unseat this generation’s multi-GPU cards like the GTX 690 or Radeon HD 7990. But with that said, this isn’t a new situation for us, and we find our editorial stance has not changed: we still suggest single-GPU cards over multi-GPU cards when performance allows for it. Multi-GPU technology itself is a great way to improve performance beyond what a single GPU can do, but as it’s always beholden to the need for profiles and the inherent drawbacks of AFR rendering, we don’t believe it’s desirable in situations such as Titan versus the GTX 690. The GTX 690 may be faster, but Titan is going to deliver a more consistent experience, just not quite at the same framerates as the GTX 690.

Meanwhile in the world of GPGPU computing Titan stands alone. Unfortunately we’re not able to run a complete cross-platform comparison due to Titan’s outstanding OpenCL issue, but from what we have been able to run, Titan is not only flat-out powerful, but NVIDIA has also seemingly delivered on their compute efficiency goals, giving us a Kepler family part capable of getting far closer to its theoretical efficiency than GTX 680, and closer than any other GPU before it. We’ll of course be taking a further look at Titan in comparison to other GPUs once the OpenCL situation is resolved in order to come to a better understanding of its relative strengths and weaknesses, but for the first wave of Titan buyers I’m not sure that’s going to matter. If you’re doing GPU computing, are invested in CUDA, and need a fast compute card, then Titan is the card CUDA developers and researchers have been dreaming of.
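To put the efficiency discussion in context, here’s a quick back-of-the-envelope sketch of the theoretical throughput involved, using NVIDIA’s published shader counts, base clocks, and FP64 rates; boost clocks will push the real numbers somewhat higher, and achieved throughput is what our benchmarks measure against these peaks.

```python
# Theoretical peak throughput from published specs (base clocks).
# Each CUDA core retires one fused multiply-add (2 FLOPs) per clock.
def peak_gflops(shaders, clock_ghz, flops_per_clock=2):
    return shaders * flops_per_clock * clock_ghz

titan_fp32  = peak_gflops(2688, 0.837)         # ~4500 GFLOPS
titan_fp64  = peak_gflops(2688 // 3, 0.837)    # 1/3 FP32 rate: ~1500 GFLOPS
gtx680_fp32 = peak_gflops(1536, 1.006)         # ~3090 GFLOPS
gtx680_fp64 = peak_gflops(1536 // 24, 1.006)   # 1/24 FP32 rate: ~129 GFLOPS

print(f"Titan:   {titan_fp32:.0f} SP / {titan_fp64:.0f} DP GFLOPS")
print(f"GTX 680: {gtx680_fp32:.0f} SP / {gtx680_fp64:.0f} DP GFLOPS")
```

The order-of-magnitude gap in double precision (1/3 rate versus 1/24) is why Titan matters so much more for compute than its single-precision lead alone would suggest.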

Back in the land of consumer gaming though, we have to contend with the fact that unlike any big-GPU card before it, Titan is purposely removed from the price/performance curve. NVIDIA has long wanted to ape Intel’s ability to have an extreme/luxury product at the very top end of the consumer product stack, and with Titan they’re going ahead with that.

The end result is that Titan is targeted at a different demographic than GTX 580 or other such cards, a demographic that has the means and the desire to purchase such a product. Being used to seeing the best video cards go for less, we won’t call this a great development for the competitive landscape, but ultimately this is far from the first luxury-level computer part, so there’s not much else to say other than that this is a product for a limited audience. But what that limited audience is getting is nothing short of an amazing card.

As with the GTX 690, NVIDIA has once again set the gold standard for GPU construction, this time for a single-GPU card. The GTX 680 was a well-built card, but next to Titan it suddenly looks outdated. For example, despite Titan’s significantly higher TDP it’s no louder than the GTX 680, and the GTX 680 was already a quiet card. Next to price/performance, the most important metric is noise, and by focusing on build quality NVIDIA has unquestionably set a new standard for high-end, high-TDP video cards.

On a final note, normally I’m not one for video card gimmicks, but after having seen both of NVIDIA’s Titan concept systems I have to say NVIDIA has taken an interesting route in justifying the luxury status of Titan. With the Radeon HD 7970 GHz Edition only available with open air or exotic cooling, Titan has been put into a position where it’s the ultimate blower card by a wide margin. The end result is that in scenarios where blowers are preferred and/or required, such as SFF PCs or tri-SLI, Titan is even more of an improvement over the competition than it is for traditional desktop computers. Or as Anand has so eloquently put it with his look at Falcon Northwest’s Tiki, when it comes to Titan “The days of a high end gaming rig being obnoxiously loud are thankfully over.”

Wrapping things up, on Monday we’ll be taking a look at the final piece of the puzzle: Origin’s tri-SLI full tower Genesis PC. The Genesis has been an interesting beast for its use of water cooling with Titan, and with the Titan launch behind us we can now focus on what it takes to feed 3 Titan video cards and why it’s an impeccable machine for multi-monitor/surround gaming. So until then, stay tuned.


337 Comments


  • chizow - Saturday, February 23, 2013 - link

    I haven't used this rebuttal in a long time; I reserve it for only the most deserving, but you, sir, are retarded.

    Everything you've written above is anti-progress; you've set Moore's law and semiconductor progress back 30 years with your asinine rants. If idiots like you were running the show, no one would own any electronic devices, because we'd be paying $50,000 for toaster ovens.
  • CeriseCogburn - Tuesday, February 26, 2013 - link

    Yeah, that's a great counter, you idiot... as usual, when reality barely glints a tiny bit through your lying tinfoil dunce cap, another sensationalistic pile of bunk is all you have.
    A great cover for a cornered doofus.
    When you finally face your immense error, you'll get over it.

  • hammer256 - Thursday, February 21, 2013 - link

    Not to sound like a broken record, but for us in scientific computing using CUDA, this is a godsend.
    The GTX 680 release was a big disappointment for compute, and I was worried that this was going to be the trend going forward with Nvidia: nerfed compute cards for consumers that focus on graphics, and compute-heavy professional cards for the HPC space.
    I was worried that the days of cheap compute were gone. Those days might still be numbered, but at least for this generation Titan is going to keep it going.
  • ronin22 - Thursday, February 21, 2013 - link

    +1
  • PCTC2 - Thursday, February 21, 2013 - link

    For all of you complaining about the $999 price tag: it's like the GTX 690 (or even the 8800 Ultra, for those who remember it). It's a flagship luxury card for those who can afford it.

    But that's beside the real point. This is a K20 without the price premium (and some of the valuable Tesla features). But for researchers on a budget, using homegrown GPGPU compute code that doesn't validate to run only on Tesla cards, these are a godsend. I mean, some professional programs will benefit from having a Tesla over a GTX card, but these days, researchers are trying to reach into HPC space without the price premium of true HPC enterprise hardware. The GTX Titan is a good middle point. For the price of a Quadro K5000 and a single Tesla K20c card, they can purchase 4 GTX Titans and still have some money to spare. They don't need SLI. They just need the raw compute power these cards are capable of. So as entry GPU Compute workstation cards, these cards hit the mark for those wanting to enter GPU compute on a budget. As a graphics card for your gaming machine, average gamers need not apply.
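The budget arithmetic above can be sketched out quickly; note that the Quadro/Tesla figures below are assumed 2013 ballpark street prices, not quotes from the article, while $999 is Titan's launch MSRP.

```python
# Rough budget comparison. Quadro/Tesla prices are assumptions
# (approximate 2013 street prices); Titan is the launch MSRP.
quadro_k5000 = 2250   # assumed street price
tesla_k20c   = 3200   # assumed street price
titan_msrp   = 999    # launch MSRP

pro_budget  = quadro_k5000 + tesla_k20c   # the "pro" combo
four_titans = 4 * titan_msrp              # the comment's alternative
spare       = pro_budget - four_titans    # money left over

print(f"Pro combo: ${pro_budget}, 4x Titan: ${four_titans}, spare: ${spare}")
```

With these assumed prices, four Titans do indeed come in well under the pro combo, with a four-figure sum to spare, which is the comment's point.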
  • ronin22 - Thursday, February 21, 2013 - link

    "average gamers need not apply"

    If only people had read this before posting all this hate.

    Again, gamers, this card is not for you. Please get the cr*p out of here.
  • CeriseCogburn - Tuesday, February 26, 2013 - link

    You have to understand, the review sites themselves have pushed the blind fps mentality now for years, not to mention insanely declared statistical percentages ripened with over-interpretation on the now contorted and controlled crybaby whiners. It's what they do every time, they feel it gives them the status of consumer advisor, Nader protege, fight the man activist, and knowledgeable enthusiast.

    Unfortunately that comes down to the ignorant demands we see here, twisted with as many lies and conspiracies as are needed to increase the personal faux outrage.
  • Dnwvf - Thursday, February 21, 2013 - link

    In absolute terms, this is the best non-Tesla compute card on the market.

    However, looking at flops/$, you'd be better off buying two 7970 GHz Edition Radeons, which would run around $60 less and give you more total flops. Look at the compute scores: Titan is generally not 2x a single 7970, and in some of the compute scores the 7970 wins.

    Two 7970 GHz Editions (not even in CrossFire mode; you don't need that for OpenCL) will beat the crap out of Titan and cost less. They couldn't run AOPR on the AMD cards... but everybody knows from Bitcoin that AMD cards rule over Nvidia for password hashing (just google bitcoin bit_align_int to see why).

    There's an article on Tom's Hardware where they put a bunch of Nvidia and AMD cards through a bunch of compute benchmarks, and when AMD isn't winning, the GTX 580 generally beats the 680...most likely due to Fermi's uncrippled compute design. Titan is still a 384-bit bus...can't really compare on price because Phi costs an arm and a leg like Tesla, but you have to acknowledge that Phi is probably gonna rock out with its 512-bit vector units.

    Gotta give Nvidia kudos for finally not crippling fp64, but at this price point, who cares? If you're looking to do compute and have a GPU budget of $2K, you could buy:

    An older Tesla
    2 Titans
    -or-
    Build a system with 2 7970Ghz and 2 Gtx 580.

    And the last system would be the best...compute on the AMD cards for certain algorithms, on the Nvidia cards for the others, and PCI bandwidth issues aside, running multiple complex algorithms simultaneously will rock because you can enqueue and execute 4 OpenCL kernels simultaneously. You'd have to shop around for a while to find some 580s though.

    Gamers aren't gonna buy this card unless they're spending Daddy's money, and serious compute folk will realize quickly that if they buy a mobo that will fit 2 or 4 double-width cards, depending on GPU budget, they can get more flops per dollar with a multiple-card setup (think of it as a micro-sized GPU compute cluster). Don't believe me? Google Jeremi Gosney oclhashcat.

    I'm not much for puns, but this card is gonna flop. (sorry)
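For anyone who wants to check the flops/$ claim, here's a quick sketch; the shader counts and clocks are published specs, while the street prices are assumed early-2013 ballpark figures, not quotes.

```python
# Peak single-precision GFLOPS per dollar. Shader counts and clocks
# are published specs; prices are assumed early-2013 street figures.
def gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz   # 2 FLOPs (FMA) per core per clock

cards = {
    "GTX Titan":    (gflops(2688, 0.837), 999),  # ~4500 GFLOPS, MSRP
    "HD 7970 GHz":  (gflops(2048, 1.05),  450),  # ~4300 GFLOPS, assumed price
}

for name, (g, price) in cards.items():
    print(f"{name}: {g:.0f} GFLOPS, {g / price:.1f} GFLOPS/$")

# Two 7970s: ~8600 GFLOPS for ~$900 versus ~4500 GFLOPS for $999,
# which is the flops/$ argument above. FP64 narrows the gap, since
# Titan's 1/3-rate units outrun the 7970's 1/4-rate ones per card.
```

The caveat, as the thread notes, is that this peak-rate math ignores per-algorithm suitability; the FP64 picture and memory-bound workloads can shift the comparison.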
  • DanNeely - Thursday, February 21, 2013 - link

    Has any ETA on when the rest of the Kepler refresh is due leaked out yet?
  • HisDivineOrder - Thursday, February 21, 2013 - link

    It's way out of my price range, first and foremost.

    Second, I think the pricing is a mistake, but I know where they are coming from. They're using the same Intel school of thought on SB-E compared to IB. They price it out the wazoo and only the most luxury of the luxury gamers will buy it. It doesn't matter that the benchmarks show it's only mostly better than its competition down at the $400-500 range and not the all-out destruction you might think it capable of.

    The cost will be so high it will be spoken of in whispers and with wary glances around, fearful that the Titan will appear and step on you. It'll be rare and rare things are seen as legendary just so long as they can make the case it's the fastest single-GPU out there.

    And they can.

    So in short, it's like those people buying hexacore CPU's from Intel. You pay out the nose, you get little real gain and a horrible performance per dollar, but it is more marketing than common sense.

    If nVidia truly wanted to use this product to service all users, they would have priced it at $600-700 and moved a lot more. They don't want that. They're fine with the 670/680 being the high end for a majority of users. Those cards have to be cheap to make by now, and with AMD's delays/stalls/whatevers, they can keep them the way they are or update them with a firmware update and perhaps a minor retooling of the fab design to give it GPU Boost 2.

    They've already set the stage for that imho. If you read the way the article is written about GPU Boost 2 (both of them), you can see nVidia is setting up a stage where they introduce a slightly modified version of the 670 and 680 with "minor updates to the GPU design" and GPU Boost 2, giving them more headroom to improve consistency with the current designs.

    Which again would be stealing from Intel's playbook of supplementing SB-E with mainstream IB cores.

    The price is obscene, but the only people who should actually care are the ones who worship at the altar of AA. Start lowering that and suddenly even a 7950 is way ahead of what you need.
