Final Words

When NVIDIA introduced the original GTX Titan in 2013, they set a new bar for performance, quality, and price for a high-end video card. The GTX Titan ended up being a major success for the company, and a success NVIDIA is keen to repeat. Now, with their Maxwell architecture in hand, NVIDIA is in a position to do just that.

For as brief a legacy as the GTX Titan line has at this point, it's clear that the GTX Titan X is as worthy a successor as NVIDIA could hope for. NVIDIA has honed the already solid GTX Titan design, coupled it with their largest Maxwell GPU, and in the process put together a card that once again sets a new bar for performance and quality. That said, from a design perspective the GTX Titan X is clearly an evolution rather than the revolution the original GTX Titan was, but it is an impressive evolution nonetheless.

Overall then, it should come as no surprise that from a gaming performance standpoint the GTX Titan X stands alone. Delivering an average performance increase of 33% over the GTX 980, the GTX Titan X further builds on what was already a solid single-GPU performance lead for NVIDIA. Meanwhile, compared to its immediate predecessors such as the GTX 780 Ti and the original GTX Titan, the GTX Titan X represents a significant, though perhaps not quite generational, 50-60% increase in performance. Perhaps most importantly, this performance improvement comes without any further increase in noise or power consumption compared to NVIDIA's previous-generation flagship.

Meanwhile, from a technical perspective the GTX Titan X and the GM200 GPU represent an interesting shift in high-end GPU design goals for NVIDIA, one whose ramifications I'm not sure we fully understand yet. By building what's essentially a bigger version of GM204, heavy on graphics and light on FP64 compute, NVIDIA has been able to drive up graphics performance without spending die space on high-performance FP64 hardware. At 601mm², GM200 is still NVIDIA's largest GPU to date, but by producing their purest graphics GPU in quite some time, NVIDIA has been able to pack more graphics horsepower than ever before into a 28nm GPU. What remains to be seen is whether this graphics/FP32-centric design is a one-off occurrence for 28nm, or the start of a permanent shift in NVIDIA GPU design.

But getting back to the video card at hand, there's little doubt of the GTX Titan X's qualifications. Already in possession of the single-GPU performance crown, NVIDIA has further secured it with the release of their latest GTX Titan card. In fact, there's really only one point we can pick at with the GTX Titan X, and that of course is the price. At $999 it carries the same price tag as the original GTX Titan, so today's price comes as no surprise, but it's still a lot to pay for Big Maxwell. NVIDIA is not bashful about treating GTX Titan as a luxury card line, and for better and worse the GTX Titan X continues this tradition. The GTX Titan X, like the GTX Titan before it, is a card that is purposely removed from the price/performance curve.

Meanwhile, we feel the competitive landscape is solidly in NVIDIA's favor. We would be remiss not to mention multi-GPU alternatives such as the GTX 980 in SLI and AMD's excellent Radeon R9 295X2. But as we've mentioned before when reviewing these setups, multi-GPU is really only worth chasing once you've exhausted single-GPU performance. The R9 295X2 in turn is a big spoiler on price, but we continue to believe that a single powerful GPU is the better choice for consistent performance, at least if you can cover the cost of the GTX Titan X.

Finally, on a lighter note, with the launch of the GTX Titan X we wave good-bye to GTX Titan as an entry-level double-precision compute card. NVIDIA dumping high-performance FP64 compute has made the GTX Titan X a better graphics card and even a better FP32 compute card, but it means that the original GTX Titan's time as NVIDIA's first prosumer compute card was short-lived. I suspect that we haven't seen the end of NVIDIA's forays into entry-level FP64 compute cards like the original GTX Titan, but that next card will not be the GTX Titan X.

Comments

  • modeless - Tuesday, March 17, 2015 - link

    This *is* a compute card, but for an application that doesn't need FP64: deep learning. In fact, deep learning would do even better with FP16. What deep learning does need is lots of ALUs (check) and lots of RAM (double check). Deep learning people were asking for more RAM and they got it. I'm considering buying one just for training neural nets.
  • Yojimbo - Tuesday, March 17, 2015 - link

    Yes, I got that idea from the keynote address, and I think that's why they have 12GB of RAM. But how much deep-learning-specific compute demand is there? Are there lots of people who use compute just for deep learning and nothing else that demands FP64 performance? Enough that it warrants building an entire GPU (GM200) just for them? Surely NVIDIA is counting mostly on gaming sales for Titan and whatever cut-down GM200 card arrives later.
  • Yojimbo - Wednesday, March 18, 2015 - link

    Oh, and of course also counting on Quadro sales in the workstation market.
  • DAOWAce - Tuesday, March 17, 2015 - link

    Nearly double the performance of a single 780 when heavily OC'd. Jesus Christ, I wish I had disposable income.

    I already got burned by buying a 780 though ($722 before it dropped $200 a month later due to the Ti's release), so at this point I'd much rather extend the lifespan of my system by picking up a cheap second-hand 780 and dealing with SLI's issues again (haven't used it since my 2x 460s), while I sit and wait for the 980 Ti to make people angry again, or even for the next die shrink.

    At any rate, I won't get burned again like I did buying my first ever enthusiast card, that's for damn sure.
  • Will Robinson - Wednesday, March 18, 2015 - link

    Well, Titan X looks like a really mean machine. A bit pricey, but the top dog has always been like that for NV, so you can't ping it too badly on that.
    I'm really glad NVIDIA has set their "Big Maxwell" benchmark, because now it's up to the R9 390X to defeat it.
    This will be flagship vs. flagship, with the winner taking all the honors.
  • poohbear - Wednesday, March 18, 2015 - link

    Couldn't you show us a chart of VRAM usage for Shadow of Mordor instead of minimum frames? Argus Monitor charts VRAM usage; it would've been great to see how much average and maximum VRAM Shadow of Mordor uses (of the available 12GB).
  • Meaker10 - Wednesday, March 18, 2015 - link

    They only show paged RAM, not actual usage.
  • ChristopherJack - Wednesday, March 18, 2015 - link

    I'm surprised how often the ageing 7990 tops this. I had no doubt whatsoever that the 295X2 was going to stomp all over this, and that's what bothered me about everyone claiming the Titan X was going to be the fastest graphics card, blah, blah, blah. Yes, I'm aware those are dual-GPU cards in CrossFire; no, I don't care, because they're single cards and can be found for significantly lower prices if price/performance is the only concern.
  • Pc_genjin - Wednesday, March 18, 2015 - link

    So... as a person who has the absolute worst timing ever when it comes to purchasing technology, I ordered a brand new PC - FOR THE FIRST TIME IN NINE YEARS - just three days ago, with 2x GTX 980s. I haven't even received them yet, and I run across several reviews for this today. Now, the question is: do I attempt to return the two 980s, saving $100 in the process? Or is it just better to keep the 980s? (Thankfully I haven't built the system yet, and consequently haven't opened them, or I'd be livid.) Thanks for any advice, and sorry for any arguments I spark, yikes.
  • D. Lister - Wednesday, March 18, 2015 - link

    The 2x 980s would be significantly more powerful than a single Titan X, even with only a third of the usable VRAM.
