Final Words

When NVIDIA introduced the original GTX Titan in 2013, they set a new bar for performance, quality, and price for a high-end video card. The GTX Titan ended up being a major success for the company, a success that the company is keen to repeat. And now with their Maxwell architecture in hand, NVIDIA is in a position to do just that.

For as much of a legacy as the GTX Titan line carries at this point, it’s clear that the GTX Titan X is as worthy a successor as NVIDIA could hope for. NVIDIA has honed the already solid GTX Titan design and coupled it with their largest Maxwell GPU, and in the process has put together a card that once again sets a new bar for performance and quality. That said, from a design perspective the GTX Titan X is clearly an evolution rather than the revolution that was the original GTX Titan, but it is an impressive evolution nonetheless.

Overall then, it should come as no surprise that from a gaming performance standpoint the GTX Titan X stands alone. Delivering an average performance increase of 33% over the GTX 980, the GTX Titan X further builds on what was already a solid single-GPU performance lead for NVIDIA. Meanwhile, compared to its immediate predecessors, the GTX 780 Ti and the original GTX Titan, the GTX Titan X represents a significant, though perhaps not quite generational, 50-60% increase in performance. Perhaps most importantly, this performance improvement comes without any further increase in noise or power consumption relative to NVIDIA’s previous-generation flagship.

Meanwhile, from a technical perspective the GTX Titan X and its GM200 GPU represent an interesting shift in high-end GPU design goals for NVIDIA, one whose ramifications I’m not sure we fully understand yet. By building what is essentially a bigger version of GM204 - heavy on graphics, light on FP64 compute - NVIDIA has been able to drive up gaming performance without spending die area on the FP64 resources that weighed down GK110. At 601mm² GM200 is still NVIDIA’s largest GPU to date, but producing their purest graphics GPU in quite some time has allowed NVIDIA to pack more graphics horsepower than ever before into a 28nm GPU. What remains to be seen is whether this graphics/FP32-centric design is a one-off occurrence for 28nm, or the start of a permanent shift in NVIDIA GPU design.

But getting back to the video card at hand, there’s little doubt of the GTX Titan X’s qualifications. Already in possession of the single-GPU performance crown, NVIDIA has further secured it with the release of their latest GTX Titan card. In fact there's really only one point we can pick at with the GTX Titan X, and that of course is the price. At $999 it's priced the same as the original GTX Titan, so today's price tag comes as no surprise, but it's still a high price to pay for Big Maxwell. NVIDIA is not bashful about treating GTX Titan as a luxury card line, and for better and worse GTX Titan X continues this tradition. Like GTX Titan before it, the GTX Titan X is a card that is purposely removed from the price/performance curve.

Meanwhile, we feel the competitive landscape is solidly in NVIDIA's favor. We would be remiss not to mention multi-GPU alternatives such as GTX 980 SLI and AMD's excellent Radeon R9 295X2. But as we've noted when reviewing those setups, multi-GPU is really only worth chasing once you've exhausted single-GPU performance. The R9 295X2 in turn is a big spoiler on price, but we continue to believe that a single powerful GPU is the better choice for consistent performance, at least if you can cover the cost of the GTX Titan X.

Finally, on a lighter note, with the launch of the GTX Titan X we wave goodbye to GTX Titan as an entry-level double-precision compute card. Dropping high-performance FP64 compute has made the GTX Titan X a better graphics card and even a better FP32 compute card, but it means the original GTX Titan's time as NVIDIA's first prosumer card was short-lived. I suspect we haven't seen the end of NVIDIA's forays into entry-level FP64 compute cards like the original GTX Titan, but that next card will not be the GTX Titan X.

276 Comments

  • looncraz - Tuesday, March 17, 2015 - link

    If the most recent slides (allegedly leaked from AMD) hold true, the 390x will be at least as fast as the Titan X, though with only 8GB of RAM (but HBM!).

    A straight 4096SP GCN 1.2/3 GPU would be a close match-up already, but any other improvements made along the way will potentially give the 390X a fairly healthy launch-day lead.

    I think nVidia wanted to keep AMD in the dark as much as possible so that AMD could not position themselves to take more advantage of this, but AMD apparently decided to hold out until May/June (even though they seem to already have some inventory on hand) rather than give nVidia a chance to revise the Titan X before launch.

    nVidia blinked, it seems, after it became apparent AMD was just going to wait out the clock with their current inventory.
  • zepi - Wednesday, March 18, 2015 - link

    Unless AMD has achieved a considerable increase in perf/W, they are going to have a really hard time tuning those 4k shaders to a reasonable frequency without ending up with a 450W card.

    Not that a 500W card is necessarily a deal breaker for everyone, but in practice cooling a 450W card without causing ear-shattering levels of noise is very difficult compared to cooling a 250W card.

    Let us wait and hope, since AMD really would need to get a break and make some money on this one...
  • looncraz - Wednesday, March 18, 2015 - link

    Very true. We know that with HBM there should already be a fairly beefy power savings (~20-30W vs 290X it seems).

    That doesn't buy them room for 1,280 more SPs, of course, but it should get them a healthy 256 of them. Then GCN 1.3 vs 1.1 should have power advantages as well. GCN 1.2 vs 1.0 (R9 285 vs R9 280) at 1792 SPs showed a 60W improvement; if we assume GCN 1.1 to GCN 1.3 shows a similar trend, the 390X should be pulling only about 15W more than the 290X with the rumored specs, without any other improvements.

    Of course, the same math says the 290X should be drawing 350W, but that's because it assumes all the power is in the SPs... But I do think it reveals that AMD could possibly do it without drawing much, if any, more power without making any unprecedented improvements.
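    The per-SP arithmetic above can be put in a quick sketch. The TDP inputs below are assumed round numbers purely for illustration (250W/190W/290W board powers and a 25W HBM saving), not measurements, so the result is only as good as those guesses - and with these particular inputs the naive model actually lands below the 290X rather than 15W above it:

    ```python
    # Naive per-shader power model: attribute the entire board TDP to the SP array.
    # All TDP inputs are illustrative assumptions, not measured figures.

    def per_sp_watts(tdp_w, sp_count):
        """Watts attributed to each shader processor under the naive model."""
        return tdp_w / sp_count

    # GCN 1.0 -> 1.2 at identical shader counts (R9 280 vs R9 285, 1792 SPs each)
    gcn_step_saving = per_sp_watts(250, 1792) - per_sp_watts(190, 1792)  # ~0.033 W/SP

    # Apply the same per-SP saving to a rumored 4096 SP GCN 1.3 part,
    # starting from an assumed 290W R9 290X with 2816 SPs
    r9_390x_per_sp = per_sp_watts(290, 2816) - gcn_step_saving
    hbm_saving_w = 25  # midpoint of the ~20-30W HBM claim above

    estimate_w = r9_390x_per_sp * 4096 - hbm_saving_w
    print(f"naive 390X estimate: {estimate_w:.0f} W")  # ~260 W with these inputs
    ```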
  • Braincruser - Wednesday, March 18, 2015 - link

    Yeah, but the question is, How well will the memory survive on top of a 300W GPU?
    Because the first part in a graphics card to die from high temperatures is the VRAM.
  • looncraz - Thursday, March 19, 2015 - link

    It will be to the side, on a 2.5D interposer, I believe.

    GPU thermal energy will move through the path of least resistance (technically, to the area with the greatest deltaT, but regulated by the material thermal conductivity coefficient), which should be into the heatsink or water block. I'm not sure, but I'd think the chips could operate in the same temperature range as the GPU, but maybe not. It may be necessary to keep them thermally isolated. Which shouldn't be too difficult, maybe as simple as not using thermal pads at all for the memory and allowing them to passively dissipate heat (or through interposer mounted heatsinks).

    It will be interesting to see what they have done to solve the potential issues, that's for sure.
  • Xenonite - Thursday, March 19, 2015 - link

    Yes, I agree that AMD would be able to absolutely destroy NVIDIA on the performance front if they designed a 500W GPU and left the PCB and waterblock design to their AIB partners.

    I would also absolutely love to see what kind of performance a 500W or even a 1kW graphics card would be able to muster; however, since a relatively constant 60fps presented with less than about 100ms of total system latency has been deemed sufficient for a "smooth and responsive" gaming experience, I simply can't imagine such a card ever seeing the light of day.
    And while I can understand everyone likes to pretend that they are saving the planet with their <150W GPUs, the argument that such a TDP would be very difficult to cool does not really hold much water IMHO.

    If, for instance, the card was designed from the ground up to dissipate its heat load over multiple 200W~300W GPUs, connected via a very-high-speed, N-directional data interconnect bus, the card could easily and (most importantly) quietly be cooled with chilled-watercooling dissipating into a few "quad-fan" radiators. Practically, 4 GM200-size GPUs could be placed back-to-back on the PCB, with each one rendering a quarter of the current frame via shared, high-speed frame buffers (thereby eliminating SLI-induced microstutter and "frame-pacing" lag). Cooling would then be as simple as installing 4 standard gpu-watercooling loops with each loop's radiator only having to dissipate the TDP of a single GPU module.
  • naxeem - Tuesday, March 24, 2015 - link

    They did solve that problem with a water-cooling solution. 390X WCE is probably what we'll get.
  • ShieTar - Wednesday, March 18, 2015 - link

    Who says they don't allow it? EVGA has already announced two special models, a superclocked one and one with a watercooling block:

    http://eu.evga.com/articles/00918/EVGA-GeForce-GTX...
  • Wreckage - Tuesday, March 17, 2015 - link

    If by fast you mean June or July. I'm more interested in a 980ti so I don't need a new power supply.
  • ArmedandDangerous - Saturday, March 21, 2015 - link

    There won't ever be a 980 Ti if you understand Nvidia's naming schemes. Ti's are for unlocked parts, and there's nothing further to unlock on the 980's GM204.
