Meet The GeForce GTX Titan X

Now that we’ve had a chance to look at the GM200 GPU at the heart of GTX Titan X, let’s take a look at the card itself.

From a design standpoint, NVIDIA put together a very strong card with the original GTX Titan, combining a revised, magnesium-free version of their all-metal shroud with a high-performance blower and vapor chamber assembly. The end result was a 250W card that was quieter than some open-air cards, much quieter than most other blowers, and shiny to look at to boot. This design was carried forward for the reference GTX 780 series, its stylings were copied for the GTX Titan Z, and it was reused with a cheaper cooling apparatus for the reference GTX 980.

For GTX Titan X, NVIDIA has opted to leave well enough alone, making virtually no changes to the shroud or cooling apparatus. And truth be told it’s hard to fault NVIDIA here, as this design remains the gold (well, aluminum) standard for a blower. Looks aside, after years of blowers that rattled, were too loud, or didn’t cool discrete components very well, NVIDIA is sitting on a very solid design that I’m not sure anyone could top (but I’d love to see them try).

In any case, our favorite metal shroud is back once again. Composed of a cast aluminum housing held together with a combination of rivets and screws, it’s as physically solid a shroud as we’ve ever seen. Meanwhile, having already done a partial black dye job for GTX Titan Black and GTX 780 Ti – using black lettering and a black-tinted polycarbonate window – NVIDIA has more or less completed the job by making the metal shroud itself almost completely black. What remains unpainted are the aluminum accents and the Titan lettering (Titan, not Titan X, curiously enough). The card measures 10.5” long overall, which at this point is NVIDIA’s standard size for high-end GTX cards.

Drilling down, we have the card’s primary cooling apparatus, composed of a nickel-tipped, wedge-shaped heatsink and a ringed radial fan. The heatsink is attached to the GPU via a copper vapor chamber, a feature that has been exclusive to GTX 780/Titan cards and provides the best possible heat transfer between the GPU and the heatsink. Meanwhile the rest of the card is covered with a black aluminum baseplate, providing basic heatsink functionality for the VRMs and other components while also protecting them.

Finally, at the bottom of the stack we have the card itself, complete with the GM200 GPU, VRAM chips, and various discrete components. Unlike the shroud and cooler, GM200’s PCB isn’t a complete carry-over from GK110, but it is nonetheless very similar, with only a handful of changes made. This means we’re looking at the GPU and VRAM chips towards the front of the card, while the VRMs and other discrete components occupy the back. New specifically to GTX Titan X, NVIDIA has done some minor reworking to improve airflow to the discrete components and reduce temperatures, along with employing molded inductors.

As with GK110, NVIDIA still employs a 6+2 phase VRM design: 6 phases for the GPU and another 2 for the VRAM. This means that GTX Titan X has a bit of power delivery headroom – NVIDIA allows the power limit to be increased by 10%, to 275W – but hardcore overclockers will find that there isn’t an extreme amount of additional headroom to play with. Based on our sample, the actual shipping voltage at the max boost clock is fairly low at 1.162v, so in non-TDP-constrained scenarios there is some additional headroom through overvolting, up to 1.237v.
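
For the curious, the headroom works out as follows – a quick back-of-the-envelope sketch in Python, using only the figures above (the voltages are specific to our sample, so your card may differ):

```python
# Power and voltage headroom on our GTX Titan X sample
TDP_W = 250.0          # stock power limit
POWER_RAISE = 0.10     # NVIDIA permits a +10% power limit increase

V_STOCK = 1.162        # shipping voltage at max boost clock (our sample)
V_MAX = 1.237          # maximum overvoltage (our sample)

max_power_w = TDP_W * (1 + POWER_RAISE)
overvolt_pct = (V_MAX / V_STOCK - 1) * 100

print(f"Raised power limit: {max_power_w:.0f}W")      # 275W
print(f"Overvolting headroom: {overvolt_pct:.1f}%")   # ~6.5%
```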

In terms of overall design, the need to house 24 VRAM chips to reach 12GB of VRAM means that the GTX Titan X has chips on the back of the card as well as the front. For this reason, unlike the GTX 980, NVIDIA is once again skipping the backplate, leaving the back side of the card bare just as with the previous GTX Titan cards.
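
The chip count itself is simple arithmetic; a minimal sketch, assuming the 4Gb GDDR5 chips discussed later in this article, GM200’s 384-bit memory bus, and the chips being run in clamshell mode to double up per channel:

```python
# Why 12GB of GDDR5 requires 24 chips on a 384-bit bus (a sketch)
CHIP_DENSITY_GBIT = 4    # 4Gb per GDDR5 chip
BUS_WIDTH_BITS = 384     # GM200 memory interface
CHIP_IO_BITS = 32        # standard GDDR5 chip interface width

chips_per_rank = BUS_WIDTH_BITS // CHIP_IO_BITS     # 12 chips
total_chips = chips_per_rank * 2                    # 24 in clamshell mode
capacity_gb = total_chips * CHIP_DENSITY_GBIT / 8   # gigabits -> gigabytes

print(total_chips, capacity_gb)  # 24 12.0
```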

Moving on, in accordance with GTX Titan X’s 250W TDP and the reuse of the GTX Titan cooler, power delivery for the GTX Titan X is identical to that of its predecessors. This means a 6-pin and an 8-pin power connector at the top of the card to provide up to 225W, with the final 75W coming from the PCIe slot. Interestingly, the board does have another 8-pin PCIe connector position facing the rear of the card, but it goes unused on this specific GM200 card.
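
Tallying up the PCIe specification limits shows why this configuration is sufficient – a quick sketch, with connector limits per the PCIe spec and the TDP figures from above:

```python
# PCIe power budget vs. GTX Titan X's power limits
SLOT_W = 75      # PCIe x16 slot (spec limit)
PIN6_W = 75      # 6-pin connector (spec limit)
PIN8_W = 150     # 8-pin connector (spec limit)

available_w = SLOT_W + PIN6_W + PIN8_W   # 300W total
stock_tdp_w = 250
raised_limit_w = 275                     # with the +10% power limit

print(available_w - stock_tdp_w)     # 50W of spec margin at stock
print(available_w - raised_limit_w)  # 25W of margin at the raised limit
```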

Meanwhile, display I/O follows the same configuration we saw on GTX 980: 1x DL-DVI-I, 3x DisplayPort 1.2, and 1x HDMI 2.0, with a limit of 4 simultaneous displays. In the case of GTX Titan X the DVI port is somewhat antiquated at this point – the card is generally overpowered for the relatively low maximum resolutions of DL-DVI – but on the other hand the HDMI 2.0 port is actually going to be of some value here, since it means GTX Titan X can drive a 4K TV. Meanwhile, if you have money to spare and need to drive more than a single 4K display, GTX Titan X also features a pair of SLI connectors for even more GPU power.

In fact 4K will be a recurring theme for GTX Titan X, as this is one of the primary markets/use cases NVIDIA will be going after with the card. With GTX 980 generally good up to 2560x1440, the even more powerful GTX Titan X is best suited for 4K and VR, the two areas where GTX 980 came up short. In the case of 4K even a single GTX Titan X is going to struggle at times – we’re not at 60fps at 4K with a single GPU quite yet – but GTX Titan X should be good for framerates between 30fps and 60fps at high quality settings. To fill the rest of the gap NVIDIA is also going to be promoting 4Kp60 G-Sync monitors alongside the GTX Titan X, as the 30-60fps range is where G-Sync excels. And while G-Sync can’t make up for lost frames, it can take some of the bite out of sub-60fps framerates, making for a smoother, cleaner experience than it would otherwise be.
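
Putting that 30-60fps window in frame time terms helps explain why variable refresh matters here – a small illustrative sketch, assuming the fixed 60Hz refresh of a conventional 4Kp60 monitor as the point of comparison:

```python
# Frame times across the 30-60fps range where G-Sync earns its keep
for fps in (30, 40, 50, 60):
    print(f"{fps} fps -> {1000 / fps:.1f} ms/frame")
# 30 fps -> 33.3 ms/frame ... 60 fps -> 16.7 ms/frame
# On a fixed 60Hz display, a frame that misses the 16.7ms window either
# tears or is held for a full extra refresh by v-sync; a variable refresh
# display simply presents each frame as soon as it is finished.
```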

Longer term, NVIDIA also sees the GTX Titan X as their most potent card for VR headsets, and they made sure that GTX Titan X was on the show floor at GDC to drive several of the major VR demos. Certainly VR will take just about whatever rendering power you can throw at it, if only in the name of reducing rendering latency. But overall we’re still very early in the game, especially with commercial VR headsets still being in development.

Finally, speaking of the long term, I wanted to hit upon the subject of the GTX Titan X’s 12GB of VRAM. With most other Maxwell cards already using 4Gb VRAM chips, the inclusion of 12GB of VRAM in NVIDIA’s flagship card was practically a given, especially since it doubles the 6GB of VRAM the original GTX Titan came with. At the same time however I’m curious to see just how long it takes for games to grow into this space. The original GTX Titan was fortunate enough to come out with 6GB right before the current-generation consoles launched, and with them their 8GB memory configurations, leading to a rather sudden jump in VRAM requirements that the GTX Titan was well positioned to handle. Much like 6GB in 2013, 12GB is overkill in 2015, but unlike the original GTX Titan I suspect 12GB will remain overkill for a much longer period of time, especially without a significant technology bump like the consoles to drive up VRAM requirements.

Comments

  • nos024 - Wednesday, March 18, 2015 - link

    Well, let's see. Even when it launches, will it be readily available and not overpriced like the 290X? If the 290X had been readily available when it launched, I would've bought one.
  • eanazag - Wednesday, March 18, 2015 - link

    Based on leaked slides referencing Battlefield 4 at 4K resolution, the 390X is 1.6x the 290X. In the context of this review's results, we could guess it comes up slightly short of the Titan X at 4K Ultra and about 10 fps faster at 4K Medium. Far Cry 4 came in at 1.55x the 290X.

    290X non-uber 4K Ultra - BF4 - 35.5 fps x 1.6 = 56.8 >> Titan X 58.3
    290X non-uber 4K Medium - BF4 - 65.9 fps x 1.6 = 105.44 >> Titan X 94.8

    290X non-uber 4K Ultra - FC4 - 31.2 fps x 1.55 = 48.36 >> Titan X 42.1
    290X non-uber 4K Medium - FC4 - 40.9 fps x 1.55 = 63.395 >> Titan X 60.5

    These numbers don't tell the whole story of how AMD arrived at the figures, but they paint the picture of a GPU that goes toe-to-toe with the Titan X. The slides also talk about a water-cooled edition. I'm suspecting the wattage will be in the same ballpark as the 290X, and likely higher.

    With the Titan X's full breadth of compute muscle, I am not sure what the 980 Ti will look like. I suspect NVIDIA is holding that back based on whatever AMD releases, so they can unload a smackdown trump card. The 390X WCE is rumored at $700 with 8GB of HBM (high bandwidth memory, 4096-bit width), due in Q2 (April-June). Between the Titan X and the 390X at the same price, given what I know at the moment I would go with the Titan X.

    Stack your GPU $'s for July.
  • FlushedBubblyJock - Thursday, April 2, 2015 - link

    With the R9 390X still months and months away, it won't be worth it unless it comes out at $499.
  • shing3232 - Tuesday, March 17, 2015 - link

    1/32-rate FP64? So this is a big gaming core.
  • Railgun - Tuesday, March 17, 2015 - link

    Exactly why it's not a $999 card.
  • shing3232 - Tuesday, March 17, 2015 - link

    But it was priced at $999.
  • Railgun - Tuesday, March 17, 2015 - link

    What I mean is that it's not worth being a $999 card. Yes, it's priced at that, but its value doesn't support it.
  • Flunk - Tuesday, March 17, 2015 - link

    Plenty of dolts bought the first Titan as a gaming card, so I'm sure someone will buy this. At least there's a bigger performance difference between the Titan X and the GTX 980 than there was between the original Titan and the GTX 780.
  • Kevin G - Tuesday, March 17, 2015 - link

    Except the GTX 780 came after the Titan launched. Rather, the proper comparison is the original Titan against the GTX 680, and here we see a similar gap between the Titan X and the GTX 980. It is also widely speculated that we'll see a cut-down GM200 card slot in between the GTX 980 and the Titan X, so history looks like it will repeat itself.
  • chizow - Tuesday, March 17, 2015 - link

    @Railgun, I'd disagree, and I was very vocal against the original Titan for a number of reasons. Mainly because NVIDIA used the 7970 launch as an opportunity to promote their 2nd-fastest chip into the flagship slot. Secondly, because they held back their flagship chip nearly a full year (GTX 680 launched Mar 2012, Titan Feb 2013) while claiming the whole time there was no bigger chip. They also tried to justify the higher price point by calling it a "compute" card, and lastly, it was a cut-down chip and we knew it.

    Titan X isn't being sold under any of those pretenses, and now that the new pricing/SKU structure has settled in (2nd-fastest chip = new $500 flagship), there isn't any of that sticker shock anymore. It's the full chip, there are no complaints about them holding anything back, and 12GB is a ridiculous amount of VRAM to stick on a card, which costs money. If EVGA can release an $800 Classified 980 and people see value in it, then certainly this Titan X has value as well.

    At least for me, it is now a more appealing option than getting a 2nd 980 for SLI: slightly lower performance, but lower heat, no SLI/scaling issues, and no VRAM framebuffer concerns for the foreseeable future. I game at 2560x1440 on an ROG Swift btw, so that is right in this card's wheelhouse.
