In 2014/2015, it took NVIDIA 6 months from the launch of the Maxwell 2 architecture to get GTX Titan X out the door. All things considered, that was a fast turnaround for a new architecture. However, now that we’re in the Pascal generation, it turns out NVIDIA is in the mood to set a speed record, and in more ways than one.

Announced this evening by Jen-Hsun Huang at an engagement at Stanford University is the NVIDIA Titan X, NVIDIA’s new flagship video card. Based on the company’s new GP102 GPU, it’s launching in less than two weeks, on August 2nd.

NVIDIA GPU Specification Comparison

| | NVIDIA Titan X | GTX 1080 | GTX Titan X | GTX Titan |
|---|---|---|---|---|
| CUDA Cores | 3584 | 2560 | 3072 | 2688 |
| Texture Units | 224? | 160 | 192 | 224 |
| ROPs | 96? | 64 | 96 | 48 |
| Core Clock | 1417MHz | 1607MHz | 1000MHz | 837MHz |
| Boost Clock | 1531MHz | 1733MHz | 1075MHz | 876MHz |
| TFLOPs (FMA) | 11 TFLOPs | 9 TFLOPs | 6.6 TFLOPs | 4.7 TFLOPs |
| Memory Clock | 10Gbps GDDR5X | 10Gbps GDDR5X | 7Gbps GDDR5 | 6Gbps GDDR5 |
| Memory Bus Width | 384-bit | 256-bit | 384-bit | 384-bit |
| VRAM | 12GB | 8GB | 12GB | 6GB |
| FP64 | 1/32 | 1/32 | 1/32 | 1/3 |
| FP16 (Native) | 1/64 | 1/64 | N/A | N/A |
| INT8 | 4:1 | ? | ? | ? |
| TDP | 250W | 180W | 250W | 250W |
| GPU | GP102 | GP104 | GM200 | GK110 |
| Transistor Count | 12B | 7.2B | 8B | 7.1B |
| Die Size | 471mm2 | 314mm2 | 601mm2 | 551mm2 |
| Manufacturing Process | TSMC 16nm | TSMC 16nm | TSMC 28nm | TSMC 28nm |
| Launch Date | 08/02/2016 | 05/27/2016 | 03/17/2015 | 02/21/2013 |
| Launch Price | $1200 | $599 (MSRP) / $699 (Founders Edition) | $999 | $999 |

Let’s dive right into the numbers, shall we? The NVIDIA Titan X will be shipping with 3584 CUDA cores. Assuming that NVIDIA retains their GP104-style consumer architecture here – and there’s every reason to expect they will – then we’re looking at 28 SMs, or 40% more than GP104 and the GTX 1080.

It’s interesting to note here that 3584 CUDA cores happens to be the exact same number of CUDA cores found in the Tesla P100 accelerator. These products are based on very different GPUs, but I bring this up because Tesla P100 did not use a fully enabled GP100 GPU; that GPU features 3840 CUDA cores in total. NVIDIA is not confirming the total number of CUDA cores in GP102 at this time, but if it’s meant to be a lightweight version of GP100, then this may not be a fully enabled card either. This would also maintain the 3:2:1 ratio between GP102/GP104/GP106, as we saw with GM200/GM204/GM206.
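
To put some rough numbers on that, here’s a quick back-of-the-envelope sketch. The 128 CUDA cores per SM is my assumption, carried over from GP104’s consumer layout; NVIDIA hasn’t confirmed GP102’s configuration.

```python
# SM math sketch, assuming GP104's consumer layout of 128 CUDA cores
# per SM carries over to GP102 (not confirmed by NVIDIA).
CORES_PER_SM = 128

titan_x_sms  = 3584 // CORES_PER_SM   # 28 SMs
gtx_1080_sms = 2560 // CORES_PER_SM   # 20 SMs
print(f"{titan_x_sms} SMs, {titan_x_sms / gtx_1080_sms - 1:.0%} more than GTX 1080")
# -> 28 SMs, 40% more than GTX 1080

# If full GP102 mirrors GP100's 3840 FP32 cores, that's 30 SMs,
# which would make Titan X a cut-down 28-of-30 configuration.
full_gp102_sms = 3840 // CORES_PER_SM  # 30 SMs
```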

On the clockspeed front, Titan X will be clocked at 1417MHz base and 1531MHz boost. This puts the total FP32 throughput at 11 TFLOPs (well, 10.97…), 24% higher than GTX 1080. In terms of expected performance, NVIDIA isn’t offering any comparisons to GTX 1080 at this time, but relative to the Maxwell 2 based GTX Titan X, they are talking about a performance boost of up to 60%.
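
For anyone who wants to check the math, the throughput figure falls out of the standard 2 FLOPs per core per clock for a fused multiply-add (FMA):

```python
# Peak FP32 throughput: cores x 2 FLOPs (FMA) x boost clock.
def fp32_tflops(cores: int, boost_mhz: int) -> float:
    return cores * 2 * boost_mhz * 1e6 / 1e12

titan_x  = fp32_tflops(3584, 1531)   # ~10.97 TFLOPs
gtx_1080 = fp32_tflops(2560, 1733)   # ~8.87 TFLOPs
print(f"{titan_x:.2f} vs {gtx_1080:.2f} TFLOPs: +{titan_x / gtx_1080 - 1:.0%}")
# -> 10.97 vs 8.87 TFLOPs: +24%
```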

Feeding the beast that is GP102 is a 384-bit GDDR5X memory bus. NVIDIA will be running Titan X’s GDDR5X at the same 10Gbps as on GTX 1080, so we’re looking at a straight-up 50% increase in memory bus size and resulting memory bandwidth, bringing Titan X to 480GB/sec.
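
The bandwidth arithmetic is similarly simple: bus width in bytes multiplied by the per-pin data rate.

```python
# Memory bandwidth: (bus width in bits / 8) x per-pin data rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(384, 10))  # Titan X:  480.0 GB/s
print(bandwidth_gbs(256, 10))  # GTX 1080: 320.0 GB/s, i.e. a 50% increase
```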

At this point in time there are a few unknowns about other specifications of the card. ROP count and texture unit count have not been disclosed (and this is something NVIDIA rarely posts on their site anyhow), but based on GP104 and GP106, I believe it’s safe to assume that we’re looking at 224 texture units and 96 ROPs. To put this into numbers then, theoretical performance versus a GTX 1080 would be 24% more shading/texturing/geometry/compute performance, 50% more memory bandwidth, and 33% more ROP throughput. Or relative to GTX Titan X (Maxwell 2), 66% more shading/texturing/geometry/compute performance, 43% more memory bandwidth, and 42% more ROP throughput. Of course, none of this takes into account any of Pascal’s architectural advantages, such as its new delta color compression system.
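
As a sanity check on those percentages, here’s a sketch using the spec table’s numbers; note that Titan X’s 96 ROP and 224 texture unit counts are, again, my assumptions rather than confirmed figures.

```python
# Theoretical throughput ratios. Shading/texturing/geometry/compute scale with
# cores x clock, ROP throughput with ROPs x clock, and memory bandwidth with
# bus width x data rate. Titan X's ROP count is an assumption.
titan_x = (3584, 1531, 96, 384, 10)  # cores, boost MHz, ROPs, bus bits, Gbps
cards = {
    "GTX 1080":                (2560, 1733, 64, 256, 10),
    "GTX Titan X (Maxwell 2)": (3072, 1075, 96, 384, 7),
}

tc, tm, tr, tb, tg = titan_x
for name, (c, mhz, r, bus, gbps) in cards.items():
    shade = tc * tm / (c * mhz) - 1
    rop   = tr * tm / (r * mhz) - 1
    bw    = tb * tg / (bus * gbps) - 1
    print(f"vs {name}: +{shade:.0%} shading, +{bw:.0%} bandwidth, +{rop:.0%} ROP")
# -> vs GTX 1080: +24% shading, +50% bandwidth, +33% ROP
# -> vs GTX Titan X (Maxwell 2): +66% shading, +43% bandwidth, +42% ROP
```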

Meanwhile, like the past Titans, the new Titan X is a 250W card, putting it 70W (39%) above GTX 1080. In pictures released by NVIDIA and confirmed by their spec sheet, the card will be powered by the typical 8-pin + 6-pin power connector setup. And speaking of pictures, the handful released so far confirm that the card will follow NVIDIA’s previous reference design, in the new GTX 1000 series triangular style. This means we’re looking at a blower-based card – now clad in black for Titan X – using a vapor chamber setup like the GTX 1080 and past Titan cards.

The TDP difference between Titan X and GTX 1080 may also explain some of the rationale behind the performance estimates above. In the Maxwell 2 generation, GTX Titan X (250W) consumed 85W more than GTX 980 (165W); but for the Pascal generation, NVIDIA only gets another 70W. As power is the ultimate factor limiting performance, it stands to reason that NVIDIA can't increase performance over GTX 1080 (in the form of CUDA cores and clockspeeds) by as much as they could over GTX 980. There is always the option to go above 250W – Tesla P100 in its mezzanine form goes to 300W – but for a PCIe card, 250W seems to be the sweet spot for NVIDIA.

Moving on, display I/O is listed as DisplayPort 1.4, HDMI 2.0b, and DL-DVI; NVIDIA doesn’t list the number of ports (and they aren’t visible in product photos), but I’d expect that it’s 3x DP, 1x HDMI, and 1x DL-DVI, just as with the past Titan X and GTX 1080.

From a marketing standpoint, it goes without saying that NVIDIA is pitching the Titan X as their new flagship card. What is interesting however is that it’s not being classified as a GeForce card, rather it’s the amorphous “NVIDIA Titan X”, being neither Quadro, Tesla, nor GeForce. Since the first card’s introduction in 2013, the GTX Titan series has always walked a fine line as a prosumer card, balanced between a relatively cheap compute card for workstations, and an uber gaming card for gaming PCs.

That NVIDIA has removed this card from the GeForce family would seem to further cement its place as a prosumer card. On the compute front the company is separately advertising the card's 44 TOPs INT8 compute performance - INT8 being frequently used for neural network inference - which is something they haven't done before for GeForce or Titan cards. Though make no mistake: the company’s GeForce division is marketing the card and it’s listed on GeForce.com, so it is still very much a gaming card as well.
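
That 44 TOPs number is consistent with the 4:1 INT8 rate in the spec table. As a sketch, and assuming the 4:1 ratio is applied to the FMA-derived FP32 figure (e.g. via dp4a-style 4-wide INT8 dot products):

```python
# INT8 throughput at a 4:1 rate over peak FP32 (assumption: dp4a-style ops).
fp32_tflops = 2 * 3584 * 1531e6 / 1e12   # ~10.97 TFLOPs FP32
int8_tops   = 4 * fp32_tflops            # 4x the FP32 op rate
print(f"{int8_tops:.1f} TOPs INT8")      # -> 43.9 TOPs, marketed as 44 TOPs
```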

As for pricing and availability, NVIDIA’s flagships have always been expensive, and the NVIDIA Titan X even more so. The card will retail for $1200, $200 more than the previous GTX Titan X (Maxwell 2), and $500 more than the NVIDIA-built GTX 1080 Founders Edition. Given the overall higher prices for the GTX 1000 series, this isn’t something that surprises me, but nonetheless it means buying NVIDIA’s best card just got a bit more expensive. Meanwhile, in a departure from previous generations, the card is only being sold directly by NVIDIA through their website. The company’s board partners will not be distributing it, though system builders will still be able to include it.

Overall, the announcement of this new Titan card, its specifications, and its timing raises a lot of questions. Does GP102 have fast FP64/FP16 hardware, or is it purely a larger GP104, finally formalizing the long-anticipated divide between HPC and consumer GPUs? Just how much smaller is GP102 than GP100? How has NVIDIA been able to contract its launch window by so much for the Pascal generation, launching 3 GPUs in the span of 3 months? These are all good questions I hope we’ll get answers to, and with an August 2nd launch it looks like we won’t be waiting too long.

Update 07/25: NVIDIA has given us a few answers to the questions above. We have confirmation that the FP64 and FP16 rates are identical to GP104’s, which is to say very slow, and primarily there for compatibility/debug purposes. With the exception of INT8 support, this is a bigger GP104 throughout.

Meanwhile we have a die size for GP102: 471mm2, which is 139mm2 smaller than GP100. Given that both (presumably) have the same number of FP32 cores, the die space savings and implications are significant. This is as good an example as we're ever going to get of the die space cost of the HPC features limited to GP100: NVLink, fast FP64/FP16 support, larger register files, etc. By splitting HPC and graphics/inference into two GPUs, NVIDIA can produce GP102 at what should be a significantly lower price (and higher yield), something they couldn't do until the market for compute products based on GP100 was self-sustaining.

Finally, NVIDIA has clarified the branding a bit. Despite GeForce.com labeling it "the world’s ultimate graphics card," NVIDIA this morning has stated that the primary market is FP32 and INT8 compute, not gaming. Though gaming is certainly possible - and I fully expect they'll be happy to sell you $1200 gaming cards - the tables have essentially been flipped from the past Titan cards, where they were treated as gaming first and compute second. This of course opens the door to a proper GeForce branded GP102 card later on, possibly with neutered INT8 support to enforce the market segmentation.


  • Dobson123 - Friday, July 22, 2016

    Holy shit, that was fast.
  • Dobson123 - Friday, July 22, 2016

    Confusing name btw.
  • nathanddrews - Friday, July 22, 2016

    I've been saying it for a while, but NVIDIA wants to be Apple. Closed garden "ecosystem" (Shield/GeForce Experience/GameStream/G-Sync), planned obsolescence (poor support for previous gen GPUs), and obscene markup of halo products.

    That said, I bet this Titan X (2016) will be the first ever, truly 4K Ultra 60+fps GPU. That's an accomplishment worth celebrating. When the 1080Ti comes out for half this price, that will be an accomplishment worth buying.
  • ImSpartacus - Friday, July 22, 2016

    I was surprised that they didn't just keep using the "Titan" name so we end up with 2014's Titan, 2015's Titan, etc.
  • Ian Cutress - Monday, July 25, 2016

    A lot of cars work this way. Ford ShootingBrake 2015, 2016, 2017
  • Rock1m1 - Friday, July 22, 2016

    A lot of what they are doing suggests that, but thankfully so far they are going by the middle path, semi-open ecosystem which I think is the best for both innovation and consumers.
  • nathanddrews - Friday, July 22, 2016

    Semi-open? GameStream/Shield only works with NVIDIA hardware. G-Sync only works with NVIDIA hardware. CUDA? PhysX? HairWorks? SmokeWorks? I don't know if that last one is real, but I mean, the list goes on and on. They have done very little to pursue anything "open". Not that they are required to, of course.

    For the record, I'm not suggesting that AMD is altruistic in this regard either, but at least their pursuit of OpenCL, FreeSync, and HSA is in contrast to much of what NVIDIA is doing.
  • Murloc - Saturday, July 23, 2016

    yeah but AMD may be doing that open stuff because they're in a weak position and they would just garden wall themselves out of the market.
  • nevcairiel - Monday, July 25, 2016

    AMD just didn't have much of a choice. They don't have the market share to push for their own methods, as much as they would want to.
  • TheJian - Tuesday, July 26, 2016

    You are aware Hairworks (and all of gameworks AFAIK) works fine on AMD hardware correct? The only problem comes when you jack up tessellation to above 4x (8x in some stuff?) in games like Witcher 3. There is nothing wrong with NV exploiting their hardware to the max for users each gen and AMD should do the same. Even Nvidia's older hardware gets hit (980 and below), the same way as AMD's with 64x. It was seemingly targeted at maxwell2 and up for that level which admittedly doesn't change things much. The 4-8x settings look fine and is easily adjustable but you still get to see the hair stuff (which is pretty cool). Cuda is different and they have spent ~8Billion+ on developing it for the specific architecture in their gpus. Why anyone would share those gains over the last decade with the competition is beyond me. The fact that AMD can't afford to do it, is a benefit again to NV buyers. If AMD hadn't gone console/apu they would have the R&D to spend on CPU/GPU (their CORE tech) instead of low margin stuff, and there are many more management decisions that have screwed them (paying 3x price for ATI etc). Both VEGA/ZEN were pushed off due to AMD's own selection of consoles first. Nvidia passed stating consoles would rob R&D from CORE products. People scoffed saying they were butthurt...LOL. NO, it was looking out for their core customers! You're complaining that Nvidia isn't helping AMD. With ~80% share, they don't have to (at least you got that part).

    https://www.pcper.com/reviews/Graphics-Cards/Borde...
    physx does run on cpu, just not as well since it's single threaded (last I checked, not sure today as of gameworks integration now). Consoles use it too. Again, it isn't Nvidia's job to help AMD.

    I already gave solutions for gamestream in a previous response to you, but of course YMMV (though people use them fine supposedly low lag etc). I just think NV doing their own solution is the best way to go when talking timings, lag etc as they know their hardware. The feature would likely lose quality if done by 3rd party for AMD or NV. You may think AMD has a better approach to their stuff, but it's only because they are weak. It is also the reason they don't draw many users for "features" that are literally game changers. AMD pursued OpenCL because they couldn't afford to fund a cuda alternative on their own or funding for the schools/certification system to get people to use it and learn it ;) Same story with everything else. If you can't afford your own cuda, you either lack the feature, or go in with a group hoping to at least have some success. OpenCL was their only option IMHO, but again all of these issues are due to a lack of profits yearly and losing ~8B in the last 15yrs, which ironically is about the amount NV spent on getting Cuda to where it is. Cuda is one of the main reasons I may go NV this time as I have aspirations past just games (at some point soon) and have delayed this purchase far longer than normal (usually every 3yrs, was waiting on gsync monitor I wanted, die shrink etc). But if Vega is awesome, cool and uses less watts...I could go that route now and upgrade when the need for pro stuff really hits me & just toss vega to my dad :) Whine about what AMD is doing to themselves, this isn't an NV problem. Put out a great product and quit low-balling them. I hate that as a consumer as I like cheap stuff too, but they need 5-10yrs of profits, not 15yrs of losses (well they had a few profit years but c'mon).
