Today at GTC NVIDIA announced their next GTX Titan family card. Dubbed the GTX Titan Z (no idea yet on why it’s Z), the card is NVIDIA's obligatory entry into the dual-GPU/single-card market, finally bringing NVIDIA’s flagship GK110 GPU into a dual-GPU desktop/workstation product.

While NVIDIA has not released the complete details about the product – in particular we don’t know precise clockspeeds or TDPs – we have been given some information on core configuration, memory, pricing, and availability.

|                       | GTX Titan Z | GTX Titan Black | GTX 780 Ti | GTX Titan  |
|-----------------------|-------------|-----------------|------------|------------|
| Stream Processors     | 2 × 2880    | 2880            | 2880       | 2688       |
| Texture Units         | 2 × 240     | 240             | 240        | 224        |
| ROPs                  | 2 × 48      | 48              | 48         | 48         |
| Core Clock            | 700MHz?     | 889MHz          | 875MHz     | 837MHz     |
| Boost Clock           | ?           | 980MHz          | 928MHz     | 876MHz     |
| Memory Clock          | 7GHz GDDR5  | 7GHz GDDR5      | 7GHz GDDR5 | 6GHz GDDR5 |
| Memory Bus Width      | 2 × 384-bit | 384-bit         | 384-bit    | 384-bit    |
| VRAM                  | 2 × 6GB     | 6GB             | 3GB        | 6GB        |
| FP64                  | 1/3 FP32    | 1/3 FP32        | 1/24 FP32  | 1/3 FP32   |
| TDP                   | ?           | 250W            | 250W       | 250W       |
| Transistor Count      | 2 × 7.1B    | 7.1B            | 7.1B       | 7.1B       |
| Manufacturing Process | TSMC 28nm   | TSMC 28nm       | TSMC 28nm  | TSMC 28nm  |
| Launch Date           | 04/XX/14    | 02/18/14        | 11/07/13   | 02/21/13   |
| Launch Price          | $2999       | $999            | $699       | $999       |

In brief, the GTX Titan Z is a pair of fully enabled GK110 GPUs. NVIDIA isn’t cutting any SMXes or ROP partitions to bring down power consumption, so each half of the card is equivalent to a GTX 780 Ti or GTX Titan Black, operating at whatever (presumably lower) clockspeeds NVIDIA has picked. And although we don’t have precise clockspeeds, NVIDIA has quoted the card as having 8 TFLOPS of FP32 performance, which would put the GPU clockspeed at around 700MHz, nearly 200MHz below GTX Titan Black’s base clock (to say nothing of boost clocks).
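The ~700MHz estimate can be sanity-checked with a quick calculation. This is a sketch using only the figures above, plus the standard assumption that each GK110 CUDA core executes 2 FP32 operations per clock via fused multiply-add:

```python
# Back-of-the-envelope check of the ~700MHz core clock implied by NVIDIA's
# quoted 8 TFLOPS FP32 figure. Assumes 2 FP32 FLOPs per CUDA core per clock
# (fused multiply-add), the usual convention for this kind of peak-rate math.
CORES_PER_GPU = 2880
GPUS = 2
FLOPS_PER_CORE_PER_CLOCK = 2

quoted_fp32_flops = 8.0e12  # 8 TFLOPS, as quoted for the whole card

implied_clock_mhz = quoted_fp32_flops / (GPUS * CORES_PER_GPU * FLOPS_PER_CORE_PER_CLOCK) / 1e6
print(f"Implied core clock: {implied_clock_mhz:.0f} MHz")  # ~694 MHz
```

That ~694MHz result is why an 8 TFLOPS rating points to a base clock nearly 200MHz below GTX Titan Black's 889MHz.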

On the memory front the GTX Titan Z is configured with 12GB of VRAM, 6GB per GPU. NVIDIA’s consumer arm has also released the memory clockspeed specifications, telling us that the card won’t be making any compromises there: it operates at the same 7GHz memory clockspeed as the GTX Titan Black. This is something of an accomplishment given the minimal routing space a dual-GPU card provides.
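The uncompromised memory clock matters because it preserves full per-GPU bandwidth. A quick sketch of what the quoted specs work out to, assuming the standard convention that the "7GHz" figure is the effective per-pin data rate:

```python
# Per-GPU memory bandwidth from the quoted specs: 7GHz effective GDDR5 data
# rate on a 384-bit bus. The effective rate already folds in GDDR5's
# quad-pumped signaling, so bandwidth is simply rate x bus width / 8 bits.
effective_rate_gtps = 7.0   # giga-transfers per second, per pin
bus_width_bits = 384

bandwidth_gbs = effective_rate_gtps * bus_width_bits / 8
print(f"Per-GPU bandwidth: {bandwidth_gbs:.0f} GB/s")  # 336 GB/s per GPU
```

At 336GB/s per GPU, each half of the card retains the same memory bandwidth as a GTX Titan Black.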

In terms of build the GTX Titan Z shares a lot of similarities with NVIDIA’s previous-generation dual-GPU card, the GTX 690. NVIDIA is keeping the split blower design, with a single axial fan pushing air out via both the front and the back of the card, essentially exhausting half the hot air and sending the other half back into the case. We haven’t had any hands-on time with the card, but NVIDIA is clearly staying with the black metal styling of the GTX Titan Black.

The other major unknown right now is power consumption. GTX Titan Black is rated for 250W, and meanwhile NVIDIA was able to get a pair of roughly 200W GTX 680s into the 300W GTX 690 (with reduced clockspeeds). So it’s not implausible that GTX Titan Z is a 375W card, but we’ll have to wait and see.

But perhaps the biggest shock will be the price. The GTX Titan series has already straddled the prosumer line with its $1000/GPU pricing; GTX Titan was by far the fastest thing on the gaming market in the winter of 2013, while GTX Titan Black is a bit more professional-leaning due to the existence of the GTX 780 Ti. With GTX Titan Z, NVIDIA will be asking for a cool $3000 for the card, or three times the price of a GTX Titan Black.

It goes without saying then that GTX Titan Z is aimed at an even more limited audience than the GTX Titan and GTX Titan Black. To be sure, NVIDIA is still targeting both gamers and compute users with this card, and since it is a GeForce card it will use the standard GeForce driver stack, but the $3000 price tag is much more within the realm of compute users than gamers. For gamers this may as well be a specialty card, like an Asus ARES.

Now for compute users this will still be an expensive card, but potentially very captivating. Per FLOP GTX Titan Black is still a better deal, but with compute users there is a far greater emphasis on density. Meanwhile the GTX Titan brand has by all accounts been a success for NVIDIA, selling more cards to compute users than they had ever expected, so a product like GTX Titan Z is more directly targeted at those users. I have no doubt that there are compute users who will be happy with it – like the original GTX Titan it’s far cheaper per FP64 FLOP than any Tesla card, maintaining its “budget compute” status – but I do wonder if part of the $3000 pricing is in reaction to GTX Titan undercutting Tesla sales.
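The "budget compute" claim can be made concrete with a rough cost-per-FLOP comparison. The Titan Z numbers come from the article (FP64 at 1/3 of the quoted 8 TFLOPS FP32); the Tesla K20X figures used for contrast (~1.31 FP64 TFLOPS, ~$3500 street price) are my own assumptions, not from the article:

```python
# Hypothetical dollars-per-FP64-GFLOP comparison. Titan Z figures are from the
# article; the Tesla K20X rate and price are assumed for illustration only.
titan_z_fp64_tflops = 8.0 / 3   # FP64 runs at 1/3 the FP32 rate on GK110
titan_z_price = 2999

k20x_fp64_tflops = 1.31         # assumed Tesla K20X peak FP64 rate
k20x_price = 3500               # assumed street price

titan_z_cost = titan_z_price / (titan_z_fp64_tflops * 1000)
k20x_cost = k20x_price / (k20x_fp64_tflops * 1000)
print(f"Titan Z: ${titan_z_cost:.2f} per FP64 GFLOP")  # ~$1.12
print(f"K20X:    ${k20x_cost:.2f} per FP64 GFLOP")     # ~$2.67
```

Under these assumptions the Titan Z delivers FP64 throughput at well under half the cost per FLOP of a Tesla card, albeit without Tesla features like ECC.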

Anyhow, we should have more details next month. NVIDIA tells us that they’re expecting to launch the card in April, so we would expect to hear more about it in the next few weeks.

Source: NVIDIA

Comments

  • beck2050 - Wednesday, March 26, 2014 - link

    Absolutely awesome! Now that's what I'm talking about. Halo dreams Btw I have no doubt that the rich 1% ers will grab every last one. That's where the money is these days.
  • BradCube - Wednesday, March 26, 2014 - link

I realise most people are going to think this is an absurd waste of money, but I was pretty excited to hear the announcement. From the perspective of the video production world, this is an awesome way to get 4 extremely well performing GPUs into my system while only taking up 4 slots, leaving me room for RAID, Video IO, and a Red Rocket or audio expansion all within the one enclosure. Density comes at a cost I suppose.

I'm guessing it's more my demographic that this product is really targeted towards. Or perhaps extravagant small form factor PCs that can only afford to give two slots to the GPU? I can imagine cooling this guy in an enclosure that small would be pretty challenging however...
  • The_Assimilator - Wednesday, March 26, 2014 - link

    It's a triple-slot card.
  • BradCube - Thursday, March 27, 2014 - link

    Doh! Well now I feel like an idiot :P.

    Still should equate to some decent space savings in comparison to 4 double slot cards.
  • d3athr0ned - Wednesday, March 26, 2014 - link

    And nvidiots and crew think this will compare with the R9 295X @ 1GHz with its hybrid-cooled (reference) cooler.
  • mm_d - Wednesday, March 26, 2014 - link

    It's called Titan Z because letter "Z" looks like number 2, I think.
  • d3athr0ned - Wednesday, March 26, 2014 - link

    lol mmd true
  • sascha - Wednesday, March 26, 2014 - link

    Nice card.
  • Shadowmaster625 - Wednesday, March 26, 2014 - link

    It wouldn't be worth $3000 even if it were 20nm and the card only had a TDP of 240W. But at least then you could argue the case for it. As it is, it's just a Fed funny-money grab.
  • mapesdhs - Thursday, March 27, 2014 - link

    An item is only ever worth what someone is willing to pay.

    If a professional can use this card to complete a task that results in a much greater
    return (by whatever means), then the investment will more than pay for itself, in
    which case the price isn't a problem at all.

    This is why SGI was able to sell IR-based RealityCentre systems for a million+,
    because companies that used them (eg. oil/gas) recovered their investment
    often in just a single session using the system. In the case of oil exploration,
    being able to more accurately determine where not to drill based on stereo
    visualisation of GIS data; every failed test drill wastes a lot of money, so a better
    hit rate made the RealityCentres an excellent investment.

    When I was an admin of an R.C. in the early 2000s, I asked an oil company guy
    about this; he estimated their $1.5M R.C. cost was recouped in just 6 seconds.

    Same concept applies today. The AE guy I know told me the most important
    thing for him is interactive responsiveness while working with the app, being able to
    see the results of changes as quickly as possible, which means he can make
    decisions about what to do sooner, or experiment with more alternatives in the
    same time frame. Thus, spending more on something that enables this will
    pay off in the long term.

