As the two-year GPU cycle continues in earnest, we’ve reached the point where NVIDIA is gearing up for their annual desktop product line refresh. With the GeForce 600 series proper having launched over a year ago, all the way back in March of 2012, most GeForce 600 series products are at or approaching a year old, putting us roughly halfway through Kepler’s expected two-year lifecycle. With their business strongly rooted in annual upgrades, this means NVIDIA’s GPU lineup is due for a refresh.

How NVIDIA goes about their refreshes has differed throughout the years. Unlike the CPU industry (specifically Intel), the GPU industry doesn’t currently live on any kind of tick-tock progression. New architectures are launched on new process nodes, which in turn ties everything to TSMC’s delivery of those nodes. Last decade saw TSMC doing yearly half-node steps, allowing incremental fab-driven improvements every year. But with TSMC no longer doing half-node steps as of 40nm, fab-driven improvements now come only every two years.

In lieu of new process nodes and new architectures, NVIDIA has opted to refresh based on incremental improvements within their product lineups. With the Fermi generation, NVIDIA initially shipped most GeForce 400 Fermi GPUs with one or more disabled functional units. This helped to boost yields on a highly temperamental 40nm process, but it also left NVIDIA an obvious route of progression for the GeForce 500 series. With the GeForce 600 series on the other hand, 28nm is relatively well behaved and NVIDIA has launched fully-enabled products at almost every tier, leaving them without an obvious route of progression for the Kepler refresh.

So where does NVIDIA go from here? As it turns out, NVIDIA’s solution for their annual refresh is essentially the same: add more functional units. NVIDIA of course doesn’t have more functional units to turn on within their existing GPUs, so instead they’re doing the next best thing, acquiring more functional units by climbing up the GPU ladder itself. With this in mind, we come to today’s launch, the GeForce GTX 780.

The GeForce GTX 780 is the follow-up to last year’s GeForce GTX 680, and is a prime example of refreshing a product line by bringing in a larger, more powerful GPU that was previously reserved for a higher tier product. Whereas GTX 680 was based on a fully-enabled GK104 GPU, GTX 780 is based on a cut-down GK110 GPU, NVIDIA’s monster GPU first launched into the prosumer space with GTX Titan earlier this year. Going this route doesn’t offer much in the way of surprises since GK110 is a known quantity, but as we’ll see it allows NVIDIA to improve performance while slowly bringing down GPU prices.

                       GTX Titan    GTX 780      GTX 680      GTX 580
Stream Processors      2688         2304         1536         512
Texture Units          224          192          128          64
ROPs                   48           48           32           48
Core Clock             837MHz       863MHz       1006MHz      772MHz
Shader Clock           N/A          N/A          N/A          1544MHz
Boost Clock            876MHz       900MHz       1058MHz      N/A
Memory Clock           6GHz GDDR5   6GHz GDDR5   6GHz GDDR5   4GHz GDDR5
Memory Bus Width       384-bit      384-bit      256-bit      384-bit
VRAM                   6GB          3GB          2GB          1.5GB
FP64                   1/3 FP32     1/24 FP32    1/24 FP32    1/8 FP32
TDP                    250W         250W         195W         244W
Transistor Count       7.1B         7.1B         3.5B         3B
Manufacturing Process  TSMC 28nm    TSMC 28nm    TSMC 28nm    TSMC 40nm
Launch Price           $999         $649         $499         $499

As the first of the desktop GeForce 700 lineup, GeForce GTX 780 is in almost every sense of the word a reduced price, reduced performance version of GTX Titan. This means that on the architectural side we’re looking at the same GK110 GPU, this time with fewer functional units. Titan’s 14 SMXes have been reduced to just 12 SMXes, reducing the shader count from 2688 to 2304, and the texture unit count from 224 to 192.
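
GK110’s resources scale linearly with SMX count, so the cuts are easy to express in per-unit terms. A quick sketch (deriving the per-SMX figures from Titan’s totals: 2688/14 = 192 CUDA cores and 224/14 = 16 texture units per SMX) reproduces both cards’ totals:

```python
# Per-SMX resources on GK110, derived from Titan's totals (2688/14, 224/14).
CORES_PER_SMX, TEX_PER_SMX = 192, 16

for name, smxes in (("GTX Titan", 14), ("GTX 780", 12)):
    print(f"{name}: {smxes * CORES_PER_SMX} cores, {smxes * TEX_PER_SMX} texture units")
# GTX Titan: 2688 cores, 224 texture units
# GTX 780: 2304 cores, 192 texture units
```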

At the same time, because NVIDIA has gone from disabling 1 SMX (Titan) to disabling 3 SMXes, GTX 780’s GPC count is now going to be variable, thanks to the fact that GK110 packs 3 SMXes to a GPC. GTX 780 cards will have either 5 GPCs or 4 GPCs, depending on whether the 3 disabled SMXes all reside in the same GPC or not. This is nearly identical to what happened with the GTX 650 Ti, and as with the GTX 650 Ti it’s largely an intellectual curiosity since the difference in GPCs won’t notably impact performance. But it is something worth pointing out.
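
For the curious, a short brute-force sketch shows why those are the only two possible configurations, assuming GK110’s published layout of 5 GPCs with 3 SMXes apiece:

```python
from itertools import combinations

# GK110 physically packs 15 SMXes into 5 GPCs (3 SMXes per GPC).
# GTX 780 disables 3 of the 15; a GPC remains active as long as
# at least one of its SMXes survives.
GPCS, SMX_PER_GPC = 5, 3
smx_ids = range(GPCS * SMX_PER_GPC)

gpc_counts = set()
for disabled in combinations(smx_ids, 3):
    active = {smx // SMX_PER_GPC for smx in smx_ids if smx not in disabled}
    gpc_counts.add(len(active))

print(sorted(gpc_counts))  # [4, 5]: 4 GPCs only when all 3 dead SMXes share a GPC
```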

Moving on with our Titan comparison, much to our surprise NVIDIA has not touched the ROP/memory blocks at all (something they usually do), meaning GTX 780 comes with all 48 ROPs tied to a 384-bit memory bus, just as Titan does. Clockspeeds aside, this means that GTX 780 maintains Titan’s ROP/memory throughput rather than taking a performance hit, which bodes well for ROP and memory-bound scenarios. Note however that while the memory bus is the same width, NVIDIA has dropped Titan’s massive 6GB of RAM for a more conservative 3GB, giving GTX 780 the same memory bandwidth as Titan but only half the RAM.

As for clockspeeds, they have actually improved slightly, thanks to the fact that fewer SMXes need to be powered. Whereas GTX Titan had a base clockspeed of 837MHz, GTX 780 is 2 bins higher at 863MHz, with the boost clock having risen from 876MHz to 900MHz. Memory clocks meanwhile are still at 6GHz, the same as Titan, giving GTX 780 the full 288GB/sec of memory bandwidth to work from.
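
That 288GB/sec figure falls straight out of the standard bandwidth calculation, effective data rate times bus width in bytes:

```python
# Peak memory bandwidth = effective data rate x bus width (in bytes).
data_rate_gtps = 6      # 6GHz effective GDDR5 signaling
bus_width_bits = 384

bandwidth_gbs = data_rate_gtps * bus_width_bits / 8
print(bandwidth_gbs)    # 288.0 GB/sec, identical to Titan
```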

Taken altogether, when it comes to theoretical performance GTX 780 should have 88% of Titan’s shading, texturing, and geometry performance, and 100% of Titan’s memory bandwidth. Meanwhile on the ROP side of matters, we actually have an interesting edge case where, thanks to GTX 780’s slightly higher clockspeeds, its theoretical ROP performance exceeds Titan’s by about 3%. In practice GTX 780 won’t overtake Titan – the loss of the SMXes is far more significant – but in ROP-bound scenarios GTX 780 should be able to stay close to Titan.
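
These percentages are straightforward unit-count times clockspeed scaling; a quick sketch using the boost clocks from the spec table above reproduces them:

```python
# Theoretical throughput scales with functional units x clockspeed.
titan  = {"shaders": 2688, "textures": 224, "rops": 48, "boost_mhz": 876}
gtx780 = {"shaders": 2304, "textures": 192, "rops": 48, "boost_mhz": 900}

for unit in ("shaders", "textures", "rops"):
    ratio = (gtx780[unit] * gtx780["boost_mhz"]) / (titan[unit] * titan["boost_mhz"])
    print(f"{unit}: {ratio:.0%}")
# shaders: 88%, textures: 88%, rops: 103% -- the ROP edge case in question
```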

For better or worse, power consumption is also going to be very close between GTX 780 and Titan. Titan had a 250W TDP and so does GTX 780, so there won’t be much of a decrease in power consumption despite the decrease in performance. This is atypical for NVIDIA, since lower tier products usually carry lower TDPs, but ultimately it comes down to leakage, binning, and the other factors that dictate how GPU tiers need to be structured so that NVIDIA can harvest as many GPUs as possible. On the other hand, the fact that the TDP is still 250W (with the same +6% kicker) means that GTX 780 should have a bit more TDP headroom than Titan, since GTX 780 has fewer SMXes and RAM chips to power.

On a final note, from a feature/architecture standpoint there is one key difference between the GTX 780 and GTX Titan that buyers will want to be aware of. Even though Titan is being sold under the GeForce label, it was essentially NVIDIA’s first prosumer product, crossing over between gaming and compute. GTX 780 on the other hand is a pure gaming/consumer part like the rest of the GeForce lineup, meaning NVIDIA has stripped it of Titan’s marquee compute feature: uncapped double precision (FP64) performance. As a result GTX 780 can offer 90% of GTX Titan’s gaming performance, but it can only offer a fraction of GTX Titan’s FP64 compute performance, topping out at 1/24th FP32 performance rather than 1/3rd like Titan. Titan essentially remains as NVIDIA’s entry-level compute product, leaving GTX 780 to be a high-end gaming product.
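
The size of that FP64 gap is easy to quantify with a back-of-the-envelope sketch at base clocks (counting an FMA as two FLOPs; note that Titan’s clock behavior in full FP64 mode differs slightly in practice):

```python
# Peak FP64 throughput = FP32 peak x the card's FP64 rate.
def fp64_gflops(cores, base_mhz, fp64_ratio):
    fp32_gflops = cores * 2 * base_mhz / 1000  # FMA = 2 FLOPs per clock
    return fp32_gflops * fp64_ratio

print(f"GTX Titan: {fp64_gflops(2688, 837, 1/3):.0f} GFLOPS")   # ~1500
print(f"GTX 780:   {fp64_gflops(2304, 863, 1/24):.0f} GFLOPS")  # ~166
```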

Meanwhile, compared to the GTX 680 which it will be supplanting, the GTX 780 should be a big step up in virtually every way. As NVIDIA likes to put it, GTX 780 is 50% more of everything than GTX 680: 50% more SMXes, 50% more ROPs, 50% more RAM, and 50% more memory bandwidth. In reality, due to the clockspeed differences the theoretical performance difference isn’t nearly as large – we’re looking at just a 29% increase in shading/texturing/ROP performance – but this still leaves GTX 780 much more powerful than its predecessor. The tradeoff, of course, is that with a 250W TDP versus GTX 680’s 195W TDP, GTX 780 also draws around 28% more power; without a process node improvement, performance improvements generally come about by moving along the power/performance curve.
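
The same unit-count times clock arithmetic shows why the gain lands so far below the 50% increase in hardware; the sketch below works out to roughly 28%, with the precise figure depending on rounding and clock assumptions. Notably, the theoretical performance and power increases land at almost exactly the same ratio:

```python
# GTX 780 vs. GTX 680: 50% more units, but lower boost clocks.
gtx680 = {"shaders": 1536, "boost_mhz": 1058, "tdp_w": 195}
gtx780 = {"shaders": 2304, "boost_mhz": 900,  "tdp_w": 250}

perf  = (gtx780["shaders"] * gtx780["boost_mhz"]) / (gtx680["shaders"] * gtx680["boost_mhz"])
power = gtx780["tdp_w"] / gtx680["tdp_w"]
print(f"theoretical shading gain: {perf - 1:.0%}")   # ~28%
print(f"TDP increase:             {power - 1:.0%}")  # ~28%
```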

Moving on to pricing and competitive positioning, it unfortunately won’t just be GTX 780’s performance that’s growing. As we’ve already seen clearly with the launch of GTX Titan, GK110 is in a class of its own as far as GPUs go; AMD simply doesn’t have a GPU big enough to compete on raw performance. Consequently NVIDIA is under no real pricing pressure and can price GTX 780 wherever they want. In this case GTX 780 isn’t just 50% more hardware than the GTX 680, it’s about 50% more expensive too. NVIDIA will be pricing the GTX 780 at $650, $350 below the GTX Titan and GTX 690, and around $200-$250 more than the GTX 680. This has the benefit of bringing the price of Titan-like performance down considerably, but as an x80 card it’s priced well above its predecessor, which launched at the more traditional price point of $500. NVIDIA is no stranger to the $650 price point – they initially launched the GTX 280 there back in 2008 – but this is the first time in years they’ll be able to hold that position.

At $650, the GTX 780 is more of a gap filler than it is a competitor. Potential Titan buyers will want to pay close attention to the GTX 780 since it offers 90% of Titan’s gaming performance, but that’s about it for GTX 780’s competition. Above it, the GTX 690 and Radeon HD 7990 offer much better gaming performance for much higher prices (AFR issues aside), while the next-closest cards below it are the GTX 680 and Radeon HD 7970 GHz Edition, which GTX 780 beats by 20% or more. As a cheaper Titan this is a solid price, but otherwise it’s still somewhat of a luxury card compared to the GTX 680 and its ilk.

Meanwhile, as far as availability goes, this will be a standard hard launch. And unlike GTX Titan and GTX 690, all of NVIDIA’s usual partners will be participating, so there will be cards from a number of companies available from day one, with semi-custom cards right around the corner.

Finally, looking at GTX 780 as an upgrade path, NVIDIA’s ultimate goal here isn’t to sell the card as an upgrade to existing GTX 680 owners; rather, as with past products, the upgrade path is targeted at those buying video cards at 2+ year intervals. GTX 580 is 2.5 years old, while GTX 480 and GTX 280 are older still. A $650 card won’t move GTX 680 owners, but with GTX 780 in some cases doubling GTX 580’s performance, NVIDIA believes it may very well move Fermi owners, and they’re almost certainly right.

May 2013 GPU Pricing Comparison
AMD                          Price    NVIDIA
Radeon HD 7990               $1000    GeForce GTX Titan / GTX 690
                             $650     GeForce GTX 780
Radeon HD 7970 GHz Edition   $450     GeForce GTX 680
Radeon HD 7970               $390
                             $350     GeForce GTX 670
Radeon HD 7950               $300

Comments
  • lukarak - Friday, May 24, 2013

    1/3rd FP32 and 1/24th FP32 is nowhere near 10-15% apart. Gaming is not everything.
  • chizow - Friday, May 24, 2013

    Yes, fine, cut gaming performance on the 780 and Titan down to 1/24th and see how many of these you sell at $650 and $1000.
  • Hrel - Friday, May 24, 2013

    THANK YOU!!!! WHY this kind of thing isn't IN the review is beyond me. As much good work as NVIDIA is doing, their pricing schemes, naming schemes, and general abuse of customers have turned me off of them forever. Which is convenient, because AMD is really getting their shit together quickly.
  • chizow - Saturday, May 25, 2013

    Ryan has danced around this topic in the past, he's a pretty straight shooter overall but it goes without saying why he isn't harping on this in his review. He has to protect his (and AT's) relationship with Nvidia to keep the gravy train flowing. They have gotten in trouble with Nvidia in the past (sometime around the "not covering PhysX enough" fiasco, along with HardOCP) and as a result, their review allocation suffered.

    In the end, while it may be the truth, no one with a vested interest in these products and their future success contributing to their livelihoods wants to hear about it, I guess. It's just us, the consumers that suffer for it, so I do feel it's important to voice my opinion on the matter.
  • Ryan Smith - Sunday, May 26, 2013

    While you are welcome to your opinion and I doubt I'll be able to change it, I would note that I take a dim view towards such unfounded nonsense.

    We have a very clear stance with NVIDIA: we write what we believe. If we like a product we praise it, if we don't like a product we'll say so, and if we see an issue we'll bring it up. We are the press and our role is clear; we are not any company's friend or foe, but a 3rd party who stakes our name and reputation (and livelihood!) on providing unbiased and fair analysis of technologies and products. NVIDIA certainly doesn't get a say in any of this, and the only thing our relationship is built upon is their trusting our methods and conclusions. We certainly don't require NVIDIA's blessing to do what we do, and publishing the truth has and always will come first, vendor relationships be damned. So if I do or do not mention something in an article, it's not about "protecting the gravy train", but about what I, the reviewer, find important and worth mentioning.

    On a side note, I would note that in the 4 years I have had this post, we have never had an issue with review allocation (and I've said some pretty harsh things about NVIDIA products at times). So I'm not sure where you're hearing otherwise.
  • chizow - Monday, May 27, 2013

    Hi Ryan I respect your take on it and as I've said already, you generally comment on and understand more about the impact of pricing and economy more than most other reviews, which is a big part of the reason I appreciate AT reviews over others.

    That being said, much of this type of commentary about pricing/economics can be viewed as editorializing, so while I'm not in any way saying companies influence your actual review results and conclusions, the choice NOT to speak about topics that may be considered out of bounds for a review does not fall under the scope of your reputation or independence as a reviewer.

    If we're being honest here, we're all human and business is conducted between humans with varying degrees of interpersonal relationships. While you may consider yourself truthful and forthcoming always, the tendency to bite your tongue when friendships are at stake is only natural and human. Certainly, a "How's your family?" greeting is much warmer than a "Hey what's with all that crap you wrote about our GTX Titan pricing?" when you meet up at the latest trade show or press event. Similarly, it should be no surprise when Anand refers to various moves/hires at these companies as good/close friends, that he is going to protect those friendships where and when he can.

    In any case, the bit I wrote about allocation was about the same time ExtremeTech got in trouble with Nvidia and felt they were blacklisted for not writing enough about PhysX. HardOCP got in similar trouble for blowing off entire portions of Nvidia's press stack and you similarly glossed over a bunch of the stuff Nvidia wanted you to cover. Subsequently, I do recall you did not have product on launch day and maybe later it was clarified there was some shipping mistake. Was a minor release, maybe one of the later Fermi parts. I may be mistaken, but pretty sure that was the case.
  • Razorbak86 - Monday, May 27, 2013

    Sounds like you've got an axe to grind, and a tin-foil hat for armor. ;)
  • ambientblue - Thursday, August 8, 2013

    Well, you failed to note how the GTX 780 is essentially Kepler's version of a GTX 570. It's priced twice as high though. The Titan should have been a GTX 680 last year... it's only a prosumer card because of the price LOL. That's like saying the GTX 480 is a prosumer card!!!
  • cityuser - Thursday, May 23, 2013

    Whatever NVIDIA does, they never improve their 2D quality. I mean, look at what NVIDIA gives you for Blu-ray playback: the colours are still dead and dull, not really enjoyable.
    It's terrible to use NVIDIA for HD home cinema, whatever settings you try.
    Why can NVIDIA ignore this? Because they're spoiled.
  • Dribble - Thursday, May 23, 2013

    What are you going on about?

    Blu-ray is digital, HDMI is digital - that means the signal is decoded and basically sent straight to the TV - there's no fiddling with colours, sharpening, or anything else required.
