Final Thoughts

NVIDIA is primarily pitching the GeForce GTX 780 as the next step in their high-end x80 line of video cards, a role it fits well. At the same time, however, I can’t help but keep coming back to GTX Titan comparisons, as the GTX 780 is by every metric a cut-down GTX Titan card. Whether this is a good thing or not is open to debate, but between NVIDIA’s entry into the prosumer market with GTX Titan and the fact that there’s now a single-GPU video card above the traditionally top-tier x80 card, this launch is more complicated than past x80 card launches.

Anyhow, we’ll start with the obvious: the GeForce GTX 780 is a filler card whose most prominent role will be filling the gap between sub-$500 cards and the odd prosumer/luxury/ultra-enthusiast market that has taken root above $500. If there’s to be a $1000 single-GPU card in NVIDIA’s product stack then it’s simply good business to have something between it and the sub-$500 market, and that something is the GTX 780.

For the small number of customers that can afford a card in this price segment, the GTX 780 is an extremely strong contender. In fact it’s really the only contender – at least as far as single-GPU cards go – as AMD won’t directly be competing with GK110. The end result is that with the GTX 780 delivering an average of 90% of Titan’s gaming performance for 65% of the price, this is by all rights the Titan Mini, the cheaper video card Titan customers have been asking for. From that perspective the GTX 780 is nothing short of an amazing deal for the level of performance offered, especially since it maintains the high build quality and impressive acoustics that helped to define Titan.

On the other hand, as an x80 card the GTX 780 is a more mixed proposition. The full generational performance improvement is absolutely there: the GTX 780 beats the last-generation GTX 580 by an average of 80%. NVIDIA knows their market well, and for most buyers on a 2-3 year upgrade cycle this is the level of performance necessary to spur an upgrade.

The catch comes down to pricing. $650 for the GTX 780 makes all the sense in the world from NVIDIA’s perspective: GTX Titan sales have exceeded NVIDIA’s expectations, and between those and Tesla K20 sales the GK110 GPU is in high demand right now. At the same time the performance of the GTX 780 is high enough that AMD can’t directly compete with the card, leaving NVIDIA without competition and free to set prices as they would like, and this is exactly what they have done.

This doesn’t make GTX 780 a bad card; on the contrary, it’s probably a better card than any x80 card before it, particularly when it comes to build quality. But it’s $650 for a product tier that for the last 5 years was a $500 product tier. Understandably, no one likes a price increase, ourselves included. Ultimately some fraction of the traditional x80 market will make the jump to $650, and for the rest there will be the remainder of the GeForce 700 family or holding out for the eventual GeForce 800 family.

Moving on, it’s interesting to note that with the launch of Titan and now the GTX 780, the high-end single-GPU market looks almost exactly like it did back in 2011. The prices have changed, but otherwise we’ve returned to unchallenged NVIDIA domination of the high end, with AMD fighting the good fight at lower price points. The 22% performance advantage that the GTX 780 enjoys over the Radeon HD 7970GHz Edition cements NVIDIA’s performance lead, while the price difference between the cards means that the 7970GE is still a very strong contender in its current $400 market and a clear budget-saving spoiler like the 6970 before it.

Finally, to bring things to a close we turn our gaze towards the future of the rest of the GeForce 700 family. The GTX 780 is the first of the GeForce 700 family, but it clearly won’t be the last. A cut-down GK110 card as the GTX 780 was the logical progression for NVIDIA, but what to use to replace the GTX 670 is a far murkier question, as NVIDIA has a number of good choices at their disposal. Mull that over for a bit, and hopefully we’ll be picking up the subject soon.

Comments

  • littlebitstrouds - Thursday, May 23, 2013 - link

    Being a system builder for video editors, I'd love to get some video rendering performance numbers.
  • TheRealArdrid - Thursday, May 23, 2013 - link

    The performance numbers on Far Cry 3 really show just how poorly Crysis was coded. There's no reason why new top-end hardware should still struggle on a 6 year old game.
  • zella05 - Thursday, May 23, 2013 - link

Just no. Crysis looks way better than Far Cry 3. Don't forget, Crysis is a PC game; Far Cry 3 is a console port.
  • Ryan Smith - Thursday, May 23, 2013 - link

On a side note, I like Far Cry 3, but I'd caution against using it as a baseline for a well-behaved game. It's an unusually fussy game. We have to disable HT to make it behave, and the frame pacing even on single-GPU cards is more variable than what we see in most other games.
  • zella05 - Thursday, May 23, 2013 - link

There has to be something wrong with your testing. How on earth can 2560x1440 only shave 1fps off all those cards? Impossible. I have dual 580s on a Dell 1440p monitor and I can say with complete conviction that when playing Crysis 3 you lose at LEAST 10% frame rate. Explain yourselves?
  • WeaselITB - Thursday, May 23, 2013 - link

    There are two 1080p graphs -- one "High Quality" and one "Very High Quality" ... the 1440p graph is "High Quality."
    Comparing HQ between the two gives 79.4 to 53.1 for the 780 ... seems about right to me.

    -Weasel
  • BrightCandle - Thursday, May 23, 2013 - link

    Both of your measures taken from FCAT have issues which I will try to explain below.

    1) The issue with the 95% point

If we take a game where 5% of the frames are being produced very inconsistently, then the 95% point won't capture the issue. But worse is the fact that a 1-in-100 frame that takes twice as long is very noticeable to everyone when playing. Just 1% of the frames having an issue is enough to see a noticeable problem. Our eyes don't work by taking 95% of the frames; our eyes require a level of consistency on all frames. Thus the 95% point is not the equivalent of minimum FPS; that would be the 100% point. The 95% point is arbitrary and ultimately not based on how we perceive the smoothness of frames. It captures AMD's current CrossFire issue, but it fails to have the resolution necessary as a metric to capture the general problem and compare single cards.

    2) The issue with the delta averaging

By comparing to the average frame time, this method would incorrectly categorise clearly better-performing cards. It's the same mistake Tom's Hardware made. In essence, if you have a game that is sometimes CPU limited (common) and sometimes GPU limited, two graphics cards will show similar frame rates at some moments and the faster of them will show dramatically higher performance at other times. This makes the swing from the minimum/average to the high fps much wider. But it could be a perfectly consistent experience in the sense that frame to frame, for the most part, the variation is minimal. Your calculation would tell us the variation of the faster card was a problem, when actually it wasn't.

The reason that measure isn't right is that it fails to recognise the thing we humans see as a problem. We have issues with individual frames that take a long time. We also have issues with inconsistent delivery of animation in patterns. If we take 45 fps, for example, the 16/32/16/32 pattern that vsync can produce is highly noticeable. The issue is that frame to frame we are seeing variation. This is why all the other review sites show the frame times: the stuttering on a frame-by-frame basis really matters.

    We don't particularly have issues with a single momentary jump up or down in frame rate, we might notice them but its momentary and then we adapt rapidly. What our brains do not adapt to rapidly is continuous patterns of odd delivery of frames. Thus any measure where you try to reduce the amount of data needs to be based on that moment by moment variation between individual or small numbers of frames, because big jumps up and down in fps that last for 10s of seconds are not a problem, the issue is the 10ms swing between two individual frames that keeps happening. You could look for patterns, you could use signal frequency analysis and various other techniques to tune out the "carrier" signal of the underlying FPS. But what you can't do is compare it to the average, that just blurs the entire picture. A game that started at 30 fps for half the trace and then was 60 fps for half the trace with no other variation is vastly better than one that continuously oscillates between 30 and 60 fps every other frame.

It's also important to understand that your analysis is missing FRAPS. FRAPS isn't necessarily good for measuring what the cards are doing, but it is essentially the best current way to measure what the game engine is doing. The GPU impacts the game simulation and its timing, and variation in this affects what goes into the frames. So while FCAT captures whether the frames come out smoothly, it does not tell us anything about whether the contents are at the right time; FRAPS is what does that. NVIDIA is downplaying that tool because they have FCAT and are trying to show off their frame metering, and AMD is downplaying it because their cards have issues, but it is still a crucial measure. The ideal picture is that both the FRAPS times are consistent and the FCAT measures are consistent; they measure the input into the GPU and the output respectively, and we need both to get a true picture of the subcomponent.

    Thus I am of the opinion your data doesn't currently show what you thought it did and your analysis needs work.
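    BrightCandle's first point is easy to demonstrate numerically. Below is a minimal sketch (hypothetical frame times, Python/NumPy; not AnandTech's actual tooling) of a trace where 1 frame in 100 takes twice as long: the 95th-percentile frame time cannot see the spikes at all, while the worst adjacent-frame delta exposes them immediately.

        import numpy as np

        # Hypothetical trace: 1,000 frames at a steady 16.7ms, except 1 frame
        # in 100 takes twice as long -- the "1% problem" described above.
        frame_times = np.full(1000, 16.7)
        frame_times[::100] = 33.4  # 10 spikes = 1% of all frames

        # The 95th-percentile frame time is identical to a spike-free trace...
        p95 = np.percentile(frame_times, 95)  # 16.7ms

        # ...while the worst frame-to-frame swing catches the spikes at once.
        worst_delta = np.abs(np.diff(frame_times)).max()  # 16.7ms jump

        print(f"95th percentile: {p95:.1f}ms, "
              f"worst adjacent-frame delta: {worst_delta:.1f}ms")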
  • rscsrAT - Thursday, May 23, 2013 - link

    As far as I understood the delta averaging, it adds the time difference between two adjacent frames.
    To make it clear, if you have 6 frames with 16/32/16/32/16/32ms per frame, you would calculate the value with (5*16)/((3*16+3*32)/6)=333%.
But if you have 6 frames with 16/16/16/32/32/32ms per frame, you would have 16/((3*16+3*32)/6)=67%.
    Therefore you still have a higher value for a higher fluctuating framerate than with a steady framerate.
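    The arithmetic above is easy to check in a few lines of Python. This sketch implements the delta calculation exactly as written in this comment (which may not be precisely the article's method):

        def delta_percentage(frame_times_ms):
            # Sum of the absolute differences between adjacent frames,
            # expressed relative to the average frame time.
            deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
            avg = sum(frame_times_ms) / len(frame_times_ms)
            return 100 * sum(deltas) / avg

        print(delta_percentage([16, 32, 16, 32, 16, 32]))  # ~333%: oscillates every frame
        print(delta_percentage([16, 16, 16, 32, 32, 32]))  # ~67%: a single step change

    As the two results show, the metric penalizes per-frame oscillation far more heavily than a one-time step in frame rate, which is the behavior BrightCandle was asking for.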
  • WeaselITB - Thursday, May 23, 2013 - link

    For your #1 -- 95th percentile is a pretty common statistical analysis tool http://en.wikipedia.org/wiki/68-95-99.7_rule ... I'm assuming that they're assuming a normal distribution, which intuitively makes sense given that you'd expect most results to be close to the mean. I'd be interested in seeing the 3-sigma values, as that would further point out the extreme outliers, and would probably satisfy your desire for the "1%" as well.

    For your #2 -- they're measuring what you're describing, the differences between individual frametimes. Compare their graphs on the "Our First FCAT" page between the line graph of the frametimes of the cards and the bar graph after they massaged the data. The 7970GE has the smallest delta percentage, and the tightest line graph. The 7990 has the largest delta percentage (by far), and the line graph is all over the place. Their methodology of coming up with the "delta percentage" difference is sound.

    -Weasel
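    On the 3-sigma suggestion, one caveat worth sketching: frame times are typically right-skewed rather than normally distributed, so the 99.7th percentile and a literal mean + 3*stddev cutoff need not agree. A quick illustration with made-up numbers (Python/NumPy, a synthetic gamma-distributed tail standing in for slow frames):

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical right-skewed trace: a 16.7ms baseline plus a long tail
        # of slow frames, closer to real frame-time data than a normal curve.
        frame_times = 16.7 + rng.gamma(shape=1.5, scale=2.0, size=10_000)

        p95 = np.percentile(frame_times, 95)     # the review's 95% point
        p997 = np.percentile(frame_times, 99.7)  # "3-sigma" per the 68-95-99.7 rule
        mean_3sigma = frame_times.mean() + 3 * frame_times.std()

        # On skewed data these disagree, so the percentile and sigma views
        # are not interchangeable.
        print(f"p95={p95:.1f}ms  p99.7={p997:.1f}ms  mean+3*sigma={mean_3sigma:.1f}ms")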
  • jonjonjonj - Thursday, May 23, 2013 - link

AMD, get your act together so we have some competition. I really don't even see the point of this card at this price. What are they going to do for the 770? Sell an even more crippled GK110 for $550? And the 760 Ti will be $450? Or are they just going to sell the 680 as a 770?
