When NVIDIA launched their first consumer GK110 card back in February with the GeForce GTX Titan, one of the interesting (if frustrating) aspects of the launch was that we knew we wouldn’t be getting a “complete” GK110 card right away. GTX Titan was already chart-topping fast, easily clinching the crown for NVIDIA, but at the same time it was achieving those high marks with only 14 of GK110’s 15 SMXes active. The 15th SMX, though representing just 7% of GK110’s compute/geometry hardware, offered the promise of a bit more performance out of GK110, a promise that would have to wait for another day to be fulfilled. For a number of reasons, NVIDIA would keep a little more performance in reserve for future use.

Jumping forward 8 months to the past few weeks, things have significantly changed in the high-end video card market. With the launch of AMD’s new flagship video card, the Radeon R9 290X, AMD has unveiled the means to once again compete with NVIDIA at the high end. At the same time they have shown that they have the wherewithal to get into a fantastic, bloody price war for control of the high-end market. Right out of the gate the 290X was fast enough to defeat GTX 780 and battle GTX Titan to a standstill, at a price hundreds of dollars cheaper than NVIDIA’s flagship card. The outcome has been price drops all around, with GTX 780 shedding $150, GTX Titan being all but relegated to the professional side of “prosumer,” and the unexpectedly powerful Radeon R9 290 practically starting the same process all over again just 2 weeks later.

With that in mind, NVIDIA has long become accustomed to controlling the high-end market and the single-GPU performance crown. AMD and NVIDIA may go back and forth at times, but at the end of the day it’s usually NVIDIA who comes out on top. So with AMD knocking at their door and eyeing what has been their crown, the time has come for NVIDIA to tap their reserve tank and to once again cement their hold. The time has come for GTX 780 Ti.

  GTX 780 Ti GTX Titan GTX 780 GTX 770
Stream Processors 2880 2688 2304 1536
Texture Units 240 224 192 128
ROPs 48 48 48 32
Core Clock 875MHz 837MHz 863MHz 1046MHz
Boost Clock 928MHz 876MHz 900MHz 1085MHz
Memory Clock 7GHz GDDR5 6GHz GDDR5 6GHz GDDR5 7GHz GDDR5
Memory Bus Width 384-bit 384-bit 384-bit 256-bit
VRAM 3GB 6GB 3GB 2GB
FP64 1/24 FP32 1/3 FP32 1/24 FP32 1/24 FP32
TDP 250W 250W 250W 230W
Transistor Count 7.1B 7.1B 7.1B 3.5B
Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm TSMC 28nm
Launch Date 11/07/13 02/21/13 05/23/13 05/30/13
Launch Price $699 $999 $649 $399
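For reference, the peak throughput implied by the table can be worked out with the standard two-FLOPs-per-shader-per-clock (FMA) formula. A quick back-of-the-envelope Python sketch of the theoretical maximums (not benchmarks; real-world throughput will be lower):

```python
# Theoretical peak throughput from the spec table above:
# FP32 FLOPS = 2 ops per shader per clock (FMA) x shader count x boost clock.
# FP64 throughput is the table's fraction of the FP32 rate.

def fp32_tflops(shaders, boost_mhz):
    return 2 * shaders * boost_mhz * 1e6 / 1e12

cards = {
    # name: (shaders, boost clock MHz, FP64 fraction of FP32)
    "GTX 780 Ti": (2880, 928, 1 / 24),
    "GTX Titan":  (2688, 876, 1 / 3),
    "GTX 780":    (2304, 900, 1 / 24),
}

for name, (shaders, boost, fp64_frac) in cards.items():
    sp = fp32_tflops(shaders, boost)
    print(f"{name}: {sp:.2f} TFLOPS FP32, {sp * fp64_frac:.2f} TFLOPS FP64")
```

The FP64 column is why Titan keeps its niche: at 1/3 rate it delivers roughly 1.57 TFLOPS of double precision against GTX 780 Ti’s roughly 0.22 TFLOPS, despite the latter’s higher FP32 peak.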

Getting right down to business, GeForce GTX 780 Ti is unabashedly a response to AMD’s Radeon R9 290X, while also serving as NVIDIA’s capstone product for the GeForce 700 series. With NVIDIA finally ready and willing to release fully enabled GK110 based cards – a process that started with the Quadro K6000 – GTX 780 Ti is the obligatory and eventual GeForce part to bring that fully enabled GK110 GPU to the consumer market. By tapping the 15th and final SMX for a bit more performance and coupling it with a very slight clockspeed bump, NVIDIA has the means to fend off AMD’s recent advance while offering a refresh of their product line just in time for a busy holiday season, and as a counter to the impending next-generation console launches.

Looking at the specifications for GTX 780 Ti in detail, at the hardware level GTX 780 Ti is the fully enabled GK110 GeForce part we’ve long been waiting for. Featuring all 15 SMXes up and running, GTX 780 Ti features 25% more compute/geometry/texturing hardware than the GTX 780 it essentially replaces, or around 7% more hardware than the increasingly orphaned GTX Titan. To that end the only place that GTX 780 Ti doesn’t improve on GTX Titan/780 is in the ROP department, as both of those cards already featured all 48 ROPs active, alongside the associated memory controllers and L2 cache.

Coupled with the fully enabled GK110 GPU, NVIDIA has given GTX 780 Ti a minor GPU clockspeed bump to make it not only the fastest GK110 card overall, but also the highest clocked. The 875MHz core clock and 928MHz boost clock are only 12MHz and 28MHz faster than GTX 780 in their respective areas, but with GTX 780 already clocked higher than GTX Titan, GTX 780 Ti doesn’t need much more in the way of GPU clockspeed to keep ahead of the competition and its older siblings. As a result, compared to GTX 780, GTX 780 Ti relies largely on its SMX advantage to improve performance, combining a 1% clockspeed bump with a 25% increase in shader hardware to offer 27% better shading/texturing/geometry performance and just 1% better ROP throughput. Compared to Titan, GTX 780 Ti relies on its more significant 5% clockspeed advantage coupled with its 7% functional unit increase to offer a 12% increase in shading/texturing/geometry performance, alongside a 5% increase in ROP throughput.
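Those percentages are straightforward scaling math: relative throughput goes as functional units times clockspeed. A sanity-check sketch in Python, using the core clocks and SMX/ROP counts above:

```python
# Relative throughput scales as (functional units) x (clockspeed).

def speedup(units_new, units_old, clk_new, clk_old):
    """Fractional performance gain of the new config over the old."""
    return (units_new / units_old) * (clk_new / clk_old) - 1

# Shading/texturing/geometry: SMX count x core clock (875MHz vs 863/837MHz)
print(f"vs GTX 780:  {speedup(15, 12, 875, 863):+.0%}")  # ~ +27%
print(f"vs Titan:    {speedup(15, 14, 875, 837):+.0%}")  # ~ +12%
# ROPs: all three cards have 48 ROPs active, so only the clock delta matters
print(f"ROPs vs 780: {speedup(48, 48, 875, 863):+.0%}")  # ~ +1%
print(f"ROPs vs Titan: {speedup(48, 48, 875, 837):+.0%}")  # ~ +5%
```

Note that GTX 780’s 12 active SMXes versus GTX 780 Ti’s 15 is where the 25% unit increase comes from, while the identical ROP counts are why the ROP gains track the clock delta alone.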

With specs and numbers in mind, there is one other trick up GTX 780 Ti’s sleeve to help push it past everything else, and that is a higher 7GHz memory clock. NVIDIA has given GK110 the 7GHz GDDR5 treatment with GTX 780 Ti (making it the second card after GTX 770 to receive it), giving GTX 780 Ti 336GB/sec of memory bandwidth. This is 17% more than either GTX Titan or GTX 780, and even edges out the recently released Radeon R9 290X’s 320GB/sec. The additional memory bandwidth, though probably not absolutely necessary based on what we’ve seen with GTX Titan, will help NVIDIA get as much out of GK110 as they can and further separate the card from other NVIDIA and AMD cards alike.
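The 336GB/sec figure follows directly from the effective GDDR5 data rate and the bus width; a quick sketch of the arithmetic:

```python
# GDDR5 bandwidth = effective data rate x bus width / 8 bits per byte.

def bandwidth_gbps(data_rate_ghz, bus_bits):
    return data_rate_ghz * bus_bits / 8  # GB/s

ti    = bandwidth_gbps(7, 384)  # GTX 780 Ti: 7GHz effective, 384-bit bus
titan = bandwidth_gbps(6, 384)  # GTX Titan / GTX 780
r290x = bandwidth_gbps(5, 512)  # Radeon R9 290X: slower memory, wider bus
print(ti, titan, r290x)                         # 336.0 288.0 320.0
print(f"{ti / titan - 1:.0%} more than Titan/780")  # 17% more than Titan/780
```

It also illustrates the two vendors’ differing approaches: NVIDIA pairs fast 7GHz memory with a 384-bit bus, while AMD’s 290X gets to 320GB/sec with slower 5GHz memory on a wider 512-bit bus.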

The only unfortunate news here when it comes to memory is that unlike Titan, NVIDIA is sticking with 3GB for the default RAM amount on GTX 780 Ti. Though the performance ramifications of this will be minimal (at least at this time), it will put the card in the odd spot of having less RAM than the cheaper Radeon R9 290 series.

Taken altogether then, GTX 780 Ti stands to be anywhere between 1% and 27% faster than GTX 780 depending on whether we’re looking at a ROP-bound or shader-bound scenario. Otherwise it stands to be between 5% and 17% faster than GTX Titan depending on whether we’re ROP-bound or memory bandwidth-bound.

Meanwhile let’s quickly talk about power consumption. As GTX 780 Ti is essentially just a spec bump of the GK110 hardware we’ve seen for the last 8 months, power consumption won’t officially be changing. NVIDIA designed GTX Titan and GTX 780 with the same power delivery system and the same TDP limit, with GTX 780 Ti further implementing the same system and the same limits. So officially GTX 780 Ti’s TDP stands at 250W just like the other GK110 cards. Though in practice power consumption for GTX 780 Ti will be higher than either of those other cards, as the additional performance it affords will mean that GTX 780 Ti will be on average closer to that 250W limit than either of those cards.

Finally, let’s talk about pricing, availability, and competitive positioning. On a pure performance basis NVIDIA expects GTX 780 Ti to be the fastest single-GPU video card on the market, and our numbers back them up on this. Consequently NVIDIA is going to be pricing and positioning GTX 780 Ti a lot like GTX Titan/780 before it, which is to say that it’s going to be priced as a flagship card rather than a competitive card. Realistically AMD can’t significantly threaten GTX 780 Ti, and although it’s not going to be quite the lead that NVIDIA enjoyed over AMD earlier this year, it’s enough of a lead that NVIDIA can pretty much price GTX 780 Ti based solely on the fact that it’s the fastest thing out there. And that’s exactly what NVIDIA has done.

To that end GTX 780 Ti will be launching at $699, $300 less than GTX Titan but $50 higher than the original GTX 780 launch price. At current prices this will put it $150 over the R9 290X or $200 over the repriced GTX 780, a significant step over each. GTX 780 Ti will have the performance to justify its positioning, but like the previous GK110 cards before it, it’s going to be an expensive product. Meanwhile GTX Titan will remain at $999, despite the fact that it’s now officially dethroned as the fastest GeForce card (GTX 780 having already made it largely redundant). At this point it will live on as NVIDIA’s entry-level professional compute card, keeping its unique FP64 performance advantage over the other GeForce cards.

Elsewhere on a competitive basis, until such time as factory overclocked 290X cards hit the market, the only real single-card competition for GTX 780 Ti will be the Radeon HD 7990, AMD’s Tahiti based dual-GPU card, which these days retails for close to $800. Otherwise the closest competition will be dual card setups such as GTX 770 SLI, R9 280X CF, and R9 290 CF. All of those should present formidable challenges on a pure performance basis, though they bring with them the usual drawbacks of multi-GPU rendering.

Meanwhile, as an added perk NVIDIA will be extending their recently announced “The Way It’s Meant to Be Played Holiday Bundle with SHIELD” promotion to the GTX 780 Ti, which consists of Assassin’s Creed IV, Batman: Arkham Origins, Splinter Cell: Blacklist, and a $100 SHIELD discount. NVIDIA has been inconsistent about this in the past, so it’s a nice change to see it included with their top card. As always, the value of bundles is ultimately up to the buyer, but for those who do place value in the bundle it should offset some of the sting of the $699 price tag.

Finally, for launch availability this will be a hard launch. Reference cards should be available by the time this article goes live, or shortly thereafter. It is a reference launch, and while custom cards are in the works NVIDIA is telling us they likely won’t hit the shelves until December.

Fall 2013 GPU Pricing Comparison
AMD Price NVIDIA
  $700 GeForce GTX 780 Ti
Radeon R9 290X $550  
  $500 GeForce GTX 780
Radeon R9 290 $400  
  $330 GeForce GTX 770
Radeon R9 280X $300  
  $250 GeForce GTX 760
Radeon R9 270X $200  
  $180 GeForce GTX 660
  $150 GeForce GTX 650 Ti Boost
Radeon R7 260X $140  

 

Meet The GeForce GTX 780 Ti
Comments

  • Hrel - Thursday, November 7, 2013 - link

    You talk about Titan as still being plausible as a compute card, yet the AMD cards, all of them, outperform both the Titan and the 780ti. Then the 780ti out performs the Titan. Nvidia beats itself here; and AMD beats them by a massive margin. Then you throw in the fact that Nvidia is essentially not even trying to compete on a price/performance basis and all of a sudden buying an Nvidia card makes absolutely no sense.

    Honestly I'm happy about this. I can't buy AMD CPU's since Intel so completely wallops them; but now, finally, I have no excuse to recommend any GPU except an AMD GPU. Good on ya folks, hopefully your CPU department starts firing on all cylinders like this.
  • Galatian - Thursday, November 7, 2013 - link

    I might not be an expert but I keep wondering what these new chips from AMD and Nvidia mean for their next generation? Clearly bringing out this full featured chips (which were once only supposed to be sold as workstation graphic chips) because 22nm keeps being delayed, will put pressure on their next chips. For example I guess the 780 chips are at the performance level Nvidia probably targeted Maxwell at. Maybe they are now pushed into releasing a full blown Maxwell chip to begin with.
  • TheJian - Thursday, November 7, 2013 - link

    This ^^^ Excellent that both have set the bar so much higher now. Realistically though it shouldn't be hard to top with the die shrink. It's just that they will not be able to give us such a gimped card at launch of 20nm for either side, saving a ton for refresh. They will be forced to give us something semi-real out of the gate :) I can't wait for maxwell 20nm. AMD will have to produce an awesome chip (like if AMD goes all out, low watts/heat/noise, 20% faster than NV basically like reverse of 780ti vs. 290x) in order for me to not want Gsync. No lag, stutter, tearing is worth a ton to me.

    http://wccftech.com/alleged-nvidia-maxwell-archite...
    If this is true, AMD better have some good stuff up their sleeve. 6144 cuda cores? Plus all the other enhancements would be potent. I don't believe this though. Even with a die shrink it would seemingly be a HUGE die but too lazy to do that math right now to see how plausible and too far away...LOL. I could believe 4608 though with 6 GPC/18SMX/256alu's and 6144 maybe held for 16nm/14nm or something.
  • AngelOfTheAbyss - Thursday, November 7, 2013 - link

    The difference between Titan and the 780 cards is the FP64 performance (1/3 vs 1/24 FP32).
    Using 64-bit (double precision) floating point operations simplifies a lot of things when implementing numerical algorithms. If you use 32-bit (single precision) operations, you often have to resort to some numerical skulduggery to get the desired accuracy.
  • TheJian - Thursday, November 7, 2013 - link

    Quit looking at sites like Anandtech/Toms that don't show much CUDA perf. Quit looking at OPENCL crap on Nvidia (only a retard buys NV cards and doesn't run CUDA whenever they have a job that can be done with a CUDA app or an app that has a Cuda plugin!). None of the crap running here would be done on Titan. You wouldn't run Sony Vegas either which has tons of issues running nvidia (google it, vegas cuda - badabing bad idea to buy this app for NV go ADOBE). You'd buy an Adobe lic like the rest of the world for Photos or Video editing and you'd turn on Cuda.

    I'll bet EVERY penny and object I own that they will run adobe the second AMD gets OpenCL in it (which is coming)...ROFL. How much does AMD pay this site? ;) They won't run AMD vs. NV in adobe until then. Of course if someone else shows it sucks still even after optimizing the upcoming revs of adobe apps, I guess they won't do it even then ;)

    Ask Anandtech why they don't run Cuda vs. AMD (in anything, amd can usually go OpenCL, DirectX or OpenGL in the same apps that use Cuda). You can run any pro app and pick luxrender for AMD and say, Octane/furryball etc for NV, yet anandtech refuses. Or just run adobe and choose cuda for nv and OpenGL for AMD. You can do Adobe tests with a freaking trial download.

    http://www.tomshardware.com/reviews/best-workstati...
    Look at that and the next 3 pages of cuda benchmarks and marvel as a $1000 card (titan) blows away $2000-$5000 cards (W6000 etc).

    Tomshardware does the same crap as anandtech. Note they say "NOT SUPPORTED" for all cuda benchmarks. But all they have to do is use LUXRENDER for all of them and pit them head to head with Cuda. I've asked many times why they do this for all the benchmarks in their forums and they NEVER have responded...ROFL. Why do they run any OpenCL benchmark on NV at either site? Run some real stuff like Adobe AMD vs NV. The world uses premiere and photoshop (largely).

    http://www.tomshardware.com/reviews/geforce-gtx-78...
    marvel now as 780TI blows away the same cards $2000+ and nearly does it 2x faster than most...LOL. Understand? But why is AMD not included?
    3dsmax+iray (run luxrender for AMD).
    Blender 2.66 (run luxrender for AMD).
    Why tomshardware why?...ROFL. They read here too ;)
    OctaneRender™ for...
    ArchiCAD Cinema4D, Inventor, Maya, Revit, Softimage, 3ds Max, Blender, Daz Studio, Lightwave, Poser, Rhino (sketchup & carrara, autocad etc coming soon)

    Luxrender:
    3dsmax, lightwave3d, blender, dazstudio, poser, cinema4d, softimage, sketchup & carrara

    See how they overlap? Lux for AMD vs. Octane for NV. simple. But that would show how weak AMD is and how strong cuda is after 7yrs and billions in development ;) Heck pit any plugin you want for AMD vs. NV Cuda. Cuda is available for the top 200 apps but anandtech/tomshardware seem to be incapable of running them against each other. Well, anandtech does have an AMD portal page...LOL. :) Just saying...You should never run LUX with NVidia. Run LUX vs. Octane! Pick an app above and run both plugins against each other for AMD/NV. Simple. I understand hating on Cuda for being proprietary, but these two sites ignoring it and acting like NV is slow due to opencl is ridiculous and misleading.

    Instead of the above, Anandtech runs luxmark (pick an app, use plugins instead of dumb opencl benchmark which highlights ONLY AMD), sony vegas (pit it against Adobe/Cuda/premiere easy to render the same vid in both), CLbenchmark? ROFL....How about something we can make money with instead of this fake crap? At least show the other side:
    http://www.ozone3d.net/benchmarks/physx-fluidmark/
    Which runs on both AMD and NV. Fluidmark:
    "This benchmark exploits OpenGL for graphics acceleration and requires an OpenGL 2.0 compliant graphics card: NVIDIA GeForce 5/6/7/8/9/GTX200 (and higher), AMD/ATI Radeon 9600+, 1k/2k/3k/4k (and higher) or a S3 Graphics Chrome 400 series."

    Folding@home can't make you a dime either...waste of time testing this. You like high electric bills for warm fuzzy feelings? Not me. Think some pill company will pay you to solve cancer? NOPE.
    Syscompute+AMP crap home made benchmark to suit AMD? No thanks...Can't make me a dime. NOT REAL. Doesn't anyone find it strange Anandtech only runs ONE thing (sony vegas) that can actually be used to make money? And it sucks on NV when Adobe rocks. What the heck is going on here? Nobody at anandtech knows how to use Adobe products?
    "Last, in our C++ AMP benchmark we see the GTX 780 Ti take the top spot for an NVIDIA card, but like so many of our earlier compute tests it will come up short versus AMD’s best cards."

    LOL...Gee, maybe if you actually ran some REAL STUFF and pit AMD vs. NV (CUDA..DUH!) we'd find out some truth ;) What are you afraid of anandtech? I guess I should just paste this into every article with OpenCL benchmarks here ;) Maybe if they start losing even more traffic (down since 660ti article last sept, about in half, worse since AMD portal probably and AMD personal visit to ONLY this site...LOL), they will start telling it like it is and start running cuda vs. AMD.

    FYI Titan has full DP and 6GB. Come back and say that junk when you try to render something big. Come back when anandtech starts running CUDA vs. AMD. Until then, quit drinking anandtech/amd koolaide ;)
  • hero4hire - Sunday, November 10, 2013 - link

    Hail corporate!

    Cuda!!
    Whatever you're getting paid needs to get cut for each ;) LOL ROFL and CAPS!!!! You're not being taken seriously when you sound like a 14-year-old girl texting her BFFs

    Hail corporate!
  • SBTech86 - Thursday, November 7, 2013 - link

    nvidia thinks we r dumb
  • gordon151 - Thursday, November 7, 2013 - link

    Would they be wrong :)?
  • firewall597 - Thursday, November 7, 2013 - link

    Did you even read this review? CF pooped all over SLI in most scenarios.
  • jigglywiggly - Thursday, November 7, 2013 - link

    why are the benchmarks not including any older cards? even a 670...
