NVIDIA GeForce GTX 680 Review: Retaking The Performance Crown
by Ryan Smith on March 22, 2012 9:00 AM EST

“How do you follow up on Fermi?” That’s the question we had going into NVIDIA’s press briefing for the GeForce GTX 680 and the Kepler architecture earlier this month. With Fermi, NVIDIA not only captured the performance crown for gaming, but managed to further build on their success in the professional markets with Tesla and Quadro. Though it was very clearly a rough start for NVIDIA, Fermi ended up doing quite well in the end.
So how do you follow up on Fermi? As it turns out, you follow it up with something that is in many ways more of the same. With a focus on efficiency, NVIDIA has stripped Fermi down to the core and then built it back up again, reducing power consumption and die size alike while maintaining most of the aspects we’ve come to know from Fermi. The end result is NVIDIA’s next generation GPU architecture: Kepler.
Launching today is the GeForce GTX 680, at the heart of which is NVIDIA’s new GK104 GPU, based on their equally new Kepler architecture. As we’ll see, not only has NVIDIA retaken the performance crown with the GeForce GTX 680, but they have done so in a manner truly befitting of their drive for efficiency.
| | GTX 680 | GTX 580 | GTX 560 Ti | GTX 480 |
|---|---|---|---|---|
| Stream Processors | 1536 | 512 | 384 | 480 |
| Texture Units | 128 | 64 | 64 | 60 |
| ROPs | 32 | 48 | 32 | 48 |
| Core Clock | 1006MHz | 772MHz | 822MHz | 700MHz |
| Shader Clock | N/A | 1544MHz | 1644MHz | 1401MHz |
| Boost Clock | 1058MHz | N/A | N/A | N/A |
| Memory Clock | 6.008GHz GDDR5 | 4.008GHz GDDR5 | 4.008GHz GDDR5 | 3.696GHz GDDR5 |
| Memory Bus Width | 256-bit | 384-bit | 256-bit | 384-bit |
| Frame Buffer | 2GB | 1.5GB | 1GB | 1.5GB |
| FP64 | 1/24 FP32 | 1/8 FP32 | 1/12 FP32 | 1/8 FP32 |
| TDP | 195W | 244W | 170W | 250W |
| Transistor Count | 3.5B | 3B | 1.95B | 3B |
| Manufacturing Process | TSMC 28nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
| Launch Price | $499 | $499 | $249 | $499 |
Technically speaking, Kepler’s launch today is a double launch. On the desktop we have the GTX 680, based on the GK104 GPU. Meanwhile in the mobile space we have the GT 640M, which is based on the GK107 GPU. Unlike AMD, NVIDIA doesn’t announce products ahead of time, but it’s a sure bet that we’ll eventually see GK107 move up to the desktop and GK104 move down to laptops in the future.
What you won’t find today however – and in a significant departure from NVIDIA’s previous launches – is Big Kepler. Since the days of the G80, NVIDIA has always produced a large 500mm2+ GPU to serve both as a flagship GPU for their consumer lines and the fundamental GPU for their Quadro and Tesla lines, and have always launched with that big GPU first. At 294mm2 GK104 is not Big Kepler, and while NVIDIA doesn’t comment on unannounced products, somewhere in the bowels of NVIDIA Big Kepler certainly lives, waiting for its day in the sun. As such this is the first NVIDIA launch where we’re not in a position to talk about the ramifications for Tesla or Quadro, or really for that matter what NVIDIA’s peak performance for this generation might be.
Anyhow, we’ll jump into the full architectural details of GK104 in a bit, but let’s quickly talk about the specs first. Unlike Fermi or AMD’s GCN, Kepler is not a brand new architecture. To be sure there are some very important changes, but at a high level the workings of Kepler have not significantly changed compared to Fermi. With Kepler what we’re ultimately looking at is a die shrunk distillation of Fermi, and in the case of GK104 that’s specifically a distillation of GF114 rather than GF110.
Starting from the top, GTX 680 features a fully enabled GK104 GPU – unlike the first generation of Fermi products there are no shenanigans with disabled units here. This means GTX 680 has 1536 CUDA cores, a massive increase from GTX 580 (512) and GTX 560 Ti (384). Note however that NVIDIA has dropped the shader clock with Kepler, opting instead to double the number of CUDA cores to achieve the same effect, so while 1536 CUDA cores is a big number it’s really only twice the number of cores of GF114 as far as performance is concerned. Joining those 1536 CUDA cores are 32 ROPs and 128 texture units; the number of ROPs is effectively unchanged from GF114, while the number of texture units has been doubled. Meanwhile on the memory and cache side of things GTX 680 features a 256-bit memory bus coupled with 512KB of L2 cache.
As for clockspeeds, GTX 680 introduces a few wrinkles courtesy of Kepler. As we mentioned before, the shader clock is gone in Kepler, with everything now running off of the core clock (or as NVIDIA likes to put it, the graphics clock). At the same time Kepler introduces the Boost Clock – effectively a turbo clock for the GPU – so we still have a third clock to pay attention to. With that said, GTX 680 ships at a base clock of 1006MHz and a boost clock of 1058MHz. On the memory side of things NVIDIA has finally managed to fully hammer out their memory controller, allowing them to ship with a memory clock of 6.008GHz.
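For reference, the memory bandwidth behind that clock works out with some quick arithmetic. This is a back-of-the-envelope sketch using the effective data rates from the spec table, not NVIDIA's official figures:

```python
# Effective memory bandwidth = effective data rate (GT/s) x bus width in bytes.
# Figures come from the spec table above; this is a rough sanity check.
def bandwidth_gbps(data_rate_gtps: float, bus_width_bits: int) -> float:
    return data_rate_gtps * (bus_width_bits / 8)

gtx680_bw = bandwidth_gbps(6.008, 256)  # ~192.3 GB/s
gtx580_bw = bandwidth_gbps(4.008, 384)  # ~192.4 GB/s
```

In other words, the faster GDDR5 on a narrower 256-bit bus lands GTX 680 at essentially the same total bandwidth as GTX 580's slower memory on a 384-bit bus.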
Taken altogether, on paper GTX 680 has roughly 195% the shader performance, 260% the texture performance, 87% of the ROP performance, and 100% of the memory bandwidth of GTX 580. Or as compared to its more direct ancestor the GTX 560 Ti, GTX 680 has 244% of the shader performance, 244% of the texture performance, 122% of the ROP performance, and 150% of the memory bandwidth of GTX 560 Ti. Compared to GTX 560 Ti NVIDIA has effectively doubled every aspect of their GPU except for ROP performance, which is the one area where NVIDIA believes they already have enough performance.
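Those paper figures can be recomputed directly from the spec table. Units × clock is only a peak-throughput proxy (real-world scaling will differ), but it reproduces the percentages above:

```python
# Peak-throughput ratios recomputed from the spec table above.
def ratio(units_a, clock_a, units_b, clock_b):
    return (units_a * clock_a) / (units_b * clock_b)

# GTX 680 shaders run at the 1006MHz core clock; Fermi shaders ran at a
# separate shader clock (GTX 580: 1544MHz, GTX 560 Ti: 1644MHz).
shader_vs_580  = ratio(1536, 1006, 512, 1544)   # ~1.95, the 195% above
texture_vs_580 = ratio(128, 1006, 64, 772)      # ~2.61
rop_vs_580     = ratio(32, 1006, 48, 772)       # ~0.87
bw_vs_580      = (6.008 * 256) / (4.008 * 384)  # ~1.00

shader_vs_560ti = ratio(1536, 1006, 384, 1644)  # ~2.45, the 244% above
```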
On the power front, GTX 680 has a few different numbers to contend with. NVIDIA’s official TDP is 195W, though as with the GTX 500 series they still consider this an average number rather than a true maximum. The second number is the power target, which is the power level that GPU Boost aims for when turboing; that number is 170W. Finally, while NVIDIA doesn’t publish an official idle TDP, the GTX 680 should idle at around 15W. Overall GTX 680 is targeted at a power envelope somewhere between GTX 560 Ti and GTX 580, though closer to the former than the latter.
As for GK104 itself, as we’ve already mentioned GK104 is a smaller than average GPU for NVIDIA, with a die size of 294mm2. This is roughly 82% the size of GF114, or compared to GF110 a mere 56% of the size. Inside that 294mm2 NVIDIA packs 3.5B transistors thanks to TSMC’s 28nm process, only 500M more than GF110, which largely explains why GK104 is so small compared to GF110. Or to once again make a comparison to GF114, this is 1550M (79%) more than GF114, which makes the fact that GK104 doubles most of GF114’s functional units all the more surprising. With Kepler NVIDIA is heavily focused on efficiency, and this is one such example of Kepler’s efficiency in action.
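The area and density comparisons work out as follows. Note the GF114 and GF110 die sizes here are our assumed figures (roughly 360mm2 and 520mm2), not numbers NVIDIA quotes:

```python
# Die area and transistor density comparisons using the review's figures.
# GF114/GF110 die sizes are assumptions (~360mm2 / ~520mm2), not official.
gk104_area_mm2, gk104_transistors = 294, 3.5e9
gf114_area_mm2, gf114_transistors = 360, 1.95e9  # assumed die size
gf110_area_mm2, gf110_transistors = 520, 3.0e9   # assumed die size

area_vs_gf114 = gk104_area_mm2 / gf114_area_mm2  # ~0.82
area_vs_gf110 = gk104_area_mm2 / gf110_area_mm2  # ~0.57 (rounds to 56% above)

# TSMC's 28nm process roughly doubles transistor density over 40nm GF110
density_gain = (gk104_transistors / gk104_area_mm2) / (gf110_transistors / gf110_area_mm2)
```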
Last but not least, let’s talk about pricing and availability. GTX 680 is the successor to GTX 580 and NVIDIA will be pricing it accordingly, with an MSRP of $500. This is the same price that the GTX 580 and GTX 480 launched at back in 2010, and while it’s consistent for an x80 video card it’s effectively a conservative price given GK104’s die size. NVIDIA does need to bring their pricing in at the right point to combat AMD, but they’re in no more of a hurry than AMD to start any price wars, so it’s conservative pricing all around for the time being.
AMD’s competition of course is the recently launched Radeon HD 7970 and 7950, priced at $550 and $450 respectively, which puts the GTX 680 right in between them in terms of pricing. However with regard to gaming performance the GTX 680 is generally more than a match for the 7970, which is going to leave AMD in a tough spot. AMD’s partners do have factory overclocked cards, but those only close the performance gap at the cost of an even wider price gap. NVIDIA has priced the GTX 680 to undercut the 7970, and that’s exactly what will be happening today.
As for availability, we’re told that it should be similar to past high end video card launches, which is to say it will be touch and go. As with any launch NVIDIA has been stockpiling cards but it’s still a safe bet that GTX 680 will sell out in the first day. Beyond the initial launch it’s not clear whether NVIDIA will be able to keep up with demand over the next month or so. NVIDIA has been fairly forthcoming to their investors about how 28nm production is going, and while yields have been acceptable TSMC doesn’t have enough wafers to satisfy all of their customers at once, so NVIDIA is still getting fewer wafers than they’d like. Until very recently AMD’s partners have had a difficult time keeping the 7970 in stock, and it’s likely it will be the same story for NVIDIA’s partners.
404 Comments
jospoortvliet - Thursday, March 22, 2012 - link
Seeing other sites, the AMD card does overclock better than the NVIDIA card - and the difference in power usage in everyday scenarios is that NVIDIA uses a few more watts at idle and a few less under load. I'd agree with my Dutch hardware.info site, which concludes that the two cards are incredibly close and that price should determine what you'd buy.
A quick look shows that at least in NL, the AMD is about 50 bucks cheaper so unless NVIDIA lowers their price, the 7970 continues to be the better buy.
Obviously, AMD has higher costs with the bigger die so NVIDIA should have higher margins. If only they weren't so late to market...
Let's see what the 7990 and NVIDIA's answer to that will do; and what the 8000 and 700 series will do and when they will be released. NVIDIA will have to make sure they don't lag behind AMD anymore, this is hurting them...
theartdude - Thursday, March 22, 2012 - link
Late to market? With Battlefield DLC, Diablo III, MechWarrior Online (and many more titles approaching), this is the PERFECT TIME for an upgrade. BTW, my computer is begging for an upgrade right now, just in time for summer-time LAN parties.

CeriseCogburn - Tuesday, March 27, 2012 - link
GTX680 overclocks to 1,280 out of the box for an average easy attempt...http://www.newegg.com/Product/Product.aspx?Item=N8...
See the feedback bro.
7970 makes it to 1200 if it's very lucky.
Sorry, another lie is 7970 oc's better.
CeriseCogburn - Tuesday, March 27, 2012 - link
So you're telling me the LIGHTNING amd card is cheaper? LOL

Further, if you don't get that exact model you won't get the overclocks, and they got a pathetic 100 on the nvidia, which noobs surpass regularly, then they used 3DMark 11 which has amd tessellation driver cheating active.... (apparently they are clueless there as well).
Furthermore, they declared the Nvidia card 10% faster overall - well worth the 50 bucks difference - versus your generic AMD card, not the overclocked Lightning further overclocked with the special vrm's onboard and much more expensive... and then not game tested but benched in amd cheater-ware 3DMark 11 with the tess cheat.
Reaper_17 - Thursday, March 22, 2012 - link
i agree,

blanarahul - Tuesday, March 27, 2012 - link
Mr. AMD Fan Boy, then you should compare how AMD has been doing it since the HD 5000 series.

6970= 880 MHz
GTX 580=772 MHz
Is it a fair comparison?
GTX 480=702 MHz
HD 5870=850 MHz
Is it a fair comparison?
According to your argument the NVIDIA cards were at a disadvantage since the AMD cards were always clocked higher. But still the NVIDIA cards were better.
And now that NVIDIA has taken the lead in clock speeds you are crying like a baby that NVIDIA built a souped up overclocked GK104.
First check the facts. Plus the HD 8000 series aren't gonna come so early.
CeriseCogburn - Friday, April 6, 2012 - link
LOL +1
Tell 'em bro !
(fanboys and fairness don't mix)
Sabresiberian - Thursday, March 22, 2012 - link
Yah, I agree here. Clearly, once again, your favorite game and the screen size (resolution) you run at are going to be important factors in making a wise choice. ;)
Concillian - Thursday, March 22, 2012 - link
"... but he's correct. The 680 does dominate in nearly every situation and category."

Except in some of the most consistently and historically demanding games (Crysis Warhead and Metro 2033), it doesn't fare so well compared to the AMD designs. What does this mean if the PC gaming market ever breaks out of its console port funk?
I suppose it's unlikely, but it indicates it handles easy loads well (loads that can often be handled by a lesser card), but when it comes to the most demanding resolutions and games, it loses a lot of steam compared to the AMD offering, to the point where it goes from a >15% lead in games that don't need it (Portal 2, for example) to a 10-20% loss in Crysis Warhead at 2560x.
That it struggles in what are traditionally the most demanding games is worrisome, but, I suppose as long as developers continue pumping out the relatively easy to render console ports, it shouldn't pose any major issues.
Eugene86 - Thursday, March 22, 2012 - link
Yes, because people are really buying both the 7970 and GTX680 to play Crysis Warhead at 2560x.... :eyeroll:

Nobody cares about old, unoptimized games like that. How about you take a look at the benchmarks that actually, realistically, matter? Look at the benches for Battlefield 3, which is a game that people are actually playing right now. The GTX680 kills the 7970 with about 35% higher frame rates, according to the benchmarks posted in this review.
THAT is what actually matters and that is why the GTX680 is a better card than the 7970.