“How do you follow up on Fermi?” That’s the question we had going into NVIDIA’s press briefing for the GeForce GTX 680 and the Kepler architecture earlier this month. With Fermi NVIDIA not only captured the performance crown for gaming, but they managed to further build on their success in the professional markets with Tesla and Quadro. Though it very clearly got off to a rough start, Fermi ended up doing quite well in the end.

So how do you follow up on Fermi? As it turns out, you follow it up with something that is in many ways more of the same. With a focus on efficiency, NVIDIA has stripped Fermi down to the core and then built it back up again, reducing power consumption and die size alike, all while maintaining most of the aspects we’ve come to know from Fermi. The end result is NVIDIA’s next-generation GPU architecture: Kepler.

Launching today is the GeForce GTX 680, at the heart of which is NVIDIA’s new GK104 GPU, based on their equally new Kepler architecture. As we’ll see, not only has NVIDIA retaken the performance crown with the GeForce GTX 680, but they have done so in a manner truly befitting of their drive for efficiency.

| | GTX 680 | GTX 580 | GTX 560 Ti | GTX 480 |
| --- | --- | --- | --- | --- |
| Stream Processors | 1536 | 512 | 384 | 480 |
| Texture Units | 128 | 64 | 64 | 60 |
| ROPs | 32 | 48 | 32 | 48 |
| Core Clock | 1006MHz | 772MHz | 822MHz | 700MHz |
| Shader Clock | N/A | 1544MHz | 1644MHz | 1401MHz |
| Boost Clock | 1058MHz | N/A | N/A | N/A |
| Memory Clock | 6.008GHz GDDR5 | 4.008GHz GDDR5 | 4.008GHz GDDR5 | 3.696GHz GDDR5 |
| Memory Bus Width | 256-bit | 384-bit | 256-bit | 384-bit |
| Frame Buffer | 2GB | 1.5GB | 1GB | 1.5GB |
| FP64 | 1/24 FP32 | 1/8 FP32 | 1/12 FP32 | 1/12 FP32 |
| TDP | 195W | 244W | 170W | 250W |
| Transistor Count | 3.5B | 3B | 1.95B | 3B |
| Manufacturing Process | TSMC 28nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
| Launch Price | $499 | $499 | $249 | $499 |
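As a quick sanity check on the memory specs, effective bandwidth follows directly from the data rate and bus width. This is our own back-of-the-envelope sketch (not an NVIDIA figure): the GTX 680's narrower-but-faster 256-bit bus ends up delivering essentially the same bandwidth as the GTX 580's 384-bit bus.

```python
# Effective memory bandwidth in GB/s:
# data rate (MT/s) x bus width (bits) / 8 bits-per-byte / 1000
def bandwidth_gbps(data_rate_mtps: int, bus_width_bits: int) -> float:
    return data_rate_mtps * bus_width_bits / 8 / 1000

print(bandwidth_gbps(6008, 256))  # GTX 680: 192.256 GB/s
print(bandwidth_gbps(4008, 384))  # GTX 580: 192.384 GB/s
```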

Technically speaking, Kepler’s launch today is a double launch. On the desktop we have the GTX 680, based on the GK104 GPU. Meanwhile in the mobile space we have the GT 640M, which is based on the GK107 GPU. Unlike AMD, NVIDIA doesn’t announce products ahead of time, but it’s a sure bet that we’ll eventually see GK107 move up to the desktop and GK104 move down to laptops in the future.

What you won’t find today however – and in a significant departure from NVIDIA’s previous launches – is Big Kepler. Since the days of the G80, NVIDIA has always produced a large 500mm2+ GPU to serve both as a flagship GPU for their consumer lines and the fundamental GPU for their Quadro and Tesla lines, and have always launched with that big GPU first. At 294mm2 GK104 is not Big Kepler, and while NVIDIA doesn’t comment on unannounced products, somewhere in the bowels of NVIDIA Big Kepler certainly lives, waiting for its day in the sun. As such this is the first NVIDIA launch where we’re not in a position to talk about the ramifications for Tesla or Quadro, or really for that matter what NVIDIA’s peak performance for this generation might be.

Anyhow, we’ll jump into the full architectural details of GK104 in a bit, but let’s quickly talk about the specs first. Unlike Fermi or AMD’s GCN, Kepler is not a brand new architecture. To be sure there are some very important changes, but at a high level the workings of Kepler have not significantly changed compared to Fermi. With Kepler what we’re ultimately looking at is a die shrunk distillation of Fermi, and in the case of GK104 that’s specifically a distillation of GF114 rather than GF110.

Starting from the top, GTX 680 features a fully enabled GK104 GPU – unlike the first generation of Fermi products there are no shenanigans with disabled units here. This means GTX 680 has 1536 CUDA cores, a massive increase from GTX 580 (512) and GTX 560 Ti (384). Note however that NVIDIA has dropped the shader clock with Kepler, opting instead to double the number of CUDA cores to achieve the same effect, so while 1536 CUDA cores is a big number, as far as performance is concerned it’s only equivalent to twice GF114’s 384 hot-clocked cores. Joining those 1536 CUDA cores are 32 ROPs and 128 texture units; the number of ROPs is effectively unchanged from GF114, while the number of texture units has been doubled. Meanwhile on the memory and cache side of things GTX 680 features a 256-bit memory bus coupled with 512KB of L2 cache.

As for clockspeeds, GTX 680 will introduce a few wrinkles courtesy of Kepler. As we mentioned before, the shader clock is gone in Kepler, with everything now running off of the core clock (or as NVIDIA likes to put it, the graphics clock). At the same time Kepler introduces the Boost Clock – effectively a turbo clock for the GPU – so we still have a third clock to pay attention to. With that said, GTX 680 ships at a base clock of 1006MHz and a boost clock of 1058MHz. On the memory side of things NVIDIA has finally managed to fully hammer out their memory controller, allowing NVIDIA to ship with a memory clock of 6.008GHz.

Taken altogether, on paper GTX 680 has roughly 195% the shader performance, 260% the texture performance, 87% of the ROP performance, and 100% of the memory bandwidth of GTX 580. Or as compared to its more direct ancestor the GTX 560 Ti, GTX 680 has 244% of the shader performance, 244% of the texture performance, 122% of the ROP performance, and 150% of the memory bandwidth of GTX 560 Ti. Compared to GTX 560 Ti NVIDIA has effectively doubled every aspect of their GPU except for ROP performance, which is the one area where NVIDIA believes they already have enough performance.
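Those percentages are just functional-unit counts multiplied by clocks. Here is a quick sketch of the arithmetic using the table above (our own math, not NVIDIA's; the results land within a point of the figures in the text due to rounding). Note the Fermi parts' shaders run at their hot clock, while Kepler's run at the core clock.

```python
# Relative "on paper" throughput: functional units x clock (MHz).
def throughput(units: int, clock_mhz: int) -> int:
    return units * clock_mhz

def pct(a: int, b: int) -> int:
    """a as a percentage of b, rounded to the nearest whole percent."""
    return round(100 * a / b)

gtx680_shader = throughput(1536, 1006)  # Kepler: core clock
gtx580_shader = throughput(512, 1544)   # Fermi: shader (hot) clock
gtx560_shader = throughput(384, 1644)

print(pct(gtx680_shader, gtx580_shader))  # 195 (% of GTX 580)
print(pct(gtx680_shader, gtx560_shader))  # 245 (% of GTX 560 Ti)

# Texture and ROP throughput vs. GTX 580
print(pct(throughput(128, 1006), throughput(64, 772)))  # texture: 261
print(pct(throughput(32, 1006), throughput(48, 772)))   # ROPs: 87
```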

On the power front, GTX 680 has a few different numbers to contend with. NVIDIA’s official TDP is 195W, though as with the GTX 500 series they still consider this an average number rather than a true maximum. The second number is the boost target, which is the highest power level that GPU Boost will turbo to; that number is 170W. Finally, while NVIDIA doesn’t publish an official idle TDP, the GTX 680 should have an idle TDP of around 15W. Overall GTX 680 is targeted at a power envelope somewhere between GTX 560 Ti and GTX 580, though it’s closer to the former than the latter.

As for GK104 itself, as we’ve already mentioned GK104 is a smaller than average GPU for NVIDIA, with a die size of 294mm2. This is roughly 82% the size of GF114, or compared to GF110 a mere 56% of the size. Inside that 294mm2 NVIDIA packs 3.5B transistors thanks to TSMC’s 28nm process, only 500M more than GF110, which largely explains why GK104 is so small compared to GF110. Or to once again make a comparison to GF114, this is 1.55B (roughly 80%) more than GF114, which makes the fact that GK104 doubles most of GF114’s functional units all the more surprising. With Kepler NVIDIA is going to be heavily focusing on efficiency, and this is one such example of Kepler’s efficiency in action.
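For the curious, the area and density comparisons work out as follows. This is a rough sketch using the commonly cited die sizes of roughly 520mm2 for GF110 and 360mm2 for GF114 (figures NVIDIA has not officially published, so treat them as approximations); it also shows how much 28nm improves transistor density.

```python
# (die area in mm2, transistor count in millions) -- GF110/GF114 areas are
# commonly cited estimates, not official NVIDIA figures.
gpus = {
    "GK104": (294, 3500),  # 28nm
    "GF110": (520, 3000),  # 40nm
    "GF114": (360, 1950),  # 40nm
}

gk104_area = gpus["GK104"][0]
print(round(100 * gk104_area / gpus["GF110"][0]))  # 57 (% of GF110's area)
print(round(100 * gk104_area / gpus["GF114"][0]))  # 82 (% of GF114's area)

# Transistor density (million transistors per mm2) roughly doubles at 28nm
for name, (area, mtrans) in gpus.items():
    print(name, round(mtrans / area, 1))  # GK104 ~11.9 vs. ~5.4-5.8 for Fermi
```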

Last but not least, let’s talk about pricing and availability. GTX 680 is the successor to GTX 580 and NVIDIA will be pricing it accordingly, with an MSRP of $499. This is the same price that the GTX 580 and GTX 480 launched at back in 2010, and while it’s consistent for an x80 video card it’s effectively a conservative price given GK104’s die size. NVIDIA does need to bring their pricing in at the right point to combat AMD, but they’re in no more of a hurry than AMD to start a price war, so it’s conservative pricing all around for the time being.

AMD’s competition of course is the recently launched Radeon HD 7970 and 7950. Priced at $550 and $450, the GTX 680 sits right in between them in terms of pricing. However with regard to gaming performance the GTX 680 is generally more than a match for the 7970, which is going to leave AMD in a tough spot. AMD’s partners do have factory overclocked cards, but those only close the performance gap at the cost of an even wider price gap. NVIDIA has priced the GTX 680 to undercut the 7970, and that’s exactly what will be happening today.

As for availability, we’re told that it should be similar to past high end video card launches, which is to say it will be touch and go. As with any launch NVIDIA has been stockpiling cards but it’s still a safe bet that GTX 680 will sell out in the first day. Beyond the initial launch it’s not clear whether NVIDIA will be able to keep up with demand over the next month or so. NVIDIA has been fairly forthcoming to their investors about how 28nm production is going, and while yields have been acceptable TSMC doesn’t have enough wafers to satisfy all of their customers at once, so NVIDIA is still getting fewer wafers than they’d like. Until very recently AMD’s partners have had a difficult time keeping the 7970 in stock, and it’s likely it will be the same story for NVIDIA’s partners.

The Kepler Architecture: Fermi Distilled

405 Comments


  • blppt - Thursday, March 22, 2012

    Wondering if you guys could also add a benchmark for one of the current crop of 1GHz-core 7970s that are available now (if you've tested any). Otherwise, great review.
  • tipoo - Thursday, March 22, 2012

    With everything being said by Nvidia, I thought this would be a GeForce 8-series-class jump, while it's really nothing close to that and trades blows with AMD's 3-month-old card. GCN definitely had headroom, so I can see lower-priced, higher-clocked AMD cards coming out soon to combat this. Still, I'm glad this will bring things down to sane prices.
  • MarkusN - Thursday, March 22, 2012

    Well, to be honest, this wasn't supposed to be Nvidia's successor to the GTX 580 anyway. This graphics card replaced the GTX 560 Ti, not the GTX 580. GK110 will replace the GTX 580; even if you can argue that the GTX 680 is now their high-end card, it's just a replacement for the GTX 560 Ti, so I can just dream about the performance of the GTX 780 or whatever they're going to call it. ;)
  • tipoo - Thursday, March 22, 2012

    I didn't know that, thanks. Ugh, even more confusing naming schemes.
  • Articuno - Thursday, March 22, 2012

    If this is supposed to replace the 560 Ti, then why does it cost $500, and why was it released before the low-end parts instead of before the high-end parts?
  • MarkusN - Thursday, March 22, 2012

    It costs that much because Nvidia realized that it outperforms/trades blows with the HD 7970 and saw an opportunity to make some extra cash, which basically sucks for us consumers. There are those who say that the GTX 680 is cheaper and better than the HD 7970 and think it costs just the right amount, but as usual it's us, the customers, who are getting the shaft again. This card should've been around $300-350 in my opinion, no matter if it beats the HD 7970.
  • coldpower27 - Thursday, March 22, 2012

    Nah, they aren't obligated to give more than what the market will bear; no sense in starting a price war when they can have much fatter margins. It beats the 7970 already; that's enough.

    Now the ball is in AMD's court. Let's see if they can drop prices to compete; $450 would be a nice start, but $400 is necessary to actually cause significant competition.
  • CeriseCogburn - Friday, March 23, 2012

    This whole thing is so nutso but everyone is saying it.
    Let's take a thoughtful, sane view...
    The GTX 580 flagship was just $500, and a week or two ago it was $469 or so.
    In what world, in what release, in the past ten years even, has either card company released their new product at $170 or $200 off their standard flagship price when it was standing near $500 right before the release?
    The answer is it has never, ever happened, not even close, not once.
    With the GTX 580 at $450, there's no way a card 40% faster is going to be dropped in at $300, no matter what rumor Charlie Demerjian at SemiAccurate has made up from thin air as an attack on Nvidia, a very smart one for not-too-bright people it seems.
    Please, feel free to tell me what flagship has ever dropped in cutting nearly $200 off the current flagship price?
    Any of you?!?
  • Lepton87 - Thursday, March 22, 2012

    Because nVidia decided to screw its customers and nickel-and-dime them. That's why. All because the 7970 underperformed and nV could get away with it.
  • JarredWalton - Thursday, March 22, 2012

    Or: Because NVIDIA and AMD and Intel are all businesses, and when you launch a hot new product and lots of people are excited to get one, you sell at a price premium for as long as you can. Then supply equals demand and then exceeds demand, and that's when you start dropping prices. The 7970 didn't underperform; people just expected/wanted more. Realistically, we're getting to the point where doubling performance with a process shrink isn't going to happen, and even 50% improvements are rare. The 7970 and 680 are a reflection of that fact.
