As our regular readers are well aware, NVIDIA’s 28nm supply constraints have proven to be a constant thorn in the side of the company. Since Q2 the message in financial statements has been clear: NVIDIA could be selling more GPUs if they had access to more 28nm capacity. As a result of this capacity constraint they have had to prioritize the high-profit mainstream mobile and high-end desktop markets above other consumer markets, leaving holes in their product lineups. In the intervening time they have launched products like the GK104-based GeForce GTX 660 Ti to help bridge that gap, but even that still left a hole between $100 and $300.

Now nearly 6 months after the launch of the first Kepler GPUs – and 9 months after the launch of the first 28nm GPUs – NVIDIA’s situation has finally improved to the point where they can finish filling out the first iteration of the Kepler GPU family. With GK104 at the high-end and GK107 at the low-end, the task of filling out the middle falls to NVIDIA’s latest GPU: GK106.

As given away by the model number, GK106 is designed to fit in between GK104 and GK107. GK106 offers a more modest collection of functional blocks in exchange for a smaller die size and lower power consumption, making it a perfect fit for NVIDIA's mainstream desktop products. Even so, we have to admit that until a month ago we weren't quite sure whether there would even be a GK106, since NVIDIA had covered so much of their typical product lineup with GK104 and GK107, leaving open the possibility of using those GPUs to also cover the rest. So the arrival of GK106 comes as a pleasant surprise amidst what for the last 6 months has been a very small GPU family.

GK106’s launch vehicle will be the GeForce GTX 660, the central member of NVIDIA’s mainstream video card lineup. GTX 660 is designed to come in between GTX 660 Ti and GTX 650 (also launching today), bringing Kepler and its improved performance down to the same $230 price range that the GTX 460 launched at nearly two years ago. NVIDIA has had a tremendous amount of success with the GTX 560 and GTX 460 families, so they’re looking to maintain this momentum with the GTX 660.

                       GTX 660 Ti      GTX 660         GTX 650        GT 640
Stream Processors      1344            960             384            384
Texture Units          112             80              32             32
ROPs                   24              24              16             16
Core Clock             915MHz          980MHz          1058MHz        900MHz
Shader Clock           N/A             N/A             N/A            N/A
Boost Clock            980MHz          1033MHz         N/A            N/A
Memory Clock           6.008GHz GDDR5  6.008GHz GDDR5  5GHz GDDR5     1.782GHz DDR3
Memory Bus Width       192-bit         192-bit         128-bit        128-bit
VRAM                   2GB             2GB             1GB/2GB        2GB
FP64                   1/24 FP32       1/24 FP32       1/24 FP32      1/24 FP32
TDP                    150W            140W            64W            65W
GPU                    GK104           GK106           GK107          GK107
Transistor Count       3.5B            2.54B           1.3B           1.3B
Manufacturing Process  TSMC 28nm       TSMC 28nm       TSMC 28nm      TSMC 28nm
Launch Price           $299            $229            $109           $99

Diving right into the guts of things, the GeForce GTX 660 will be utilizing a fully enabled GK106 GPU. A fully enabled GK106 in turn is composed of 5 SMXes – arranged in an asymmetric 3 GPC configuration – along with 24 ROPs, three 64-bit memory controllers, and 384KB of L2 cache. Design-wise this basically splits the difference between the 8 SMX + 32 ROP GK104 and the 2 SMX + 16 ROP GK107. This also means that the GTX 660 ends up looking a great deal like a GTX 660 Ti with fewer SMXes.

Meanwhile the reduction in functional units has had the expected impact on die size and transistor count, with GK106 packing 2.54B transistors into 214mm2. This also means that GK106 is only 2mm2 larger than AMD’s Pitcairn GPU, which sets up a very obvious product showdown.

In breaking down GK106, it's interesting to note that this is the first time since 2008's G9x family of GPUs that NVIDIA's consumer GPU lineup has had this level of architectural consistency. The 200 series was split between 3 different architectures (G9x, GT200, and GT21x), and the 400/500 series was split between Big Fermi (GF1x0) and Little Fermi (GF1x4/1x6/1x8). The 600 series on the other hand is architecturally consistent from top to bottom, which is why NVIDIA's split of the GTX 660 series between GK104 and GK106 makes no practical difference. As a result GK104, GK106, and GK107 all offer the same Kepler family features – such as the NVENC hardware H.264 encoder, VP5 video decoder, FastHDMI support, TXAA anti-aliasing, and PCIe 3.0 connectivity – with only the number of functional units differing.

As GK106’s launch vehicle, GTX 660 will be the highest performing implementation of GK106 that we expect to see. NVIDIA is setting the reference clocks for the GTX 660 at 980MHz for the core and 6GHz for the memory. That core clockspeed is second only to the GTX 680, while the memory clockspeed is the same common 6GHz we’ve seen across all of NVIDIA’s GDDR5 desktop Kepler parts thus far. Compared to the GTX 660 Ti this means that on paper the GTX 660 has around 76% of the shading and texturing performance of the GTX 660 Ti, 80% of the rasterization performance, 100% of the memory bandwidth, and a full 107% of the ROP performance.

These figures mean that the performance of the GTX 660 relative to the GTX 660 Ti is going to be heavily dependent on shading and rasterization. Shader-heavy games will suffer the most while memory bandwidth-bound and ROP-bound games are likely to perform very similarly between the two video cards. Interestingly enough this is effectively opposite the difference between the GTX 670 and GTX 660 Ti, where the differences between the two of those cards were all in memory bandwidth and ROPs. So in scenarios where GTX 660 Ti’s configuration exacerbated GK104’s memory bandwidth limitations GTX 660 should emerge relatively unscathed.
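The paper-spec figures above are simple unit-count times clockspeed ratios, and they can be sanity-checked with a few lines of arithmetic. A quick sketch, assuming throughput scales linearly with functional units and core clock; the GPC counts (4 for the GTX 660 Ti's GK104, 3 for GK106) come from the block configurations discussed earlier, and the rest of the numbers come straight from the spec table:

```python
# Back-of-the-envelope check of the quoted paper-spec ratios,
# assuming throughput = functional units x core clock.

def ratio(units_a, clock_a, units_b, clock_b):
    """Throughput of configuration A as a fraction of configuration B."""
    return (units_a * clock_a) / (units_b * clock_b)

gtx660   = {"shaders": 960,  "gpcs": 3, "rops": 24, "core": 980}
gtx660ti = {"shaders": 1344, "gpcs": 4, "rops": 24, "core": 915}

shading = ratio(gtx660["shaders"], gtx660["core"],
                gtx660ti["shaders"], gtx660ti["core"])
raster  = ratio(gtx660["gpcs"], gtx660["core"],
                gtx660ti["gpcs"], gtx660ti["core"])
rops    = ratio(gtx660["rops"], gtx660["core"],
                gtx660ti["rops"], gtx660ti["core"])

print(f"Shading/texturing: {shading:.0%}")  # ~76%
print(f"Rasterization:     {raster:.0%}")   # ~80%
print(f"ROP throughput:    {rops:.0%}")     # ~107%
```

Memory bandwidth works out to exactly 100% since both cards pair a 6.008GHz data rate with a 192-bit bus.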

On the power front, the GTX 660 has a power target of 115W with a TDP of 140W. Once again drawing a GTX 660 Ti comparison, this puts the TDP of the GTX 660 at only 10W lower than its larger sibling, but the power target is a full 19W lower. In practice power consumption on the GTX 600 series has tracked the power target much more closely than the TDP, so as we’ll see the GTX 660 often pulls 20W+ less than the GTX 660 Ti. This lower level of power consumption also means that the GTX 660 is the first GTX 600 series product to require only one supplementary PCIe power connector.

Moving on, for today’s launch NVIDIA is once again going all virtual, with partners being left to their own designs. However given that this is the first GK106 part and that partners have had relatively little time with the GPU, in practice partners are using NVIDIA’s PCB designs with their own coolers – many of which have been lifted from their GTX 660 Ti designs – meaning that all of the cards being launched today are merely semi-custom, as opposed to the fully custom designs we saw with the GTX 660 Ti. This means that though there’s going to be a wide range of designs with respect to cooling, all of today’s launch cards will be extremely consistent with regard to clockspeeds and power delivery.

Like the GTX 660 Ti launch, partners have the option of going with either 2GB or 3GB of RAM, with the former once more taking advantage of NVIDIA’s asymmetrical memory controller functionality. For partners that do offer cards in both memory capacities we’re expecting most partners to charge $30-$40 more for the extra 1GB of RAM.
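For reference, NVIDIA's asymmetrical memory controller is what lets a 2GB card sit on a 192-bit bus at all. A minimal sketch of the capacity math, assuming the same arrangement NVIDIA described for the 2GB GTX 660 Ti (two 64-bit controllers carrying 512MB each and one carrying 1GB; the layout here is illustrative, not a statement about any specific partner card):

```python
# Sketch: how 2GB maps onto a 192-bit (3 x 64-bit) bus with an
# asymmetrical memory configuration. Assumed per-controller capacities:
controllers_mb = [512, 512, 1024]

# All three controllers can interleave together until the smaller
# controllers are full, giving full 192-bit bandwidth for that region...
full_width_mb = min(controllers_mb) * len(controllers_mb)

# ...after which only the larger controller's remainder is addressable,
# at that single controller's 64-bit width.
narrow_mb = sum(controllers_mb) - full_width_mb

print(f"{full_width_mb}MB at 192-bit, {narrow_mb}MB at 64-bit")
```

In this sketch the first 1.5GB is accessed at the full bus width while the final 512MB falls back to a single controller, which is why a symmetric 3GB configuration avoids the issue entirely.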

NVIDIA has set the MSRP on the GTX 660 at $229, which NVIDIA’s partners will be adhering to almost to a fault. Of the 3 cards we’re looking at in our upcoming companion GTX 660 launch roundup article, every last card is going for $229 despite the fact that all of them are also factory overclocked. Because NVIDIA does not provide an exhaustive list of cards and prices it’s not possible to say for sure just what the retail market will look like ahead of time, but at this point it looks like most $229 cards will be shipping with some kind of factory overclock. This is very similar to how the GTX 560 launch played out, and if it parallels the GTX 560 launch closely enough then reference-clocked cards will still be plentiful in time.

At $229 the GTX 660 is going to be coming in just under AMD’s Radeon HD 7870. AMD’s official MSRP on the 7870 is $249, but at this point in time the 7870 is commonly available for $10 cheaper at $239 after rebate. Meanwhile the 2GB 7850 will be boxing the GTX 660 in from the other side, with the 7850 regularly found at $199. Like we saw with the GTX 660 Ti launch, these prices are no mistake by AMD, with AMD once again having preemptively cut prices so that NVIDIA doesn’t undercut them at launch. It’s also worth noting that NVIDIA will not be extending their Borderlands 2 promotion to the GTX 660, so this is $229 without any bundled games, whereas AMD’s Sleeping Dogs promotion is still active for the 7870.

Finally, along with the GTX 660 the GK107-based GTX 650 is also launching today at $109. For the full details of that launch please see our GTX 650 companion article. Supplies of both cards are expected to be plentiful.

Summer 2012 GPU Pricing Comparison
AMD              Price  NVIDIA
Radeon HD 7950   $329
                 $299   GeForce GTX 660 Ti
Radeon HD 7870   $239
                 $229   GeForce GTX 660
Radeon HD 7850   $199
Radeon HD 7770   $109   GeForce GTX 650
Radeon HD 7750   $99    GeForce GT 640

Comments
  • raghu78 - Thursday, September 13, 2012 - link

Without competition there is no reason for lower pricing. Do you think Nvidia would have cut prices on the GTX 280 if the HD 4870 hadn't been a fantastic performer at less than half the launch price of the GTX 280? AMD made Nvidia look silly with their price/performance. Without competition you can see Intel dictate pricing in the CPU market. Are you so naive that you believe any company will willingly give away profits and margins when there is no competition? You only need to look back to when Nvidia milked the market with its GeForce 8800 Ultra because AMD flopped with the R600 aka HD 2900XT. 850 bucks for a single GPU card.

    http://www.anandtech.com/show/2222
  • chizow - Friday, September 14, 2012 - link

    Sorry I can't fully agree with that statement. As the article mentions, industry leaders must still compete with themselves in order to continue moving product. For years Intel has continued to excel and innovate without any real competition from AMD but now they are starting to feel the hit to their sales as their pace of innovation has slowed in recent years.

    AMD made a mistake with their 4870 pricing, they went for market share rather than margins and admitted as much in the RV770 Story here on Anandtech. But all they have to show for that effort is quarter after quarter and year after year of unprofitability. They've since done their best to reverse their fortunes by continuously increasing the asking prices on their top tier SKUs, they chose an incredibly poor time to step into "Nvidia Flagship" pricing territory with Tahiti.

    If anything, Tahiti's lackluster performance and high price tag relative to 40nm parts enabled Nvidia to offer their midrange ASIC (GK104) as a flagship part. Only now has the market begun to correct itself as it became clear the asking price on 28nm could not justify the asking prices as the differences in performance between 28nm and 40nm parts became indistinguishable. And who led that charge? Nvidia with Kepler. AMD simply piggy-backed price and performance of 40nm which is why you see the huge drops in MSRP since launch for AMD parts.

    Bringing the discussion full circle, Nvidia knows full well they are competing with themselves even if you take AMD out of the picture, which is why they compare the GTX 660 to the GTX 460 and 8800GT. They fully understand they need to offer compelling increases in performance at the same price points, or the same performance at much cheaper prices (GTX 660 compared to GTX 570) or there is no incentive for their users to upgrade.
  • Ananke - Thursday, September 13, 2012 - link

    Today's AMD prices are so-so OK, especially considering the street prices and bundles.
    This GTX660 is priced a little too high; this should've been the GTX670 launch price. To me the 660 is worth around $189 today. I don't understand why people pay a premium for the name. I understand that you may want better driver support under Linux, but for the Windows gamer there is no reason.

    The AMD 7870 is still a better buy for the money today.

    While many people with very old hardware may jump in at this price level, I will pass and wait for the AMD8xxx series. We are almost there :).

    The last two years have been very disappointing in the hardware arena. :(
  • rarson - Friday, September 14, 2012 - link

    Yeah, "we've been over this before." Back then you didn't get it, and you still don't because you're not examining the situation critically and making a rational argument, you're just posting fanboy nonsense. AMD's 28nm parts were expensive because:

    1. They were the first 28nm parts available.
    2. 28nm process was expensive (even Nvidia admits that the cost to shrink has been higher and slower-ramping than previous shrinks).
    3. Wafers were constrained (SoC manufacturers were starting to compete for wafers; this is additional demand that AMD and Nvidia didn't usually have to compete for).
    4. When you have limited supply and you want to make money, which is the entire point of running a business, then you have to price higher to avoid running out of stock too quickly and sitting around with your thumb up your ass waiting for supply to return before you can sell anything. That's exactly what happened when Nvidia launched the 680. Stock was nonexistent for months.

    The fact of the matter is that pricing is determined by a lot more things than just performance and you refuse to accept this. That is why you do not run a business.
  • chizow - Friday, September 14, 2012 - link

    And once again, you're ignoring historical facts and pricing metrics from the exact same IHVs and fab (TSMC):

    1) 28nm offered the lowest increase in price and performance of any previous generation in the last 10 years. To break this down for you, if what you said was actually true about new processes (it's not), then 28nm's increase in performance would've been the expected 50-100% increase you would expect from 100% of the asking price relative to the previous generation. Except it wasn't, it was only 30-40% for 100% of the price relative to Nvidia's parts, and in AMD's case, it was more like +50% for 150% of the asking price compared to last-gen AMD parts. That is clearly asking more for less relative to last-gen parts.

    2) Getting into the economics of each wafer, Nvidia would've been able to offset any wafer constraints due to the fact GK104's midrange ASIC size was *MUCH* smaller at ~300mm^2 compared to the usual 500mm^2 from their typical flagship ASICs. This clearly manifested itself in Nvidia's last 2 quarters since GK104 launched, where they've enjoyed much higher than usual profit margins. So once again, even if they had the same number of wafers allocated at 28nm launch as they did at 40nm or 55nm or 65nm, they would still have more chips per wafer. So yes, while the 680 was supply constrained (artificial, imo), the subsequent 670, 660 Ti and 660 launches clearly were not.

    3) Its obvious you're not much of an economist, financier, hell, even good with simple arithmetic, so stop trying to play armchair CEO. Here are the facts: AMD cards have lost 30-40% of their value in the last 3-4 months, all because Kepler has rebalanced the market to where it should've been from the outset. If that sounds reasonable to you then you probably consider Facebook's IPO a resounding success.

    4) Tahiti parts were a terrible purchase at launch and only now are they even palatable after 3 significant price drops forced by the launch of their Kepler counterparts. The answer to why they were a terrible purchase is obvious. They offered too little improvement for similar asking prices relative to 40nm parts. Who in their right mind would defend a 7870 offering GTX 570 performance at GTX 570 prices some 20 months after the 570 launched? Oh right, Rarson would....
  • rarson - Tuesday, September 18, 2012 - link

    1. There's no such thing as "pricing metrics." Prices are NOT determined by past prices! You are such a moron. THESE ARE NEW PARTS! They use a NEW PROCESS! They cost more! GET OVER IT!

    2. "Getting into the economics of each wafer"

    You are not allowed to talk about economics. You have already aptly demonstrated that you don't have a clue when it comes to economics. So any time you use the word, I'm automatically ignoring everything that comes after it.

    3. Everything you said next to the number 3 has absolutely nothing to do with my comment and isn't even factually correct.

    4. Everything you said next to the number 4 has absolutely nothing to do with my comment and isn't even factually correct.
  • chizow - Tuesday, September 18, 2012 - link

    1. Nonsense, you obviously have no background in business or economics, EVERYTHING has pricing metrics for valuation or basis purposes. What do you think the stock markets, cost and financial accounting fundamentals are based upon? Valuation that predominantly uses historical data and performance numbers for forward looking performance EXPECTATIONS. Seriously, just stop typing, every line you type just demonstrates the stupidity behind your thought processes.

    2. Sounds like deflection, you brought fab process pricing into the mix, the fact remains Nvidia can crank out almost 4x as many GK104 for each GF100/110 chip from a single TSMC 300mm wafer (this is just simple arithmetic, which I know you suck at) and their margins have clearly demonstrated this (this is on their financial statements, which I know you don't understand). Whatever increase in cost from 28nm is surely offset by this fact in my favor (once again demonstrated by Nvidia's increased margins from Kepler).

    3 and 4 are factually correct even though they have nothing to do with your inane remarks, just run the numbers. Or maybe that's part of the problem, since you still seem to think GTX 570/6970 performance at GTX 570/6970 prices some 18 months later is some phenomenal deal that everyone should sidegrade to.

    Fact: AMD tried to sell their new 28nm cards at 100% of the performance and 100% of the price of existing 40nm parts that had been on the market for 15-18 months. These parts lost ~30% of their value in the subsequent 6 months since Kepler launched. Anyone who could not see this happening deserved everything they got, congratulations Rarson. :)
  • CeriseCogburn - Thursday, November 29, 2012 - link

    Only he didn't get anything. He was looking to scrape together a 6850 a few weeks back.
  • MySchizoBuddy - Thursday, September 13, 2012 - link

    So Nvidia chose not to compare the 660 with the 560 but with the 460. Why is that?
  • Ryan Smith - Thursday, September 13, 2012 - link

    I would have to assume because the 660 would be so close to the 560 in performance, and because very few mainstream gamers are on a 1-year upgrade cycle. If you picked up a 560 in 2011 you're very unlikely to grab a 660 in 2012.
