Meet The GeForce GTX 660

For virtual launches it’s often difficult for us to acquire reference clocked cards since NVIDIA doesn’t directly sample the press with reference cards, and today’s launch of the GeForce GTX 660 is one of those times. The problem stems from the fact that NVIDIA’s partners are hesitant to offer reference clocked cards to the press since they don’t want to lose to factory overclocked cards in benchmarks, which is an odd (but reasonable) concern.

For today’s launch we were able to get a reference clocked card, but in order to do so we had to agree not to show the card or name the partner who supplied the card. As it turns out this isn’t a big deal since the card we received is for all practical purposes identical to NVIDIA’s reference GTX 660, which NVIDIA has supplied pictures of. So let’s take a look at the “reference” GTX 660.

The reference GTX 660 is in many ways identical to the GTX 670, which comes as no great surprise given the similar size of their PCBs, which in turn allows NVIDIA to reuse the same cooler with little modification. Like the GTX 670, the reference GTX 660 is 9.5” long, with the PCB itself making up just 6.75” of that length while the blower and its housing make up the rest. The size of retail cards will vary between these two lengths, as partners like EVGA will be implementing their own blowers similar to NVIDIA’s, while other partners like Zotac will be using open air coolers not much larger than the reference PCB itself.

Breaking open one of our factory overclocked GTX 660 cards (specifically, our EVGA 660 SC using the NV reference PCB), we can see that while the GTX 670 and GTX 660 are superficially similar on the outside, the PCB itself is quite different. The biggest change here is that while the 670 PCB made the unusual move of putting the VRM circuitry towards the front of the card, the GTX 660 PCB once more puts it towards the rear. With the GTX 670 this was a design choice to get the GTX 670 PCB down to 6.75”, whereas the GTX 660 requires so little VRM circuitry in the first place that it’s no longer necessary to put that circuitry at the front of the card to find the necessary space.

Looking at the GK106 GPU itself, we can see that not only is the GPU smaller than GK104, but the entire GPU package has been reduced in size as well. Meanwhile, though it makes no functional difference, GK106 is a bit more rectangular than GK104.

Moving on to the GTX 660’s RAM, we find something quite interesting. Up until now NVIDIA and their partners have regularly used Hynix 6GHz GDDR5 memory modules, with that specific RAM showing up on every GTX 680, GTX 670, and GTX 660 Ti we’ve tested. The GTX 660, meanwhile, is the very first card we’ve seen that’s equipped with Samsung’s 6GHz GDDR5 memory modules, marking the first time we’ve seen non-Hynix memory on a GeForce GTX 600 card. Truth be told, though it has no technical implications, we’ve seen so many Hynix-equipped cards from both AMD and NVIDIA that it’s refreshing to see that there is in fact more than one GDDR5 supplier in the marketplace.

For the 2GB GTX 660, NVIDIA has outfitted the card with 8 2Gb memory modules, 4 on the front and 4 on the rear. Oddly enough there aren’t any vacant RAM pads on the 2GB reference PCB, so it’s not entirely clear what partners are doing for their 3GB cards; presumably there’s a second reference PCB specifically built to house the 12 memory modules needed for 3GB cards.
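
As a quick sanity check on those figures, the capacity arithmetic is simple enough to sketch out (a minimal illustration of our own; the helper function below isn’t anything from NVIDIA, just the module math):

```python
# Each GDDR5 module on the card is 2Gb (gigabits), i.e. 0.25GB.
GBIT_PER_MODULE = 2

def framebuffer_gb(num_modules: int) -> float:
    """Total framebuffer capacity in gigabytes for a given module count."""
    return num_modules * GBIT_PER_MODULE / 8  # 8 bits per byte

print(framebuffer_gb(8))   # reference 2GB card: 8 modules -> 2.0
print(framebuffer_gb(12))  # presumed 3GB variant: 12 modules -> 3.0
```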

Elsewhere we can find the GTX 660’s sole PCIe power socket on the rear of the card, responsible for supplying the other 75W the card needs. As for the front of the card, here we can find the card’s one SLI connector, which like previous generation mainstream video cards supports up to 2-way SLI.

Finally, looking at display connectivity we once more see the return of NVIDIA’s standard GTX 600 series display configuration. The reference GTX 660 is equipped with 1 DL-DVI-D port, 1 DL-DVI-I port, 1 full size HDMI 1.4 port, and 1 full size DisplayPort 1.2. Like GK104 and GK107, GK106 can drive up to 4 displays, meaning all 4 ports can be put into use simultaneously.

Comments

  • raghu78 - Thursday, September 13, 2012 - link

    Without competition there is no reason for lower pricing. Do you think Nvidia would have cut prices on the GTX 280 if the HD 4870 was not a fantastic performer at less than half the launch price of the GTX 280? AMD made Nvidia look silly with their price/performance. Without competition you can see Intel dictate pricing in the CPU market. Are you so naive that you believe any company will willingly give away profits and margins when there is no competition? You only need to look back to when Nvidia milked the market with its GeForce 8800 Ultra because AMD flopped with R600, aka the HD 2900 XT. 850 bucks for a single GPU card.

    http://www.anandtech.com/show/2222
  • chizow - Friday, September 14, 2012 - link

    Sorry I can't fully agree with that statement. As the article mentions, industry leaders must still compete with themselves in order to continue moving product. For years Intel has continued to excel and innovate without any real competition from AMD but now they are starting to feel the hit to their sales as their pace of innovation has slowed in recent years.

    AMD made a mistake with their 4870 pricing; they went for market share rather than margins, and admitted as much in the RV770 Story here on Anandtech. But all they have to show for that effort is quarter after quarter and year after year of unprofitability. They've since done their best to reverse their fortunes by continuously increasing the asking prices on their top tier SKUs, but they chose an incredibly poor time to step into "Nvidia Flagship" pricing territory with Tahiti.

    If anything, Tahiti's lackluster performance and high price tag relative to 40nm parts enabled Nvidia to offer their midrange ASIC (GK104) as a flagship part. Only now has the market begun to correct itself, as it became clear that 28nm asking prices could not be justified once the differences in performance between 28nm and 40nm parts became indistinguishable. And who led that charge? Nvidia with Kepler. AMD simply piggy-backed on the price and performance of 40nm, which is why you see the huge drops in MSRP since launch for AMD parts.

    Bringing the discussion full circle, Nvidia knows full well they are competing with themselves even if you take AMD out of the picture, which is why they compare the GTX 660 to the GTX 460 and 8800GT. They fully understand they need to offer compelling increases in performance at the same price points, or the same performance at much cheaper prices (GTX 660 compared to GTX 570) or there is no incentive for their users to upgrade.
  • Ananke - Thursday, September 13, 2012 - link

    Today's AMD prices are so-so OK, especially considering the street prices and bundles.
    This GTX 660 is priced a little too high; this should've been the GTX 670 launch price. The 660 is worth around $189 to me today. I don't understand why people pay a premium for the name. I understand that you may want better driver support under Linux, but for the Windows gamer there is no reason.

    The AMD 7870 is still a better buy for the money today.

    While many people with very old hardware may jump in at this price level, I will pass and wait for the AMD 8xxx series. We are almost there :).

    The last two years have been very disappointing in the hardware arena. :(
  • rarson - Friday, September 14, 2012 - link

    Yeah, "we've been over this before." Back then you didn't get it, and you still don't because you're not examining the situation critically and making a rational argument, you're just posting fanboy nonsense. AMD's 28nm parts were expensive because:

    1. They were the first 28nm parts available.
    2. The 28nm process was expensive (even Nvidia admits that this shrink has cost more and ramped more slowly than previous shrinks).
    3. Wafers were constrained (SoC manufacturers were starting to compete for wafers; this is additional demand that AMD and Nvidia didn't usually have to compete for).
    4. When you have limited supply and you want to make money, which is the entire point of running a business, then you have to price higher to avoid running out of stock too quickly and sitting around with your thumb up your ass waiting for supply to return before you can sell anything. That's exactly what happened when Nvidia launched the 680. Stock was nonexistent for months.

    The fact of the matter is that pricing is determined by a lot more things than just performance and you refuse to accept this. That is why you do not run a business.
  • chizow - Friday, September 14, 2012 - link

    And once again, you're ignoring historical facts and pricing metrics from the exact same IHVs and fab (TSMC):

    1) 28nm offered the smallest increase in price/performance of any new generation in the last 10 years. To break this down for you: if what you said about new processes were actually true (it's not), then 28nm's increase in performance would've been the expected 50-100% for 100% of the asking price relative to the previous generation. Except it wasn't; it was only 30-40% for 100% of the price in the case of Nvidia's parts, and in AMD's case it was more like +50% for 150% of the asking price compared to last-gen AMD parts. That is clearly asking more for less relative to last-gen parts.
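
    To put those ratios in concrete terms, here's a minimal sketch of the perf-per-dollar comparison (the function is purely illustrative, and the percentages are just the figures claimed above, not measured data):

    ```python
    def value_ratio(perf_gain_pct: float, price_change_pct: float) -> float:
        """Performance-per-dollar relative to the previous generation."""
        return (1 + perf_gain_pct / 100) / (1 + price_change_pct / 100)

    # ~35% more performance at the same price (the Nvidia case above)
    print(value_ratio(35, 0))   # -> 1.35x the perf/$ of last gen
    # ~50% more performance at ~150% of the price (the AMD case above)
    print(value_ratio(50, 50))  # -> 1.0x, i.e. no perf/$ improvement
    ```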

    2) Getting into the economics of each wafer, Nvidia would've been able to offset any wafer constraints due to the fact that GK104's midrange ASIC size was *MUCH* smaller at ~300mm^2, compared to the usual ~500mm^2 of their typical flagship ASICs. This has clearly manifested itself in Nvidia's last 2 quarters since GK104 launched, where they've enjoyed much higher than usual profit margins. So once again, even if they had the same number of wafers allocated at the 28nm launch as they did at 40nm or 55nm or 65nm, they would still have more chips per wafer. So yes, while the 680 was supply constrained (artificially, imo), the subsequent 670, 660 Ti and 660 launches clearly were not.
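
    The chips-per-wafer point can likewise be illustrated with a standard first-order die-per-wafer estimate (a rough sketch using the ~300mm^2 and ~500mm^2 figures above; it ignores yield, scribe lines and reticle limits):

    ```python
    import math

    def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
        """Gross die candidates per wafer: wafer area over die area,
        minus an edge-loss term for partial dies around the rim."""
        gross = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
        edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
        return int(gross - edge_loss)

    print(dies_per_wafer(300))  # ~197 candidates for a ~300mm^2 midrange ASIC
    print(dies_per_wafer(500))  # ~111 candidates for a ~500mm^2 flagship ASIC
    ```

    This only covers the geometry; smaller dies also yield better at any given defect density, which widens the gap further.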

    3) It's obvious you're not much of an economist or financier, hell, not even good with simple arithmetic, so stop trying to play armchair CEO. Here are the facts: AMD cards have lost 30-40% of their value in the last 3-4 months, all because Kepler has rebalanced the market to where it should've been from the outset. If that sounds reasonable to you then you probably consider Facebook's IPO a resounding success.

    4) Tahiti parts were a terrible purchase at launch, and only now are they even palatable after 3 significant price drops forced by the launch of their Kepler counterparts. The answer to why they were a terrible purchase is obvious: they offered too little improvement for similar asking prices relative to 40nm parts. Who in their right mind would defend a 7870 offering GTX 570 performance at GTX 570 prices some 20 months after the 570 launched? Oh right, Rarson would....
  • rarson - Tuesday, September 18, 2012 - link

    1. There's no such thing as "pricing metrics." Prices are NOT determined by past prices! You are such a moron. THESE ARE NEW PARTS! They use a NEW PROCESS! They cost more! GET OVER IT!

    2. "Getting into the economics of each wafer"

    You are not allowed to talk about economics. You have already aptly demonstrated that you don't have a clue when it comes to economics. So any time you use the word, I'm automatically ignoring everything that comes after it.

    3. Everything you said next to the number 3 has absolutely nothing to do with my comment and isn't even factually correct.

    4. Everything you said next to the number 4 has absolutely nothing to do with my comment and isn't even factually correct.
  • chizow - Tuesday, September 18, 2012 - link

    1. Nonsense; you obviously have no background in business or economics. EVERYTHING has pricing metrics for valuation or basis purposes. What do you think the stock markets and cost and financial accounting fundamentals are based upon? Valuation that predominantly uses historical data and performance numbers to form forward-looking performance EXPECTATIONS. Seriously, just stop typing; every line you type just demonstrates the stupidity behind your thought processes.

    2. Sounds like deflection; you brought fab process pricing into the mix. The fact remains Nvidia can crank out almost 4x as many GK104 chips for each GF100/110 chip from a single TSMC 300mm wafer (this is just simple arithmetic, which I know you suck at), and their margins have clearly demonstrated this (this is on their financial statements, which I know you don't understand). Whatever increase in cost from 28nm is surely offset by this fact in my favor (once again demonstrated by Nvidia's increased margins from Kepler).

    3 and 4 are factually correct even though they have nothing to do with your inane remarks, just run the numbers. Or maybe that's part of the problem, since you still seem to think GTX 570/6970 performance at GTX 570/6970 prices some 18 months later is some phenomenal deal that everyone should sidegrade to.

    Fact: AMD tried to sell their new 28nm cards at 100% of the performance and 100% of the price of existing 40nm parts that had been on the market for 15-18 months. These parts lost ~30% of their value in the subsequent 6 months since Kepler launched. Anyone who could not see this happening deserved everything they got, congratulations Rarson. :)
  • CeriseCogburn - Thursday, November 29, 2012 - link

    Only he didn't get anything. He was looking to scrape together a 6850 a few weeks back.
  • MySchizoBuddy - Thursday, September 13, 2012 - link

    So Nvidia chose not to compare the 660 with the 560 but with the 460. Why is that?
  • Ryan Smith - Thursday, September 13, 2012 - link

    I would have to assume because the 660 would be so close to the 560 in performance, and because very few mainstream gamers are on a 1-year upgrade cycle. If you picked up a 560 in 2011 you're very unlikely to grab a 660 in 2012.
