OC: Power, Temperature, & Noise

Before wrapping things up, we wanted to quickly take a look at the overclocking potential of the GTX 660. As the first GK106 product, the GTX 660 should give us some idea of how capable GK106 is at overclocking, though as with GK104 we’re ultimately at the mercy of NVIDIA’s locked voltages and limited power target control.

In its rawest form, the GTX 660 has two things going against it for overclocking. First and foremost, as the highest clocked GK106 part it’s already starting out at a fairly high clockspeed – 980MHz for reference cards, and upwards of 1050MHz for factory overclocked cards – so there may not be a great deal of overclocking headroom left to exploit. Furthermore, because NVIDIA is keeping the power consumption of the card low (it needs to stay under 150W at its maximum), the maximum power target is the lowest we’ve seen for any GTX 600 card yet: a mere 110%. As a result, even if we can hit a large GPU clock offset, there may not be enough power headroom available to let the GPU regularly reach those speeds.
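To put rough numbers on that power ceiling, here’s a quick back-of-the-envelope sketch. The 75W figures come from the PCIe specification; the ~115W default power target and 140W TDP are assumptions (commonly cited figures for the reference card) rather than values stated in this article.

```python
# Back-of-the-envelope numbers for the GTX 660's power ceiling. The PCIe x16
# slot and a single 6-pin connector are each rated for 75W per the PCIe spec;
# the ~115W default power target and 140W TDP are assumed figures for the
# reference card, not values taken from this article.
SLOT_W = 75             # PCIe x16 slot
SIX_PIN_W = 75          # one 6-pin PCIe power connector
DEFAULT_TARGET_W = 115  # assumed reference power target
TDP_W = 140             # assumed TDP

board_limit = SLOT_W + SIX_PIN_W      # 150W hard ceiling for this connector layout
max_target = DEFAULT_TARGET_W * 1.10  # the 110% power target cap

print(f"Connector limit: {board_limit}W")
print(f"Max power target at 110%: {max_target:.1f}W")
print(f"110% of TDP would be {TDP_W * 1.10:.0f}W, already brushing the {board_limit}W limit")
```

In other words, with a single 6-pin connector there simply isn’t much room for NVIDIA to offer a more generous power target.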

Memory overclocking, on the other hand, looks much better. With the same memory controllers and the same spec’d RAM as the other high-end GTX 600 cards, there’s no reason to believe the GTX 660 can’t hit equally high memory clocks, which makes 6.5GHz+ a reasonable goal.

GeForce GTX 660 Overclocking

                             Ref GTX 660   EVGA GTX 660 SC   Zotac GTX 660   Gigabyte GTX 660 OC
Shipping Core Clock          980MHz        1046MHz           993MHz          1033MHz
Shipping Max Boost Clock     1084MHz       1123MHz           1110MHz         1123MHz
Shipping Memory Clock        6GHz          6GHz              6GHz            6GHz
Shipping Max Boost Voltage   1.175v        1.175v            1.162v          1.175v
Overclock Core Clock         1080MHz       1096MHz           1093MHz         1083MHz
Overclock Max Boost Clock    1185MHz       1174MHz           1215MHz         1174MHz
Overclock Memory Clock       6.7GHz        6.9GHz            6.7GHz          6.5GHz
Overclock Max Boost Voltage  1.175v        1.175v            1.162v          1.175v

Throwing in the factory overclocked cards from our companion roundup, our core overclocking experience was remarkably consistent. The difference in max boost clock between the slowest and fastest cards was a mere 41MHz, with the Zotac card being the only real outlier among our samples. This comes as no great surprise, since all of these launch cards use the NVIDIA reference PCB, so there’s little room at the moment for overclocking innovation.

Memory overclocking is as volatile as ever, with a 400MHz spread between our best and worst cards. Again, with every card using the reference PCB (and the same Samsung RAM), memory overclocking comes down entirely to the luck of the draw.

For the moment at least, GTX 660 overclocking looks to be a level playing field thanks to every partner using the same PCB. For overclockers, the choice of card will come down to pricing, cooler preference, and any vendor preference.

The end result of all of this is that at best we’re seeing 100MHz overclocks (going by the max boost clock), which represent roughly a 10% overclock. Coupled with a good memory overclock and the 10% increase in the power target, this works out to around a 10% increase in performance, which isn’t shabby, but it’s the same kind of shallow overclocking potential we’ve seen on cards like the GTX 670 and GTX 660 Ti. All told, the GTX 660 isn’t a poor overclocker – 10% more performance for free is nothing to sneeze at – but it isn’t going to endear itself to hardware overclockers who like to chase 20% or more.
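For those who want to check the math, here’s a minimal sketch of that ~10% figure, using the max boost clocks from the table above:

```python
# Overclocking headroom computed from the table above, measured as the gain
# in max boost clock over each card's shipping max boost clock.
cards = {
    "Ref GTX 660":         (1084, 1185),
    "EVGA GTX 660 SC":     (1123, 1174),
    "Zotac GTX 660":       (1110, 1215),
    "Gigabyte GTX 660 OC": (1123, 1174),
}

for name, (stock_boost, oc_boost) in cards.items():
    gain_pct = (oc_boost / stock_boost - 1) * 100
    print(f"{name}: +{oc_boost - stock_boost}MHz ({gain_pct:+.1f}%)")

# The reference and Zotac cards land at roughly +9.5%; the factory overclocked
# cards show smaller gains only because they start from higher shipping clocks.
```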

Moving on to our performance charts, we’ll once again start with power, temperature, and noise before getting to gaming performance. Due to popular demand, we’ll also be including overclocking results with just a 110% power target, so that you can see the impact of adjusting the power target separately from the clock offsets.

With a 110% power target we should be seeing an 11W-14W increase in power consumption, which is indeed roughly what we’re seeing at the wall after accounting for PSU inefficiencies. In Metro this is just enough of a difference to erase most of the GTX 660’s power consumption advantage over the GTX 660 Ti, though the GTX 660 still draws marginally less power than the stock 7870. Meanwhile under OCCT the GTX 660 now draws more power than the 7870, but is still drawing over 20W less than the stock GTX 660 Ti.
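As a quick sanity check on where that 11W-14W estimate comes from (the 115W and 140W bases are assumed figures for the reference card’s power target and TDP, not values stated here):

```python
# 10% of the card's power budget, for two plausible bases. These are assumed
# figures for the reference GTX 660, used purely for illustration.
for base_w in (115, 140):
    print(f"10% of {base_w}W -> +{base_w * 0.10:.1f}W at the card")

# Readings at the wall run higher, since PSU inefficiency multiplies whatever
# the card draws: a +14W increase at the card with an 85% efficient PSU shows
# up as roughly 14 / 0.85 ~= 16.5W at the socket.
print(f"+14W at the card ~= +{14 / 0.85:.1f}W at the wall (assuming 85% efficiency)")
```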

Our increased power consumption pushes temperatures up by another 2-3C. This is nothing a blower can’t handle, let alone an open-air cooler.

Interestingly enough, despite the increase in power consumption and temperatures, overclocking has almost no impact on noise. In the worst case our GTX 660s increased their fan speeds by all of 2%, which raises noise by less than 1dB. As a result the amount of noise generated by the overclocked GTX 660 is practically identical to that of the stock GTX 660, and still below the reference 7870.

Comments

  • raghu78 - Thursday, September 13, 2012

    Without competition there is no reason for lower pricing. Do you think Nvidia would have cut prices on the GTX 280 if the HD 4870 hadn't been a fantastic performer at less than half the launch price of the GTX 280? AMD made Nvidia look silly with their price/performance. Without competition you can see Intel dictate pricing in the CPU market. Are you so naive as to believe any company will willingly give away profits and margins when there is no competition? You only need to look back to when Nvidia milked the market with its GeForce 8800 Ultra because AMD flopped with the R600, aka the HD 2900 XT: 850 bucks for a single GPU card.

    http://www.anandtech.com/show/2222
  • chizow - Friday, September 14, 2012

    Sorry, I can't fully agree with that statement. As the article mentions, industry leaders must still compete with themselves in order to continue moving product. For years Intel has continued to excel and innovate without any real competition from AMD, but now they are starting to feel the hit to their sales as their pace of innovation has slowed.

    AMD made a mistake with their 4870 pricing; they went for market share rather than margins and admitted as much in the RV770 Story here on AnandTech. But all they have to show for that effort is quarter after quarter and year after year of unprofitability. They've since done their best to reverse their fortunes by continuously increasing the asking prices on their top-tier SKUs, but they chose an incredibly poor time to step into "Nvidia flagship" pricing territory with Tahiti.

    If anything, Tahiti's lackluster performance and high price tag relative to 40nm parts enabled Nvidia to offer their midrange ASIC (GK104) as a flagship part. Only now has the market begun to correct itself, as it became clear that 28nm performance could not justify the asking prices when the differences in performance between 28nm and 40nm parts were so small. And who led that charge? Nvidia with Kepler. AMD simply piggy-backed on the price and performance of 40nm, which is why you see the huge drops in MSRP since launch for AMD parts.

    Bringing the discussion full circle, Nvidia knows full well they are competing with themselves even if you take AMD out of the picture, which is why they compare the GTX 660 to the GTX 460 and 8800GT. They fully understand they need to offer compelling increases in performance at the same price points, or the same performance at much cheaper prices (GTX 660 compared to GTX 570) or there is no incentive for their users to upgrade.
  • Ananke - Thursday, September 13, 2012

    Today's AMD prices are more or less OK, especially considering the street prices and bundles.
    This GTX 660 is priced a little too high; this should've been the GTX 670 launch price. To me the 660 is worth around $189 today. I don't understand why people pay a premium for the name. I understand that you may want better driver support under Linux, but for the Windows gamer there is no reason.

    The AMD 7870 is still the better buy for the money today.

    While many people with very old hardware may jump in at this price level, I will pass and wait for the AMD 8xxx series. We are almost there :).

    The last two years have been very disappointing in the hardware arena. :(
  • rarson - Friday, September 14, 2012

    Yeah, "we've been over this before." Back then you didn't get it, and you still don't, because you're not examining the situation critically and making a rational argument; you're just posting fanboy nonsense. AMD's 28nm parts were expensive because:

    1. They were the first 28nm parts available.
    2. The 28nm process was expensive (even Nvidia admits that the cost of this shrink has been higher, and its ramp slower, than previous shrinks).
    3. Wafers were constrained (SoC manufacturers were starting to compete for wafers; this is additional demand that AMD and Nvidia didn't usually have to face).
    4. When you have limited supply and you want to make money, which is the entire point of running a business, then you have to price higher to avoid running out of stock too quickly and sitting around with your thumb up your ass waiting for supply to return before you can sell anything. That's exactly what happened when Nvidia launched the 680. Stock was nonexistent for months.

    The fact of the matter is that pricing is determined by a lot more things than just performance and you refuse to accept this. That is why you do not run a business.
  • chizow - Friday, September 14, 2012

    And once again, you're ignoring historical facts and pricing metrics from the exact same IHVs and fab (TSMC):

    1) 28nm offered the smallest increase in performance for the price of any generation in the last 10 years. To break this down for you: if what you said about new processes were actually true (it's not), then 28nm's increase in performance would've been the expected 50-100% for 100% of the asking price relative to the previous generation. Except it wasn't; it was only 30-40% for 100% of the price relative to Nvidia's parts, and in AMD's case it was more like +50% for 150% of the asking price compared to last-gen AMD parts. That is clearly asking more for less relative to last-gen parts (a quick sketch of this math follows at the end of this comment).

    2) Getting into the economics of each wafer, Nvidia would've been able to offset any wafer constraints due to the fact that GK104's midrange ASIC size was *MUCH* smaller at ~300mm^2, compared to the usual 500mm^2 of their typical flagship ASICs. This clearly manifested itself in Nvidia's last 2 quarters since GK104 launched, where they've enjoyed much higher than usual profit margins. So once again, even if they had the same number of wafers allocated at the 28nm launch as they did at 40nm or 55nm or 65nm, they would still have more chips per wafer. So yes, while the 680 was supply constrained (artificially, imo), the subsequent 670, 660 Ti, and 660 launches clearly were not.

    3) It's obvious you're not much of an economist or financier, hell, you're not even good with simple arithmetic, so stop trying to play armchair CEO. Here are the facts: AMD cards have lost 30-40% of their value in the last 3-4 months, all because Kepler has rebalanced the market to where it should've been from the outset. If that sounds reasonable to you, then you probably consider Facebook's IPO a resounding success.

    4) Tahiti parts were a terrible purchase at launch, and only now are they even palatable after 3 significant price drops forced by the launch of their Kepler counterparts. The answer to why they were a terrible purchase is obvious: they offered too little improvement for similar asking prices relative to 40nm parts. Who in their right mind would defend a 7870 offering GTX 570 performance at GTX 570 prices some 20 months after the 570 launched? Oh right, Rarson would....
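    The price/performance arithmetic in point 1 above works out as follows; the percentages are the commenter's, and the sketch below is purely illustrative:

```python
# Perf-per-dollar math using the percentages from point 1 above.
def perf_per_dollar_gain(perf_mult: float, price_mult: float) -> float:
    """Percent change in performance-per-dollar versus the previous generation."""
    return (perf_mult / price_mult - 1) * 100

# A "normal" node transition per the comment: 50-100% more performance at the same price.
print(f"Expected node jump: {perf_per_dollar_gain(1.75, 1.00):+.0f}% perf/$")
# Nvidia's 28nm parts per the comment: 30-40% more performance at the same price.
print(f"Nvidia Kepler:      {perf_per_dollar_gain(1.35, 1.00):+.0f}% perf/$")
# AMD Tahiti per the comment: +50% performance at 150% of the asking price.
print(f"AMD Tahiti:         {perf_per_dollar_gain(1.50, 1.50):+.0f}% perf/$")
```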
  • rarson - Tuesday, September 18, 2012

    1. There's no such thing as "pricing metrics." Prices are NOT determined by past prices! You are such a moron. THESE ARE NEW PARTS! They use a NEW PROCESS! They cost more! GET OVER IT!

    2. "Getting into the economics of each wafer"

    You are not allowed to talk about economics. You have already aptly demonstrated that you don't have a clue when it comes to economics. So any time you use the word, I'm automatically ignoring everything that comes after it.

    3. Everything you said next to the number 3 has absolutely nothing to do with my comment and isn't even factually correct.

    4. Everything you said next to the number 4 has absolutely nothing to do with my comment and isn't even factually correct.
  • chizow - Tuesday, September 18, 2012

    1. Nonsense; you obviously have no background in business or economics. EVERYTHING has pricing metrics for valuation or basis purposes. What do you think the stock markets and the fundamentals of cost and financial accounting are based upon? Valuation that predominantly uses historical data and performance numbers to form forward-looking performance EXPECTATIONS. Seriously, just stop typing; every line you type just demonstrates the stupidity behind your thought processes.

    2. Sounds like deflection; you brought fab process pricing into the mix. The fact remains that Nvidia can crank out almost 4x as many GK104 chips as GF100/110 chips from a single TSMC 300mm wafer (this is just simple arithmetic, which I know you suck at), and their margins have clearly demonstrated this (this is on their financial statements, which I know you don't understand). Whatever increase in cost comes from 28nm is surely offset by this fact, in favor of my argument (once again demonstrated by Nvidia's increased margins from Kepler). A rough sketch of the wafer math follows at the end of this comment.

    3 and 4 are factually correct, even though they have nothing to do with your inane remarks; just run the numbers. Or maybe that's part of the problem, since you still seem to think GTX 570/6970 performance at GTX 570/6970 prices some 18 months later is some phenomenal deal that everyone should sidegrade to.

    Fact: AMD tried to sell their new 28nm cards at 100% of the performance and 100% of the price of existing 40nm parts that had been on the market for 15-18 months. These parts lost ~30% of their value in the subsequent 6 months since Kepler launched. Anyone who could not see this happening deserved everything they got, congratulations Rarson. :)
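    The wafer arithmetic referenced in point 2 above can be sketched roughly as follows. The ~294mm^2 (GK104) and ~520mm^2 (GF110) die sizes are the commonly cited figures, the dies-per-wafer formula is a standard approximation, and the defect density is purely an assumption for illustration:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> float:
    """Standard approximation: usable wafer area minus an edge-loss correction."""
    radius = wafer_diameter_mm / 2
    return ((math.pi * radius**2) / die_area_mm2
            - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Simple Poisson yield model: larger dies are hit disproportionately hard."""
    return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

for name, area_mm2 in [("GK104 (~294mm^2)", 294), ("GF110 (~520mm^2)", 520)]:
    gross = dies_per_wafer(area_mm2)
    good = gross * poisson_yield(area_mm2, defects_per_cm2=0.4)  # assumed defect density
    print(f"{name}: ~{gross:.0f} gross dies, ~{good:.0f} good dies per wafer")

# Gross dies favor GK104 by roughly 2x; once defect yield is factored in, the
# gap widens to roughly 4-5x at this assumed defect density, which is in the
# ballpark of the "almost 4x" figure claimed above.
```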
  • CeriseCogburn - Thursday, November 29, 2012

    Only he didn't get anything. He was looking to scrape together a 6850 a few weeks back.
  • MySchizoBuddy - Thursday, September 13, 2012

    So Nvidia chose not to compare the 660 with the 560, but with the 460. Why is that?
  • Ryan Smith - Thursday, September 13, 2012

    I would have to assume it's because the 660 would be so close to the 560 in performance, and because very few mainstream gamers are on a 1-year upgrade cycle. If you picked up a 560 in 2011, you're very unlikely to grab a 660 in 2012.
