Power, Temperature, & Noise

As always, we’re wrapping up our look at a video card’s stock performance with a look at power, temperature, and noise. Unlike GTX 660 Ti, which was a harvested GK104 GPU, GTX 660 is based on the brand-new GK106 GPU, which will have interesting repercussions for power consumption. Scaling down a GPU by disabling functional units often has diminishing returns, so GK106 will effectively “reset” NVIDIA’s position as far as power consumption goes. As a reminder, NVIDIA’s power target here is a mere 115W, while their TDP is 140W.

GeForce GTX 660 Series Voltages
Ref GTX 660 Ti Load: 1.175v
Ref GTX 660 Ti Idle: 0.975v
Ref GTX 660 Load: 1.175v
Ref GTX 660 Idle: 0.875v

Stopping to take a quick look at voltages, even with a new GPU nothing has changed. NVIDIA’s standard voltage remains 1.175v, the same as we’ve seen with GK104. However, idle voltages are much lower, with the GK106-based GTX 660 idling at 0.875v versus 0.975v for the various GK104 desktop cards. As we’ll see later, this is an important distinction for GK106.

Up next, before we jump into our graphs, let’s take a look at the average core clockspeed during our benchmarks. Because GPU Boost means the boost clock alone doesn’t give us the whole picture, we’ve recorded the clockspeed of our GTX 660 during each of our benchmarks when running at 1920x1200 and computed the average clockspeed over the duration of each run.

GeForce GTX 600 Series Average Clockspeeds
  GTX 670 GTX 660 Ti GTX 660
Max Boost Clock 1084MHz 1058MHz 1084MHz
Crysis 1057MHz 1058MHz 1047MHz
Metro 1042MHz 1048MHz 1042MHz
DiRT 3 1037MHz 1058MHz 1054MHz
Shogun 2 1064MHz 1035MHz 1045MHz
Batman 1042MHz 1051MHz 1029MHz
Portal 2 988MHz 1041MHz 1033MHz
Battlefield 3 1055MHz 1054MHz 1065MHz
Starcraft II 1084MHz N/A 1080MHz
Skyrim 1084MHz 1045MHz 1084MHz
Civilization V 1038MHz 1045MHz 1067MHz

With an official boost clock of 1033MHz and a maximum boost of 1084MHz on our GTX 660, we see clockspeeds regularly vary between the two points. For the most part our average clockspeeds are slightly ahead of NVIDIA’s boost clock, while in CPU-heavy workloads (Starcraft II, Skyrim), we can almost sustain the maximum boost clock. Ultimately this means that the GTX 660 is spending most of its time near or above 1050MHz, which will have repercussions when it comes to overclocking.
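
For those curious, the averaging itself is trivial. Below is a minimal sketch of our own (not NVIDIA’s code or that of any particular monitoring tool), assuming evenly spaced clockspeed samples; the sample values are made up for illustration.

    # Minimal sketch: averaging evenly spaced clockspeed samples (in MHz)
    # logged over a benchmark run. The sample values are hypothetical.
    def average_clock(samples_mhz):
        return sum(samples_mhz) / len(samples_mhz)

    # e.g. one sample per second over a short run
    samples = [1084, 1084, 1071, 1058, 1045, 1058, 1071, 1084]
    print("Average clockspeed: %.0fMHz" % average_clock(samples))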

Starting as always with idle power, we immediately see an interesting outcome: the GTX 660 has the lowest idle power usage. And it’s not just by a watt or two, either, but rather a 6W (at the wall) difference between the GTX 660 and both the Radeon HD 7800 series and the rest of the GTX 600 series. All of the current 28nm GPUs have offered refreshingly low idle power usage, but with the GTX 660 we’re seeing NVIDIA cut into what was already relatively low idle power usage and shrink it even further.

NVIDIA’s claim is that their idle power usage is around 5W, and while our testing methodology doesn’t allow us to isolate the video card, our results corroborate a near-5W value. The biggest factors here seem to be a combination of die size and idle voltage; we naturally see a reduction in idle power usage as we move to smaller GPUs with fewer transistors to power up, but NVIDIA’s idle voltage of 0.875v is also a full 0.1v below GK104’s idle voltage and 0.075v lower than the GT 640 (GK107)’s idle voltage. The combination of these factors has pushed the GTX 660’s idle power usage to the lowest point we’ve ever seen for a GPU of this size, which is quite an accomplishment. Though I suspect the real payoff will be in the mobile space, as even with Optimus, mobile GPUs have to spend some time idling, which is another opportunity to save power.
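
As a rough back-of-the-envelope illustration (ours, with an assumed PSU efficiency rather than a measured one) of why a difference measured at the wall overstates the difference at the card itself:

    # Back-of-the-envelope sketch: converting a power delta measured at the
    # wall into an estimated delta at the card. The PSU efficiency figure is
    # an assumption for illustration only, not a measured value.
    WALL_DELTA_W = 6.0       # difference measured at the wall
    PSU_EFFICIENCY = 0.85    # assumed PSU efficiency at idle-level loads

    card_delta_w = WALL_DELTA_W * PSU_EFFICIENCY
    print("~%.1fW difference at the card for a %.0fW difference at the wall"
          % (card_delta_w, WALL_DELTA_W))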

At this point the only area in which NVIDIA doesn’t outperform AMD is in the so-called “long idle” scenario, where AMD’s ZeroCore Power technology gets to kick in. 5W is nice, but next-to-0W is even better.

Moving on to load power consumption, given NVIDIA’s focus on efficiency with the Kepler family it comes as no great surprise that NVIDIA continues to hold the lead here. The gap between the GTX 660 and the 7870 isn’t quite as large as the gap we saw between the GTX 680 and 7970, but NVIDIA still has a convincing lead, with the GTX 660 consuming 23W less at the wall than the 7870. This puts the GTX 660 at around the power consumption of the 7850 (a card with a similar TDP) or the GTX 460. For AMD’s part, Pitcairn is a more petite (and less compute-heavy) part than Tahiti, which means AMD doesn’t face nearly the disparity here that they do at the high end.

OCCT on the other hand has the GTX 660 and 7870 much closer, thanks to AMD’s more aggressive throttling through PowerTune. This is one of the only times the GTX 660 isn’t competitive with the 7850 in some fashion, though based on our experience our Metro results are more meaningful than our OCCT results right now.

As for idle temperatures, there are no great surprises. A good blower can hit around 30C in our testbed, and that’s exactly what we see.

Temperatures under Metro look good enough, though despite their power advantage NVIDIA can’t keep up with the blower-equipped 7800 series. At the risk of spoiling our noise results, the 7800 series doesn’t do significantly worse on noise, so it’s not immediately clear why the GTX 660 is 6C warmer here. Our best guess is that the GTX 660’s cooler just isn’t quite up to the potential of the 7800 series’ reference cooler.

OCCT actually closes the gap between the 7870 and the GTX 660 rather than widening it, which is the opposite of what we would expect given our earlier power data. Reaching the mid-70s, neither card is particularly cool, but both are still well below their thermal limits, meaning there’s plenty of thermal headroom to play with.

Last but not least we have our noise tests, starting with idle noise. Again there are no surprises here; the GTX 660’s blower is solid, producing no more noise than any other standard blower we’ve seen.

While the GTX 660 couldn’t beat the 7870 on temperatures under Metro, it can certainly beat the 7870 when it comes to noise. The difference isn’t particularly great – just 1.4dB – but every bit adds up, and 47.4dB is historically very good for a blower. However the use of a blower on the GTX 660 means that NVIDIA still can’t match the glory of the GTX 560 Ti or GTX 460; for that we’ll have to take a look at retail cards with open air coolers.

Similar to how AMD’s temperature lead eroded with OCCT, AMD’s slight loss in load noise testing becomes a much larger gap under OCCT. A 4.5dB difference is now solidly in the realm of noticeable, and further reinforces the fact that the GTX 660 is the quieter card under both normal and extreme situations.
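
To put those decibel gaps in perspective, here is a quick sketch of our own (not part of our testing methodology) converting a dB difference into a sound power ratio, using the standard 10*log10 relationship; the 1.4dB and 4.5dB values are simply the gaps noted above.

    # Quick illustration: a gap of X dB corresponds to a 10^(X/10) ratio
    # in sound power.
    def power_ratio(db_gap):
        return 10 ** (db_gap / 10)

    for gap in (1.4, 4.5):
        print("%.1fdB gap = %.2fx the sound power" % (gap, power_ratio(gap)))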

We’ll be taking an in-depth look at some retail cards later today with our companion retail card article, but with those results already in hand we can say that despite the use of a blower the “reference” GTX 660 holds up very well. Open air coolers can definitely beat a blower with the usual drawbacks (that heat has to go somewhere), but when a blower is only hitting 47dB, you already have a fairly quiet card. So even a reference GTX 660 (as unlikely as it is to appear in North America) looks good all things considered.

Comments

  • raghu78 - Thursday, September 13, 2012

    Without competition there is no reason for lower pricing. Do you think Nvidia would have cut prices on the GTX 280 if the HD 4870 was not a fantastic performer at less than half the launch price of the GTX 280? AMD made Nvidia look silly with their price/performance. Without competition you can see Intel dictate pricing in the CPU market. Are you so naive that you believe any company will willingly give away profits and margins when there is no competition? You only need to look back to when Nvidia milked the market with its GeForce 8800 Ultra because AMD flopped with R600 aka the HD 2900XT. 850 bucks for a single GPU card.

    http://www.anandtech.com/show/2222
  • chizow - Friday, September 14, 2012

    Sorry, I can't fully agree with that statement. As the article mentions, industry leaders must still compete with themselves in order to continue moving product. For years Intel has continued to excel and innovate without any real competition from AMD, but now they are starting to feel the hit to their sales as their pace of innovation has slowed.

    AMD made a mistake with their 4870 pricing; they went for market share rather than margins, and admitted as much in the RV770 Story here on AnandTech. But all they have to show for that effort is quarter after quarter and year after year of unprofitability. They've since done their best to reverse their fortunes by continuously increasing the asking prices on their top-tier SKUs, but they chose an incredibly poor time to step into "Nvidia flagship" pricing territory with Tahiti.

    If anything, Tahiti's lackluster performance and high price tag relative to 40nm parts enabled Nvidia to offer their midrange ASIC (GK104) as a flagship part. Only now has the market begun to correct itself, as it became clear that 28nm performance could not justify the asking prices when the differences in performance between 28nm and 40nm parts were so small. And who led that charge? Nvidia with Kepler. AMD simply piggy-backed on the price and performance of 40nm, which is why you see the huge drops in MSRP since launch for AMD parts.

    Bringing the discussion full circle, Nvidia knows full well they are competing with themselves even if you take AMD out of the picture, which is why they compare the GTX 660 to the GTX 460 and 8800GT. They fully understand they need to offer compelling increases in performance at the same price points, or the same performance at much cheaper prices (GTX 660 compared to GTX 570) or there is no incentive for their users to upgrade.
  • Ananke - Thursday, September 13, 2012

    Today's AMD prices are so-so OK, especially considering the street prices and bundles.
    This GTX 660 is priced a little too high; this should've been the GTX 670 launch price. The 660 is worth around $189 to me today. I don't understand why people pay a premium for the name. I understand that you may want better driver support under Linux, but for the Windows gamer there is no reason.

    The AMD 7870 is still a better buy for the money today.

    While many people with very old hardware may jump in at this price level, I will pass and wait for the AMD8xxx series. We are almost there :).

    The last two years have been very disappointing in the hardware arena. :(
  • rarson - Friday, September 14, 2012

    Yeah, "we've been over this before." Back then you didn't get it, and you still don't because you're not examining the situation critically and making a rational argument, you're just posting fanboy nonsense. AMD's 28nm parts were expensive because:

    1. They were the first 28nm parts available.
    2. 28nm process was expensive (even Nvidia admits that the cost to shrink has been higher and slower-ramping than previous shrinks).
    3. Wafers were constrained (SoC manufacturers were starting to compete for wafers; this is additional demand that AMD and Nvidia didn't usually have to compete for).
    4. When you have limited supply and you want to make money, which is the entire point of running a business, then you have to price higher to avoid running out of stock too quickly and sitting around with your thumb up your ass waiting for supply to return before you can sell anything. That's exactly what happened when Nvidia launched the 680. Stock was nonexistent for months.

    The fact of the matter is that pricing is determined by a lot more things than just performance and you refuse to accept this. That is why you do not run a business.
  • chizow - Friday, September 14, 2012

    And once again, you're ignoring historical facts and pricing metrics from the exact same IHVs and fab (TSMC):

    1) 28nm offered the lowest increase in performance for the price of any new generation in the last 10 years. To break this down for you, if what you said was actually true about new processes (it's not), then the 28nm increase in performance would've been the 50-100% increase you would expect for 100% of the asking price relative to the previous generation. Except it wasn't; it was only 30-40% for 100% of the price relative to Nvidia's parts, and in AMD's case, it was more like +50% for 150% of the asking price compared to last-gen AMD parts. That is clearly asking more for less relative to last-gen parts.

    2) Getting into the economics of each wafer, Nvidia would've been able to offset any wafer constraints due to the fact that GK104's midrange ASIC size was *MUCH* smaller at ~300mm^2, compared to the usual 500mm^2 of their typical flagship ASICs. This clearly manifested itself in Nvidia's last 2 quarters since GK104 launched, where they've enjoyed much higher than usual profit margins. So once again, even if they had the same number of wafers allocated at the 28nm launch as they did at 40nm or 55nm or 65nm, they would still have more chips per wafer. So yes, while the 680 was supply constrained (artificially, imo), the subsequent 670, 660 Ti, and 660 launches clearly were not.

    3) It's obvious you're not much of an economist, financier, or, hell, even good with simple arithmetic, so stop trying to play armchair CEO. Here are the facts: AMD cards have lost 30-40% of their value in the last 3-4 months, all because Kepler has rebalanced the market to where it should've been from the outset. If that sounds reasonable to you then you probably consider Facebook's IPO a resounding success.

    4) Tahiti parts were a terrible purchase at launch and only now are they even palatable, after 3 significant price drops forced by the launch of their Kepler counterparts. The answer to why they were a terrible purchase is obvious: they offered too little improvement for similar asking prices relative to 40nm parts. Who in their right mind would defend a 7870 offering GTX 570 performance at GTX 570 prices some 20 months after the 570 launched? Oh right, Rarson would....
  • rarson - Tuesday, September 18, 2012

    1. There's no such thing as "pricing metrics." Prices are NOT determined by past prices! You are such a moron. THESE ARE NEW PARTS! They use a NEW PROCESS! They cost more! GET OVER IT!

    2. "Getting into the economics of each wafer"

    You are not allowed to talk about economics. You have already aptly demonstrated that you don't have a clue when it comes to economics. So any time you use the word, I'm automatically ignoring everything that comes after it.

    3. Everything you said next to the number 3 has absolutely nothing to do with my comment and isn't even factually correct.

    4. Everything you said next to the number 4 has absolutely nothing to do with my comment and isn't even factually correct.
  • chizow - Tuesday, September 18, 2012

    1. Nonsense, you obviously have no background in business or economics, EVERYTHING has pricing metrics for valuation or basis purposes. What do you think the stock markets, cost and financial accounting fundamentals are based upon? Valuation that predominantly uses historical data and performance numbers for forward looking performance EXPECTATIONS. Seriously, just stop typing, every line you type just demonstrates the stupidity behind your thought processes.

    2. Sounds like deflection; you brought fab process pricing into the mix. The fact remains Nvidia can crank out almost 4x as many GK104 chips for each GF100/110 chip from a single TSMC 300mm wafer (this is just simple arithmetic, which I know you suck at), and their margins have clearly demonstrated this (this is on their financial statements, which I know you don't understand). Whatever increase in cost there is from 28nm is surely offset by this fact, in my favor (once again demonstrated by Nvidia's increased margins from Kepler).

    3 and 4 are factually correct even though they have nothing to do with your inane remarks, just run the numbers. Or maybe that's part of the problem, since you still seem to think GTX 570/6970 performance at GTX 570/6970 prices some 18 months later is some phenomenal deal that everyone should sidegrade to.

    Fact: AMD tried to sell their new 28nm cards at 100% of the performance and 100% of the price of existing 40nm parts that had been on the market for 15-18 months. These parts lost ~30% of their value in the subsequent 6 months since Kepler launched. Anyone who could not see this happening deserved everything they got, congratulations Rarson. :)
  • CeriseCogburn - Thursday, November 29, 2012

    Only he didn't get anything. He was looking to scrape together a 6850 a few weeks back.
  • MySchizoBuddy - Thursday, September 13, 2012

    So Nvidia chose not to compare the 660 with the 560 but with the 460. Why is that?
  • Ryan Smith - Thursday, September 13, 2012

    I would have to assume it's because the 660 would be so close to the 560 in performance, and because very few mainstream gamers are on a 1-year upgrade cycle. If you picked up a 560 in 2011 you're very unlikely to grab a 660 in 2012.
