Although GTX 770 is already a very highly clocked part for GK104, we still wanted to put it through its paces when it comes to overclocking. Of particular interest here is memory overclocking, as this is the first video card shipping with 7GHz GDDR5 as standard. This will let us poke at things to see just how far both the RAM itself and NVIDIA’s memory controller can go.

Meanwhile the switch to GPU Boost 2.0 for GTX 770 is going to change the overclocking process somewhat compared to GTX 680 and GTX 670. Overvolting gives us marginally higher voltages and boost bins to play with, while on the other hand the replacement of power targets with a TDP limit means that we only get 106% – an extra 14W – to work with in TDP limited scenarios. Thankfully, as we’ve seen, we’re generally not TDP limited on GTX 770 at stock, which means our effective headroom should be greater than that.
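For reference, here is a minimal sketch of where that 14W figure comes from, assuming the GTX 770's 230W TDP (the TDP figure itself isn't listed in this section):

```python
# Rough sketch of the GPU Boost 2.0 headroom math described above.
# Assumes a 230W TDP for GTX 770 (not stated in this section).
tdp_watts = 230
tdp_limit = 1.06  # the 106% maximum TDP target

extra_watts = tdp_watts * (tdp_limit - 1.0)
print(f"Extra board power available: {extra_watts:.0f}W")  # ~14W
```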

GeForce GTX 770 Overclocking
                    Stock       Overclocked
Core Clock          1046MHz     1146MHz
Boost Clock         1085MHz     1185MHz
Max Boost Clock     1136MHz     1241MHz
Memory Clock        7GHz        8GHz
Max Voltage         1.2v        1.212v

We’re actually a bit surprised we were able to get another 100MHz out of the GPU itself. Even without the extra overvoltage boost bin, we’re still pushing 1200MHz+ at 1.2v, which is doing rather well for GK104. Of course this is only a 9% increase in GPU clockspeed, which pales in comparison to parts like GTX 670 and GTX 780, each of which can do 20%+ thanks to their lower base clockspeeds. So there’s some overclocking headroom in GTX 770, but as expected it’s not a lot.
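As a quick sanity check on that figure, a small sketch computing the percentage gains from the clocks in the table above:

```python
# Percentage gains implied by the stock vs. overclocked GPU clocks above.
stock = {"core": 1046, "boost": 1085, "max boost": 1136}  # MHz
oc    = {"core": 1146, "boost": 1185, "max boost": 1241}  # MHz

for name in stock:
    gain = (oc[name] / stock[name] - 1) * 100
    print(f"{name:>9s}: +{gain:.1f}%")
# core ~+9.6%, boost and max boost ~+9.2%
```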

More interesting however is the memory overclock. We’ve been able to put another 1GHz on 6GHz GTX 680 cards in the past, and with the 7GHz base GTX 770 we’ve been able to pull off a similar overclock, pushing our GTX 770 to an 8GHz memory clock. The fact that NVIDIA’s memory controller can pull this off is nothing short of impressive; we had expected there to be some headroom, but another 14% is beyond our expectations. At this clockspeed the GTX 770 has a full 256GB/sec of memory bandwidth, 33% more than both a stock GTX 680 and the 384-bit GTX 580. Of course we’ll see if GTX 770 can put that bandwidth to good use.
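A minimal sketch of the bandwidth math behind those numbers: effective data rate times bus width, assuming the 256-bit bus shared by GTX 770/680 and roughly 4GHz effective memory on the 384-bit GTX 580 (the 680 and 580 clocks are not listed in this section):

```python
# Memory bandwidth = effective data rate (GT/s) x bus width (bytes).
def bandwidth_gbs(data_rate_gtps: float, bus_width_bits: int) -> float:
    return data_rate_gtps * (bus_width_bits / 8)

print(bandwidth_gbs(7.0, 256))  # GTX 770 stock:        224.0 GB/s
print(bandwidth_gbs(8.0, 256))  # GTX 770 overclocked:  256.0 GB/s
print(bandwidth_gbs(6.0, 256))  # GTX 680 stock:        192.0 GB/s (~33% less)
print(bandwidth_gbs(4.0, 384))  # GTX 580 (assumed ~4GHz/384-bit): 192.0 GB/s
```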

The end result of our overclocking efforts is a very consistent 9%-12% increase in performance across our games. Roughly 9% is the upper bound for improvements from the GPU overclock alone, so anything past that means we’re also benefitting from the extra memory bandwidth. We aren’t picking up a ton of performance from memory bandwidth as far as we can tell, but it does pay off and is worth pursuing, even with the GTX 770’s already-high base memory clock of 7GHz.

Overall overclocking can help close the gap between the GTX 770 and 7970GE in some games, and extend it in others. But 10% won’t completely close the gap on the GTX 780; at best it can halve it. GTX 780’s stock performance is simply not attainable without the much more powerful GK110 GPU.

Moving on to power consumption, we can see that the 106% TDP limit keeps power usage from jumping up by too much. In Battlefield 3 this amounts to a further 12W at the wall, and a further 21W with FurMark. In games this means our power usage at the wall is still below GTX 780, though we’ve equaled it under FurMark.

The fan curve for GTX 770 appears to be identical to that of GTX 780, which is to say the fan ramps up significantly around 84C, keeping temperatures in the low-to-mid 80s even though GPU Boost 2.0 is allowed to go up to 95C.

Finally, for fan noise we see a small increase under Battlefield 3 and no change under FurMark. At 1.5dB louder under Battlefield 3, noise levels are on par with the GTX 780, sacrificing some of GTX 770’s abnormally quiet acoustics but still keeping noise below the 50dB level. Or to put this another way, the performance gains from overclocking aren’t particularly high, but then again neither is the cost of overclocking in terms of noise.

Comments

  • Enkur - Thursday, May 30, 2013 - link

    Why is there a picture of an Xbox One in the article when it's mentioned nowhere?
  • Razorbak86 - Thursday, May 30, 2013 - link

    The 2GB Question & The Test

    "The wildcard in all of this will be the next-generation consoles, each of which packs 8GB of RAM, which is quite a lot of RAM for video operations even after everything else is accounted for. With most PC games being ports of console games, there’s a decent risk of 2GB cards being undersized when used with high resolutions and the highest quality art assets. The worst case scenario is only that these highest quality assets may not be usable at playable performance, but considering the high performance of every other aspect of GTX 770 that would be a distinct and unfortunate bottleneck."
  • kilkennycat - Thursday, May 30, 2013 - link

    NONE of the release offerings (May 30) of the GTX770 on Newegg have the Titan cooler!!!! Regardless of the pictures in this article and on the GTX7xx main page on Newegg. And no bundled software to "ease the pain" and perhaps help mentally deaden the fan noise..... this product takes more power than the GTX680. Early buyers beware...!!
  • geok1ng - Thursday, May 30, 2013 - link

    "Having 2GB of RAM doesn’t impose any real problems today, but I’m left to wonder for how much longer that’s going to be true. The wildcard in all of this will be the next-generation consoles, each of which packs 8GB of RAM, which is quite a lot of RAM for video operations even after everything else is accounted for. With most PC games being ports of console games, there’s a decent risk of 2GB cards being undersized when used with high resolutions and the highest quality art assets. "

    Last week a noob posted something like that on the 780 review. It was decimated by a slew of tech geeks' comments afterward. I am surprised to see the same kind of reasoning in a text written by an AT expert.

    All AT reviewers by now know that the next consoles will be using an APU from AMD with graphics muscle (almost) comparable to a 6670 (a 5670 in the PS4's case, thanks to GDDR5). So what Mr. Ryan Smith is stating is that an "8GB" 6670 can perform better than a 2GB 770 in video operations?

    I am well aware that Mr. Ryan Smith is over-qualified to help AT readers revisit this old legend of graphics memory:
    How little is too little?

    And please let us not start flaming about memory usage: most modern OSes and gaming engines use available RAM dynamically, so seeing a game use 90%+ of available graphics memory does not imply, at all, that such a game would run faster if we doubled the graphics memory. The opposite is often true.

    As soon as 4GB versions of the 770 launch, AT should pit those versions against the 2GB 770 and the 3GB 7970. Or we could go back a few months and re-read the tests done when the 4GB versions of the 680 came out: only at triple-screen resolutions and insane levels of AA would we see any theoretical advantage of 3-4GB over 2GB, which is largely impractical since most games can't run at those resolutions and AA levels with a single card anyway.

    I think NVIDIA did it right (again): 2GB is enough for today, and we won't see next-gen consoles running triple-screen resolutions at 16xAA+. 2GB means a lower BoM, which is good for profit and price competition, and lower energy consumption, which is good for card temps and max OC results.
  • Enkur - Thursday, May 30, 2013 - link

    I can't believe AT is mixing up the unified graphics and system memory on consoles with the dedicated RAM of a graphics card. Doesn't make sense.
  • Egg - Thursday, May 30, 2013 - link

    PS4 has 8GB of GDDR5 and a GPU somewhat close to a 7850. I don't know where you got your facts from.
  • geok1ng - Thursday, May 30, 2013 - link

    Just to start the flaming war: the next consoles will not run on monolithic GPUs, but on twin Jaguar cores. So when you see those 768/1152 GPU core numbers, remember these are "crossfired" cores. And in both consoles the GPU is running at a mere 800MHz, hence the comparison with the 5670/6670, 480-shader cards @ 800MHz.
    It is widely accepted that console games are developed for the lowest common denominator, in this case the Xbox One's DDR3 memory. Even if we make the huge assumption that dual Jaguar cores running in tandem can work similarly to a 7850 (1024 cores at 860MHz) in a PS4 (which is a huge leap of faith looking back at how badly AMD fared in previous CrossFire attempts using integrated GPUs like these Jaguar cores), the question turns out to be the same:

    Does an 8GB 7850 give us better graphical results than a 2GB 770, for any gaming application in the foreseeable future?

    Don't go 4K on me please: both consoles will be using HDMI, not DisplayPort, and no, they won't be able to drive games across 3 screens. This "next-gen consoles will have more video RAM than high-end GPUs in PCs, so their games will be better" line is reminiscent of the old "1GB DDR2 cards are better than 256MB DDR3 cards for future games" scam.
  • Ryan Smith - Thursday, May 30, 2013 - link

    We're aware of the difference. A good chunk of that unified memory is going to be consumed by the OS, the application, and other things that typically reside on the CPU in a PC. But we're still expecting games to be able to load 3GB+ in assets, which would be a problem for 2GB cards.
  • iEATu - Thursday, May 30, 2013 - link

    Why are you guys using FXAA in benchmarks as high end as these? Especially for games like BF3 where you have FPS over 100. 4x AA for 1080p and 2x for 1440p. No question those look better than FXAA...
  • Ryan Smith - Thursday, May 30, 2013 - link

    In BF3 we're testing both FXAA and MSAA. Otherwise most of our other tests are MSAA, except for Crysis 3 which is FXAA only for performance reasons.
