Overclocking: When Headroom Exceeds Clockspeed Limits

Last but not least we have our customary look at overclocking performance. With all 3 of our cards based on the same reference design, we expect to see relatively consistent results between them. At the same time, NVIDIA has told us that the GTX 750 series has some very interesting overclocking properties, and boy, they weren’t kidding.

On a quick note, as a GPU Boost 2.0 product, overclocking on the GTX 750 series is no different than on other GTX 700 series cards. It’s still based on offset overclocking, with the user adjusting clock offsets to arrive at the final overclock. With that said, there are two things to point out. The first is that the power target is limited to 100% on all cards. Because these are sub-75W cards, NVIDIA is not allowing anyone to exceed the card’s default TDP, so you only have as much power to play with as you started with. Second, none of our cards had overvoltage bins available. Apparently some cards do, but ours did not, so our voltage bins maxed out at the default values listed below.

Finally, all 3 cards have a maximum clock offset of 135MHz. This will be an important fact in a little bit.
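
Since GPU Boost 2.0 manages clocks against that fixed 100% power target in real time, the easiest way to watch the behavior described above is to poll the card while a game or benchmark is running. Below is a minimal monitoring sketch using the NVML Python bindings; the pynvml package, device index 0, and the one-second polling interval are our own assumptions for illustration, not part of NVIDIA’s tooling for these cards.

```python
# Minimal GPU Boost monitoring sketch (assumes the pynvml / nvidia-ml-py
# package and a recent NVIDIA driver are installed; device index 0 assumed).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):          # older pynvml versions return bytes
    name = name.decode()

# Board power limit enforced by the driver (the "100% power target"), in watts.
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0
print(f"Monitoring {name}, enforced power limit {limit_w:.0f}W")

try:
    while True:
        core_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        mem_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
        temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"core {core_mhz:4d}MHz  mem {mem_mhz:4d}MHz  "
              f"power {power_w:5.1f}W / {limit_w:.0f}W  temp {temp_c}C")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```

Run it in a second terminal while a game is looping and you can watch the boost clocks ramp up and down as the card bumps against its power target.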

GeForce GTX 750 Series Overclocking
                             GTX 750 Ti (Ref)    Zotac GTX 750 Ti    Zotac GTX 750
Shipping Core Clock          1020MHz             1033MHz             1033MHz
Shipping Max Boost Clock     1150MHz             1175MHz             1163MHz
Shipping Memory Clock        5.4GHz              5.4GHz              5.0GHz
Shipping Max Boost Voltage   1.168v              1.137v              1.187v

Overclock Core Clock         1155MHz             1168MHz             1168MHz
Overclock Max Boost Clock    1285MHz             1310MHz             1298MHz
Overclock Memory Clock       6.3GHz              6.1GHz              6.0GHz
Overclock Max Boost Voltage  1.168v              1.137v              1.187v

As we can quickly see, two patterns emerge. The first is that every card ships with 6GHz-rated memory (though we remain unsure which mode the Zotac GTX 750’s memory is operating in), and each and every card hits at least 6GHz when overclocked, sometimes a bit more. With the 128-bit memory bus generally being the biggest bottleneck for GM107, the 12%+ of memory overclocking headroom here is going to be very helpful in feeding the tiny beast that is GM107.

More significant, however, is the core overclock: we maxed out every single card. Every card, from the NVIDIA reference card to the Zotac cards, had no trouble overclocking by the full 135MHz to its respective maximum. The Zotac GTX 750 Ti, having the highest default maximum boost clock, is technically the winner here at 1310MHz, but at this point everyone is a winner. Going by the maximum boost clock, every card is capable of an 11% core overclock, to go with that tasty 12%+ memory overclock.
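
For reference, here is the arithmetic behind those figures, worked through as a quick sketch from the shipping and overclocked clocks in the table above (the card labels are just strings for printing).

```python
# Overclock headroom computed from the table above.
cards = {
    # name: (shipping max boost MHz, OC max boost MHz, shipping mem GHz, OC mem GHz)
    "GTX 750 Ti (Ref)": (1150, 1285, 5.4, 6.3),
    "Zotac GTX 750 Ti": (1175, 1310, 5.4, 6.1),
    "Zotac GTX 750":    (1163, 1298, 5.0, 6.0),
}

for name, (boost, oc_boost, mem, oc_mem) in cards.items():
    core_gain = (oc_boost / boost - 1) * 100
    mem_gain = (oc_mem / mem - 1) * 100
    print(f"{name:17s} core +{core_gain:4.1f}%  memory +{mem_gain:4.1f}%")

# Prints core gains of roughly 11.5-11.7% on every card and memory gains of
# 13-20%, which is where the "11% core / 12%+ memory" figures come from.
```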

The fact of the matter is that this is not something we normally encounter. Sub-75W cards are not poor overclockers, but they’re not usually strong overclockers either, which is why a 135MHz offset limit makes sense at first glance. But it’s clear that NVIDIA underestimated their own overclocking potential when setting the specifications for these cards, as there is seemingly some headroom left untapped. Without additional offset room it’s impossible to say just how much more overclocking headroom remains – it may not be very much – but there should be room for at least some additional overclocking.

At this point, with cards already in the retail pipeline, we’ll have to take a look at individual cards and see where manufacturers have set their offset limits. If they have followed NVIDIA’s specifications, then they’ll be equally limited. But hopefully, with the launch now behind them, NVIDIA’s partners can work with NVIDIA on making greater offsets available on newer batches of cards.

[Benchmark charts: Metro: Last Light (1920x1080, High Quality); Company of Heroes 2, average and minimum frame rates (1920x1080, High Quality + Low AA); Bioshock Infinite (1920x1080, Ultra Quality + DDoF); Battlefield 4 (1920x1080, High Quality); Crysis 3 (1920x1080, Medium Quality + FXAA)]

Depending on the game, the benefits from overclocking range from 9% to 12%, roughly in line with our overclocks. For the GTX 750 this is sometimes enough to catch the stock-clocked R7 260X, but even with this overclock the GTX 750 Ti will still generally trail the R7 265.

[Charts: Load Power Consumption, Load GPU Temperature, and Load Noise Levels, each under Crysis 3 and FurMark]

On the other hand, because of the hard 100% TDP limit, this extra performance is relatively cheap. Video card power consumption moves by only a few watts, with a few more watts of CPU power on top of that. For all practical purposes overclocking can extend NVIDIA’s already incredible performance-per-watt ratio by another 10% with no meaningful impact on noise. Given the consistency of the overclocking headroom we’ve seen in our GTX 750 series samples, this is one of those scenarios where overclocking is a reasonable and (relatively) foolproof action to take.
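
As a back-of-the-envelope illustration of why the performance-per-watt math works out this way, consider the sketch below. The wattages are illustrative assumptions (a hypothetical 60W card pinned at its power target and a round-number system draw), not our measured results.

```python
# Rough perf-per-watt effect of a TDP-capped overclock.
# All wattages are illustrative assumptions, not measured values.
baseline_perf = 100.0            # arbitrary performance index at stock clocks
oc_perf = baseline_perf * 1.10   # ~10% average gain from the overclock

card_power_stock = 60.0          # hypothetical sub-75W card sitting at its TDP cap
card_power_oc = 60.0             # power target fixed at 100%, so the cap doesn't move
card_gain = (oc_perf / card_power_oc) / (baseline_perf / card_power_stock) - 1
print(f"Card-level perf/W change:   +{card_gain * 100:.0f}%")    # ~+10%

# At the wall the gain is a bit smaller, since the faster GPU also keeps the
# CPU slightly busier (a few extra watts of system power).
system_power_stock = 160.0       # hypothetical total system draw at stock
system_power_oc = 165.0          # a few watts more with the overclock applied
system_gain = (oc_perf / system_power_oc) / (baseline_perf / system_power_stock) - 1
print(f"System-level perf/W change: +{system_gain * 100:.0f}%")  # ~+7%
```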

Comments

  • TheinsanegamerN - Tuesday, February 18, 2014 - link

    Look at Sapphire's 7750. Superior in every way to the 6570, single-slot and low-profile, and it overclocks like a champ.
  • dj_aris - Tuesday, February 18, 2014 - link

    Sure, but its cooler is kind of loud. Definitely NOT a silent HTPC choice. Maybe a LP 750 would be better.
  • evilspoons - Tuesday, February 18, 2014 - link

    Thanks for pointing that out. None of my local computer stores sell that, but I took a look on MSI's site and sure enough, there it is. They also seem to have an updated version of the same card being sold as an R7 250, although I'm not sure there's any real difference or if it's just a new sticker on the same GPU. Clock speeds, PCB design, and heat sink are the same, anyway.
  • Sabresiberian - Tuesday, February 18, 2014 - link

    I'm hoping the power efficiency means the video cards at the high end will get a performance boost because they are able to cram more SMMs onto the die than the SMXs used in Kepler solutions. This of course assumes the lower power spec means less heat as well.

    I do think we will see a significant performance increase when the flagship products are released.

    As far as meeting DX11.1/11.2 standards - it would be interesting to hear from game devs how much this affects them. Nvidia has never been all that interested in actually meeting all the requirements for Microsoft to give them official status for DX versions, but that doesn't mean the real-world visual quality is reduced. In the end what I care about is visual quality; if it causes them to lose out compared to AMD's offerings, I will jump ship in a heartbeat. So far that hasn't been the case though.
  • Krysto - Tuesday, February 18, 2014 - link

    Yeah, I'm hoping for a 10 Teraflops Titan, so I can pair it with my Oculus Rift next year!
  • Kevin G - Tuesday, February 18, 2014 - link

    nVidia has been quite aggressive with the main DirectX versions. They heavily pushed DX10 back in the day with the GeForce 8000/9000 series. They do tend to de-emphasize smaller updates like 8.1, 10.1, 11.1 and 11.2. This is partially due to their short life spans on the market before the next major update arrives.

    I do expect this to have changed recently, as Windows is moving to a rapid release schedule and it'll be increasingly important to adopt these smaller iterations.
  • kwrzesien - Tuesday, February 18, 2014 - link

    Cards on Newegg are showing DirectX 11.2 in the specs list along with OpenGL 4.4. Not that I trust this more than the review - we need to find out more.
  • JDG1980 - Tuesday, February 18, 2014 - link

    The efficiency improvements are quite impressive considering that they're still on 28nm. TDP is low enough that AIBs should be able to develop fanless versions of the 750 Ti.

    The lack of HDMI 2.0 support is disappointing, but understandable, considering that it exists virtually nowhere. (Has the standard even been finalized yet?) But we need to get there eventually. How hard will it be to add this feature to Maxwell in the future? Does it require re-engineering the GPU silicon itself, or just re-designing the PCB with different external components?

    Given the increasing popularity of cryptocoin mining, some benchmarks on that might have been useful. I'd be interested to know if Maxwell is any more competitive in the mining arena than Kepler was. Admittedly, no one is going to be using a GPU this small for mining, but if it is competitive on a per-core basis, it could make a big difference going forward.
  • xenol - Tuesday, February 18, 2014 - link

    I'm only slightly annoyed that NVIDIA released this as a 700 series and not an 800 series.
  • DanNeely - Tuesday, February 18, 2014 - link

    I suspect that's an indicator that we shouldn't expect the rest of the Maxwell line to launch in the immediate future.
