Power, Temperature, & Noise

As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.

GeForce GTX 750 Series Voltages
               Ref GTX 750 Ti   Zotac GTX 750 Ti   Zotac GTX 750
Boost Voltage  1.168v           1.137v             1.187v

For those of you keeping track of voltages, you’ll find that the voltages for GM107 as used on the GTX 750 series are not significantly different from the voltages used on GK107. Since we’re looking at a chip built on the same 28nm process as GK107, the voltages needed to drive it to its desired frequencies have not changed.

GeForce GTX 750 Series Average Clockspeeds
                 Ref GTX 750 Ti   Zotac GTX 750 Ti   Zotac GTX 750
Max Boost Clock  1150MHz          1175MHz            1162MHz
Metro: LL        1150MHz          1172MHz            1162MHz
CoH2             1148MHz          1172MHz            1162MHz
Bioshock         1150MHz          1175MHz            1162MHz
Battlefield 4    1150MHz          1175MHz            1162MHz
Crysis 3         1149MHz          1174MHz            1162MHz
Crysis: Warhead  1150MHz          1175MHz            1162MHz
TW: Rome 2       1150MHz          1175MHz            1162MHz
Hitman           1150MHz          1175MHz            1162MHz
GRID 2           1150MHz          1175MHz            1162MHz
Furmark          1006MHz          1032MHz            1084MHz

Looking at average clockspeeds, we can see that our cards are essentially free to run at their maximum boost bins, well above their base clockspeeds or even their official boost clockspeeds. Because these cards operate at such a low TDP, cooling is a non-factor in our testbed setup, with all of these cards easily staying at or below 60C, well under the 80C thermal throttle point that GPU Boost 2.0 uses.

As such they are limited only by TDP, which, as we can see, does make itself felt but is not a meaningful limitation. Both GTX 750 Ti cards become TDP limited at times while gaming, but only for a refresh period or two, pulling their averages down just slightly. The Zotac GTX 750, on the other hand, has no such problem thanks to the power savings of losing an SMX, so it stays at 1162MHz throughout the entire run.
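To illustrate the behavior described above, here is a minimal, hypothetical sketch in Python of how a GPU Boost 2.0-style governor might pick a clock bin from temperature and board power, and how a brief TDP-limited period pulls the average clockspeed down slightly. The bin values, limits, and the simple proportional power model are made up for illustration; this is not NVIDIA's actual algorithm or boost table.

    # Hypothetical sketch of a GPU Boost 2.0-style governor: the GPU runs at its
    # highest boost bin unless doing so would exceed the ~80C thermal throttle
    # point or the board's TDP limit. All values here are illustrative only.
    BOOST_BINS_MHZ = [1020, 1071, 1110, 1137, 1150]  # base clock up to max boost bin
    THERMAL_LIMIT_C = 80.0
    TDP_LIMIT_W = 60.0

    def pick_bin(temp_c: float, power_at_max_w: float) -> int:
        """Return the highest boost bin allowed by the thermal and power limits."""
        clock = BOOST_BINS_MHZ[0]
        for bin_mhz in reversed(BOOST_BINS_MHZ):          # try the highest bin first
            clock = bin_mhz
            # Crude model: assume board power scales linearly with clockspeed.
            projected_power = power_at_max_w * bin_mhz / BOOST_BINS_MHZ[-1]
            if temp_c < THERMAL_LIMIT_C and projected_power <= TDP_LIMIT_W:
                break                                     # this bin fits both limits
        return clock

    # Per-refresh-period samples from a gaming run: a cool GPU that bumps into
    # its TDP limit for only a couple of periods, as described above.
    samples = [(62.0, 58.0)] * 98 + [(63.0, 61.5)] * 2    # (temp in C, power in W)
    clocks = [pick_bin(t, p) for t, p in samples]
    print(f"average clock: {sum(clocks) / len(clocks):.0f}MHz")  # just under 1150MHz

A couple of TDP-limited samples drop the governor a few bins for those periods, which is enough to shave a single megahertz or so off the run-long average, matching the pattern in the table above.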

[Charts: Idle Power Consumption; Load Power Consumption - Crysis 3; Load Power Consumption - FurMark; Idle GPU Temperature; Load GPU Temperature - Crysis 3; Load GPU Temperature - FurMark; Idle Noise Levels; Load Noise Levels - Crysis 3; Load Noise Levels - FurMark]

Comments

  • EdgeOfDetroit - Tuesday, February 18, 2014 - link

The EVGAs have DisplayPort, but they might be the only ones. I ordered the Superclocked 750 Ti with the $5 rebate from Newegg because it had a DisplayPort and the competitors did not.
  • Death666Angel - Tuesday, February 18, 2014 - link

    "the 760 has been out for almost a year now and is an older process" -> Still the same 28nm process for the 760 and 750 alike. :)
  • MrPoletski - Tuesday, February 18, 2014 - link

This jump in cache from 128k to 2MB... I wonder what that does for cryptocurrency mining?
  • The Von Matrices - Tuesday, February 18, 2014 - link

    Unless the integer shift operation has been improved, not much.
  • g101 - Tuesday, February 18, 2014 - link

Nothing, Nvidia is fundamentally deficient with integer compute; these are architectural decisions that Nvidia made in hopes of squeezing out slightly better FPS. Think: anti-GPGPU, or more of a classic ASIC.

So no, this arch isn't going to change their position with regards to the actual algorithms. Perhaps there will be a moderate increase in scrypt SHA2 performance (due to the memory-hard nature of that implementation); however, Nvidia's extreme (and sometimes intentional) incompetence with GPGPU leads me to believe that they still do not understand that GPGPU is the reason AMD's cards are above MSRP. It's not due to one specific hashing function, it's due to their superiority in over 11 specific functions, superior general OpenCL performance, and comparatively greater performance for many SP compute intensive CUDA applications. For instance, cross-comparison between CUDA and OpenCL raycasting yields some very interesting results, with the OpenCL/AMD solutions outperforming CUDA 2:1, often with greater accuracy.

CUDA is easy; Nvidia has zero compute advantage beyond 'ease'.
  • oleguy682 - Tuesday, February 18, 2014 - link

AMD receives nothing for their cards being sold over MSRP. Their channel partners likely have agreements in place for this generation of processors that are locked in at a specific price or price range. Perhaps if they signed new partners, or revised their processors substantially enough to warrant a new agreement, they could take advantage of the higher-than-MSRP situation, but I doubt it. And even the ASUSes and Gigabytes of the world are likely unable to capitalize much on the demand. At best, they are able to sell boards to retailers as fast as they come off the line.

    Only the Neweggs are profiting handsomely off of this.
  • HighTech4US - Wednesday, February 19, 2014 - link

Von and g101, you are both wrong, as Maxwell has now greatly improved integer compute. Check out the following review page from Tom's:

    http://www.tomshardware.com/reviews/geforce-gtx-75...

    Quote: Historically, Nvidia's cards came up short against competing Radeons, which is why you see R9 290X boards selling for $700 and up. But the Maxwell architecture's improvements allow the 60 W GeForce GTX 750 Ti to outperform the 140 W GeForce GTX 660 and approach AMD's 150 W Radeon R7 265, which just launched, still isn't available yet, but is expected to sell for the same $150. On a scale of performance (in kH/s) per watt, that puts Nvidia way out ahead of AMD. Today, four GM107-based cards in a mining rig should be able to outperform a Radeon R9 290X for less money, using less power.
  • Yojimbo - Wednesday, February 19, 2014 - link

    Which is good for NVidia, maybe just lucky. Increasing gamer market share in exchange for some short-term profits is probably a good trade-off for Nvidia. If AMD can't maintain their market share, they'll have less muscle behind their Mantle initiative.
  • hpvd - Tuesday, February 18, 2014 - link

Does this first small Maxwell bring support for Unified Virtual Memory Management in hardware? If yes, it would be really interesting to see how efficiently it could work...
    For details, see:
    http://www.anandtech.com/show/7515/nvidia-announce...
  • willis936 - Tuesday, February 18, 2014 - link

    I would like very much to see a comparison of GM107 in SLI to other $300 graphics card options. Us 560 Ti owners are in a tough position because it's upgradin' time and there's no decent, quiet solution. SLI is still a bit of a hack and from what I can tell can be more of a compatibility headache than a performance gain. These cards may be the exception though.
