GeForce 700M Models and Specifications

With that brief introduction out of the way, here are the specs of the newly announced 700M family. If I had to guess, I expect we’ll see revised high-end 700M parts sometime later this year based on tweaked GK106 and GK104 chips (perhaps a GTX 780M with the performance of the GTX 680MX in the power envelope of the GTX 680M), but we’ll have to wait and see what happens.

                   | GeForce GT 750M          | GeForce GT 745M          | GeForce GT 740M
GPU and Process    | 28nm GK107 or GK106      | 28nm GK107               | 28nm GK107
CUDA Cores         | 384                      | 384                      | 384
GPU Clock          | Up to 967MHz plus Boost  | Up to 837MHz plus Boost  | Up to 980MHz plus Boost
Memory Eff. Clock  | Up to 5.0GHz             | Up to 5.0GHz             | Up to 5.0GHz
Memory Bus         | Up to 128-bit            | Up to 128-bit            | Up to 128-bit
Memory Bandwidth   | Up to 80GB/s             | Up to 80GB/s             | Up to 80GB/s
Memory             | Up to 2GB GDDR5 or DDR3  | Up to 2GB GDDR5 or DDR3  | Up to 2GB GDDR5 or DDR3

Compared to the previous generation GTX 660M, GT 650M, GT 645M, and GT 640M (not to mention the GT 640M LE), the new chips all have the same core set of features, but now with GPU Boost 2.0 and higher memory clocks. I wish NVIDIA would just drop support for DDR3 on their higher-end chips, and likewise the “up to” clauses aren’t really helpful, but both are necessary evils of working with OEMs that sometimes have slightly different requirements. Overall, performance of these new 700M parts should be up 15-25% relative to the previous models, thanks to higher GPU and memory clock speeds.
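To put rough numbers on that claim, here’s a quick back-of-the-envelope sketch (Python, illustrative only); the previous-generation GT 650M clocks used below are commonly quoted figures and should be treated as assumptions rather than anything taken from NVIDIA’s 700M materials.

```python
# Rough sanity check on the ~15-25% generational uplift, assuming game
# performance scales somewhere between the core clock and memory clock gains.
old_core_mhz, new_core_mhz = 835, 967  # GT 650M GDDR5 (assumed) vs. GT 750M "up to" base
old_mem_ghz, new_mem_ghz = 4.0, 5.0    # effective GDDR5 clocks (previous-gen value assumed)

core_gain = new_core_mhz / old_core_mhz - 1  # ~15.8%
mem_gain = new_mem_ghz / old_mem_ghz - 1     # 25.0%

print(f"Core clock gain:   {core_gain:.1%}")
print(f"Memory clock gain: {mem_gain:.1%}")
# A game that's partly core-bound and partly bandwidth-bound should land
# somewhere between the two figures, i.e. roughly the 15-25% quoted above.
```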

You’ll note that the core clocks appear to be a little crazy, but this is based largely on how the OEMs choose to configure a specific laptop. With both GDDR5 and DDR3 variants available, NVIDIA wants to keep performance of chips with the same name within 10% of each other. Thus, we could see a GT 740M with 2.5GHz GDDR5 and a moderate core clock, another GT 740M with 2.0GHz GDDR5 and a slightly higher core clock, and a third variant with 1800MHz DDR3 matched to a 980MHz core clock. Presumably, most (all?) currently planned GT 750M and GT 745M laptops are using GDDR5 memory, and thus we don’t see the higher core clocks. As for the Boost clocks, in practice Boost can increase the GPU core speed by 15% or more over the normal value, with most games realizing a 10-15% performance increase as a result.
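As a concrete illustration of that balancing act, here’s a minimal sketch (Python, illustrative only) of the peak memory bandwidth each of those hypothetical GT 740M configurations would deliver, assuming all three use the full 128-bit bus from the spec table; the DDR3 variant could just as easily ship with a narrower bus, which would lower its figure.

```python
# Peak memory bandwidth for the three hypothetical GT 740M configurations
# mentioned above: bus width (in bytes) x effective memory clock (in GT/s).
def bandwidth_gbps(bus_width_bits: int, effective_clock_ghz: float) -> float:
    return bus_width_bits / 8 * effective_clock_ghz

configs = [
    ("2.5GHz GDDR5, moderate core clock", 128, 2.5),
    ("2.0GHz GDDR5, slightly higher core clock", 128, 2.0),
    ("1800MHz DDR3, 980MHz core clock", 128, 1.8),
]
for label, bus_bits, mem_ghz in configs:
    print(f"GT 740M, {label}: {bandwidth_gbps(bus_bits, mem_ghz):.1f} GB/s peak")
# 40.0 / 32.0 / 28.8 GB/s -- the higher core clocks on the slower-memory parts
# are what NVIDIA is counting on to keep delivered performance within ~10%.
```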

One final item of interest is that while the GT 750M appears to have a similar configuration to the other GPUs—384 cores, 128-bit memory interface—at least in the chip shots provided the GT 750M uses a different GPU core. Based on the appearance in the above images, the GT 750M uses GK106, only it’s what would be called a “floor sweeper” model: any GK106 chip with too many defective cores to be used elsewhere can end up configured basically the same as GK107. Presumably, there will also be variants that use GK107 (or potentially GK208, just like the other parts), but NVIDIA wouldn’t confirm or deny this.

                   | GeForce GT 735M          | GeForce GT 730M          | GeForce GT 720M          | GeForce 710M
GPU and Process    | 28nm GK208               | 28nm GK208               | 28nm Fermi               | 28nm Fermi
CUDA Cores         | 384                      | 384                      | 96                       | 96
GPU Clock          | Up to 889MHz plus Boost  | Up to 719MHz plus Boost  | Up to 938MHz with Boost  | Up to 800MHz with Boost
Memory Eff. Clock  | Up to 2.0GHz             | Up to 2.0GHz             | Up to 2.0GHz             | Up to 1.8GHz
Memory Bus         | Up to 64-bit             | Up to 64-bit             | Up to 64-bit             | Up to 64-bit
Memory Bandwidth   | Up to 16GB/s             | Up to 16GB/s             | Up to 16GB/s             | Up to 14.4GB/s
Memory             | Up to 2GB DDR3           | Up to 2GB DDR3           | Up to 2GB DDR3           | Up to 2GB DDR3

Moving on to the lower end of the 700M range, we have the GT 730M and 710M that have already shown up in a few laptops. Joining them are GT 735M and GT 720M, which are similar chips with higher clocks. All of these chips have 64-bit memory interfaces and that will obviously curtail performance a bit, but NVIDIA is targeting Ultrabooks and other thin form factors here so performance and thermals need to be kept in balance; more on this in a moment.

The GT 735M and 730M at least are “new” parts that we haven’t seen previously in the Kepler family. The word is that some OEMs were after more economical alternatives than even the GT 640M LE, and the option to go with a 64-bit interface opens up some new markets. It’s basically penny-pinching on the part of the OEMs, but we’ve complained about BoM cost-saving measures plenty, so we won’t get into it here. NVIDIA did mention that they’ve spent some additional time tuning the drivers for performance over a 64-bit bus on these chips, and their primary competition in the iGPU market is going to be HD 4000 running on a ULV chip, and in the near future HD 4600 with Haswell. They’ll also compete with AMD APUs and dGPUs, obviously, but NVIDIA is more interested in trying to show laptop vendors and users what they gain by adding an NVIDIA dGPU to an Intel platform.
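For a sense of the bandwidth gap that driver tuning has to work with, here’s a quick back-of-the-envelope sketch (Python, illustrative only); the dual-channel DDR3-1600 figure is our assumption about a typical Ultrabook configuration, not something NVIDIA provided.

```python
# Dedicated 64-bit DDR3 on the low-end 700M parts vs. the system memory an
# HD 4000 iGPU shares with the CPU (assuming dual-channel DDR3-1600).
dgpu_bw = 64 / 8 * 2.0      # 16.0 GB/s, dedicated to the GPU (GT 735M/730M/720M)
igpu_bw = 2 * 64 / 8 * 1.6  # 25.6 GB/s, but contended between the CPU and iGPU

print(f"dGPU dedicated DDR3: {dgpu_bw:.1f} GB/s")
print(f"iGPU shared dual-channel DDR3-1600: {igpu_bw:.1f} GB/s")
# The raw numbers are in the same ballpark, which is why driver-side tuning for
# the narrow bus (and the fact that the dGPU memory isn't shared) matters here.
```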

Comments
  • StevoLincolnite - Monday, April 1, 2013 - link

    Not really. Overclocking is fine if you know what you're doing.
    Years ago I had a Pentium M 1.6ghz notebook with a Mobility Radeon 9700 Pro.
    Overclocked that processor to 2.0ghz+ and the Graphics card core clock was almost doubled.
    Ran fine for years, eventually the screen on it died due to sheer age, but I'm still using it as a file server hooked up to an old monitor to this day, with about a half dozen external drives hanging off it.
  • JarredWalton - Monday, April 1, 2013 - link

    Hence the "with very few exceptions". You had a top-end configuration and overclocked it, but that was years ago. Today with Turbo Boost the CPUs are already pushing the limits most of the time in laptops (and even in desktops unless you have extreme cooling). GPUs are doing the same now with GPU Boost 2.0 (and AMD has something similar, more or less). But if you have a high-end Clevo, you can probably squeeze an extra 10-20% from overclocking (YMMV).

    But if we look at midrange offerings with GT 640M LE...well, does anyone really think an Acer M5 Ultrabook is going to handle the thermal load or power load of a GPU that's running twice as fast as spec over the long haul? Or what about a Sony VAIO S 13.3" and 15.5" -- we're talking about Sony, who is usually so worried about form that they underclock GPUs to keep their laptops from overheating. Hint: any laptop that's really thin isn't going to do well with GPU or CPU overclocking! I know there was a Win7 variant of the Sony VAIO S that people overclocked (typically 950MHz was the maximum anyone got stable), but that was also with the fans set to "Performance".

    Considering the number of laptops I've seen where dust buildup creates serious issues after six months, you're taking a real risk. The guys who are pushing 950MHz overclocks on 640M LE are also the same people that go and buy ultra-high-end desktops and do extreme overclocking, and when they kill a chip it's just business as usual. Again, I reiterate that I have seen enough issues with consumer laptops running hot, especially when they're over a year old, that I suggest restraint with laptop overclocking. You can do it, but don't cry to NVIDIA or the laptop makers when your laptop dies!
  • transphasic - Monday, April 1, 2013 - link

    Totally agreed. I had a Clevo/Sager laptop with the 9800M GTX in it, and after only two years it died due to the Nvidia GPU getting fried to a crisp. The heat build-up from internal dust accumulation was what destroyed my $2700 laptop after only 2 years of use.
    Ironically, I was thinking about overclocking it prior to it dying on me. In looking back, good thing I didn't do it. Overclocking is risky, and the payoffs are just not worth it, unless you are ready to take the expensive financial risks involved.
  • Drasca - Tuesday, April 2, 2013 - link

    I've got a Clevo x7200 and I just cleaned out a wall of dust after discovering it was thermal throttling hard core. I've got to hand it to the internals and cooling of this thing though, it was still running like a champ.

    This thing's massive cooling is really nice.

    I can stably overclock the 485M GPU from 575MHz to 700MHz without playing with voltages. No significant difference in temps, especially compared to when it was throttling. Runs at 61C.

    I love the cooling solution on this thing.
  • whyso - Monday, April 1, 2013 - link

    It depends really. As long as you don't touch voltage the temperature does not rise much. I have a 660m and it reaches 1085/2500 without any problems (ASIC rating of 69%). Overclocked vs non-overclocked is basically a 2 degree difference (72 vs 74 degrees). Better than a stock 650 desktop.

    Also considering virtually every 660m I have seen boost up to 950/2500 from 835/2000 I don't think the 750m is going to be any upgrade. Many 650m have a boost of 835 core so there really is no upgrade there either (maybe 5-10%). GK107 is fine with 64 GB/sec bandwidth.
  • whyso - Monday, April 1, 2013 - link

    Whoops sorry didn't see the 987 clocks, nice jump there.
  • JarredWalton - Monday, April 1, 2013 - link

    Funny thing is that in reading comments on some of the modded VBIOS stuff for the Sony VAIO S, the modder says, "The Boost clock doesn't appear to be working properly so I just set it to the same value..." Um, think please, Mr. Modder. The Boost clock is what the GPU is able to hit when certain temperature and power thresholds are not exceeded; if you overclock, you've likely inherently gone beyond what Boost is designed to do.

    Anyway, a 2C difference for a 660M isn't a big deal, but you're also looking at a card with a default 900MHz clock, so you went up in clocks by 20% and had a 3% temperature increase (and no word on fan speed). Going from 500MHz to 950MHz is likely going to be more strenuous on the system and components.
  • damianrobertjones - Monday, April 1, 2013 - link

    "and their primary competition in the iGPU market is going to be HD 4000 running on a ULV chip!"

    Wouldn't that be the HD 4600? Also, it's a shame that no one really tests the HD 4000 paired with something like Vengeance RAM, which improves performance.
  • HisDivineOrder - Monday, April 1, 2013 - link

    So if the "core hardware" is the same from Boost 1 and 2, then nVidia should go on and make Boost 2.0 be something we all can enable in the driver.

    Or... are they trying to get me to upgrade to new hardware to activate a feature my card is already fully capable of supporting? Haha, nVidia, you so crazy.
  • JarredWalton - Monday, April 1, 2013 - link

    There may be some minor difference in the core hardware (some extra temperature or power sensors?), but I'd be shocked if NVIDIA offered an upgrade to Boost 1.0 users via drivers -- after all, it looks like half of the performance increase from 700M is going to come from Boost 2.0!
