GeForce 700M Models and Specifications

With that brief introduction out of the way, here are the specs of the now-announced 700M family. If I had to guess, I'd expect we'll see revised high-end 700M parts sometime later this year based on tweaked GK106 and GK104 chips (perhaps a GTX 780M with the performance of the GTX 680MX in the power envelope of the GTX 680M), but we'll have to wait and see what happens.

|  | GeForce GT 750M | GeForce GT 745M | GeForce GT 740M |
| --- | --- | --- | --- |
| GPU and Process | 28nm GK107 or GK106 | 28nm GK107 | 28nm GK107 |
| CUDA Cores | 384 | 384 | 384 |
| GPU Clock | Up to 967MHz plus Boost | Up to 837MHz plus Boost | Up to 980MHz plus Boost |
| Memory Eff. Clock | Up to 5.0GHz | Up to 5.0GHz | Up to 5.0GHz |
| Memory Bus | Up to 128-bit | Up to 128-bit | Up to 128-bit |
| Memory Bandwidth | Up to 80GB/s | Up to 80GB/s | Up to 80GB/s |
| Memory | Up to 2GB GDDR5 or DDR3 | Up to 2GB GDDR5 or DDR3 | Up to 2GB GDDR5 or DDR3 |

Compared to the previous-generation GTX 660M, GT 650M, GT 645M, and GT 640M (not to mention the GT 640M LE), the new chips all have the same core feature set but now add GPU Boost 2.0 and higher memory clocks. I wish NVIDIA would just drop DDR3 support on their higher-end chips, and likewise the “up to” qualifiers aren’t particularly helpful, but both are necessary evils that come from working with OEMs that sometimes have slightly different requirements. Overall, performance of the new 700M parts should be up 15-25% relative to the previous models, thanks to higher GPU and memory clock speeds.

You’ll note that the core clocks appear to be a little crazy, but that’s largely a function of how OEMs choose to configure a specific laptop. With both GDDR5 and DDR3 variants available, NVIDIA wants to keep the performance of chips with the same name within 10% of each other. Thus, we could see a GT 740M with 2.5GHz GDDR5 and a moderate core clock, another GT 740M with 2.0GHz GDDR5 and a slightly higher core clock, and a third variant with 1800MHz DDR3 matched to a 980MHz core clock. Presumably, most (all?) currently planned GT 750M and GT 745M laptops use GDDR5 memory, which is why we don’t see the higher core clocks there. As for the Boost clocks, in practice Boost can increase the GPU core speed by 15% or more over the base value, with most games realizing a 10-15% performance improvement as a result.
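
To put those memory options in perspective, here's a minimal back-of-the-envelope sketch (Python) of the peak bandwidth each hypothetical GT 740M configuration would offer. The 128-bit bus comes from the table above; treating "2.5GHz GDDR5" as a 5.0GT/s effective rate (GDDR5 transfers data four times per memory clock) is our assumption, made so the numbers line up with the table's 5.0GHz and 80GB/s maximums.

```python
# Peak memory bandwidth for the hypothetical GT 740M variants discussed above.
# Assumption: "2.5GHz GDDR5" is the data clock, i.e. 5.0GT/s effective, and
# "2.0GHz GDDR5" is 4.0GT/s; DDR3 at 1800MHz is already the effective rate.
def mem_bandwidth_gbps(bus_bits, transfer_rate_gtps):
    # Bandwidth (GB/s) = bus width in bytes x effective transfer rate (GT/s)
    return (bus_bits / 8) * transfer_rate_gtps

variants = [
    # (label, bus width in bits, effective transfer rate in GT/s)
    ("GT 740M, 2.5GHz GDDR5", 128, 5.0),
    ("GT 740M, 2.0GHz GDDR5", 128, 4.0),
    ("GT 740M, 1800MHz DDR3", 128, 1.8),
]

for label, bus_bits, rate in variants:
    print(f"{label}: {mem_bandwidth_gbps(bus_bits, rate):.1f} GB/s peak")
```

The DDR3 configuration ends up with roughly a third of the bandwidth of the top GDDR5 option, which is presumably why it gets paired with the highest (980MHz) core clock to stay within that 10% window.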

One final item of interest is that while the GT 750M appears to have a similar configuration to the other GPUs—384 cores, 128-bit memory interface—at least in the chip shots provided, the GT 750M uses a different GPU core. Based on the images above, the GT 750M uses GK106, only as what would be called a “floor sweeper” model: any GK106 chip with too many defective cores to be used elsewhere can end up configured basically the same as GK107. Presumably there will also be variants that use GK107 (or potentially GK208, just like the other parts), but NVIDIA wouldn’t confirm or deny this.

|  | GeForce GT 735M | GeForce GT 730M | GeForce GT 720M | GeForce 710M |
| --- | --- | --- | --- | --- |
| GPU and Process | 28nm GK208 | 28nm GK208 | 28nm Fermi | 28nm Fermi |
| CUDA Cores | 384 | 384 | 96 | 96 |
| GPU Clock | Up to 889MHz plus Boost | Up to 719MHz plus Boost | Up to 938MHz with Boost | Up to 800MHz with Boost |
| Memory Eff. Clock | Up to 2.0GHz | Up to 2.0GHz | Up to 2.0GHz | Up to 1.8GHz |
| Memory Bus | Up to 64-bit | Up to 64-bit | Up to 64-bit | Up to 64-bit |
| Memory Bandwidth | Up to 16GB/s | Up to 16GB/s | Up to 16GB/s | Up to 14.4GB/s |
| Memory | Up to 2GB DDR3 | Up to 2GB DDR3 | Up to 2GB DDR3 | Up to 2GB DDR3 |

Moving on to the lower end of the 700M range, we have the GT 730M and 710M that have already shown up in a few laptops. Joining them are GT 735M and GT 720M, which are similar chips with higher clocks. All of these chips have 64-bit memory interfaces and that will obviously curtail performance a bit, but NVIDIA is targeting Ultrabooks and other thin form factors here so performance and thermals need to be kept in balance; more on this in a moment.
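
As a quick sanity check on what a 64-bit bus means in practice, the peak bandwidth math works out as follows (clocks taken from the table above); even at the same DDR3 speed, halving the bus width halves the available bandwidth:

```python
# Peak bandwidth (GB/s) = bus width in bytes x effective transfer rate (GT/s)
print(64 / 8 * 2.0)   # 16.0 GB/s -- GT 735M/730M/720M with 2.0GHz DDR3
print(64 / 8 * 1.8)   # 14.4 GB/s -- GeForce 710M with 1.8GHz DDR3
print(128 / 8 * 5.0)  # 80.0 GB/s -- the 128-bit GDDR5 parts above, for comparison
```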

The GT 735M and 730M at least are “new” parts that we haven’t seen previously in the Kepler family. The word is that some OEMs were after more economical alternatives than even the GT 640M LE, and the option to go with a 64-bit interface opens up some new markets. It’s basically penny-pinching on the part of the OEMs, but we’ve complained about BoM cost-saving measures plenty of times before, so we won’t get into it here. NVIDIA did mention that they’ve spent some additional time tuning the drivers for performance over a 64-bit bus on these chips, and their primary competition in the iGPU market is going to be HD 4000 running on a ULV chip—and in the near future, HD 4600 with Haswell. They'll also compete with AMD APUs and dGPUs, obviously, but NVIDIA is more interested in showing laptop vendors and users what they gain by adding an NVIDIA dGPU to an Intel platform.

Comments

  • HisDivineOrder - Monday, April 1, 2013 - link

    Yeah. I kinda figured. Still, if it's the same, then it'd be foolish not to ask.

    I knew when I heard about Boost 2.0 in Titan that all that time spent discussing it was going to mean it would show up in Kepler refresh products relatively soon afterward. I wouldn't be surprised to see nVidia refresh even the desktop high end with something like that. Minor changes, including slightly higher clocks and a newer Boost.

    Even a "minor change" would probably be enough to ruin AMD's next six months.
  • Guspaz - Monday, April 1, 2013 - link

    I've been using GeForce Experience, and have some comments. It's definitely a timesaver, and it's nice to be able to "just click a button" and not have to worry about tweaking the detailed settings (although it's nice to still be able to if I want to override something). I find that the settings it picks generally do run at a good framerate on my rig. It also makes driver updates easier, since it presents you with notification of new drivers (even betas), gives you a nice list of changes in each version, and makes install a one-click affair (it downloads/installs inside the app).

    Downsides? First, it doesn't support very many games. This is understandable since supporting a game means they need to have setting profiles for every one of their cards, but also a whole lot of other configurations such as different CPUs and different monitor resolutions. Unless there is some sort of dynamic algorithm involved, that would be an enormous number of potential configs per game. Still, the limited game support is unfortunate. Second, the app will continually notify you that new optimized settings are available, even when the new settings it downloaded are not for any game you have installed. So it keeps telling me there are new settings, but when I go into the app to check, there are no updates for any of my games.
  • Wolfpup - Monday, April 1, 2013 - link

    I hadn't heard of this program, and have to say it's kind of a cool idea. Heck, *I* don't always like messing around with sometimes vague settings in games, so I think for the average user this could be really cool, and it does indeed help make things more console-like.
  • HisDivineOrder - Monday, April 1, 2013 - link

    I like that they went in and started supporting prioritizing resolution. So instead of just abstractly telling me to change my 2560x1600 to 1920x1200/1080, they leave it at 2560x1600 now. That's good.

    Plus, their latest release notes also said they were adding SLI support, which is great.

    The thing that I think this program lacks is the option to set certain settings that you want to be true regardless and then have the program adjust to recommend specs around certain "givens" that you won't ever change.

    For example, I'm not a big fan of AA unless ABSOLUTELY every other performance setting can already be turned all the way up. I can imagine some people might want AA at all costs because jaggies just bug them.

    I think we should both have the option to prioritize for the setting we want. I'd also love it if we had a program like Geforce Experience that let us alter the settings for a game before we entered it and also served as a launcher (much as Geforce Experience does), but I think instead of just doing the defaults, we should have the option to select the settings, choose the "Optimal" as determined by nVidia, and also the option to do the tweaking from right inside the Geforce Experience interface.

    And if I'm adding in wish list items, I'd love it if nVidia would integrate SMAA and FXAA into the program. Hell, I think I'd really prefer it if Geforce Experience would serve a similar function to SweetFX except in an official setting kinda way. So we could tweak the game from Geforce Experience in addition to just it serving as a simple optimizer.

    It could come with an "Advanced" mode. I think a quick launch and/or settings for the control panel might be nice to help us move between different UIs, from adjusting the game to adjusting the profiles to adjusting the settings of the video card. Maybe it should be the same interface with different tabs to interact with each element.

    And... nah. That's it.
  • cjb110 - Tuesday, April 2, 2013 - link

    Mmm, I don't like it; it seems to push the pretty too much and hurt performance no end.
    Now it might be that it's looking at the GPU only, in which case... duh, pointless.

    Nice idea, but it needs to be a bit more balanced in its options!
  • tviceman - Monday, April 1, 2013 - link

    Jarred, can you confirm whether these parts have the same (or very similar) power envelope as the like-named 600-series parts they're replacing?
  • JarredWalton - Monday, April 1, 2013 - link

    My understanding is that they do have the same approximate power envelopes. However, keep in mind that NVIDIA doesn't disclose notebook power TDPs -- they simply say that they work with each OEM to provide what the OEM desires. Thus, two laptops with GT 750M could potentially have TDPs as much as 5-10W apart (or not -- I don't know how much of a difference we're likely to see).
  • Einy0 - Monday, April 1, 2013 - link

    I hate this "up to" crap for specs. It leaves way too much wiggle room for OEMs to underclock the chips to fit a certain cooling profile, and that messes with performance way too much. There should be clearly defined specifications for each GPU model. The typical consumer doesn't understand that a bigger number doesn't necessarily mean faster, and it doesn't make sense to pay more for a higher-end part only to have it nerfed to fit the OEM's cooling solution, etc.
  • JarredWalton - Monday, April 1, 2013 - link

    NVIDIA's policy is that a GPU has to score within 10% of the "stock" GPU in order to have the same model name, so a GT 650M with DDR3 can't be more than 10% off the performance of a GT 650M with GDDR5. Of course, there's a catch: the 10% margin is measured with 3DMark Vantage "Performance" defaults, which aren't nearly as meaningful as using a suite of games for the testing. So basically, I'm with you: it sucks.
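
    For what it's worth, that rule boils down to a simple threshold check. A hypothetical sketch (the function name and scores below are made up; the real comparison uses 3DMark Vantage "Performance" results):

    ```python
    # Hypothetical illustration of the "within 10% of stock" naming rule described above.
    def qualifies_for_name(variant_score, stock_score, margin=0.10):
        # True if the variant scores within `margin` of the stock part
        return variant_score >= stock_score * (1 - margin)

    print(qualifies_for_name(9200, 10000))  # True: 8% slower than stock keeps the name
    print(qualifies_for_name(8700, 10000))  # False: 13% slower would need a lower model name
    ```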
  • random2 - Monday, April 1, 2013 - link

    "... but again considering Windows 8 almost requires a touchscreen to really be useful that’s expected ..."

    Really? Once this OS is set to boot to the desktop, it's a great little OS for those of us who don't run tablets or touch panels.
