Performance Expectations

In their presentation and FAQ, NVIDIA provided estimates of performance relative to an Ivy Bridge Core i5 ULV with HD 4000. Before we get to those numbers, we want to quickly set the stage for what NVIDIA is showing. Despite the similarity in name and features, as we discovered last year the ULV chips tend to run into TDP limits when you try to load both the CPU and the iGPU. The CPU cores, for instance, can draw around 12-15W, and a full load on the iGPU can add another 10-15W; stuff both into a 17W TDP and you're going to get throttling, which is exactly what happens.
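
To put rough numbers on the throttling problem, here's a minimal sketch in Python; the wattages are the approximate figures quoted above, not measurements from any specific laptop:

# Illustrative power budget for a 17W ULV Ivy Bridge part.
TDP_W = 17.0         # package power limit
CPU_LOAD_W = 13.5    # ~12-15W with the CPU cores fully loaded
IGPU_LOAD_W = 12.5   # ~10-15W with the HD 4000 fully loaded

combined = CPU_LOAD_W + IGPU_LOAD_W
if combined > TDP_W:
    # The package can't sustain more than its TDP, so clocks scale back
    # until the combined draw fits the budget -- i.e. throttling.
    sustainable_fraction = TDP_W / combined
    print(f"Demand of {combined:.1f}W exceeds the {TDP_W:.0f}W limit; "
          f"only ~{sustainable_fraction:.0%} of it can be sustained, so clocks throttle.")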

Looking at HD 4000 performance with Core i5 ULV and Core i7 quad-core, you can see that the quad-core part is anywhere from 0% to around 60% faster. On average it’s 35% faster at our 2012 “Value” settings and 26% faster at our 2012 “Mainstream” settings. As for the 700M performance relative to Core i5 ULV, NVIDIA provides the following estimates based on benchmarks at moderate detail and 1366x768 in Battlefield 3, Crysis 2, Just Cause 2, DiRT 3, and F1 2011:

Besides the above slide, NVIDIA provided some performance estimates using results from the 3DMark 11 Performance benchmark, and the results are even more heavily in favor of NVIDIA. In their FAQ, NVIDIA states that even the lowly GeForce 710M is three times faster than ULV HD 4000, while the GT 720M is 3.3x faster, the GT 730M and 735M are 4.8x faster (hmmm…do we really need GT 735M?), the GT 740M is 5.3x faster, the GT 745M is 5.8x faster, and the GT 750M is 6.3x faster. Of course, those numbers are from NVIDIA, going up against the much slower ULV variant of Ivy Bridge, and using 3DMark 11—which isn't quite as important as actual gaming performance.
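
To put those claimed multipliers in perspective, here's a quick sketch that converts them into implied 3DMark 11 scores; note that the ULV HD 4000 baseline below is a placeholder assumption for illustration, not a number NVIDIA provided:

# Claimed 3DMark 11 "Performance" speedups vs. ULV HD 4000, per NVIDIA's FAQ.
claimed_speedup = {
    "GeForce 710M": 3.0,
    "GT 720M": 3.3,
    "GT 730M": 4.8,
    "GT 735M": 4.8,
    "GT 740M": 5.3,
    "GT 745M": 5.8,
    "GT 750M": 6.3,
}

HD4000_ULV_BASELINE = 600  # hypothetical baseline score; substitute a measured result

for gpu, multiplier in claimed_speedup.items():
    print(f"{gpu}: ~{int(multiplier * HD4000_ULV_BASELINE)} (claimed {multiplier:.1f}x)")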

I suspect the GT3 and GT3e configurations of Haswell will be substantially faster than IVB's HD 4000 and may come close to the lower end of NVIDIA's range…at least on the standard voltage Haswell chips. For ULV, I've heard performance estimates that GT3 Haswell will be 30-50% faster than GT2 IVB, and GT3e could be roughly twice as fast, but that should still leave NVIDIA with a healthy lead. Anyway, we'd suggest taking all of these numbers with a grain of salt for now. The real comparison for most people is going to be Haswell versus 700M, and while we have a pretty good idea where 700M and HD 4000 performance fall (since the 700M parts are Kepler and Fermi updates), Haswell's iGPU is likely to be a different beast.

Closing Thoughts

On the whole, Kepler has been amazingly successful for NVIDIA, particularly in the mobile world. The bar for midrange mobile dGPUs was raised significantly, with the GT 640M LE and above typically offering anywhere from 25% to 75% better performance than the previous generation while also reducing power use. It was NVIDIA's version of Intel's Core 2 launch, and the vast majority of notebooks with dGPUs seem to be using NVIDIA hardware these days. Much of that can also be attributed to NVIDIA's driver team, where Optimus support and usability still trump AMD's Enduro alternative. AMD continues to improve its drivers, but they're not yet at the same level as NVIDIA's mobile drivers.

Not surprisingly, it looks like every laptop with an NVIDIA dGPU these days also comes with Optimus support, and NVIDIA says they'll be in three times as many Ultrabooks and ultraportables in 2013 compared to 2012—which isn't too hard, since off the top of my head the only two Ultrabooks with NVIDIA dGPUs I can name are the Acer M5 and the ASUS UX32VD. NVIDIA also says they have over 30 design wins for touchscreen laptops, but considering Windows 8 almost requires a touchscreen to really be useful, that's expected. We will likely see a limited number of laptops launching with Ivy Bridge CPUs and 700M dGPUs over the coming weeks; ASUS is specifically listed in NVIDIA's 700M FAQ with the X450 (GT 740M) and N46 (also GT 740M), and Lenovo is a launch day partner with several options: the Y400 with GT 750M, and the Z400/Z500 with GT 740M.

The real launch is likely to coincide with Intel’s Haswell update later in Q2 2013. When that comes along, we're likely to see some additional 700M updates from NVIDIA on the high end (again, echoing what happened with the 600M and 680M launches). Just don't count on seeing a mobile variant of Titan/GK110 for a while yet; I'd peg that level of performance as something we won't see in laptops until we have two more process shrinks under our belts (i.e. when TSMC is at 16nm).

Comments

  • HisDivineOrder - Monday, April 1, 2013 - link

    Yeah. I kinda figured. Still, if it's the same, then it'd be foolish not to ask.

    I knew when I heard about Boost 2.0 in Titan that all that time spent discussing it was going to mean it would show up in Kepler refresh products relatively soon afterward. I wouldn't be surprised to see nVidia refresh even the desktop high end with something like that. Minor changes, including slightly higher clocks and a newer Boost.

    Even a "minor change" would probably be enough to ruin AMD's next six months.
  • Guspaz - Monday, April 1, 2013 - link

    I've been using GeForce Experience, and have some comments. It's definitely a timesaver, and it's nice to be able to "just click a button" and not have to worry about tweaking the detailed settings (although it's nice to still be able to if I want to override something). I find that the settings it picks generally do run at a good framerate on my rig. It also makes driver updates easier, since it presents you with notification of new drivers (even betas), gives you a nice list of changes in each version, and makes install a one-click affair (it downloads/installs inside the app).

    Downsides? First, it doesn't support very many games. This is understandable, since supporting a game means they need setting profiles not just for every one of their cards but also for a whole lot of other configurations, such as different CPUs and different monitor resolutions. Unless there is some sort of dynamic algorithm involved, that would be an enormous number of potential configs per game. Still, the limited game support is unfortunate. Second, the app will continually notify you that new optimized settings are available, even when the newly downloaded settings aren't for any game you have installed. So it keeps telling me there are new settings, but when I go into the app to check, there are no updates for any of my games.
  • Wolfpup - Monday, April 1, 2013 - link

    I hadn't heard of this program, and have to say it's kind of a cool idea. Heck, *I* don't always like messing around with the sometimes vague settings in games. I think for the average user this could be really cool, and it does indeed help make things more console-like.
  • HisDivineOrder - Monday, April 1, 2013 - link

    I like that they went in and started supporting prioritizing resolution. So instead of just abstractly telling me to change my 2560x1600 to 1920x1200/1080, they leave it at 2560x1600 now. That's good.

    Plus, their latest release notes also said they were adding SLI support, which is great.

    The thing I think this program lacks is the option to lock in certain settings you always want, and then have the program build its recommendations around those "givens" that you won't ever change.

    For example, I'm not a big fan of AA unless ABSOLUTELY every other performance setting can already be turned all the way up. I can imagine some people might want AA at all costs because jaggies just bug them.

    I think we should both have the option to prioritize the settings we want. I'd also love a program like Geforce Experience that lets us alter the settings for a game before we enter it and also serves as a launcher (much as Geforce Experience does). But instead of just applying the defaults, we should have the option to select the settings ourselves, choose the "Optimal" as determined by nVidia, or do the tweaking from right inside the Geforce Experience interface.

    And if I'm adding wish list items, I'd love it if nVidia would integrate SMAA and FXAA into the program. Hell, I think I'd really prefer it if Geforce Experience served a similar function to SweetFX, except in an official kind of way. So we could tweak the game from Geforce Experience in addition to it just serving as a simple optimizer.

    It could come with an "Advanced" mode. I think a quick launch and/or settings for the control panel might be nice to help us move between the different UIs, from adjusting the game to adjusting the profiles to adjusting the settings of the video card. Maybe it should be the same interface with different tabs to interact with each element.

    And... nah. That's it.
  • cjb110 - Tuesday, April 2, 2013 - link

    mmm, I don't like it; it seems to push the pretty too much and hurt performance no end.
    Now it might be that it's only looking at the GPU, in which case... duh, pointless.

    Nice idea, but needs to be a bit more balanced in its options!
  • tviceman - Monday, April 1, 2013 - link

    Jarred, can you confirm whether or not these parts have the same (or very similar) power envelope as the like-named 600 series parts they're replacing?
  • JarredWalton - Monday, April 1, 2013 - link

    My understanding is that they do have the same approximate power envelopes. However, keep in mind that NVIDIA doesn't disclose notebook power TDPs -- they simply say that they work with each OEM to provide what the OEM desires. Thus, two laptops with GT 750M could potentially have TDPs as much as 5-10W apart (or not -- I don't know how much of a difference we're likely to see).
  • Einy0 - Monday, April 1, 2013 - link

    I hate this "up to" crap for specs. It leaves way too much wiggle room for OEMs to underclock the chips to fit a certain cooling profile, and that messes with performance way too much. There should be clearly defined specifications for each GPU model. The typical consumer doesn't understand that the bigger number doesn't always mean faster. It doesn't make sense to pay more for a higher end part only to have it nerfed down to fit the OEM's cooling solution, etc.
  • JarredWalton - Monday, April 1, 2013 - link

    NVIDIA's policy is that a GPU has to score within 10% of the "stock" GPU in order to have the same model name, so a GT 650M with DDR3 can't be more than 10% off the performance of a GT 650M with GDDR5. Of course, there's a catch: the 10% margin is measured with 3DMark Vantage "Performance" defaults, which aren't nearly as meaningful as using a suite of games for the testing. So basically, I'm with you: it sucks.
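
    For illustration, that naming rule amounts to a simple threshold check; the scores below are made up, and 3DMark Vantage "Performance" is the qualifying benchmark per the policy above:

    def may_share_model_name(config_score, stock_score, margin=0.10):
        # True if this configuration scores within 10% of the "stock" GPU
        # in the qualifying benchmark, per the naming policy described above.
        return config_score >= stock_score * (1.0 - margin)

    # Made-up example: a DDR3 GT 650M vs. the GDDR5 "stock" GT 650M.
    print(may_share_model_name(9200, 10000))  # True: 9200 >= 9000
    print(may_share_model_name(8500, 10000))  # False: more than 10% slower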
  • random2 - Monday, April 1, 2013 - link

    ... but again considering Windows 8 almost requires a touchscreen to really be useful that’s expected ...

    Really? Once this OS is set to boot to the desktop, it's a great little OS for those of us who don't run tablets or touch panels.
