
  • tipoo - Thursday, May 30, 2013 - link

    I don't see anything about power consumption or TDP here. If the same architecture is used but with higher clocks for the performance gains, will they draw more power?
  • JarredWalton - Thursday, May 30, 2013 - link

    NVIDIA doesn't disclose TDPs on notebook parts, mostly because that's up to the laptop OEM. One OEM might go for slightly higher performance with a higher TDP, while another OEM could use the same part (in theory the "same") at lower clocks to work with their cooling design. According to NVIDIA, the GTX 780M should be at roughly the same 100W TDP as the GTX 680M, but take that with a grain of salt -- I suspect it uses more power, but how much more is difficult to say. Certainly, NVIDIA (and AMD) are more familiar with 28nm now, so perhaps some tuning was all they needed to reduce power use even more.
  • Zandros - Thursday, May 30, 2013 - link

    I saw 122 W claimed for the 780M, which makes sense to me compared with the 100 W for the 680M.

    Looking forward to the review. Would be interesting if you could throw in a desktop 660Ti/7870 or something as well to serve as a reference.
  • tipoo - Friday, May 31, 2013 - link

    That's what I'm wondering: whether some tweaking offset the higher clock speeds, and where the power consumption will fall.
  • huaxshin - Friday, May 31, 2013 - link

    Anandtech should mention the reason Nvidia manages to keep the GTX 780M's TDP at 100W like the GTX 680M's, despite more cores and higher clocks: the GTX 780M runs at a lower voltage than the GTX 680M.

    GTX 780M: 0.887V for the highest clocks
    GTX 680M: 0.900V for the highest clocks

    That is why the GTX 680MX has a 122W TDP and the GTX 780M does not.
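    (The voltage claim above can be sanity-checked with the classic CMOS dynamic-power relation, P ≈ C·f·V². A rough sketch using only the two voltages quoted in this comment, ignoring leakage and any clock or core-count differences:)

    ```python
    # Rough sketch: switching power scales with V^2 at a fixed clock and capacitance.
    v_780m = 0.887  # volts, as quoted above
    v_680m = 0.900  # volts, as quoted above
    saving = 1 - (v_780m / v_680m) ** 2
    print(f"{saving * 100:.1f}% less switching power per clock")  # ~2.9%
    ```

    (A ~3% per-transistor saving alone clearly can't cover more cores at higher clocks, which fits Jarred's skepticism above.)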

  • Zandros - Friday, May 31, 2013 - link

    Interesting, thanks. I'm assuming the clock speeds for the higher performance states for the 780M are wrong in that screenshot?
  • huaxshin - Saturday, June 01, 2013 - link

    Yes, the 780M states are wrong except P8. GPU Shark didn't read all the information because we didn't have the correct vBIOS :)
  • Khenglish - Saturday, June 01, 2013 - link

    Those voltages are incorrect. The 680M will run up to 0.987V on a default BIOS. It will also run at 0.9V at times when there is a light workload at reduced clocks.

    The 0.987V is not boost; the 680M never had boost implemented.
  • Khenglish - Saturday, June 01, 2013 - link

    Use GPU-Z or NVIDIA Inspector to read the clocks and voltages instead.
  • huaxshin - Sunday, June 02, 2013 - link

    Ah, after revisiting my data, you are indeed correct. I'll have to check the 780M to compare again.

  • huaxshin - Sunday, June 02, 2013 - link

    Figured it out.

    Idle clock: 0.837V for both
    Boost clock: 0.987V for the 680M and 0.981V for the 780M.
  • Zink - Thursday, May 30, 2013 - link

    There is an update at the bottom of the article noting that some GPU runs BioShock Infinite at 41.5 fps on your enthusiast settings. Did you purposely leave out which GPU it is because of an NDA? From that score, I'm guessing it's probably the GTX 765M or GTX 770M.
  • junky77 - Friday, May 31, 2013 - link

    The 780M -- see the address.
  • cmikeh2 - Friday, May 31, 2013 - link

    What address? It sounds like they're referring to the 770M to me...
  • JarredWalton - Friday, May 31, 2013 - link

    Dustin has a GTX 780M in house for review; it's an MSI notebook with Haswell, which we can't talk about just yet.... ;-)
  • tipoo - Friday, May 31, 2013 - link


    URL bar
  • whyso - Thursday, May 30, 2013 - link

    C'mon AT, the 660M specs are wrong. Almost every version supports boost, which runs at 950/2500.
  • JarredWalton - Friday, May 31, 2013 - link

    All of the GPUs in the GTX family support varying levels of boost, which we have not attempted to list in the table. The 835MHz clock is the base for the GTX 660M; short of user overclocking, I'd be interested to know who is running a GTX 660M at 2.5GHz GDDR5. Anyway, all of the clocks are basically guidelines, and NVIDIA works with OEMs on a case-by-case basis.
  • Assimilator87 - Thursday, May 30, 2013 - link

    So the desktop GK104 only comes with 2GB in its stock configuration, but they slap on 4GB for the laptop model!?
  • Meaker10 - Friday, May 31, 2013 - link

    Yes, because in notebooks more VRAM sells, and they're higher-margin products.

    770 desktop = 400 dollars, 780M = 650 dollars.
  • nixtish - Friday, May 31, 2013 - link

    Do you think this will make it into the next-gen rMBP?
  • shompa - Friday, May 31, 2013 - link

    No. Apple has never used a high-powered GPU in its laptops. It's not "elegant". They prefer thin designs and long battery life. (And the rMBP really needs an SLI 780. Even outside of gaming, the 15-inch rMBP is sluggish when you use a non-native resolution, since OS X renders the picture at four times the resolution before converting it to the non-native resolution. We are talking upwards of 8 megapixels for a 1920x1200 setting.)

    This is why I beg Alienware to release a Retina display laptop with SLI.
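    (The "four times the resolution" claim above is simple arithmetic: HiDPI scaled modes render at double the logical resolution in each dimension before downsampling to the panel. A quick sketch of the pixel counts:)

    ```python
    # HiDPI scaled-mode arithmetic: 2x in each dimension = 4x the pixels.
    logical = (1920, 1200)                       # "looks like" resolution
    backing = (logical[0] * 2, logical[1] * 2)   # rendered resolution: 3840x2400
    pixels = backing[0] * backing[1]
    print(pixels)  # 9216000 -> over 9 megapixels rendered, then downsampled
    ```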
  • xinthius - Saturday, June 01, 2013 - link

    Do you own an rMBP? Because I do, and it is not sluggish at non-native resolutions. Fact of the day for you.
  • CadentOrange - Friday, May 31, 2013 - link

    If Haswell's GT3e performance is at a similar level to the NVIDIA 650M, it's little wonder that NVIDIA needed to dramatically improve the performance of its midrange parts. That 660M -> 760M jump should provide a substantial boost. What I'd like to see are benchmarks between the GT3e and the 750M, as the 750M appears to be the 650M with a minor clock speed bump.
  • shompa - Friday, May 31, 2013 - link

    But at what power is GT3e? I read 65 watts. Not too many ultrabooks can use that.
  • Flunk - Friday, May 31, 2013 - link

    These new Nvidia mobile chips are fantastic, but I can't shake the feeling that Nvidia purposely underclocked the previous generation so that they could release new chips with the same architecture this year. I've gotten my 650M up to 1150/1400 (GDDR5, BIOS hack) and it runs quite cool and stable.
  • Khenglish - Friday, May 31, 2013 - link

    Even the 680M will overclock well over 30% on stock voltage. The most ridiculous overclocker is the 675MX. The default is 600MHz, and people have clocked them over 1100MHz with moderate voltage increases.

    What I find interesting is that the ES 780Ms people have gotten can clock far higher on the memory than a 680M can, even though they have the same memory chips and even with the 680M's memory raised to the 780M's memory voltage. The highest 680M memory I've seen is around 5350 effective, while someone has already run benchmarks on a 780M over 6000.
  • shompa - Friday, May 31, 2013 - link

    Actually, Nvidia was very lucky that their midrange GPU was so fast. Remember that Titan was the real GTX 680; it was released almost one year after its intended release date.
    Nvidia didn't "underclock" the previous generation. They even had to fuse off parts since yields were not high enough. With a year's time and a chip redesign (I think this is the fifth stepping/spin of GK104), they could enable all clusters, undervolt a bit, and add GPU Boost 2.0 to get upwards of 30% gains.

    If you want to talk about a company that has held back: Intel. Since 2006 they have had no competition. Every single chip for the last 5 years has clocked 4GHz+. Still, you can't buy an official 4GHz chip. Why? Intel has no competition.

    Nvidia has competition. That's why they can't hold back. (And as I wrote, Nvidia was very lucky the GK104 could clock so high -- a midrange chip that could be sold as a GTX 680. GK110 takes almost double the die area, so it costs almost twice as much for Nvidia to produce. That's why Titan/GTX 780 are so expensive. This will be solved with the next TSMC shrink to 20nm.)
  • jasonelmore - Saturday, June 01, 2013 - link

    And then Nvidia will make a bigger GPU with more cores and do the same thing all over again. The 480 was supposed to be the 580, Titan was supposed to be the GTX 680, and so on.
  • cstring625 - Friday, May 31, 2013 - link

    Is the table wrong? Earlier it was said that the GTX 780 would use the GK110.
  • cstring625 - Friday, May 31, 2013 - link

    Never mind... I guess this is the GTX 780M...
  • huaxshin - Friday, May 31, 2013 - link

    This makes absolutely no sense at all.
    960 cores @ 700MHz
    1536 cores @ 849MHz
    = 16% more FPS? (41.5 FPS / 35.6 FPS)

    Theoretically, the GTX 780M should produce nearly 2x the FPS.
    Please go over your test methods before releasing a review that does not tell the truth; the numbers you gave us cannot be correct.

    "BioShock Infinite is able to produce 41.5 fps at our enthusiast settings with the GTX 780M, which are 1080p and all the settings dialed up. The outgoing GTX 675MX produced only 35.6 fps, while HD 7970M currently gets 45.3 fps."
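    (Putting the comparison above into numbers: raw shader throughput scales roughly with cores × clock, so on paper the gap should be close to 2x, while the quoted frame rates show far less. A sketch:)

    ```python
    # Theoretical shader throughput (cores * core clock) vs. observed FPS ratio.
    throughput_675mx = 960 * 700    # MHz-cores
    throughput_780m = 1536 * 849
    theoretical = throughput_780m / throughput_675mx
    observed = 41.5 / 35.6          # quoted FPS numbers
    print(f"theoretical {theoretical:.2f}x, observed {observed:.2f}x")
    # theoretical ~1.94x, observed ~1.17x -> something other than shader
    # throughput (bandwidth, CPU, drivers) is limiting the result
    ```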
  • huaxshin - Friday, May 31, 2013 - link

    I forgot to mention that the specs posted are from the GTX 675MX and GTX 780M.
  • JarredWalton - Friday, May 31, 2013 - link

    The GTX numbers are from Dustin while the 7970M numbers come from my testing. However, we're looking at three different laptops so it's possible other factors are at play -- different CPUs for one. Based on GPU bandwidth and compute, the difference should be much larger. I suspect at max settings we're hitting bandwidth harder than compute, so a 2X increase is unlikely, but even then we should see 30-40%. Regardless, Dustin should have the full MSI laptop review early next week.
  • huaxshin - Friday, May 31, 2013 - link

    If you guys don't share your methods and don't use the same maps and the same areas while testing, the review is going to be highly skewed toward whichever GPU gets tested in the least demanding areas. From the FPS numbers you gave us, it's pretty clear that the 7970M drew the lucky straw.

    That you used a 3610QM with the 7970M and Haswell (4700MQ) with the 780M doesn't matter at all, so the system difference is a moot point. Look up CPU scaling for BioShock Infinite and you will see that whether you run at 3.0GHz or 4.5GHz, the same GPU still produces the same FPS.

    Again, please share your test methods with Dustin and do a proper analysis. That's all I'm asking. 15% is not remotely accurate with the right testing :)
  • JarredWalton - Saturday, June 01, 2013 - link

    Of course we share our testing methods. I have a full document going over every benchmark and how to run it. Plus, BioShock has a built-in benchmark, so it's pretty hard to mess that up. Best guess right now: maybe he had old drivers on the 675MX. I know he has a sinus infection and has been busy meeting with several OEMs over the past couple of days, plus trying to write up the MSI review before the NDA lifts. Check back Monday and hopefully all will be made clear.
  • huaxshin - Saturday, June 01, 2013 - link

    Thank you for replying, Jarred. I like the articles AnandTech makes for notebooks, and the community, which is pretty big. Previously there weren't many reviews of top-end hardware to be found, but the focus seems to have shifted more towards it.

    I have a request for future reviews: you should add overclocking analysis, measure power consumption from the GPU and not the system as a whole (that goes for all reviews), and report GPU heat at idle and while gaming (especially important).

    MSI Afterburner could help you a long way with overclocking plus heat measurements. GPU power, however, I guess needs some special hardware to measure, which may not be possible with a notebook.

    Hoping that the review on Monday will be good.
