NVIDIA GeForce 500M Mobile Graphics Introduction

The difference between NVIDIA and AMD on the desktop can seem a bit blurry in places, but in notebooks it's night and day. That's not a measure of quality so much as a measure of a radical difference in both features and performance.

While NVIDIA leverages major benefits like a better mobile driver program and Optimus graphics switching technology, along with corner cases like PhysX, CUDA, and 3D Vision (note that 3D notebooks using AMD hardware are also available), the notebook space is also where NVIDIA's most creative marketing is liable to surface. Once you realize which desktop GPUs are powering which notebook models, you begin to appreciate both just how dire the entry level on the desktop still is and how nutty NVIDIA's mobile branding has been.

Even though NVIDIA does have those advantages (particularly Optimus across the entire 500M line), their lineup can seem downright bizarre even compared to AMD's bloated one, and the specs for the GT 555M are honestly something profane. NVIDIA is also still leveraging roughly the same chips that were introduced with the Fermi line, though at least their progress isn't anywhere near as sluggish as it was during the 9000M/100M/200M/300M era.

Another important difference: while AMD's and Intel's graphics hardware employs a single clock domain for the chip itself, NVIDIA's chips have had separate core and shader domains for some time now. As a result, there's a "core clock" that refers to roughly everything on the GPU that isn't a shader or "CUDA core," and a "shader clock" that refers to the CUDA cores themselves; on these Fermi-based parts, the shader clock is simply double the core clock.
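To put rough numbers on that relationship, here's a quick back-of-the-envelope sketch in Python (our own illustration, not anything from NVIDIA's documentation) showing how the two domains relate and how theoretical shader throughput falls out; Fermi-class CUDA cores retire one fused multiply-add (two FLOPs) per shader clock, so peak throughput is cores times shader clock times two.

# Back-of-the-envelope math for Fermi-class mobile GPUs (illustrative only).
# On these parts the shader ("hot") clock runs at double the core clock,
# and each CUDA core retires one FMA (2 FLOPs) per shader clock.

def shader_clock_mhz(core_clock_mhz):
    """The Fermi shader domain runs at twice the core domain."""
    return 2 * core_clock_mhz

def peak_gflops(cuda_cores, core_clock_mhz):
    """Theoretical single-precision peak: cores x shader clock x 2 FLOPs."""
    return cuda_cores * shader_clock_mhz(core_clock_mhz) * 2 / 1000

# GeForce GTX 580M: 384 CUDA cores at a 620MHz core clock.
print(shader_clock_mhz(620))         # 1240 (MHz), matching the spec sheet below
print(round(peak_gflops(384, 620)))  # ~952 GFLOPS, a theoretical peak only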

NVIDIA GeForce GT 520M/520MX
48 CUDA Cores, 8 TMUs, 4 ROPs, Core Clocks: 740MHz/900MHz (520M/520MX), Shader Clocks: 1480MHz/1800MHz (520M/520MX)
64-bit Memory Bus, DDR3, Effective Memory Clocks: 1.6GHz/1.8GHz (520M/520MX)
Desktop Counterpart: GeForce GT 520 (GF119)

One of the major ways NVIDIA beefed up their mobile line last generation was the general elimination of 64-bit memory buses; there wasn't a single one of those performance bottlenecks in the primary 400M line, but look everybody, it's back! The GT 520M and 520MX occupy the same space as the Mobility Radeon HD 6300 and 6400 series: a dedicated chip for corner cases. It's also slower than the GT 420M it replaces, which had both double the CUDA cores and double the memory bus width. Basically inadequate for any kind of gaming, the 520s don't offer anything over the Sandy Bridge IGP beyond what you get just by virtue of having NVIDIA hardware. (No review available.)
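To illustrate just how narrow a 64-bit bus actually is, here's the bandwidth arithmetic (bus width in bytes times effective transfer rate); these are theoretical peaks, and the 128-bit line is included purely to show what doubling the bus width, as on the GT 420M, buys you.

# Theoretical peak memory bandwidth in GB/s: (bus width / 8) x effective clock.
def peak_bandwidth_gbps(bus_width_bits, effective_clock_gtps):
    return (bus_width_bits / 8) * effective_clock_gtps

print(peak_bandwidth_gbps(64, 1.6))   # GT 520M:  12.8 GB/s
print(peak_bandwidth_gbps(64, 1.8))   # GT 520MX: 14.4 GB/s
print(peak_bandwidth_gbps(128, 1.6))  # 128-bit bus at the same DDR3 clock: 25.6 GB/s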

NVIDIA GeForce GT 525M/540M/550M
96 CUDA Cores, 16 TMUs, 4 ROPs, Core Clocks: 600MHz/672MHz/740MHz (525M/540M/550M), Shader Clocks: 1200MHz/1344MHz/1480MHz (525M/540M/550M)
128-bit Memory Bus, DDR3, Effective Memory Clock: 1.8GHz
Desktop Counterpart: GeForce GT 430 (GF108)

While the GF108 that powers these three largely indistinguishable parts is slightly slower than the already anemic desktop GeForce GT 240 with the same number of shaders, it's a healthy boost for the low-to-mid end. The differences between these three are strictly clock speeds (as the sketch below shows), and anecdotal experience with overclocking mobile NVIDIA chips has generally been very positive, so the enterprising end user with the skill for it can probably get a 525M to gain about 25 model points. At about the 540M mark, though, 1600x900 gaming starts becoming a real possibility. The chip is still hampered by the memory bus (and NVIDIA has had a harder time taming GDDR5 than AMD has), but it's an effective midrange solution. (The GeForce GT 425M in the Clevo B5130M review is slightly slower than a GT 525M; the Dell XPS 15 L502x review has a GeForce GT 540M.)
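Since the memory subsystem is identical across the three, the theoretical separation between them reduces to the clock ratio; here's a quick sketch of that math (ours, not NVIDIA's).

# GT 525M/540M/550M all pair GF108 with the same 128-bit 1.8GHz DDR3,
# so relative theoretical performance is just the core clock ratio.
clocks = {"GT 525M": 600, "GT 540M": 672, "GT 550M": 740}
base = clocks["GT 525M"]
for name, mhz in clocks.items():
    print(f"{name}: {mhz}MHz core, +{(mhz / base - 1) * 100:.0f}% over the GT 525M")
# The 540M is ~12% up on the 525M and the 550M ~23%, which is why a modest
# overclock can plausibly move a 525M "up" a model number.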

NVIDIA GeForce GT 555M "A"
96 CUDA Cores, 16 TMUs, 4 ROPs, Core Clock: 753MHz, Shader Clock: 1506MHz
128-bit Memory Bus, GDDR5, Effective Memory Clock: 3138MHz
Desktop Counterpart: GeForce GT 440 GDDR5 (GF108)

And this is where NVIDIA's mobile lineup completely loses its mind. The GeForce GT 555M is actually two completely different chips and configurations; the "A" and "B" are our designations. Our "A" configuration is essentially just a souped-up version of the GT 525M/540M/550M, with a higher core clock and the benefit of GDDR5. While NVIDIA lists both versions on their site (though without any explanation as to why this split was made), a glance at NewEgg suggests this "A" version is the more common of the two, powering MSI and Lenovo laptops while the "B" version resides almost exclusively in an Alienware. You can recognize the "A" version by the use of GDDR5, but the two configurations are so bizarrely matched that we can't say definitively which one would be the faster without testing them head to head. (No review available.)

NVIDIA GeForce GT 555M "B"
144 CUDA Cores, 24 TMUs, 24 ROPs, Core Clock: 590MHz, Shader Clock: 1180MHz
192-bit Memory Bus, DDR3, Effective Memory Clock: 1.8GHz
Desktop Counterpart: None (GF106)

The other configuration of the GT 555M is a substantially beefier chip with six times the ROPs, but it operates at lower clocks and with less memory bandwidth due to the use of DDR3 instead of GDDR5. It's essentially a die-harvested version of GF106, and is identifiable by both the use of DDR3 and memory configurations of either 1.5GB or 3GB. It remains inexplicable why NVIDIA decided to use two completely different chips for the GT 555M, but hopefully this makes it a little easier to tell which is which. Raw calculations of pixel and texture fillrate (shown below) suggest this "B" configuration is the faster of the two, and as such it's probably the one to look for. Thus far we've only seen it in the Alienware M14x. (No review available.)
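For the curious, this is the fillrate arithmetic in question: functional units times core clock, theoretical peaks only.

# Theoretical fillrates for the two GT 555M configurations.
def fillrates(rops, tmus, core_clock_mhz):
    """Returns (pixel fillrate in GPixels/s, texture fillrate in GTexels/s)."""
    return rops * core_clock_mhz / 1000, tmus * core_clock_mhz / 1000

print(fillrates(4, 16, 753))   # "A" (GF108): ~3.0 GP/s, ~12.0 GT/s
print(fillrates(24, 24, 590))  # "B" (GF106): ~14.2 GP/s, ~14.2 GT/s

# The "B" chip's pixel fillrate advantage is enormous; only memory bandwidth
# (43.2GB/s of DDR3 vs. 50.2GB/s of GDDR5) tips back toward the "A".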

NVIDIA GeForce GTX 560M
192 CUDA Cores, 32 TMUs, 24 ROPs, Core Clock: 775MHz, Shader Clock: 1550MHz
192-bit Memory Bus, GDDR5, Effective Memory Clock: 2.5GHz
Desktop Counterpart: GeForce GTX 550 Ti (GF116)

Admittedly, at these clock speeds the GTX 560M probably performs roughly on par with the closely related desktop GeForce GTS 450 (with the only major deficit being memory bandwidth) rather than the faster GTX 550 Ti, but it's still a force to be reckoned with in the mobile arena. The GTS 450 slotted in roughly between the desktop HD 5750 and 5770, while the GTX 460M traded blows with the Mobility Radeon HD 5850 and 5870. The extra 100MHz on the core over the GTX 460M is bound to go a long way, and while we hope to get review hardware in soon, it's reasonable to assume the 560M is at least competitive with the 6850M and 6870M if not outright faster. NVIDIA has scored several design wins with the GTX 560M, and it should really be the entry level for the serious mobile gamer, offering a strong balance between thermals and gaming performance. (Will be faster than the previous generation GeForce GTX 460M in our 460M-centric gaming notebook face-off.)
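As a quick sanity check on that claim, the theoretical uplift is easy to work out; the GTX 460M's 675MHz core clock follows from the 100MHz delta mentioned above, and both parts carry 192 CUDA cores.

# GTX 460M -> GTX 560M: the shader count is unchanged at 192 CUDA cores,
# so the generational gain on the shader side is almost purely clock speed.
core_460m, core_560m = 675, 775  # MHz
print(f"Core/shader uplift: {(core_560m / core_460m - 1) * 100:.0f}%")  # ~15%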

NVIDIA GeForce GTX 570M
336 CUDA Cores, 56 TMUs, 32 ROPs, Core Clock: 535MHz, Shader Clock: 1070MHz
192-bit Memory Bus, GDDR5, Effective Memory Clock: 3GHz
Desktop Counterpart: GeForce GTX 560 (GF114)

While the GTX 570M uses the same chip as the desktop GTX 560, it suffers a nearly 300MHz deficit on the core clock. It's still a strong upgrade from the GTX 560M, but second-fastest mobile GPUs seem to see much less success than their desktop counterparts and tend to be less common than the fastest model available. The 470M it replaces was extremely rare, but the 570M looks far better on paper and has at least one design win from MSI under its belt (as opposed to the largely Clevo-only 470M). (No review available.)

NVIDIA GeForce GTX 580M
384 CUDA Cores, 64 TMUs, 32 ROPs, Core Clock: 620MHz, Shader Clock: 1240MHz
256-bit Memory Bus, GDDR5, Effective Memory Clock: 3GHz
Desktop Counterpart: GeForce GTX 560 Ti (GF114)

Again operating at a substantially reduced clock compared to its desktop counterpart, the GTX 580M is nonetheless the fastest mobile GPU available. The GTX 485M it replaces was generally about 10% faster on average than the Radeon HD 6970M, and the GTX 580M is a largely incremental update, offering a minor increase in core clocks along with Optimus support (yes, your eleven pound gaming laptop can now live off of the battery). But fastest is fastest, and if you want the best this is it...provided your pockets are deep enough. (Will be slightly faster than the previous generation GeForce GTX 485M, reviewed in a Clevo P170HM.)
