Original Link: http://www.anandtech.com/show/4475/anandtech-mobile-graphics-guide-summer-2011
AnandTech Mobile Graphics Guide, Summer 2011 by Dustin Sklavos on July 5, 2011 11:07 PM EST
If desktop graphics hardware can be more than a little confusing, deciphering performance of mobile graphics parts can be (and has historically been) an absolute nightmare. Way back in the day it was at least fairly easy to figure out which desktop chip was hiding in which mobile kit, but both AMD and NVIDIA largely severed ties between mobile and desktop branding. They may not want to readily admit that, and in the case of certain models they still pretty heavily rely on the cachet associated with their desktop hardware, but it's by and large true. So to help you make sense of mobile graphics, we present to you the first in what will hopefully be a regular series of guides.
I started putting guides like this one together back at my alma mater NotebookReview, and they've always been pretty well-received. It's really not hard to understand why: while NVIDIA and AMD are usually pretty forthcoming with the specs of their desktop kit, they've historically been pretty cagey about their notebook graphics hardware. As a result, sites like this one have had to sift through information about different laptops, compare notes with other sites and readers, and eventually compile the data. Forums will light up with questions like "can this laptop play xyz?"
Thankfully, the advent of DirectX 11 drastically simplified my job. Whenever shader models or even entire DirectX versions were bifurcated, complication followed suit, but with DirectX 11 pretty much everybody is on board with the same fundamental feature sets, and AMD and NVIDIA both support their respective technologies across the board. Intel remains the odd man out, as you'll see.
We'll break things down into three categories. The first is integrated graphics, which interestingly has gone entirely on-package and even on-die over the past year. It's surprising how fast that change really occurred. Coupled with NVIDIA's exit from the chipset business, we're strictly looking at Intel and AMD here. The second and third are dedicated to AMD and NVIDIA's mobile lines. Wherever possible we'll also link you to a review that demonstrates the performance of the graphics hardware in question. And note that when we talk about the number of shaders, CUDA cores, or EUs on a given part, these numbers are ONLY comparable to other parts from the same vendor; 96 of NVIDIA's CUDA cores are not comparable to, say, 160 shaders from an AMD Radeon.
"Too Slow to Play" Class: Intel HD Graphics (Arrandale), Intel Atom IGP, AMD Radeon HD 4250
Specs aren't provided because in this case they aren't really needed: none of these integrated graphics parts are going to be good for much more than the odd game of Unreal Tournament 2004. Intel had a devil of a time getting their IGP act together prior to the advent of Sandy Bridge, while AMD's Radeon HD 3000/3100/3200/4200/4225/4250 core (yes, it's all basically the same core) is really showing its age. Thankfully, outside of Atom's IGP, all of these are on their way out. As for gaming on Atom, there's always the original StarCraft.
Intel HD 3000 (Sandy Bridge)
12 EUs, Core Clock: Varies
With Sandy Bridge, Intel was able to produce an integrated graphics part able to rival AMD and NVIDIA's budget entries. In fact, in our own testing we found the HD 3000 able to largely keep up with AMD's dedicated Radeon HD 6450 and to a lesser extent the 6470, and NVIDIA's current mobile lineup generally doesn't extend that low (likely excepting the GT 520M and GT 520MX). That said, there are still some caveats to the HD 3000: while Intel's questionable driver quality is largely behind it, you may still experience the odd compatibility issue from time to time (when Sandy Bridge dropped, Fallout 3 had an issue), and more punishing games like Mafia II and Metro 2033 will be largely out of its reach. The clocks on the HD 3000 also vary greatly, with a starting clock of 650MHz for mainstream parts, 500MHz for low voltage parts, and just 350MHz for ultra low voltage parts. Turbo clocks get even weirder, ranging anywhere from 900MHz to 1.3GHz depending on the processor model. Still, it's nice to not have to roll your eyes anymore at the suggestion of doing some casual gaming on Intel's integrated hardware. (Sandy Bridge Review)
AMD Radeon HD 6250/6310 (Brazos)
80 Shaders, 8 TMUs, 4 ROPs, Core Clock: 280MHz (6250), 500MHz (6310)
In Brazos, AMD produced a workable netbook-level processor core and grafted last generation's Radeon HD 5450/5470 core onto it. The result is an integrated graphics processor with a decent amount of horsepower for low-end casual gaming, but in some cases it's going to be hamstrung by the comparatively slow Bobcat processor cores. That's perfectly fine, though, as Brazos is generally a more desirable alternative to Atom + NG-ION netbooks, offering more processor performance and vastly superior battery life. Just don't expect to do any but the most casual gaming on a Brazos-powered netbook. (HP dm1z Review)
AMD Radeon HD 6380G/6480G/6520G/6620G (Llano)
160/240/320/400 Shaders, 8/12/16/20 TMUs, 4/4/8/8 ROPs (6380G/6480G/6520G/6620G), Core Clock: 400-444MHz
Llano hasn't arrived in force yet, but we have a good idea of how the 6620G performs and expect the IGP performance to essentially scale down in such a way that the model numbers are fairly appropriate. The long and short of Llano is that the processor half pales in comparison to Sandy Bridge, but the graphics hardware is monstrous. Gamers on an extreme budget are likely to be well-served by picking up a notebook with one of AMD's A6 or A8 processors in it, with Llano promising near-midrange mobile graphics performance. (Llano Mobile Review)
AMD Radeon HD Mobile Graphics Introduction
While AMD's Radeon HD cards are extremely competitive on the desktop, the notebook space is far more complicated. Mobile Radeons and GeForces are both fairly common, with neither one owning the market more aggressively than the other; this is actually an unusual equilibrium, as each generation of notebooks has typically seen a massive swing favoring one vendor over the other.
So what is AMD offering that you can't find on NVIDIA hardware? Arguably superior anti-aliasing and image quality, typically slightly higher performance than competing mobile parts, and support for Eyefinity. You'll find GDDR5 more frequently employed with AMD's chips to help mitigate narrow memory bus widths, too.
The essential problem with Radeons right now is that outside of Eyefinity they're still playing catch-up with NVIDIA's mobile solutions. Performance may be excellent in some cases, but NVIDIA leverages Optimus across their 500M line while support for switchable graphics in the Radeon HD 6000M series is spotty. NVIDIA's Verde mobile graphics driver initiative is also very mature, while support for AMD's mobile graphics driver across vendors is again spotty. That last point isn't entirely AMD's fault: vendors like Toshiba and Sony inexplicably opt out of the program despite the drivers working fine on their hardware. Finally, there are going to be niche cases where NVIDIA's support for CUDA and PhysX is relevant. OpenCL may eventually become the standard, but professional-grade applications like Adobe Premiere Pro CS5 and CS5.5 can get a substantial boost from NVIDIA kit (provided you hack the "validated GPU" list to include yours.)
There's one more problem with AMD's lineup by comparison: while NVIDIA took their 500M series (largely an exercise in rebranding) as an opportunity to do some housekeeping, AMD basically integrated the entire Mobility Radeon HD 5000 line into the 6000Ms. Feature-wise this isn't a major issue, but it results in an incredibly bloated mobile lineup, with mobile chips from the Evergreen line occupying the same series as newer chips from the Northern Islands refresh.
AMD Radeon HD 6300M
80 Shaders, 8 TMUs, 4 ROPs, Core Clocks: 500MHz (6330M/6350M) or 750MHz (6370M)
64-bit Memory Bus, DDR3, Effective Memory Clocks: 1.6GHz (6330M/6350M) or 1.8GHz (6350M/6370M)
Desktop Counterpart: Radeon HD 5450 (Cedar)
The 6300M series is the carryover/rebadging of the Mobility Radeon HD 5400 line. This is roughly the same graphics core as is integrated into Brazos, featuring a memory bus that's honestly just too narrow to really handle any serious gaming. With the advent of Sandy Bridge, it's also outclassed by Intel's integrated graphics hardware and as a result remains more of a solution for corner cases where an inexpensive dedicated graphics processor is needed. (No review available, but the Mobility Radeon HD 5470 in the Dell Studio 14 is comparable.)
AMD Radeon HD 6400M
160 Shaders, 8 TMUs, 4 ROPs, Core Clocks: 480MHz-800MHz
64-bit Memory Bus, DDR3 or GDDR5 (6490M only), Effective Memory Clocks: 1.6GHz (DDR3) or 3.2GHz (GDDR5)
Desktop Counterpart: Radeon HD 6450 (Caicos)
Doubling the shader count of Cedar helps the mobile Caicos reach parity with Sandy Bridge's IGP in the 6430M and 6450M and then beat it with the 6470M and GDDR5-equipped 6490M. What the 6400M brings to the table is what AMD as a whole brings to the table compared to Intel's graphics: better game compatibility and Eyefinity multi-monitor support. Hardware with 64-bit memory buses should still be confined to running games at 1366x768, and heavier games are going to be off limits, but the 6400M series should satisfy more casual players. (Toshiba Tecra R850 for the HD 6450M; HP EliteBook 8460p for the HD 6470M.)
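The bandwidth penalty a 64-bit bus imposes is easy to put a number on. A minimal sketch of the standard calculation, using the effective clocks quoted in the spec lines above (the "effective" figures already fold in the DDR3/GDDR5 transfer multipliers, so no extra doubling is needed):

```python
def effective_bandwidth_gbps(bus_width_bits, effective_clock_mhz):
    """Theoretical peak memory bandwidth in GB/s.

    bandwidth = (bus width in bytes) * (effective transfer rate).
    Effective clocks as quoted in this guide already include the
    DDR/GDDR transfer-rate multiplier.
    """
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# 6450M-style part: 64-bit DDR3 at an effective 1.6GHz
print(effective_bandwidth_gbps(64, 1600))   # 12.8 GB/s
# 6490M-style part: 64-bit GDDR5 at an effective 3.2GHz
print(effective_bandwidth_gbps(64, 3200))   # 25.6 GB/s
```

GDDR5 effectively doubles what the same narrow bus can deliver, which is why the 6490M can pull ahead of its DDR3 siblings despite identical shader counts.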
AMD Radeon HD 6500M
400 Shaders, 20 TMUs, 8 ROPs, Core Clocks: 500-650MHz
128-bit Memory Bus, DDR3 or GDDR5 (6570M only), Effective Memory Clocks: 1.8GHz (DDR3) or 3.6GHz (GDDR5)
Desktop Counterpart: Radeon HD 5570/5670 (Redwood)
AMD opted to employ a very close derivative of this core for Llano, and it should really be the minimum for gamers looking to play on a Radeon. A GDDR5-equipped model will go a long way towards improving performance at higher resolutions, but generally speaking the 6500M series will at least be fine for pushing settings at 1366x768 and most games at 1600x900. This is a rebadge of the Mobility Radeon 5600/5700 series. (No review available, but the Mobility Radeon HD 5650 in the Compal NBLB2 is comparable.)
AMD Radeon HD 6600M/6700M
480 Shaders, 24 TMUs, 8 ROPs, Core Clocks: 500-725MHz
128-bit Memory Bus, DDR3 or GDDR5, Effective Memory Clocks: 1.6GHz (6630M/6730M) or 1.8GHz (6650M) or 3.6GHz (6750M/6770M)
Desktop Counterpart: Radeon HD 6570/6670 (Turks)
Bifurcating a single chip into two lines and then not even using the class of memory as a signifier is one of the more baffling decisions you'll find in this guide (though the prize has to go to NVIDIA's GT 555M), but AMD did the same thing with the 5600M/5700M series. GDDR5 is always going to be preferable to allow the graphics core to stretch its legs, but generally speaking this is a more minor, incremental improvement on its predecessor than Caicos was on Cedar, and the same rules for the 6500M apply here. (Look at the results for the 6630M in our Llano review.)
AMD Radeon HD 6800M
800 Shaders, 40 TMUs, 16 ROPs, Core Clocks: 575MHz-675MHz
128-bit Memory Bus, DDR3 or GDDR5, Effective Memory Clocks: 1.6GHz (DDR3) or 3.2GHz (6850M GDDR5) or 4GHz (6870M)
Desktop Counterpart: Radeon HD 5770 (Juniper)
The astute reader is going to notice that, once again, AMD has rebranded their last generation, this time the 5800M series. While there are specs for DDR3-powered versions, the GDDR5-based ones are far more common in the wild. That's good, because the 128-bit memory bus is too anemic on its own to feed 800 of AMD's shader cores. Serious gamers are going to want to look at the 6800M as a minimum for gaming at 1080p. It's important to note that the 6800M is still going to be consistently slower than the desktop 5750 and 5770 due to substantially reduced core clocks (the desktop chips start at 700MHz). The 6870M is also just 25MHz slower than the Mobility Radeon HD 5870, so as I mentioned before, these are going to be a solid choice for gamers. (No review available, but the Mobility Radeon HD 5870 in the ASUS G73Jh is comparable.)
AMD Radeon HD 6900M
960 Shaders, 48 TMUs, 32 ROPs, Core Clocks: 580MHz (6950M) or 680MHz (6970M)
256-bit Memory Bus, GDDR5, Effective Memory Clocks: 3.6GHz
Desktop Counterpart: Radeon HD 6850 (Barts)
This is as powerful as it gets on the AMD side. The 6970M is going to be somewhat outclassed by the GTX 580M, but should tangle just fine with the 570M and thoroughly trounce anything slower. Likewise, you're apt to see these employed in a mobile Crossfire solution, leveraging the improvements in Crossfire scaling that AMD brought with the Barts core (along with the rest of Northern Islands.) While it'll never be as fast as a desktop 6850 due to the reduced core clocks, the 6900M series is an extremely potent mobile gaming solution. (Alienware M17x R3 Review)
NVIDIA GeForce 500M Mobile Graphics Introduction
The difference between NVIDIA and AMD on the desktop can seem a bit blurry in places, but in notebooks it's night and day. That's not a measure of quality so much as a measure of a radical difference in both features and performance.
While NVIDIA leverages major benefits like a better mobile driver program and Optimus graphics switching technology, as well as corner cases with PhysX, CUDA, and 3D Vision (note that 3D notebooks using AMD hardware are also available), it's also where the most creative marketing is liable to surface. Once you realize what desktop GPUs are powering what notebook models, you begin to appreciate both just how dire the entry level on the desktop still is and how nutty NVIDIA's mobile branding has been.
Even though NVIDIA does have advantages (particularly Optimus across the entire 500M line), their lineup can seem downright bizarre compared to AMD's bloated one, and the specs for the GT 555M are honestly something profane. NVIDIA's also still leveraging roughly the same chips that were introduced with the entire Fermi line, though their progress at least isn't anywhere near as sluggish as it was in the 9000M/100M/200M/300M era.
Another important difference is that while AMD and Intel's graphics hardware employ a single clock domain for the chip itself, NVIDIA's chips have had separate core and shader domains for some time now. As a result, there's a "core clock" that will refer to roughly everything on the GPU that isn't a shader or "CUDA core," and a "shader clock" that refers to the clocks of the "CUDA cores."
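The practical upshot of the split clock domains is that raw shader throughput follows the shader clock, not the core clock. As a back-of-the-envelope sketch (the one-fused-multiply-add-per-clock figure is the standard assumption for Fermi-class CUDA cores; treat the results as theoretical peaks, not benchmarks):

```python
def fermi_peak_gflops(cuda_cores, shader_clock_mhz):
    """Peak single-precision GFLOPS for a Fermi-class part.

    Each CUDA core retires one fused multiply-add (2 FLOPs) per
    shader clock, so the shader domain alone sets peak throughput.
    """
    return 2 * cuda_cores * shader_clock_mhz / 1000

# GTX 580M: 384 cores at a 1240MHz shader clock
print(fermi_peak_gflops(384, 1240))   # 952.32 GFLOPS
# GT 520M: 48 cores at a 1480MHz shader clock
print(fermi_peak_gflops(48, 1480))    # 142.08 GFLOPS
```

This is also why NVIDIA's shader clocks are always exactly double the core clocks on these parts: the two domains run in a fixed 2:1 ratio.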
NVIDIA GeForce GT 520M/520MX
48 CUDA Cores, 8 TMUs, 4 ROPs, Core Clocks: 740MHz/900MHz (520M/520MX), Shader Clocks: 1480MHz/1800MHz (520M/520MX)
64-bit Memory Bus, DDR3, Effective Memory Clocks: 1.6GHz/1.8GHz (520M/520MX)
Desktop Counterpart: GeForce GT 520 (GF119)
One of the ways NVIDIA had beefed up their mobile line was a general lack of 64-bit memory buses: there wasn't a single one of those performance bottlenecks in the primary 400M line, but look everybody, it's back! The GT 520M and 520MX occupy the same space as the Mobility Radeon HD 6300 and 6400 series, as a dedicated chip for corner cases. It's also slower than the GT 420M it replaces, which had both double the CUDA cores and double the memory bus width. Basically inadequate for any kind of gaming, the 520s don't offer anything over the Sandy Bridge IGP that you don't get just by virtue of having NVIDIA hardware. (No review available.)
NVIDIA GeForce GT 525M/540M/550M
96 CUDA Cores, 16 TMUs, 4 ROPs, Core Clocks: 600MHz/672MHz/740MHz (525M/540M/550M), Shader Clocks: 1200MHz/1344MHz/1480MHz (525M/540M/550M)
128-bit Memory Bus, DDR3, Effective Memory Clocks: 1.8GHz
Desktop Counterpart: GeForce GT 430 (GF108)
While the GF108 that powers these three largely indistinguishable chips is slightly slower than the already anemic desktop GeForce GT 240 with the same number of shaders, it's a healthy boost for the low-to-mid end. The differences between these three are strictly clock speeds, and anecdotal experience with overclocking mobile NVIDIA chips has generally been very positive, so an enterprising end user with the skill for it can probably get the 525M to gain about 25 model points. At about the 540M mark, though, 1600x900 gaming starts becoming a real possibility. The chip is still hampered by the memory bus (and NVIDIA has had a harder time taming GDDR5 than AMD has), but it's an effective midrange solution. (The GeForce GT 425M in the Clevo B5130M review will be slightly slower than a GT 525M; the Dell XPS 15 L502x review has a GeForce GT 540M.)
NVIDIA GeForce GT 555M "A"
96 CUDA Cores, 16 TMUs, 4 ROPs, Core Clock: 753MHz, Shader Clock: 1506MHz
128-bit Memory Bus, GDDR5, Effective Memory Clocks: 3138MHz
Desktop Counterpart: GeForce GT 440 GDDR5 (GF108)
And this is where NVIDIA's mobile lineup completely loses its mind. The GeForce GT 555M is actually two completely different chips and configurations; the "A" and "B" are our designation. Our "A" configuration is essentially just a souped-up version of the GT 525M/540M/550M, with a higher core clock and the benefit of GDDR5. While NVIDIA lists both versions on their site (though lacking an explanation as to why this split was made), a glance at NewEgg suggests this "A" version is the more common of the two (powering MSI and Lenovo laptops while the "B" version resides almost exclusively in an Alienware.) You can recognize the "A" version by the use of GDDR5, but since it and the "B" version are so bizarrely matched we can't really tell definitively which one would be the faster of the two. (No review available.)
NVIDIA GeForce GT 555M "B"
144 CUDA Cores, 24 TMUs, 24 ROPs, Core Clock: 590MHz, Shader Clock: 1180MHz
192-bit Memory Bus, DDR3, Effective Memory Clocks: 1.8GHz
Desktop Counterpart: None (GF106)
The other configuration of the GT 555M is a substantially beefier chip with six times the ROPs, but it operates at lower clocks and lower memory bandwidth due to the use of DDR3 instead of GDDR5. It's essentially a die-harvested version of GF106, and is identifiable by both the use of DDR3 and memory configurations of either 1.5GB or 3GB. It remains inexplicable why NVIDIA decided to use two completely different chips for the GT 555M, but hopefully this makes it a little easier to tell which is which. Raw calculations of pixel and texture fillrate suggest this "B" configuration to be the faster of the two, and as such it's probably the one to look for. Thus far we've only seen it in the Alienware M14x. (No review available.)
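Those raw fillrate calculations are straightforward to reproduce: pixel fillrate scales with ROPs and texel fillrate with TMUs, both multiplied by the core clock. A quick sketch using the spec lines above (theoretical peaks only, and they ignore the "A" version's memory bandwidth advantage):

```python
def fillrates(rops, tmus, core_clock_mhz):
    """Theoretical (pixel, texel) fillrates in Gpixels/s and Gtexels/s.

    Both units operate in the core clock domain on these parts:
    ROPs set the pixel rate, TMUs the texel rate.
    """
    pixel = rops * core_clock_mhz / 1000
    texel = tmus * core_clock_mhz / 1000
    return pixel, texel

# GT 555M "A": GF108-based, 4 ROPs / 16 TMUs at 753MHz
print(fillrates(4, 16, 753))    # (3.012, 12.048)
# GT 555M "B": GF106-based, 24 ROPs / 24 TMUs at 590MHz
print(fillrates(24, 24, 590))   # (14.16, 14.16)
```

On paper the "B" version wins both metrics handily, and by nearly five times on pixel fillrate, which is why it looks like the one to hunt for despite the slower memory.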
NVIDIA GeForce GTX 560M
192 CUDA Cores, 32 TMUs, 24 ROPs, Core Clock: 775MHz, Shader Clock: 1550MHz
192-bit Memory Bus, GDDR5, Effective Memory Clock: 2.5GHz
Desktop Counterpart: GeForce GTX 550 Ti (GF116)
Admittedly with these clock speeds the GTX 560M probably performs roughly on par with the closely related GeForce GTS 450 (with the only major deficit being memory bandwidth) as opposed to the faster GTX 550 Ti, but it's still a force to be reckoned with in the mobile arena. The GTS 450 slotted in roughly between the desktop HD 5750 and 5770, while the GTX 460M traded blows with the Mobility Radeon HD 5850 and 5870. The extra 100MHz on the core in the 560M is bound to go a long way, and while we hope to get review hardware in soon, it's reasonable to assume it's at least competitive with the 6850M and 6870M if not outright faster. NVIDIA has scored several design wins with the GTX 560M, and it should really be the entry level for the serious mobile gamer, offering a strong balance between thermals and gaming performance. (Will be faster than the previous generation GeForce GTX 460M in our 460M-centric gaming notebook face-off.)
NVIDIA GeForce GTX 570M
336 CUDA Cores, 56 TMUs, 32 ROPs, Core Clock: 535MHz, Shader Clock: 1070MHz
192-bit Memory Bus, GDDR5, Effective Memory Clock: 3GHz
Desktop Counterpart: GeForce GTX 560 (GF114)
While the GTX 570M uses the same chip as the desktop GTX 560, it suffers a nearly 300MHz deficit on the core clock. It's still a strong upgrade from the GTX 560M, but second-fastest mobile GPUs seem to see much less success than their desktop counterparts and tend to be less common than the fastest model available. The 470M it replaces was extremely rare, but the 570M looks far better on paper and has at least one design win from MSI under its belt (as opposed to the largely Clevo-only 470M). (No review available.)
NVIDIA GeForce GTX 580M
384 CUDA Cores, 64 TMUs, 32 ROPs, Core Clock: 620MHz, Shader Clock: 1240MHz
256-bit Memory Bus, GDDR5, Effective Memory Clock: 3GHz
Desktop Counterpart: GeForce GTX 560 Ti (GF114)
Again operating at substantially reduced clocks compared to its desktop counterpart, the GTX 580M is nonetheless the fastest mobile GPU available. The GTX 485M it replaces was generally about 10% faster on average than the Radeon HD 6970M, and the GTX 580M is a largely incremental update offering a minor increase in core clocks to go along with Optimus support (yes, your eleven pound gaming laptop can now live off of the battery.) But fastest is fastest, and if you want the best this is it...provided your pockets are deep enough. (Will be slightly faster than the previous generation GeForce GTX 485M, reviewed in a Clevo P170HM.)
Recommendations and Conclusion
So now that we have the nitty-gritty out of the way, how do we break things down? If you're looking strictly at pure performance, parts from either AMD or NVIDIA are going to be suitable for you (budget notwithstanding.) In the interests of fairness we'll include Intel in the pro and con conversation.
First, Intel has the best dedicated video encoding hardware on the market. AMD and NVIDIA both offer solutions that allow you to harness their shaders to substantially accelerate video encoding, but Intel's Quick Sync is best of breed, offering a healthy improvement in encoding speed while producing the best output short of doing the encoding on the CPU itself. It's worth noting, though, that NVIDIA solutions and AMD ones supporting switchable graphics can take advantage of Quick Sync, so you don't necessarily have to tie yourself down to Intel to benefit from it.
If you take video encoding out of the equation, unfortunately AMD isn't quite as strong in terms of feature offerings, boiling down to arguably slightly better image quality and support for Eyefinity (provided the notebook has a DisplayPort.) They do have a hybrid graphics solution similar to Optimus, but availability is spotty and you'll have to research the notebook model you're looking at to see if their switchable graphics are supported. NVIDIA's Optimus on the other hand is pervasive and mature, and their mobile graphics drivers are more widely supported than AMD's. 3D Vision, CUDA, and PhysX are much more niche, with AMD also offering 3D support in a handful of 3D-ready notebooks. If you have a need for CUDA or a desire for PhysX, your graphics vendor has been decided for you.
Knowing what each vendor offers, now we just have to know what to look for.
The netbook or ultraportable gamer is pretty much stuck with either buying a netbook with AMD's E-350 processor or paying through the nose for an Alienware M11x (spoiler alert: heavier than most "netbooks.") That's not a horrible thing as the E-350 has a capable graphics core, but even though the CPU side is faster than dual-core Atom it's still not quite enough to pick up the slack.
Gamers on an extreme budget used to be more or less screwed, but thankfully that's changed. Notebooks with AMD's A6 or A8 processors are going to be your one-stop shop, offering a tantalizing mix of middle-of-the-road CPU performance with remarkably fast integrated graphics hardware. There's a reason AMD refers to the A6 and A8 graphics hardware as "discrete-class" and for once it's not just marketing jargon. If you want to game for under $600, this is the way to go. In fact, it's even a little difficult to recommend spending up for a notebook with anything less than a GeForce GT 540M or Radeon HD 6500M/6600M/6700M unless you really need the faster CPU on top of it. If gaming while on the battery is important to you, then you need to be looking for Llano.
Users looking for a more well-rounded notebook would probably be well served by the aforementioned GeForce GT 540M or Radeon HD 6500M/6600M. These will hang out between about $700 and a grand and notebooks using these chips are going to be fairly mainstream in form factor, so you won't be lugging a land monster around. Be forewarned, though, these GPUs are going to be inadequate for driving games at 1080p and may still struggle at 1600x900.
The serious gamer looking for an affordable machine should be gunning straight for notebooks with NVIDIA's GeForce GTX 560M. This, or AMD's Radeon HD 6800M, will be the bare minimum for gaming comfortably at 1080p, but honestly the GTX 560M is liable to be the sweet spot in offering the very best balance in form factor favoring performance before you start getting into the huge, heavy desktop replacement notebooks.
Finally, for those for whom money is no object, just about anything from the Radeon HD 6900M series or the GTX 570M or 580M is going to do the trick, and for the truly excessive users, an SLI or Crossfire notebook will yield dividends.
Update: Intel's engineers took umbrage with our suggestion that Intel's integrated graphics driver quality is still poor, and they were right to do so. While older graphics architectures may still be a bit fraught, Sandy Bridge is an overwhelming improvement. This guide has been updated to reflect that fact.