GTX 480M: Fast but Mixed Feelings

Back when we took the ATI Mobility Radeon HD 5870 and NVIDIA GeForce GTX 285M and pitted them against each other, the 5870's victory was met with some disappointment because it just wasn't the Hail Mary we had hoped for. Notebook graphics performance had stagnated for so long, with no competition at the top of the heap, that NVIDIA was able to refresh the G92 an absurd number of times; yet when ATI finally decided to come out and play, the best they could do was beat the GTX 285M by about 10% on average. ATI didn't deliver a knockout blow; they just flicked NVIDIA behind the ear over and over again. Now, with a cut-down Fermi chip powering the GeForce GTX 480M, NVIDIA's response is to say "quit it!" and slap at ATI's hands.

The most impressive thing about the 480M isn't its performance; it's the fact that NVIDIA was able to get the sucker into a notebook to begin with. Sure, it's a nine-pound notebook cooling a 100W TDP, but credit where credit is due: Fermi isn't exactly well known for being economical with power on the desktop. Really, the GTX 480M raises more questions than it answers.

I suspect most of us agreed when the GeForce GTX 260 and GTX 280 came out that there was no way NVIDIA would ever fit those chips in a notebook, and in some sense we were proven right by refresh after refresh of G92 at the top of the mobile graphics food chain. With mobile Fermi now here, it looks more likely that NVIDIA chose to stick with a tweaked G92 in order to focus resources elsewhere—i.e. dropping to 55nm to save power and boost clocks over the 65nm original. Obviously, we wouldn't have wanted a trimmed-down GT200 chip this late in the game, but cutting down GF100 to fit into a notebook had to have been a far more onerous task than getting a 55nm GT200b die into the same power envelope (or respinning GT200b at 40nm) would have been. Unfortunately, GT200b doesn't support DX11, so really NVIDIA had no choice. The result is a GF100 die that sips power at idle (relatively speaking) but still guzzles the juice under load. (Not that you'd run a gaming laptop on battery power anyway.)

As for ATI/AMD, they never seemed able to deliver Mobility Radeon HD 4800 parts in any kind of reasonable quantity, and in general there was a lack of interest. Contrast that with being able to buy an HD 5800 series laptop from a variety of vendors today. They're no longer the fastest mobile parts, but they are far more affordable: $1500 for the ASUS G73Jh makes the Clevo W880CU look like highway robbery!

Go one step further and start asking ATI the same questions. Cypress is a monster to be sure, but it's no more a beast in terms of power and heat than its predecessors, the RV770 and RV790. RV770 made it into notebooks, yet the best ATI says they can do is trim the clocks on Juniper and call it a day. We're left with a Mobility Radeon HD 5870 that offers a minimal improvement over its predecessor, wondering why a mobile chip based on the superbly economical Radeon HD 5850 isn't making the rounds. If NVIDIA can do a 100W TDP mobile part, AMD should be able to do the same. Certainly, trimming Cypress too much has proven in some ways as troublesome as cutting down Fermi: the 5830 sports higher thermals and power draw than the 5850, and the GTX 465 landed on the market with a resounding thud. But desktop parts aren't the same battleground as notebooks, and 5830 or GTX 465 levels of performance in a notebook would be substantially faster than what we currently have.

Really, NVIDIA got to sit at the top of the mobile GPU heap for far too long. It's good to see competition, and we can only hope that there's more to come from both companies. We're still a generation behind in terms of desktop performance; even if both companies are now using up-to-date parts, the final clock speeds are a far cry from desktop GPUs. What we really want is more of a Conroe-style revolution for mobile GPUs, where we get 25-50% more performance without increasing power requirements—or even while reducing them!—over last-generation hardware. Then again, the P4 architecture was so poor that it made Conroe possible.

It's hard to believe there aren't better options for either manufacturer. Was NVIDIA so upset about losing the mobile crown to ATI—even though the margin wasn't that great to begin with—that it was worth curtailing Fermi's performance so brutally? Wouldn't the prudent thing have been to let ATI have their cake for the time being and try to push GF104 into laptops? Or would that just be suggesting NVIDIA do the same thing we're accusing ATI of? Like we said, the GeForce GTX 480M raises more questions than it answers, but all of us armchair engineers have to be wondering why mobile graphics aren't improving faster.

Looking at the big picture, the limiting factor on mobile GPUs is power. Desktop cards keep getting faster, sure, but power requirements are generally increasing as well. ATI's 4870 has higher load power than the 3870, and the 5870 leapfrogs the 4870. Likewise, NVIDIA's GTX 285 needed more power than the 9800 GTX, and the GTX 480 ups the ante. Move over to notebooks, and we hit the power wall hard. The biggest power bricks are still 240W (give or take), so there's no going over that limit, even if you can dissipate the heat. We've had the same 220-240W power adapters at the high end for years, and that doesn't look to be changing. The 480M may have bumped the TDP up to 100W, but our battery life tests show that it draws about the same as the 50W 5870 when it's not under load, and we've had dual-GPU notebooks that use a lot more power than a single 480M. It's not like you're going to load the GPU without plugging in, and at that point it's more a question of whether the cooling is sufficient than how much power you need.

Perhaps a simpler way of stating things is that mobile graphics performance isn't increasing very quickly. AMD likely took the existing GTX 285M as the target and did enough testing and research to make sure the 5870 was about 10% faster. Now NVIDIA has gone and done the same thing to regain the lead. They pushed the power envelope harder, but that's more a factor of the Fermi design constraints right now; give them time for revisions and we'll likely see that drop. Ultimately, process technology refinements and tweaked architectures are the primary means of performance improvement, and 25% faster per year looks to be the goal.

Comments

  • anactoraaron - Thursday, July 8, 2010 - link

    I really don't care one bit anymore about DX9. Please stop putting this in your testing. I doubt anyone else cares about DX9 numbers anymore... I mean why not put in DX8 numbers too???

    And why are DX9 numbers only shown for nVidia products? Are they asking you to do so?
  • anactoraaron - Thursday, July 8, 2010 - link

    it downright tears past the competition in Far Cry 2 and DiRT 2- yeah and people want to NOT play those in DX11... <sigh>
  • anactoraaron - Thursday, July 8, 2010 - link

    And you say that in DIRT 2 it holds a 25% lead- yeah when comparing the DX9 numbers of the 480M to the DX11 numbers of the mobility 5870. The real difference is actually .1 fps (look at the W880CU-DX11 line)... yep I'm not reading the rest of this article. Not a very well written article...

    btw sorry bout the double post earlier...
  • JarredWalton - Thursday, July 8, 2010 - link

    You're not reading the chart properly. We put DX11 in the appropriate lines and colored them differently for a reason. Bright green compares with blue, and dark green compares with purple. The yellow/orange/gold are simply there to show performance at native resolution (with and without DX11).

    In DX9, 480M gets 79.6 vs. 5870 with 59.9. 480M wins by 33%.
    In DX11, 480M gets 60.0 vs. 5870 with 48.1. 480M wins by 25%.

    As for including DX9, it's more a case of using something other than DX11 for cards that don't support DX11, as well as a check to see if DX11 helps or hurts performance. DiRT 2 doesn't have a DX10 path, so we drop to DX9 if we don't have DX11 hardware. Another example: in Metro 2033, enabling DX11 results in a MASSIVE performance hit. So much so that on notebooks it's essentially useless unless you run at 1366x768 with a 480M.
  • Dustin Sklavos - Thursday, July 8, 2010 - link

    While it's swell that you don't care about DX9 anymore, the fact is that a substantial number of games released today STILL use it. DX10 never really took off, and while DX11 is showing strong signs of adoption moving forward, a large number of games still run in DX9 mode.
  • GTVic - Thursday, July 8, 2010 - link

    Is the author an NVIDIA fanboi? Apparently the 5870M is anemic while the 480M is the "fastest mobile GPU on the planet". Of course the more moderate comments are hidden in the details while "fastest on the planet" is screamed in bold letters.

    Never mind that unless you have an FPS counter on your display you couldn't tell the difference, apparently a few extra FPSs and a name that starts with "N" is all you need to get a glowing review complete with stupendous superlatives.

    Also apparently it is OK to dismiss certain games because they are known to favour ATI hardware. But let's not mention anything about cough, Far Cry, cough.
  • JarredWalton - Thursday, July 8, 2010 - link

    I'd love to see what NVIDIA thinks of your comment, because I know they felt Dustin was overly harsh. He's also been accused of being an AMD Fanboi, so apparently he's doing his job properly. ;-)

    The gaming performance is a case of looking at what's required to play a game well, as well as focusing on the big picture. Sure, L4D2 is a Source engine game and traditionally favors AMD architectures. It also happens to run at 62 FPS at 1080p with 4xAA (which is faster than the 58 FPS the 5870 manages at the same settings). Mass Effect 2 performance has changed quite a bit between driver versions on the 5870, and it isn't as intensive as other games. Just because the 5870 leads at 1600x900 in two out of nine titles doesn't mean it's faster. At 1080p the margins generally favor the 480M, and with 4xAA enabled they favor it even more.

    Even with that, we go on to state that the 480M doesn't deliver a resounding victory. It's the world's fastest mobile GPU, yes, but it ends up being 10-15% faster on average, which is hardly revolutionary. I said the same thing about the 5870 in the G73Jh review, and that got an editor's choice while this doesn't. Seriously, read the conclusion (pages 6 and 7) and tell me that we come off as being pro-NVIDIA and anti-AMD in this review.
  • frozentundra123456 - Thursday, July 8, 2010 - link

    I did re-read those parts also. I didn't even notice it myself on the first reading, but I can see how one would see some "ATI bashing" (although I would not use that strong a word), in that the article is about the 480M, but you spend a considerable amount of time criticizing (justifiably) the HD5870M. It just seems that you emphasized the shortcomings of the ATI part in an article primarily about the 480M, while being rather easy on the 480M itself in most sections.
    That said, I don't think you are unfair in general or intentionally, I just think the article was somewhat skewed in that particular section.
    And actually, as you are, I am quite disappointed in both GPUs, but more so in the 480M in that it is more expensive and power hungry for a rather small performance increase.
  • erple2 - Thursday, July 8, 2010 - link

    I disagree. The "bashing" that's done for the 5870M I think sets the tone for how "lame" the 480M ultimately is.

    I found that the bashing of the 5870 really brought to me in perspective just how relatively uninteresting the 480M really is. I mean, if the 5870 was only marginally faster than an "archaic" G92 part, what does that say about NVidia's self-proclaimed ushering in a "revolution" in graphical performance?

    I see it as a giant "thud", much like the GTX 465's thud alluded to on page 5.
  • GTVic - Thursday, July 8, 2010 - link

    As I mentioned, I did see the more moderate comments, what I was trying to get across was that the attention grabbing headline was out of balance with the actual review.

    And if you discount one game for being favoured by ATI, then you should probably mention Far Cry being favoured by NVIDIA. Those types of issues are being highlighted again with recent revelations that NVIDIA is hobbling CPU benchmarks of PhysX performance with unoptimized code.

    One additional comment: it is always difficult to compare graphs with long titles for the different configurations, especially when the colors and the order keep changing.
