(Belatedly) Examining AMD’s Mobility 6000M

Last but not least, we have AMD’s new mobile GPUs. We already discussed NVIDIA’s new 500M lineup, but somehow things slipped through the cracks and we didn’t get briefed on AMD’s 6000M lineup in advance of the Tuesday unveiling. There was a bit of miscommunication between us and AMD: we thought we were being briefed in person today on products that would be announced post-CES, while AMD thought we already had the basic information and would just get some additional detail and hands-on time at the show. Well, that didn’t quite happen. We don’t have the depth of information available that we did with the 500M, but we did get the important details like shader counts, clock speeds, etc. As with the GeForce 500M launch, the Radeon 6000M involves some rebranding, but there are some completely new chips as well. Here’s the rundown.

AMD Radeon 6000M Specifications

|                     | 6900M            | 6800M                 | 6700M/6600M                          | 6500M                     | 6400M                           | 6300M                |
|---------------------|------------------|-----------------------|--------------------------------------|---------------------------|---------------------------------|----------------------|
| Target Market       | Ultra Enthusiast | Enthusiast            | Performance                          | Performance Thin          | Mainstream                      | Value                |
| Stream Processors   | 960              | 800                   | 480                                  | 400                       | 160                             | 80                   |
| Transistors         | 1.7 Billion      | 1.04 Billion          | 715M                                 | 626M                      | 370M                            | 242M                 |
| Core Clock (MHz)    | 560-680          | 575-675               | 500-725                              | 500-650                   | 480-800                         | 500-750              |
| RAM Clock (MHz)     | 900 (3.6GHz)     | 900-1000 (3.6-4.0GHz) | 800-900 (3.2-3.6GHz)                 | 900 (3.6GHz)              | 800-900 (3.2-3.6GHz)            | 800-900 (1.6-1.8GHz) |
| RAM Type            | GDDR5 / DDR3     | GDDR5 / DDR3          | GDDR5 / DDR3                         | GDDR5 / DDR3              | GDDR5 / DDR3                    | DDR3                 |
| Bus Width           | 256-bit          | 128-bit               | 128-bit                              | 128-bit                   | 64-bit                          | 64-bit               |
| Compute Performance | ~1.31 TFLOPS     | ~1.12 TFLOPS          | 696 GFLOPS                           | 520 GFLOPS                | 256 GFLOPS                      | 120 GFLOPS           |
| Bandwidth (GB/s)    | 115.2            | 57.6-64               | 51.2-57.6 (GDDR5) / 25.6-28.8 (DDR3) | 57.6 (GDDR5) / 28.8 (DDR3)| 25.6 (GDDR5) / 12.8-14.4 (DDR3) | 12.8-14.4 (DDR3)     |
| ROPs                | 32               | 16                    | 8                                    | 8                         | 4                               | 4                    |
| UVD Version         | UVD3             | UVD2                  | UVD3                                 | UVD2                      | UVD3                            | UVD2                 |
| Eyefinity           | Up to 6          | Up to 6               | Up to 6                              | Up to 6                   | Up to 4                         | Up to 4              |
| HDMI 1.4a           | Yes              | Via Software          | Yes                                  | Via Software              | Yes                             | Via Software         |
| DisplayPort 1.2     | Yes              | No                    | Yes                                  | No                        | Yes                             | No                   |
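
For reference, the compute and bandwidth figures in the table fall straight out of the other rows. Here’s a minimal Python sketch of the math; the helper names are ours, and we’re assuming AMD quotes peak numbers at the top of each clock range (which checks out for every part except the 6800M, whose ~1.12 TFLOPS implies roughly a 700MHz core clock):

```python
# Peak figures for AMD's VLIW5 mobile GPUs, derived from the table above.
# Each stream processor executes one multiply-add (2 FLOPs) per clock, and
# the "effective" memory rate already includes the DDR/QDR multiplier.

def peak_gflops(stream_processors, core_clock_mhz):
    """Peak single-precision GFLOPS: SPs x 2 FLOPs x clock."""
    return stream_processors * 2 * core_clock_mhz / 1000

def bandwidth_gb_s(effective_rate_gtps, bus_width_bits):
    """Theoretical bandwidth in GB/s: transfer rate x bus width in bytes."""
    return effective_rate_gtps * bus_width_bits / 8

print(peak_gflops(960, 680))      # 1305.6 -> the 6900M's ~1.31 TFLOPS
print(peak_gflops(480, 725))      # 696.0  -> the 6700M's 696 GFLOPS
print(bandwidth_gb_s(3.6, 256))   # 115.2  -> 6900M: 3.6GHz GDDR5, 256-bit
print(bandwidth_gb_s(1.8, 64))    # 14.4   -> 6300M: 1.8GHz DDR3, 64-bit
```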

All of the chips are still on 40nm, but the 6900M, 6700M, and 6400M use new designs based on the Barts architecture. You’ll note that they all include UVD3, HDMI 1.4a, and DisplayPort 1.2. On the rebranding side of things, the 6800M, 6500M, and 6300M are all clock speed bumps of the existing 5000M series, which means they’re still mobile variants of the previous generation Evergreen family. AMD has apparently enabled a software “hack” that lets the rebrands do HDMI 1.4a, but they don’t support DP1.2, and they also don’t support Blu-ray 3D. (The HD 6430M also lacks 3D Blu-ray support.) We’ve previously covered the architectural enhancements in the Barts chips, so we won’t dwell on them much here. Clock for clock, Barts should be slightly faster than the previous generation, it’s more power efficient, and it has a better video processing engine. One thing that sadly isn’t showing up in mobile GPUs just yet is Cayman’s PowerTune technology; we’ll probably have to wait for the next generation of mobile chips to get PowerTune as an option, and we’re hopeful it can do for mobile GPUs what Intel’s Turbo Boost is doing for Sandy Bridge.

As with the NVIDIA hardware, the jury is still out on the performance of the various solutions, but on paper everything looks reasonable. Starting at the bottom we have the 6300M, which looks to be a higher clocked HD 5470. That’s not going to win many awards for raw computational prowess, but as with NVIDIA’s 410M/520M it provides an inexpensive option backed by AMD’s Catalyst drivers, and until Intel gets its Sandy Bridge IGP drivers to the same level, we like having alternatives. Of course, we wouldn’t want switchable graphics with something as slow as the 6300M, as the goal should be noticeably better performance. The new 6400M should handle that role nicely: sporting twice as many stream processors, it should offer a marked improvement over the 6300M/HD 5470. Any configurations that get GDDR5 should reach the point where the GPU core is the sole limiting factor on performance, and while we’re not too fond of the 64-bit interface, it should still be a good match for this “mainstream” offering.
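
A quick sanity check on that claim using the table’s peak numbers:

```python
# 6400M vs. 6300M, peak figures from the table.
print(256 / 120)     # ~2.13x the compute (160 SPs at 800MHz vs. 80 at 750MHz)
print(25.6 / 14.4)   # ~1.78x the bandwidth, if the 6400M gets GDDR5
```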

Moving up to the next tier, we have the 6500M replacing the HD 5650, with the 6700M using the new architecture. The previous generation HD 5650 at 550MHz generally outperforms NVIDIA’s GT 425M, so the increased bandwidth and clock speeds of the 6500M should keep the series competitive with (or ahead of) the 525M/535M. The 6700M takes things a step further with 20% more stream processors, and provided the manufacturer uses GDDR5 you’ll get more than enough bandwidth; the 57.6GB/s figure makes typical DDR3 configurations look archaic, though we worry there will be plenty of slower/cheaper DDR3 models on the market.
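
To put that GDDR5/DDR3 gap in numbers, here’s a quick back-of-the-envelope check using the same formulas as the sketch above:

```python
# The DDR3 penalty on a 128-bit 6700M (figures from the table).
gddr5 = 3.6 * 128 / 8   # 57.6 GB/s at 3.6GHz effective
ddr3  = 1.8 * 128 / 8   # 28.8 GB/s at 1.8GHz effective
print(gddr5 / ddr3)     # 2.0 -- a DDR3 model gives up half its bandwidth

print(480 / 400)        # 1.2 -- the 6700M's "20% more stream processors"
```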

Finally, at the top we have the enthusiast and ultra-enthusiast offerings. The 6800M is once more a higher clocked version of the existing HD 5850/5870. The 6900M is the potentially killer product. Total compute performance is up 17%, which is nothing special, but the memory interface is specced at 256-bit and 900MHz, yielding a whopping 115.2GB/s of bandwidth. We’ve seen quite a few games in the past where memory bandwidth appears to be a limiting factor, and the 6900M addresses this in a big way. Bandwidth is 80% higher than the previous generation 5870 and the 6800M, and it’s also 20% higher than what NVIDIA is offering with the GTX 485M. Of course, if the games/applications you’re running aren’t bandwidth limited, all that extra headroom might go to waste.
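
Those percentages come straight from the table; here’s the quick arithmetic (the GTX 485M’s 96GB/s is our number from NVIDIA’s published specs, not something AMD provided):

```python
# 6900M vs. its neighbors, using the table's peak numbers.
print(1.31 / 1.12)   # ~1.17 -> compute up ~17% over the 6800M
print(115.2 / 64)    # 1.8   -> 80% more bandwidth than a GDDR5 6800M/5870
print(115.2 / 96)    # 1.2   -> 20% more than the GTX 485M (assumed 96GB/s)
```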

As we stated in the NVIDIA 500M announcement, NVIDIA has a very compelling platform in Optimus Technology, which lets their GPUs work seamlessly with integrated graphics and deliver either performance or power savings as appropriate. Okay, there are occasional bugs to work out with Optimus, but I’d put it at roughly the same level of teething pain as current SLI support. Since NVIDIA lets you create custom profiles (for SLI as well as Optimus), most of the time things work out fine. The alternatives both involve compromises: a lack of regular driver updates in the case of switchable graphics, and lowered battery life with discrete-only.

AMD did inform us that they’re working on some updates to their switchable graphics design, which will involve putting a driver between the OS and the IGP/GPU drivers. They say it will allow users to update drivers for Intel’s IGP separately from the AMD GPU, which would address the concerns we’ve mentioned here and provide some needed competition for Optimus. When exactly this new technology will arrive and how it will work remains to be seen.
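
AMD hasn’t said how the new layer will actually be implemented, so purely as an illustration of the idea, here’s a toy sketch of a mux driver sitting between the OS and two independently updatable vendor drivers. Every name here is hypothetical; this is not AMD’s design:

```python
# Illustrative only: a thin mux layer between the OS and two vendor drivers,
# so either underlying driver can be updated without touching the other.

class VendorDriver:
    def __init__(self, name):
        self.name = name

    def submit(self, work):
        print(f"{self.name} driver handling: {work}")

class MuxGraphicsDriver:
    """The OS-facing layer; routes work to whichever GPU is appropriate."""

    def __init__(self, igp, dgpu):
        self.igp, self.dgpu = igp, dgpu
        self.active = igp                 # default to the low-power IGP

    def on_app_launch(self, needs_3d):
        # Demanding apps go to the discrete GPU; the rest stay on the IGP.
        self.active = self.dgpu if needs_3d else self.igp

    def submit(self, work):
        self.active.submit(work)

mux = MuxGraphicsDriver(VendorDriver("Intel IGP"), VendorDriver("AMD GPU"))
mux.on_app_launch(needs_3d=True)
mux.submit("render frame")   # routed to the AMD GPU driver
```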

While I still think a good Optimus-enabled GPU with a quad-core Sandy Bridge processor is the best option for a balanced notebook, we need to see what AMD can do in terms of performance and battery life. Idle GPU power draw has been getting better with each generation, so we might not have to give up too much battery life. It’s certainly less complex to deal with a single GPU inside a system. There will also be plenty of AMD IGP + GPU designs that can use switchable graphics with AMD drivers, and since both sets of hardware use the same driver you don’t have to worry about a lack of support. We should see such configurations with the Llano APUs later this year, but it’s hard to imagine Llano keeping up with Sandy Bridge on the CPU side. That means Trinity in 2012 will be the real alternative to the current “fast CPU + fast GPU + IGP” ecosystem NVIDIA and Intel are pushing.

Wrapping things up, there are a lot of laptops at CES using Brazos, plenty of AMD and Intel CPUs paired with AMD 6000M GPUs, and of course the Intel CPU + NVIDIA GPU combinations we mentioned earlier in the week. The mobile market just keeps growing, and we look forward to seeing how these new NVIDIA and AMD GPUs stack up. The proof will be in the pudding as usual.

Comments

  • Christobevii3 - Monday, January 10, 2011 - link

    I've run an Asus 790FX AM2+ board with SB600 for 3 years now. I can still update the BIOS and run a 6-core Phenom on it. It started with a 5000+ X2. To still be able to upgrade an Intel system today you'd have to have waited for the X58, which is nearing 2 years old but is pretty much dying now.

    With Core 2 you had two buses unless you waited for the P35 or bought the expensive X48 board, and again you probably got at most 2 years out of them.

    I'll wait and see what Bulldozer brings instead of Intel's Sandy Bridge serving up $170 mobos that only have x8/x8 PCIe configs.
  • azguy90 - Wednesday, January 12, 2011 - link

    Bulldozer isn't any better for upgrading. Bulldozer CPUs are going to require an AM3+ motherboard, so your upgrade path is about the same as Sandy Bridge. You will be able to put older chips in the new boards, but not new chips in old boards.
  • nuudles - Friday, January 7, 2011 - link

    Bulldozer should have been released by now; AMD itself showed how much it hurts to be late to the party when it grabbed a 6 month head start in DX11 and completely dominated DX11 marketshare as a result. Unfortunately it is much harder keeping up with Intel than with NVIDIA...

    Hopefully Llano is a home run for them, as the mobile space is more lucrative than desktops. Unfortunately they are even further behind with their mobile CPUs than with their desktop ones. Hopefully power gating will show big battery life gains, and the GPU will be strong enough to make anything short of top end discrete mobile cards obsolete (to make up for the CPU performance gap between a tweaked Athlon II and Sandy Bridge).

    Another beef: they should have had something to compete with Optimus by now; they must have had 6 months to build support into the 6000 series! Either that, or get their cards to draw ridiculously little power at idle by power gating all but one ALU or something like that (if that is even possible).
  • medi01 - Friday, January 7, 2011 - link

    It wasn't just that NVIDIA was 6 months late; it's that on top of being 6 months late, they created power hungry monsters with so-so performance.

    If Bulldozer does perform well, the delay won't matter much. If it doesn't, ouch; AMD (unlike NVIDIA, who can simply punch editors harder for better reviews comparing stock clocked GPUs to cherry picked overclocked ones, cough) can't really afford that.
  • shtldr - Friday, January 7, 2011 - link

    "Another beef is they should have had something to compete with optimus by now"

    I've had it for like half a year. It's called switchable graphics.

    Acer 3820TG, switchable between HD 5650 and i5 integrated graphics.
  • nuudles - Friday, January 7, 2011 - link

    Yes, it is switchable, but not dynamic like Optimus.
  • YukaKun - Friday, January 7, 2011 - link

    Asus K40AB line bro. 2009 and still kicking besides Optimus.

    AMD just fails hard at marketing and positioning their products sometimes =/

    Cheers!
  • nuudles - Friday, January 7, 2011 - link

    Luckily for AMD, we are at a point where even a low to midrange CPU is fast enough for most applications; very few consumers would notice a performance difference between something like an Athlon II X4 and an i5 2400 in most of their programs.

    Even in games the difference is not huge, so if they hit the right performance level with the integrated GPU and let the GPU and CPU work well alongside each other (OpenCL on the GPU for big gains in highly parallel workloads), they might just have a winner.

    I would much rather get Athlon II X4 or Phenom II X4 CPU performance together with HD 5670/5750 GPU performance than an i7 980 with an HD 5450/5570.
  • sirmo - Friday, January 7, 2011 - link

    """Right now, GlobalFoundries is entering full production mode for 32nm, with AMD’s Llano chips scheduled to be the first market solution to use the process. Later this year, AMD will also launch their Bulldozer cores on the 32nm process."""

    Everything we've seen so far said Bulldozer was going to be the first 32nm product in Q2 and Llano was going to come later. Why the sudden change?
  • JarredWalton - Friday, January 7, 2011 - link

    Maybe in the past Bulldozer was coming first, but everything I've heard for the past few months has been: Brazos first, Llano second, Bulldozer third. Probably using the K10.5 cores with an HD 5600-class GPU made Llano easier to get out on 32nm than a completely new architecture. What really worries me is that there were some hidden undertones in conversations that make me think desktop Bulldozer may not get out in force until 2012. I really hope I'm imagining things and they get the chips out more like Q3'11.
