(Belatedly) Examining AMD’s Mobility 6000M

Last but not least, we have AMD’s new mobile GPUs. We already discussed NVIDIA’s new 500M lineup, but somehow AMD’s 6000M briefing slipped through the cracks and we didn’t get the details in advance of the Tuesday unveiling. There was a bit of miscommunication: we thought we were being briefed in person today on products that would be announced post-CES, while AMD thought we already had the basic information and would just get some additional detail and hands-on time at the show. Well, that didn’t quite happen. We don’t have the depth of information we had for the 500M, but we did get the important details like shader counts, clock speeds, etc. As with the GeForce 500M launch, the Radeon 6000M series includes some rebranding, but there are some completely new chips as well. Here’s the rundown.

AMD Radeon 6000M Specifications

| | 6900M | 6800M | 6700M/6600M | 6500M | 6400M | 6300M |
|---|---|---|---|---|---|---|
| Target Market | Ultra Enthusiast | Enthusiast | Performance | Performance Thin | Mainstream | Value |
| Stream Processors | 960 | 800 | 480 | 400 | 160 | 80 |
| Transistors | 1.7 Billion | 1.04 Billion | 715M | 626M | 370M | 242M |
| Core Clock (MHz) | 560-680 | 575-675 | 500-725 | 500-650 | 480-800 | 500-750 |
| RAM Clock (MHz) | 900 (3.6GHz) | 900-1000 (3.6-4.0GHz) | 800-900 (3.2-3.6GHz) | 900 (3.6GHz) | 800-900 (3.2-3.6GHz) | 800-900 (1.6-1.8GHz) |
| RAM Type | GDDR5 / DDR3 | GDDR5 / DDR3 | GDDR5 / DDR3 | GDDR5 / DDR3 | GDDR5 / DDR3 | DDR3 |
| Bus Width | 256-bit | 128-bit | 128-bit | 128-bit | 64-bit | 64-bit |
| Compute Performance | ~1.31 TFLOPS | ~1.12 TFLOPS | 696 GFLOPS | 520 GFLOPS | 256 GFLOPS | 120 GFLOPS |
| Bandwidth (GB/s) | 115.2 | 57.6-64 | 51.2-57.6 (GDDR5) or 25.6-28.8 (DDR3) | 57.6 (GDDR5) or 28.8 (DDR3) | 25.6 (GDDR5) or 12.8-14.4 (DDR3) | 12.8-14.4 (DDR3) |
| ROPs | 32 | 16 | 8 | 8 | 4 | 4 |
| UVD Version | UVD3 | UVD2 | UVD3 | UVD2 | UVD3 | UVD2 |
| Eyefinity | Up to 6 | Up to 6 | Up to 6 | Up to 6 | Up to 4 | Up to 4 |
| HDMI 1.4a | Yes | Via Software | Yes | Via Software | Yes | Via Software |
| DisplayPort 1.2 | Yes | No | Yes | No | Yes | No |
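The compute and bandwidth figures in the table follow directly from the other specs. As a quick sanity check, here's the arithmetic for the 6900M (a sketch; the top 680MHz core clock is assumed, and each VLIW stream processor is counted as one multiply-add per clock):

```python
# Sanity-check the 6900M's headline numbers from the table.
shaders = 960      # stream processors
core_mhz = 680     # top of the 560-680MHz core clock range
# Each stream processor does one fused multiply-add (2 FLOPs) per clock.
tflops = shaders * core_mhz * 2 / 1e6
print(f"Compute: ~{tflops:.2f} TFLOPS")    # ~1.31 TFLOPS

bus_bits = 256     # memory interface width
ram_mhz = 900      # GDDR5 base clock; effective data rate is 4x (3.6GHz)
bandwidth = (bus_bits / 8) * (ram_mhz * 4) / 1000  # bytes/transfer x GT/s
print(f"Bandwidth: {bandwidth:.1f} GB/s")  # 115.2 GB/s
```

The same formulas reproduce the rest of the Compute Performance and Bandwidth rows from the shader counts, clocks, and bus widths.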

All of the chips are still on 40nm, but the 6900M, 6700M, and 6400M use new designs based on the Barts architecture. You’ll note that they all include UVD3, HDMI 1.4a, and DisplayPort 1.2. On the rebranding side of things, the 6800M, 6500M, and 6300M are all clock speed bumps of the existing 5000M series, which means they’re still mobile variants of the previous generation Evergreen architecture. AMD has apparently enabled a software “hack” that lets the rebrands do HDMI 1.4a, but they don’t support DP1.2, and they also don’t support Blu-ray 3D. (The HD 6430M also lacks 3D Blu-ray support.)

We’ve previously covered the architectural enhancements in the Barts chips, so we won’t dwell on them much here. Clock for clock, Barts should be slightly faster than the previous generation, it’s more power efficient, and it has a better video processing engine. One thing that sadly isn’t showing up in mobile GPUs just yet is Cayman’s PowerTune technology; we’ll probably have to wait for the next generation of mobile chips to get PowerTune as an option, and we’re hopeful it can do for mobile GPUs what Intel’s Turbo Boost is doing for Sandy Bridge.

As with the NVIDIA hardware, the jury is still out on the performance of the various solutions, but on paper everything looks reasonable. Starting at the bottom we have the 6300M, which looks to be a higher-clocked HD 5470. That’s not going to win many awards for raw computational prowess, but as with NVIDIA’s 410M/520M it provides an inexpensive option that gets AMD’s Catalyst drivers, so until Intel brings its Sandy Bridge IGP drivers up to the same level, we like having alternatives. Of course, we wouldn’t want switchable graphics with something as slow as the 6300M, as the goal should be noticeably better performance. The new 6400M should handle that role nicely. Sporting twice as many stream processors, the 6400M should offer a marked improvement over the 6300M/HD 5470. Any configurations that get GDDR5 should reach the point where the GPU core is the sole limiting factor on performance, and while we’re not too fond of the 64-bit interface, it should still be a good match for this “mainstream” offering.

Moving up to the next tier, we have the 6500M replacing the HD 5650, with the 6700M using the new architecture. The previous generation HD 5650 at 550MHz generally outperforms the NVIDIA GT 425M, so increasing the bandwidth and clock speeds (i.e. 6500M) should keep the series competitive with (or ahead of) the 525M/535M. The 6700M takes things a step further with 20% more stream processors, and provided the manufacturer uses GDDR5 you’ll get more than enough bandwidth—the 57.6GB/s figure makes the typical DDR3 configurations look archaic, but we worry there will be plenty of slower/cheaper DDR3 models on the market.

Finally, at the top we have the enthusiast and ultra-enthusiast offerings. The 6800M is once more a higher-clocked version of the existing HD 5850/5870. The 6900M is the potentially killer product. Total compute performance is up 17%, which is nothing special, but the memory interface is specced at 256-bit and 900MHz, yielding a whopping 115.2GB/s of bandwidth. We’ve seen quite a few games in the past where memory bandwidth appears to be a limiting factor, and the 6900M addresses this in a big way. Bandwidth is 80% higher than the previous generation 5870 and the 6800M, and it’s also 20% higher than what NVIDIA is offering with the GTX 485M. Of course, if the games/applications you’re running aren’t bandwidth limited, all that extra headroom might go to waste.
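Those percentages check out against the spec table. A quick sketch of the comparison (the GTX 485M figure is NVIDIA's published 96GB/s, i.e. 3.0Gbps GDDR5 on a 256-bit bus; the 6800M is taken at its top 1000MHz GDDR5 configuration):

```python
# Compare the 6900M's memory bandwidth against its closest rivals.
hd6900m = 115.2      # GB/s, from the spec table
hd6800m_top = 64.0   # GB/s, 1000MHz GDDR5 on a 128-bit bus
gtx485m = 96.0       # GB/s, NVIDIA's published figure

print(f"vs 6800M:    +{hd6900m / hd6800m_top * 100 - 100:.0f}%")  # +80%
print(f"vs GTX 485M: +{hd6900m / gtx485m * 100 - 100:.0f}%")      # +20%
```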

As we stated in the NVIDIA 500M announcement, NVIDIA has a very compelling platform in Optimus Technology, which works seamlessly with integrated graphics to deliver performance or power savings as appropriate. Okay, so there are occasional bugs to work out with Optimus, but I’d put it at roughly the same level of teething pain as current SLI support. Since NVIDIA lets you create custom profiles—for SLI as well as Optimus—most of the time things work out fine. The alternatives both involve compromises: a lack of regular driver updates in the case of switchable graphics, and lowered battery life with discrete-only.

AMD did inform us that they’re working on some updates to their switchable graphics design, which will involve putting a driver between the OS and the IGP/GPU drivers. They say it will allow users to update Intel’s IGP drivers separately from the AMD GPU drivers, that it will address the concerns we’ve mentioned here, and that it will provide some needed competition for Optimus. When exactly will this new technology arrive, and how will it work? That remains to be seen.

While I still think a good Optimus-enabled GPU with a quad-core Sandy Bridge processor is the best option for a balanced notebook, we need to see what AMD can do in terms of performance and battery life. Idle GPU power draw has been getting better with each generation, and we might not have to give up too much battery life. Certainly it’s less complex to only deal with a single GPU inside a system. There will also be plenty of AMD IGP + GPU designs that can use switchable graphics with AMD drivers, and since both sets of hardware use the same driver you don’t have to worry about lack of support. With Llano APUs later this year, we should see such configurations, but it’s hard to imagine Llano keeping up with Sandy Bridge on the CPU side. That means Trinity in 2012 will be the real alternative to the current “fast CPU + fast GPU + IGP” ecosystem NVIDIA and Intel are pushing.

Wrapping things up, there are a lot of laptops at CES using Brazos, plenty of AMD and Intel CPUs paired with AMD 6000M GPUs, and of course the Intel CPU + NVIDIA GPU combinations we mentioned earlier in the week. The mobile market just keeps growing, and we look forward to seeing how these new NVIDIA and AMD GPUs stack up. The proof will be in the pudding as usual.

More AMD Demos and Future Roadmap

72 Comments

  • Edgar_Wibeau - Friday, January 7, 2011 - link

    The original plan at the beginning of 2010 was:
    - Llano first
    - Ontario second
    - Bulldozer third

    Then further problems emerged with Llano: yield problems according to AMD. Some suspect the GPU part is having yield problems in 32nm SOI/HKMG, which is a completely new process tech for a GPU.

    So on the analyst CC in November (even before that, IIRC) the current (unofficial) plan was communicated:
    - Ontario first
    - Bulldozer starting from April
    - Llano in Q3

    Some sources now claim Llano has been pulled forward to June, but that's very uncertain as of now.

    There are more mistakes in the article, like the claim that Ontario is manufactured on a 32nm process; that process (ULP bulk CMOS) doesn't exist at either TSMC or GloFo. Both were cancelled in favour of 28nm.

    Maybe Anand should hire an AMD spinner for a change.
    http://www.techeye.net/chips/top-intel-spinner-tip...
  • Edgar_Wibeau - Friday, January 7, 2011 - link

    http://www.planet3dnow.de/photoplog/index.php?n=12...
  • Edgar_Wibeau - Friday, January 7, 2011 - link

    http://www.planet3dnow.de/photoplog/index.php?n=12...

    Unofficial, of course, and they could be fakes.
  • spigzone - Friday, January 7, 2011 - link

    Charlie said in an article on SemiAccurate a couple of weeks ago that GloFo's latest Llano respin suddenly came up roses; apparently everything fell into place and they suddenly had production-ready yields. That may have resulted in Llano being bumped up ahead of Bulldozer.
  • JarredWalton - Friday, January 7, 2011 - link

    Okay, so after meeting with AMD yet again today I asked for clarification. Sorry for the misinformation above, but Bulldozer and Llano are both supposed to come out Q2 apparently. I was told they should launch within ~1 month of each other.
  • sirmo - Friday, January 7, 2011 - link

    That's good news. Thanks for the clarification.
  • GeorgeH - Friday, January 7, 2011 - link

    "First it was getting below 1 micron, but we’ve long since smashed that barrier and are moving steadily towards the 1nm mark. How small can we go?"

    Well, the radius of a single atom is ~0.1nm (depending on how you define radius.) I'd say that's a pretty solid floor on feature size. :)
  • HibyPrime1 - Friday, January 7, 2011 - link

    Then they need to get working on making individual electrons into transistors.
  • marraco - Friday, January 7, 2011 - link

    No. Miniaturization is a dead end.

    Below 1nm we are into the picometer scale, where quantum forces completely change the rules.

    The answer is polinary transistors, which tap picometer-scale capabilities by working with many atoms in coordination, instead of just reducing the number of atoms in the same transistor.

    We need to use the same atoms in different transistors, and simultaneously. That way we would increase [logical] transistor density without [non-existent] smaller atoms or stacking layers in 3D.
  • marraco - Friday, January 7, 2011 - link

    Here is the first polinary transistor:

    http://www.physorg.com/news/2010-10-triple-mode-tr...

    It is capable of switching between 3 states instead of the 2 states of traditional transistors.

    Once we achieve a four-state transistor, it will be able to do the work of 2 transistors in the space of one, effectively doubling density. It is the future of Moore's law, but it requires a deeper understanding of quantum forces.
