
CES has wrapped up now and we're all back home, but we've still got a few items to cover. While Anand was meeting with AMD on Thursday to go over some of their other tech, I got a chance to head into a separate area for a briefing on their mobile technology. There are two items I want to quickly discuss: the Radeon 7000M lineup, and Trinity.

We’ve already covered some of the 7000M parts that will ship in the very near future—quite a few laptops at CES were running these “new” GPUs. The 7400M, 7500M, and 7600M are all VLIW5 parts, which we’ve called “rebadged” GPUs. AMD pointed out (and we agree at least in part) that calling these rebadged GPUs is a bit too harsh—reused or recycled might be a better term, particularly if you’re into the whole “green” thing. But seriously, the latest 7000M GPUs aren’t just a straight rerelease of the same silicon under a new name; we’ve pointed out in the past that as time passes, companies become more familiar with a process technology, both on the fab side and on the chip designer side. This is why, for example, initial 40nm GPUs don’t offer all the performance and features of later 40nm GPUs. AMD didn’t specifically state that the 7400M/7500M/7600M will use a new revision/spin of the existing Northern Islands cores, but it was at least implied, and it’s likely that we’ll see slightly better performance and power characteristics out of the latest batch. You can see the released specs and expected performance in the gallery below.

Okay, that’s part one of the 7000M strategy: reuse the existing Northern Islands family to occupy the value and mainstream price segments. For the second part of the strategy, we don’t have any hard specs to reveal, but AMD did let us know one important piece of information. They have in essence drawn a line in the sand (i.e., in their product portfolio): everything 7600M and below will reuse their existing 40nm VLIW5 architecture, while all of the yet-to-be-announced parts above the 7600M (7700M/7800M/7900M) will switch to 28nm GCN (Graphics Core Next). It sounds like the mobile GPUs will use lower-power variants of “Pitcairn” and “Cape Verde”, leaving “Tahiti” as a desktop-only GPU for the time being, but features like DX11.1, VCE, and ZeroCore Power Technology will be present in the higher performance 7000M parts when they launch. And just when will that be? AMD wouldn’t give us a date, but all indications are we’ll see the 7000M Southern Islands GPUs in the April/May timeframe.

AMD had a couple of laptops (okay, monstrous DTR beasts really) running in their booth to show that they had working silicon for both classes of 7000M hardware. Both notebooks are Clevo X7200 units using desktop CPUs, so they’re more proof of concept than something that most people are going to buy, but they were both happily running 3D applications. The notebook on the left has a single HD 7690M running a custom AMD demo, while the notebook on the right has CrossFire high-end 7000M hardware (presumably something in the HD 7900M class) running Aliens Vs. Predator 2.

Finally, a few people continue to ask questions about Trinity hardware. Obviously Trinity was running in a couple of demonstrations, but AMD is not yet disclosing the full hardware specs. Some have speculated that Trinity will have a GCN-based graphics core, but if you stop to think about it for a minute, that’s obviously not going to happen. GCN is coming out on TSMC’s 28nm process technology while Trinity will use GLOBALFOUNDRIES’ 32nm process; with AMD already working on the improved Bulldozer cores in Trinity along with upgrading the GPU, trying to bring GCN into the mix would seriously delay the whole process. The short story is that AMD (again) confirmed that Trinity is using a VLIW4 core for graphics, and it offers enhanced performance relative to the core in Llano. We’ll hopefully have final hardware in hand in the next few months to provide the full performance analysis.


16 Comments


  • DanNeely - Monday, January 16, 2012 - link

    A PCIe 2.0 x4 equivalent is consistently fast enough for gaming. An x1 is not (the ~30% average slowdown consists of some games taking no penalty and others suffering as much as a 75% drop in FPS). Unfortunately TechPowerUp didn't test x2 bandwidth; I'm guessing they only did physical slot testing instead of the taped-contacts method needed to get an effective x2 slot in a desktop.

    http://www.techpowerup.com/reviews/AMD/HD_5870_PCI...
    Reply
  • JarredWalton - Monday, January 16, 2012 - link

    Keep in mind that article is over two years old; many of those titles are a lot less demanding than current generation games, and I'm not sure if any of them support DX11 features. That said, HD 5870 on desktop is still faster than everything short of SLI/CF configurations on laptops. I'd have to see testing done with things like GTS 430 and HD 6770 to get a better feel for what the performance loss due to limited PCIe bandwidth will be on recent titles. Maybe it's something to investigate when I get time. :-) Reply
  • DanNeely - Monday, January 16, 2012 - link

    The results have remained fairly consistent since the first Tom's Hardware test I saw, using IIRC PCIe 1.0 and an 8800 series card; meaning the 8800 with 1.0 x1/x4 slots took similar hits to the 5870 with 2.0 x1/x4 slots. Without taking an inordinate amount of time to bench a dozen-plus modern games I can't categorically say it won't have any effect, but I'd be shocked if it turned out to do so. Reply
  • tipoo - Sunday, January 15, 2012 - link

    Strategy aside I still wish they reserved new main numbers for the new generation of graphics cores. There was a time when card names made sense and refreshes using the same architecture would be ***50 parts. Now no one can tell what generation card it is without looking it up, and mobile cards are even worse. Reply
  • eanazag - Monday, January 16, 2012 - link

    I understand they may perform better than originally released products due to maturity, but it is not really a new GPU. I think they should have just opted to throw an R2 on the end so people have a sensible way to tell the difference.
    I find it interesting they used an Intel system in addition to the AMD for examples, too (in the slides). I think throwing in a few more games would have been better; like 10 total.
    Reply
  • DanNeely - Monday, January 16, 2012 - link

    The problem is OEM sales/marketing types. Once the first 7xxx series parts are out, all 6xxx series parts are obsolete by definition and (they think) Joe Luser will decide any computer with them inside is outdated crap. A half dozen or a dozen years ago, when a GPU was simple enough that a top-to-bottom redesign was possible every generation (and even when not, adding an additional increment of hardware video decode was possible with the die shrink), this didn't matter.

    Designing new chips isn't going to get easier as time passes by; and mass educating lusers is an even more impossible challenge so the current state of affairs is something we geeks are just going to have to live with.
    Reply
