GCN HD 7000M: Key Features and Technologies

With the chips themselves out of the way, let’s discuss some of the other features. The three key items AMD mentions in the above slide are GCN, AMD Enduro Technology, and AMD App Acceleration. The first we’ve already covered, and there’s not much new to say with regard to AMD’s App Acceleration—there are apparently 200+ GPU-accelerated applications. The second item sounds far more interesting for the mobile world, though, so let’s dig a little deeper into Enduro and AMD’s power technologies in general. The following gallery contains all the pertinent slides for this discussion.

Like the desktop Southern Islands parts, all of the higher-end mobile variants will have power gating and ZeroCore technology. That means that idle power draw should be a step down from where we’ve seen it on previous mobile GPUs, and for CrossFire configurations it means that the secondary GPU can be completely shut down when it’s not in use. As for Enduro, AMD informed us that this is the latest iteration of their dynamic switchable graphics technology. We asked for additional details, but AMD didn’t really have anything more to add to the discussion, so it could be that Enduro functions exactly like dynamic switchable graphics on the previous-generation 6000M parts. And despite the above slide showing an AMD APU and GPU, Enduro will also work with Intel CPUs like Sandy Bridge and Ivy Bridge.

There are two problems that AMD didn’t really have an answer for in our conversation: first, the user interface for dynamic switchable graphics was pretty weak the last time we looked at it. We’re not sure if it’s any better six months later, but let’s hope so. The second is the real concern: until we can get AMD driver updates separate from Intel driver updates, it’s our opinion that Enduro won’t be particularly useful on Intel gaming platforms. With HD 7970M being such a potent chip, it would certainly be CPU-limited on AMD’s Llano APU, and unless Trinity really manages to improve on Bulldozer performance, it will likely also pose a CPU bottleneck for HD 7970M (never mind CrossFire configurations). So, once again we informed AMD that we really need an answer to the driver updates dilemma for switchable graphics laptops, and they need to get laptop OEMs (e.g. Sony and HP) to allow users to download AMD reference drivers.

Of course, if ZeroCore technology and power gating work well enough, all this discussion of switchable graphics may be moot: imagine a laptop with a discrete GPU that idles at the same power requirements as an IGP; why would you even want switchable graphics if you don’t need it to save power? We’ll have to wait for hardware to see how the 7700M and above fare in terms of idle and low-load power draw, but we could end up pleasantly surprised. I’ve stated in the past that the holy grail for laptop GPUs at this point is to use as little power as IGPs when there’s nothing complex happening, and ZeroCore and power gating could actually deliver on that goal.

One final feature that was mostly glossed over in the slides is VCE support—AMD’s Video Codec Engine, which we have yet to see demonstrated. It’s still present on these mobile parts, and on paper VCE is a competitor to Intel’s Quick Sync technology. VCE was originally discussed back in December when AMD launched HD 7970, and we thought we’d see some software make use of the feature by this point. I even went so far as to flat out ask AMD if the VCE hardware is broken in Southern Islands, as it’s been over four months now. Their response: “VCE is anything BUT broken – we’ll have lots more on it shortly - Stay tuned.” And that we will, as I’d love to see AMD offer a more flexible alternative to Quick Sync.

Another power-related technology making its first appearance on the mobile AMD chips is PowerTune. In the past, GPUs were designed with very specific TDP targets, and clock speeds had to be selected so that power (and heat) stayed within the allotted range. With so-called “power viruses” like OCCT and FurMark, users started encountering issues with GPUs exceeding those limits, and the results ranged from crashing to outright hardware failure. NVIDIA and AMD both responded initially with attempts to detect such applications and adjust clocks accordingly, but that’s a crude approach and it won’t always work with newer programs. PowerTune is a hardware solution to the problem, with intelligent logic in the chips that determines the current load and how stressful an application is—so it’s not just a thermal diode. PowerTune looks at environmental factors such as temperature along with its internal logic to determine what the workload is and set clocks appropriately. The end result is that performance can be maximized for applications that aren’t as strenuous, allowing performance improvements of up to 10% in some cases, all without exceeding the TDP. This is a particularly good fit for notebooks, and ever since AMD first introduced PowerTune with the HD 6970 I’ve been waiting for it to arrive in mobile chips.
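AMD hasn’t published PowerTune’s actual algorithm, so the following Python sketch is purely illustrative: the power model, the 100W TDP figure, and the clock range are all invented numbers. What it captures is the basic feedback loop described above—raise clocks while the estimated power fits the budget, back off when it doesn’t—which is how lighter workloads end up with extra performance inside the same TDP.

```python
# Illustrative sketch of a PowerTune-style power-capped clock controller.
# The activity model, TDP, and clock values below are invented for
# demonstration only; they do not reflect AMD's real implementation.

TDP_W = 100.0          # hypothetical mobile GPU power budget
F_MAX_MHZ = 850        # hypothetical boost clock
F_MIN_MHZ = 500        # hypothetical floor clock
STEP_MHZ = 25          # clock adjustment granularity

def estimated_power(clock_mhz, activity):
    """Toy model: static power plus dynamic power that scales with
    clock speed and unit activity (0.0 = idle, 1.0 = power virus)."""
    static_w = 15.0
    dynamic_w = 0.12 * clock_mhz * activity
    return static_w + dynamic_w

def next_clock(clock_mhz, activity):
    """Cut the clock when over budget; raise it when there's headroom."""
    if estimated_power(clock_mhz, activity) > TDP_W and clock_mhz > F_MIN_MHZ:
        return clock_mhz - STEP_MHZ
    if (estimated_power(clock_mhz + STEP_MHZ, activity) <= TDP_W
            and clock_mhz < F_MAX_MHZ):
        return clock_mhz + STEP_MHZ
    return clock_mhz

# A moderately stressful workload climbs to the top clock; a worst-case
# workload is forced down until it fits inside the same TDP.
clock = 700
for _ in range(20):
    clock = next_clock(clock, activity=0.6)   # typical game
light_clock = clock
for _ in range(20):
    clock = next_clock(clock, activity=1.0)   # "power virus"
heavy_clock = clock
```

In this toy model the moderate workload settles at the full 850MHz while the worst-case workload is throttled to 700MHz, both under the same 100W cap—the same tradeoff PowerTune makes, just with made-up numbers.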

There’s not much else to say at this point, other than mobile GCN laptops look promising. The first announced notebook with a GCN GPU is Alienware’s M17x, which can now be configured with the new HD 7970M starting at $1900. We’re working to get the revised M17x in for review so we can see for ourselves just how potent HD 7970M is.

AMD Launches Radeon Mobility 7700M, 7800M, and 7900M GPUs

  • shaw - Wednesday, April 25, 2012 - link

    These charts always crack me up and I laugh. <AMD Chart>We are more better x2! They are less better x3! Do the math!</AMD Chart>

    It's like, with consoles the bit wars tag line has died out, but its PC equivalent has never stopped.
  • JarredWalton - Wednesday, April 25, 2012 - link

    Except the charts clearly show the 0.8x to 1.6x faster labels, so the only people who have problems are those who don't know how to read a graph. Anyone that glances at a graph and thinks, "Wow, the red bar is four times as big as the green bar!" without actually looking at what the bars mean deserves exactly what they get.
  • erple2 - Wednesday, April 25, 2012 - link

    Now, Jarred.

    Graphing 101 tells us to make clear graphs. Marking the axes the way these are marked is clearly done strictly as marketing - it "cheapens" the graph completely by not having a common datum.

    The graph is supposed to convey 2 pieces of information - a useful representation of the relative performance of the product, and on more careful examination, the exact differences.

    Why bother putting a bar graph in it if you're not actually making a bar graph? That's the problem. You're using a tool designed to graphically convey useful information in an at best misleading and at worst negligent fashion.

    Perhaps it's the data visualization perfectionist in me, but I cringe every time I see a poor data representation. Either way, I can see that it's just plain wrong.
  • UltraTech79 - Wednesday, April 25, 2012 - link

    What kind of shitty attitude is that? Are you seriously defending misleading graphs based on "you should know better, and if not then you deserve to be screwed"?

    You should work for the credit card industry with crappy ethics like that.
  • JarredWalton - Thursday, April 26, 2012 - link

    As I have said twice in the threads, these are AMD's graphs, showing their numbers, and everyone reading this article should be absolutely aware of that. RTFA. Don't tell me how to make graphs when these aren't my graphs, because I certainly wouldn't do a graph like this. I'm likewise not putting the AnandTech graphing style on display, because then the casual reader might think we actually ran some tests. I'm not sure how I can be any more clear than that.

    With that said, the graphs are still clear about what they show and you all know exactly what they mean. The graphs come from a marketing department, and marketing loves to try and make things look better. AMD, Intel, and NVIDIA all put out charts like this, and it's allowed because the necessary information to correctly interpret the results is right there in the graphs. It is slightly misleading, but only to people that don't care enough to use their brain cells. I'm guessing when we show the same sort of charts for NVIDIA "launched but not benchmarked by AnandTech" we'll see the exact same comments, only it will probably be by different people.

    If you are gullible enough to go out and try to buy something based on a non-review press release type of article, then you deserve to be screwed, yes. And people do stupid stuff like that all the time, which is why we've ended up with lowest common denominator LCDs in laptops. But don't tell me I have bad ethics because I post an article with AMD's graphs and state, right in the text:

    "As always, take these graphs for what they're worth." Or, "Results are at 1920x1080/1920x1200 with a variety of quality settings, so take the following with a grain of salt."

    You want to talk about unethical practices? How about putting 2GB RAM on a GPU that's so slow that it doesn't matter how much RAM it has, and then all the OEMs selling said GPU as an $80 upgrade? Or what about building laptops that are basically designed to fail after a couple years of regular use, because the materials simply aren't designed to hold up? But you can't force a company to build and use higher quality parts, especially when consumers aren't willing to pay the cost. You can't force people to research hardware if they don't want to; so they'll go into some store and the sales people get to talk them into whatever they can, often selling them hardware that's fast in the wrong areas, more expensive than they need, and not a good fit for their particular needs.
  • Dracusis - Thursday, April 26, 2012 - link

    I know you didn't make the charts, but as a journalist you should care about information clarity and shouldn't defend them like you did in the comment above.

    Oh and implying your readers "deserves exactly what they get", also not the best attitude to exhibit as a journalist.

    Sure it may be a press release, but you're reporting on it and re-publishing that information.

    Having said all that, I thought your statements in the article were carefully measured against the poor quality materials without being insulting. Honestly I'm not really sure why anyone got upset to begin with - perhaps we need fresh bait in the troll traps.
  • JarredWalton - Thursday, April 26, 2012 - link

    I'm not saying our readers deserve it, I'm saying people who don't do the research and don't care to pay attention to all the information in a graph deserve what they get. What I specifically said is: "Anyone that glances at a graph and thinks, 'Wow, the red bar is four times as big as the green bar!' without actually looking at what the bars mean deserves exactly what they get."

    What's crazy is that everyone is harping on this like the data is somehow obscure. The chart starts at 0.8X and goes to 1.7X or 1.4X (depending on which graph we're looking at). To act like that is hard to understand, particularly on a tech savvy web site like ours, is ludicrous. I'm pretty sure that everyone who cares to read articles like this at AnandTech knows what the chart means. If the chart instead said, "Percent improvement" and started at 0% and went up to 70%, no one would have complained, and yet that would be just as "misleading" to the graph impaired that only stare at the bars and not the labels.

    Furthermore, right below the AMD vs. AMD graph is the data showing the numbers for AMD vs. NVIDIA. Wow, everything sure is hidden and misleading when you can see a relative performance chart followed by another table showing some actual numbers. Seriously, if people take things out of context and don't read the text or the table and *think* for a minute or two, how are you going to educate them on the Internet? Anyone that clueless wouldn't know why we're even talking about mobile GPUs in the first place.
  • raghu78 - Tuesday, April 24, 2012 - link

    Can't wait for the Alienware M17X review. Till Nvidia come out with their GTX 680M based on GK106 its one way traffic. I think even with GTX 680M it might be not so easy for Nvidia to reclaim the mobile performance crown because Pitcairn has 62.5% of the shader count of Tahiti with the exact same front end / tesselator / rasterizer setup whereas I think GK106 is going to be a halving of GK104 with lesser tesselation units. Pitcairn is AMD's best perf/watt GPU in HD 7000 series and HD 7970M will be a true next gen mobility card giving performance close to a GTX 570. Reply
  • JarredWalton - Tuesday, April 24, 2012 - link

    NVIDIA hasn't made any statements, but I'm guessing we'll see GK104 in a laptop at some point, albeit with lower clocks. If they could get tweaked GF110 into laptops, GK104 should be easy. Now they just need yields on GK104 to reach the point where it's practical.
  • raghu78 - Tuesday, April 24, 2012 - link

    Nvidia got GF100, aka the GTX 480M, into laptops, but that was an unmitigated disaster because clocks suffered severely. Only when they got GF104, aka the GTX 485M, with power usage suitable for laptops did things get better. The GTX 485M launched in early 2011, midway into the 40 nm cycle. I would expect the same timeframe for Nvidia (Q4 2012 or Q1 2013) to turn GK104 into a 100W design with decent clocks and suitable yields.
    Having said that, as a member of the tech industry press you might know better about Nvidia's roadmap plans.
