50 Comments

  • Meaker10 - Tuesday, April 24, 2012 - link

    "The GPU clock is 850MHz compared to 1100MHz (stock) on the desktop HD 7870"

    The 7870 is clocked at 1000MHz stock.

    I want the 7970M on a nice MXM-B module to slap in my notebook, cheers :)
    Reply
  • JarredWalton - Tuesday, April 24, 2012 - link

    Right you are; sorry for the error, though I actually had the correct math and just wrote 1100MHz (the overclocked value) instead of 1000. :-) Reply
  • alterecho_ - Tuesday, April 24, 2012 - link

    I appreciate AMD as a clean company, but they should really stop using non-zero baselines in their graphs. Reply
  • Goty - Tuesday, April 24, 2012 - link

    People have really got to get over this point. AMD does it, Intel does it, NVIDIA does it; it's nothing new. If you still have trouble reading charts, you might want to consider going back to elementary school. Reply
  • BSMonitor - Tuesday, April 24, 2012 - link

    Pot, this is kettle. Kettle, this is pot. Reply
  • nPawn - Wednesday, April 25, 2012 - link

    I don't understand what the problem is in the first place?? Reply
  • suprem1ty - Monday, April 30, 2012 - link

    I think it's because it's a bit misleading visually.

    Look at the first graph: in the Battlefield 3 results, at first glance the difference in height makes AMD's card look 3x faster, when in reality it's only (relatively speaking) 60% quicker.
    Reply
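The distortion being described here is easy to quantify. A rough sketch (the 0.8x axis start and the bar values are taken from the chart discussion in this thread; the function name is purely illustrative):

```python
def apparent_ratio(slow, fast, axis_start):
    """Ratio of drawn bar heights when the vertical axis starts above zero."""
    return (fast - axis_start) / (slow - axis_start)

# Bars at 1.0x and 1.6x relative performance, drawn on an axis starting at 0.8x:
print(round(apparent_ratio(1.0, 1.6, 0.8), 2))  # 4.0 -- looks ~4x faster
print(round(apparent_ratio(1.0, 1.6, 0.0), 2))  # 1.6 -- the actual ratio
```

With a zero baseline the drawn heights and the underlying numbers agree; starting the axis at 0.8x inflates a 60% lead into what reads visually as roughly a 4x gap.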
  • Gc - Tuesday, April 24, 2012 - link

    Ugh, I agree. This contributes to giving the AMD marketing department a bad name,
    and hurts AnandTech for using the bad chart in their article.
    I had hoped the AMD marketing purge might have gotten rid of some bad players.

    Would be better if the vertical axis showed a gap before it intersects the horizontal axis.
    We are used to seeing the intersection of the axes as the origin 0,0.
    So just don't show a continuous axis, show a break or even a zigzag.

    Bar charts are harder; better to show a break or cut-off in the bars as well.
    These 3-D bars are especially bad: they look like the bars are sitting on a floor, not cut off.
    Reply
  • JarredWalton - Tuesday, April 24, 2012 - link

    As I comment below: Our text dissects the information, and right now these charts are the only indication of performance that we have. I just assume everyone who comes here is smart enough to read the charts and understand what they mean. I even say, "take the charts with a grain of salt."

    This is a press release for the most part, and we're not performing the tests. I certainly don't want to make an AnandTech style graph that people might actually interpret as being our own independent test results! These are AMD's tests and AMD's information, and what they do here isn't all that different from what I've seen elsewhere (e.g. NVIDIA and Intel "what to expect" guides).
    Reply
  • doron1 - Tuesday, April 24, 2012 - link

    Actually this graph made me chuckle, I'm surprised no one saw why they did that - http://legitreviews.com/images/reviews/news/NVIDIA...

    Ouch.

    Btw never seen AMD use such graph before, but I could be wrong.
    Reply
  • shaw - Wednesday, April 25, 2012 - link

    These charts always crack me up. <AMD Chart>We are more better x2! They are less better x3! Do the math!</AMD Chart>

    It's like, with consoles the bit wars tag line has died out, but its PC equivalent has never stopped.
    Reply
  • JarredWalton - Wednesday, April 25, 2012 - link

    Except the charts clearly show the 0.8x to 1.6x faster labels, so the only people who have problems are those who don't know how to read a graph. Anyone who glances at a graph and thinks, "Wow, the red bar is four times as big as the green bar!" without actually looking at what the bars mean deserves exactly what they get. Reply
  • erple2 - Wednesday, April 25, 2012 - link

    Now Jarred.

    Graphing 101 tells us to make clear graphs. Bars marked the way these are serve strictly as marketing - it "cheapens" the graph completely by not having a common datum.

    The graph is supposed to convey 2 pieces of information - a useful representation of the relative performance of the product, and on more careful examination, the exact differences.

    Why bother putting a bar graph in if you're not actually making a bar graph? That's the problem. You're using a tool designed to graphically convey useful information in a fashion that is at best misleading and at worst negligent.

    Perhaps the data visualization perfectionist in me cringes every time I see a poor data representation. Either way, I can see that it's just plain wrong.
    Reply
  • UltraTech79 - Wednesday, April 25, 2012 - link

    What kind of shitty attitude is that? Are you seriously defending misleading graphs based on "you should know better, and if not then you deserve to be screwed" ?

    You should work for the credit card industry with crappy ethics like that.
    Reply
  • JarredWalton - Thursday, April 26, 2012 - link

    As I have said twice in the threads, these are AMD's graphs, showing their numbers, and everyone reading this article should be absolutely aware of that. RTFA. Don't tell me how to make graphs when these aren't my graphs, because I certainly wouldn't do a graph like this. I'm likewise not putting the AnandTech graphing style on display, because then the casual reader might think we actually ran some tests. I'm not sure how I can be any more clear than that.

    With that said, the graphs are still clear about what they show and you all know exactly what they mean. The graphs come from a marketing department, and marketing loves to try and make things look better. AMD, Intel, and NVIDIA all put out charts like this, and it's allowed because the necessary information to correctly interpret the results is right there in the graphs. It is slightly misleading, but only to people that don't care enough to use their brain cells. I'm guessing when we show the same sort of charts for NVIDIA "launched but not benchmarked by AnandTech" we'll see the exact same comments, only it will probably be by different people.

    If you are gullible enough to go out and try to buy something based on a non-review press release type of article, then you deserve to be screwed, yes. And people do stupid stuff like that all the time, which is why we've ended up with lowest common denominator LCDs in laptops. But don't tell me I have bad ethics because I post an article with AMD's graphs and state, right in the text:

    "As always, take these graphs for what they're worth." Or, "Results are at 1920x1080/1920x1200 with a variety of quality settings, so take the following with a grain of salt."

    You want to talk about unethical practices? How about putting 2GB RAM on a GPU that's so slow that it doesn't matter how much RAM it has, and then all the OEMs selling said GPU as a $80 upgrade? Or what about building laptops that basically are designed to fail after a couple years of regular use, because the materials simply aren't designed to hold up? But you can't force a company to build and use higher quality parts, especially when consumers aren't willing to pay the cost. You can't force people to research hardware if they don't want to; so they'll go into some store and the sales people get to talk them into whatever they can, often selling them hardware that's fast in the wrong areas, more expensive than they need, and not a good fit for their particular needs.
    Reply
  • Dracusis - Thursday, April 26, 2012 - link

    I know you didn't make the charts, but as a journalist you should care about information clarity and shouldn't defend them like you did in the comment above.

    Oh, and implying your readers "deserve exactly what they get" is also not the best attitude to exhibit as a journalist.

    Sure it may be a press release, but you're reporting on it and re-publishing that information.

    Having said all that, I thought your statements in the article were carefully measured against the poor quality materials without being insulting. Honestly I'm not really sure why anyone got upset to begin with - perhaps we need fresh bait in the troll traps.
    Reply
  • JarredWalton - Thursday, April 26, 2012 - link

    I'm not saying our readers deserve it, I'm saying people who don't do the research and don't care to pay attention to all the information in a graph deserve what they get. What I specifically said is: "Anyone that glances at a graph and thinks, 'Wow, the red bar is four times as big as the green bar!' without actually looking at what the bars mean deserves exactly what they get."

    What's crazy is that everyone is harping on this like the data is somehow obscure. The chart starts at 0.8X and goes to 1.7X or 1.4X (depending on which graph we're looking at). To act like that is hard to understand, particularly on a tech savvy web site like ours, is ludicrous. I'm pretty sure that everyone who cares to read articles like this at AnandTech knows what the chart means. If the chart instead said, "Percent improvement" and started at 0% and went up to 70%, no one would have complained, and yet that would be just as "misleading" to the graph impaired that only stare at the bars and not the labels.

    Furthermore, right below the AMD vs. AMD graph is the data showing the numbers for AMD vs. NVIDIA. Wow, everything sure is hidden and misleading when you can see a relative performance chart followed by another table showing some actual numbers. Seriously, if people take things out of context and don't read the text or the table and *think* for a minute or two, how are you going to educate them on the Internet? Anyone that clueless wouldn't know why we're even talking about mobile GPUs in the first place.
    Reply
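The equivalence Jarred describes between a 0.8x-1.7x relative-performance axis and a "percent improvement" axis is just a relabeling of the same numbers. A quick sketch (axis values taken from his comment; the function name is illustrative):

```python
def percent_improvement(ratio):
    """Relabel a relative-performance ratio (e.g. 1.7x) as percent improvement."""
    return (ratio - 1.0) * 100.0

# The same data under two labelings: an axis running 0.8x..1.7x is the
# axis running -20%..+70% -- neither labeling adds or hides information.
for r in (0.8, 1.0, 1.4, 1.7):
    print(f"{r:.1f}x -> {percent_improvement(r):+.0f}%")
```

Either way, the reader still has to look at the axis labels rather than the raw bar heights, which is the point being argued.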
  • raghu78 - Tuesday, April 24, 2012 - link

    Can't wait for the Alienware M17X review. Till Nvidia comes out with its GTX 680M based on GK106, it's one-way traffic. I think even with the GTX 680M it might not be so easy for Nvidia to reclaim the mobile performance crown, because Pitcairn has 62.5% of the shader count of Tahiti with the exact same front end / tessellator / rasterizer setup, whereas I think GK106 is going to be a halving of GK104 with fewer tessellation units. Pitcairn is AMD's best perf/watt GPU in the HD 7000 series, and the HD 7970M will be a true next-gen mobility card giving performance close to a GTX 570. Reply
  • JarredWalton - Tuesday, April 24, 2012 - link

    NVIDIA hasn't made any statements, but I'm guessing we'll see GK104 in a laptop at some point, albeit with lower clocks. If they could get tweaked GF110 into laptops, GK104 should be easy. Now they just need yields on GK104 to reach the point where it's practical. Reply
  • raghu78 - Tuesday, April 24, 2012 - link

    Nvidia got a GF100 aka GTX 480M into laptops, but that was an unmitigated disaster because clocks suffered severely. Only when they got a GF104 aka GTX 485M with power usage suitable for laptops did things get better. The GTX 485M was launched in early 2011, midway into the 40 nm cycle. I would expect the same timeframe for Nvidia (Q4 2012 or Q1 2013) to turn GK104 into a 100W design with decent clocks and suitable yields.
    Having said that as a member of the tech industry press you might know better about Nvidia's roadmap plans.
    Reply
  • JPForums - Tuesday, April 24, 2012 - link

    Nvidia managed to get GF100 into laptops, though I wouldn't call it a success. However, GF104/GF114 (GTX485M/GTX580M) did rather well. The GTX680 only has a 195W TDP (compared to 244W for the GTX580 and 170W for the GTX560Ti). I would expect GK104 to replace Nvidia's current top mobile performers.

    Also, given the changes Nvidia made to make the chip smaller, cooler, and more power efficient (at the expense of FP64 and compute functionality) and AMD's efforts at expanding compute capability (at the expense of size and probably power efficiency), this chip is even better positioned than their previous efforts to be top dog in the mobile gaming space.

    AMD technologies may give them an edge in battery life (or perhaps not), but I can't imagine Nvidia will lose in raw gaming performance this time around. That said, I get the feeling that, like the desktop variants, AMD will have a significant period of uncontested time in the market. In any case, I doubt we will see a GK104 mobile variant until the production issues with the desktop variants are straightened out.
    Reply
  • Meaker10 - Tuesday, April 24, 2012 - link

    Early tests at NBR give a stock score of 6100 in 3dmark 11 with a high end SB CPU. Reply
  • mySN.de - Tuesday, April 24, 2012 - link

    Do you have any source link for the benchmark at NBR? Reply
  • Meaker10 - Tuesday, April 24, 2012 - link

    http://i410.photobucket.com/albums/pp186/powwow71/...
    http://i410.photobucket.com/albums/pp186/powwow71/...

    Links to the pictures.
    Reply
  • Meaker10 - Tuesday, April 24, 2012 - link

    Original thread:

    http://forum.notebookreview.com/alienware-m17x/658...
    Reply
  • quiksilvr - Tuesday, April 24, 2012 - link

    You can pick the lower end M17x for $1500 and upgrade to the 7970 for $200. Reply
  • quiksilvr - Tuesday, April 24, 2012 - link

    Upon further investigation, it's amazing how much of a scam these multi-tier laptop choices are. You can configure the lower-end model with the exact same specifications as the middle model for $150 less. The only difference is the video card and the hard drive. Everything else, even the RAM and CPU, is exactly the same. Reply
  • A5 - Tuesday, April 24, 2012 - link

    Video card and hard drive upgrades make a lot of money for these laptop builders. Saying they are "exactly" the same except for two major components is a tad disingenuous. Reply
  • extide - Wednesday, April 25, 2012 - link

    Wrong. If you configure them exactly the same (1920x1080 screen, 750GB 7200RPM) they both come out to exactly $2049. Reply
  • extide - Wednesday, April 25, 2012 - link

    Actually that $2049 price is with both of them having the i7 2760. If you leave them both at the default processor (i7 2670) then they are still exactly the same, at $1899. Reply
  • Brandenburgh_Man - Tuesday, April 24, 2012 - link

    When you first look at AMD's Radeon HD 7000 performance graph, the vertical length of the red bars makes the HD 7970M look like it's, on average, 3 times faster than the yellow bars for the HD 6990M. Wow, what a BEAST!

    Then you look at the scale on the left and realize that, at best, it's only 60% faster and, on average, 40%. Such cheap tricks make me lose all respect for the company.
    Reply
  • A5 - Tuesday, April 24, 2012 - link

    Pretty much every company, ever, has done something like this. Intel, Nvidia, Apple, Qualcomm, ARM, etc. Reply
  • Gc - Tuesday, April 24, 2012 - link

    That doesn't mean AnandTech has to repeat their misleading graphs.
    AnandTech can strive to be a more accurate, less misleading source of information, not just a press release repeater.
    Reply
  • JarredWalton - Tuesday, April 24, 2012 - link

    Our text dissects the information, and right now this is the only indication of performance that we have. I just assume everyone who comes here is smart enough to read the charts and understand what they mean. It's a press release for the most part, and we're not performing the tests. I certainly don't want to make an AnandTech style graph that people might actually interpret as being our own independent test results! Reply
  • 6kle - Friday, April 27, 2012 - link

    "I just assume everyone who comes here is smart enough to read the charts and understand what they mean."

    I don't think this is a case of people being too dumb for your article. Why do you think there are so many complaints about the misleading graphs? It's because when there are tons of readers, there will always be some who forget to "check", even if they are not dumb. It's called being human. Why put such "traps" in an article for people to look out for? You are basically saying people can't skim your articles quickly and need to read all the disclaimers you have placed somewhere in the text? Is that good journalism for a site that strives to provide accurate information?

    The only reason to use visual graphs is to give a visual representation of the difference by the means of comparing their size. Not making them start from "zero" completely destroys the very purpose of using them in the first place.

    If you need to start reading the numbers at the side of the graphs you might as well just drop the bars entirely and stick to using numbers only.

    I find the criticism very valid in this case and would hope for you to learn from it. I am sad to see that you are instead calling some people (indirectly) stupid. I think it's more stupid in this case to use those bars when it can be avoided.

    I would suggest you either correct the bars to start from zero or use numbers only (you can put a big clear label above them saying something like "Source: AMD Marketing" so people won't think they are test results from this site).
    Reply
  • seapeople - Friday, April 27, 2012 - link

    Wow, so now in your mind anyone who skims an AnandTech article should be able to quickly glance at the relative height of bars in a graph and presume that AnandTech has conducted in-house tests and validated that card A is 4x better than card B, despite the article and graph both clearly being labeled as based on OEM-provided results only?

    I'm glad I don't work for a place like AnandTech, the posters would mentally destabilize me.
    Reply
  • 6kle - Saturday, April 28, 2012 - link

    I don't know what you are talking about. You seem to be quoting me but I never said anything like that. Reply
  • seapeople - Friday, April 27, 2012 - link

    I think Jarred should remake the graph with the minimum value at 0 and then have the maximum value be at 1 million. Would you be happy then? Reply
  • seapeople - Friday, April 27, 2012 - link

    You must have trouble buying gas, then, because I'm sure someone like you would lose respect for someone who charges $3.99 and 99/100 for a gallon of gas which makes people look and think "Wow, I'm only paying three dollars a gallon!" Reply
  • Tujan - Tuesday, April 24, 2012 - link

    I'm confused. You say 'GPU' for a notebook. There is a photo of a 'pin' type processor, and this then is not for a PCIe slot etc. Notebooks do not have dual sockets, and I thought that ATI was running APUs within most of their new product lines, even in notebooks.
    So what is the situation here? Exactly how does a 'GPU' run in a notebook that has an APU? Or is this something that runs in a notebook's PCIe slot etc.? Somebody tell me where this 'fits' in, and what notebook platform/series they could be used in/for.

    Note: I don't have any notebooks, and haven't taken a look at any circuit board layouts for any.
    Reply
  • Tujan - Tuesday, April 24, 2012 - link

    Notebook equipment built into a desktop, or used by the desktop. So this really tossed me here:

    "Enduro will also work with Intel CPUs like Sandy Bridge and Ivy Bridge."

    ...so they must be PCIe components, only marketed for portable notebooks. Right, or no?
    Reply
  • Meaker10 - Tuesday, April 24, 2012 - link

    The high-end cards will be MXM modules. Reply
  • André - Tuesday, April 24, 2012 - link

    I still think it is rather annoying how the mobile graphics line promotes the same GPUs to a higher naming tier.

    Why not just call a shovel what it is, a shovel. Make the differentiation about the letter M instead.

    Desktop Pitcairn Radeon HD 7800 series, mobile Pitcairn Radeon HD 7800M series.
    Desktop Cape Verde Radeon HD 7700 series, mobile Cape Verde Radeon HD 7700M series.
    And so on.

    Much easier on consumers and it feels a bit more honest.
    Reply
  • bennyg - Tuesday, April 24, 2012 - link

    Honesty died when invented numbers being used as names became acceptable.

    And sales increased.

    I don't know how the consumer has a clue what they're buying. Most reviewers (other than AT of course) don't seem to either.

    If I ran the world crap like this would fall under the scope of "Misleading and deceptive conduct" for the purposes of trade practices law.
    Reply
  • mczak - Tuesday, April 24, 2012 - link

    Hopefully ZeroCore will help AMD allow their drivers to be installed without the "help" of the notebook vendors. If the power used in this state is really low enough (it is allegedly sub-1W, but it should probably be closer to 0W than 1W if it's used this way), they might not need any of the ACPI power-switching methods to switch the discrete chip on and off, which means the driver should no longer have such platform dependencies. Reply
  • Aloonatic - Tuesday, April 24, 2012 - link

    What really makes laptop gamers feel like second class citizens is waiting for manufacturers to release drivers.

    I'll admit, it's put me off buying a laptop for any sort of game playing, as it's just a ridiculously irritating state of affairs. Have AMD pulled their fingers out on this issue, or do manufacturers still have their foot on our throats?
    Reply
  • JarredWalton - Tuesday, April 24, 2012 - link

    Right now, if you want regular driver updates on a notebook you have two viable options:

    1) Get a laptop with a discrete GPU and no switchable graphics. It doesn't matter if it's an AMD or an NVIDIA GPU; you should get driver updates. Note however that some OEMs aren't on board with AMD's mobile reference driver program (Sony for sure, maybe HP?), so shop accordingly.

    2) Get a laptop with NVIDIA's Optimus Technology. You'll get their regular driver updates and you can install Intel's latest drivers as well -- the two are independent of each other.
    Reply
  • zcat - Tuesday, April 24, 2012 - link

    Does NVidia have anything yet (for the desktop, not mobile) that can compete with AMD's 7000 series in terms of max performance vs. idle wattage under 10 watts, since that's what the system will be doing most of the time anyway?

    I'd love to go with NVidia in my next mini-ITX system to get VDPAU h/w acceleration that just works in Linux, but AMD's 77XX idling at only 5W is a huge draw.

    (And on a related note: Why do AMD's Radeon wikipedia articles all list idle TDP, but none of NVidia's?)
    Reply
  • JarredWalton - Wednesday, April 25, 2012 - link

    "Idle TDP" doesn't really mean anything, since TDP is "Thermal Design Power", or in other words, how much the cooling system has to be able to safely dissipate. "Idle Power Draw" would be the more appropriate terms, and it's simply a choice not to publish idle power figures. You'll note that AMD and Intel don't publish idle power draw for their CPUs/APUs either, so we generally have to infer how much power they use through measurements -- measurements that include the rest of the system hardware to varying degrees.

    As for NVIDIA competing with AMD for high performance/low power, Kepler GK104 (GTX 680) is clearly a major step forward compared to Fermi and GTX 580. We know GK107 is already out for laptops, and GK106 I believe is coming at some point (along with GK110 for the ultra high-end probably in six months or so). I'm guessing the GK106/GK107 parts will be quite competitive with AMD's similar parts.
    Reply
  • JarredWalton - Wednesday, April 25, 2012 - link

    There's a difference between "Jarred doesn't like" and "Jarred isn't kowtowing to". Just because I'm not in love with all things AMD doesn't mean I dislike them. I just wish their CPUs were more compelling, because right now, particularly on laptops, all they're doing is improving IGP performance while CPU performance is stagnating. I get the whole "CPU isn't everything" argument, but if someone really cares about graphics performance, they'll use a discrete GPU that's many times faster than the fastest IGP. Which leaves us with a CPU that's less than half what Intel offers. Reply
