
  • radium69 - Monday, January 09, 2012 - link

    Maybe it's me, but this looks VERY classy!
    Not gimmicky and plasticky but very tight and sexy!
    I'm going to keep an eye out for this card. I hope they stick with the aluminum design more.
  • RubyX - Monday, January 09, 2012 - link

    Couldn't you fix the idle noise issue by just changing the fan speed via software? Or is that not possible with this card for some reason?
  • Ryan Smith - Monday, January 09, 2012 - link

    The lowest fan speed with AMD's fan profile is 20%, which is where it already settles to at idle. It's not possible to go below 20% right now, hence 43dB really is as quiet as the DD cooler can get.
  • dj christian - Tuesday, January 10, 2012 - link

    Well, I can go to 0% in MSI Afterburner, but the fan never stops no matter how low you set it.
  • Ryan Smith - Tuesday, January 10, 2012 - link

    Afterburner is just a frontend to Overdrive in this case. It can't take the fan any lower than Overdrive will allow, and that's 20%.
  • james.jwb - Thursday, January 12, 2012 - link

    What about SpeedFan?
  • cactusdog - Monday, January 09, 2012 - link

    I'm going to wait for a better-quality non-reference cooler like the Asus DirectCU II, MSI Twin Frozr, Sapphire Vapor-X, or Gigabyte Windforce.

    If you can hear the card at idle, that's a disaster, especially when it costs $60 more.
  • LB-ID - Monday, January 09, 2012 - link

    I completely agree. Sapphire's Vapor-X cards have spoiled me; I won't settle for less as far as acoustic management is concerned.
  • Artifex28 - Tuesday, January 10, 2012 - link

    Got lucky a year or so back and got a Vapor-X 5870 instead of my original 5850. Can't complain. Excellent card!

    In my case it's an external HDD fan that makes the most noise, with this pulsating hum...
  • piroroadkill - Monday, January 09, 2012 - link

    I agree that it is pathetic you can hear it at idle, especially given ATI's massive gains in the areas of idle power. Massive gains.

    I use a custom fan curve through MSI Afterburner on my Radeon 6950 Twin Frozr III, and it is simply inaudible at idle.
  • piroroadkill - Monday, January 09, 2012 - link

    The overclocking scales amazingly, and without increasing voltage.

    Soon we'll see some great coolers, and some high clocks.

    NVIDIA has their work cut out...
  • Beenthere - Monday, January 09, 2012 - link

    The kiddies won't be able to get that $600 out of their pockets fast enough. It's like crack for a crackhead. :)
  • imaheadcase - Monday, January 09, 2012 - link

    It would take a fanboy to buy a card that is less than 6% faster than a GTX 580 for $200 more.
  • piroroadkill - Monday, January 09, 2012 - link

    Not really. It overclocks like crazy, uses less power while doing it, and has an updated feature set.

    Not to mention the increased VRAM. I've seen my 6950 use 1.6GB while playing Skyrim with custom texture packs, where the 580 would be hitting its limits.

    If you're running an assload of screens at once, you really, really could use the extra VRAM.
  • piroroadkill - Monday, January 09, 2012 - link

    Further to this, if you own a GTX 580, it means you wanted the fastest single-GPU card without caring about cost to start with. A GTX 580 looks like bad value compared to getting a 6950 2GB, unlocking it to a 6970, and then applying mild overclocks.

    Point is, some people want the best there is. This is without doubt the best single GPU card there is.
  • Revdarian - Monday, January 09, 2012 - link

    Less than 6% faster than a 580? $200 more?

    Which benchmarks are you looking at? Because it's obviously not the ones in this article.

    Crysis Warhead is 31% faster on average.
    Metro2033 is 36% faster.
    Dirt3 is 32% faster
    Shogun is 34% faster
    Batman is 22%
    Portal 2 is 17% faster with SSAA
    BF3 is 20% faster

    And all that is without the overclock, which adds roughly another 11% across the board. The scaling is multiplicative on the ratios, so a 31% lead becomes 1.31 × 1.11 ≈ 1.45, i.e. about 45% faster:

    45.4; 51.0; 46.5; 48.7; 35.4; 29.9; 33.2
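    (Compounding two speedups works on the ratios, not the raw percentage deltas; a quick sketch:)

    ```python
    def combined_gain(base_gain_pct, oc_gain_pct):
        """Compound two speedups: the ratios multiply, the percentage deltas don't."""
        return ((1 + base_gain_pct / 100) * (1 + oc_gain_pct / 100) - 1) * 100

    # A card 31% faster, overclocked a further 11%:
    print(round(combined_gain(31, 11), 1))  # 45.4
    ```
    
    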

    And as for $200 more: please link a retail 3GB 580 for $400, as plenty of people would be interested in it. And since this Black Edition isn't really worth the premium (the standard cards can hit the same clocks, if a bit louder), then try to link a $350 3GB 580.
  • imaheadcase - Monday, January 09, 2012 - link

    1.5GB GTX 580s are about $400-450 now with rebates.

    VRAM is a non-issue for 99% of games, so that point is moot.

    People don't care about power savings or draw in a gaming card; if you do, you can't afford one.

    Most people who get a high-end card don't overclock it. Maybe THIS card, but the majority of people buy reference cards.

    6% was being too generous... what I meant to say is 6 FPS on average.

    Point still stands correct. :P
  • imaheadcase - Monday, January 09, 2012 - link

    Point is, people don't buy gaming cards for minor increases; even if you can afford the best, it's still a small, UNNOTICEABLE gain.
  • Revdarian - Monday, January 09, 2012 - link

    Dude, the second you said that 6% is being too generous, you showed a total lack of understanding of the math. As I said, from the numbers in this review the lowest increase was around 17%, and it usually hovered at 30% and higher. Those are significant numbers, if you understood the math.

    BTW, this card isn't meant for low resolutions/low settings, so of course the VRAM won't matter there. But then you're bringing a cannon to a gun fight, total overkill. So no, your post does not stand correct at all.
  • piroroadkill - Monday, January 09, 2012 - link

    No, your point doesn't stand. VRAM does matter, especially on multi-monitor setups.

    Also, in some cases, the 7970 is quite a lot faster than the 580.

    I could just as easily call you a fanboy for asking "why would you need a faster card?"

    A 580 should be enough for anyone!

    It's called progress, and for some, it doesn't matter what the cost is.
  • piroroadkill - Monday, January 09, 2012 - link

    I'm not one of those people. I'm not buying a 7970.

    But just saying you don't think it offers a large enough increase is an absolutely meaningless statement.

    Fact is, people bought stuff like 8800 Ultra, 7800GTX SLI, Vapochill, and so on.

    Cost is not a factor.
  • Morg. - Monday, January 09, 2012 - link

    you're a head case.

    VRAM is a non-issue for 2011 games at low resolutions.
  • SlyNine - Monday, January 09, 2012 - link

    But people do care about heat. That's the biggest reason to keep power usage down when the card's not in use.

    If you're like me and your gaming PC is also your server, you don't want much more than 100 watts at idle.

    Which is why I love my 2600K @ 4.4GHz w/ 5870. At idle it's only pulling around 120 watts.
  • Sabresiberian - Monday, January 09, 2012 - link

    The cheapest GTX 580 on Newegg is $480 AFTER rebate ($479.99). Even TigerDirect doesn't quite match that price. I did a search and came up with one seller pricing the GTX 580 near $400, some company named Starworth Computers. Maybe it's legit, but I'm wondering why they have it listed $100 or more below everyone else.

    The only thing accurate about your post is your self-appointed name.
  • WileCoyote - Monday, January 09, 2012 - link

    I accept your challenge:

    3GB 580, picked one up last week. $429.99 - $30 MIR = $399.99


    If the 580 hadn't fallen in price I would have purchased the 7970.
  • Duraz0rz - Monday, January 09, 2012 - link

    The card you linked is a 1.5GB card.
  • iamezza - Tuesday, January 10, 2012 - link

    lol, fail
  • Morg. - Monday, January 09, 2012 - link

    It would take a fanboy to think that card is less than 6% faster than a GTX 580 ;)

    Just bring that 7970 down to the GTX 580's TDP... and you'll start understanding ;)

    The only advantage the GTX 580 ever had over any card was its bigger TDP. Nothing else.
  • ET - Monday, January 09, 2012 - link

    It would take a fanboy to buy a card that's less than 6% faster than a Radeon 6970 and costs $200 more. Namely, the GTX 580.
  • deaner - Monday, January 09, 2012 - link

    It would take a fanboy to quote such a %-to-$ value against a GTX 580. Maybe, just maybe, there are people who are AMD fans!? That is an interesting thought... Curious about your stats as well.
  • FaaR - Monday, January 09, 2012 - link

    It's a lot more than 6% faster, don't be such a ridiculous fanboy.

    It also draws a lot less power than a 580, saving back (some) of the price difference in the long run as a lower electricity bill.
  • wonderpookie - Monday, January 09, 2012 - link

    What's a "pre-binned" card?

    Thanks! :)
  • Rick83 - Monday, January 09, 2012 - link

    A card which has been tested to have tighter tolerances (in this case) than others and has been selected with this in mind.

    Technically "pre-binned" is not a very sensible coinage, as the "pre" refers to the fact that the binning is done by the manufacturer and not the end-user overclocker. But these days more people buy manufacturer-overclocked cards than there are people who actually buy a dozen cards, test them all for the highest achievable OC, and then resell/send back the less stable ones.
  • Morg. - Monday, January 09, 2012 - link

    Binning: the process of separating *PUs according to their maximum operating frequency.

    It's called that because you would test all your GPUs and put all those stable between 1200-1300MHz @ xyz volts in bin 1, all those between 1100-1200MHz in bin 2, etc., in order to present differentiated offerings.

    In the past, many CPU part numbers were just different binnings of the same part (like the C2D at 2.33 or 2.4GHz), and the same goes for GPUs. AMD even introduced a different kind of binning with their X2, X3, X4 variants, based on how many cores were stable at the target speed, locking those that weren't (X3s were quickly made from failed X4 parts, and X1 Semprons were selected from failed X2s).

    So in this case, it would mean XFX tested the cards and selected the better performers to sell as the DDBE, leaving the others for non-overclocked cards.

    It works this way because no CPU or GPU is perfect; all dies are more or less failed prints of the actual design.

    The most flawed cannot be used at all and go in the trash.
    The slightly flawed get some parts of the die disconnected (faulty cache, broken core, ...).
    The even less flawed just get overvolted / underclocked in order to run.

    The best parts come closest to the intended result and will thus operate at lower voltages or higher frequencies in comparison.

    Usually, binning isn't perfect and some better dies can fall into lower bins, hence the unlockable cores on older Phenom/Sempron X2s, X3s, etc.

    It is also not as fine-grained as some pro OCers would like, and because of that, they tend to buy a bunch of CPUs and bin them themselves in order to break WRs (look at
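    (As a rough sketch of the sorting step described above — the bin edges here are invented for illustration, not XFX's actual cutoffs:)

    ```python
    # Hypothetical frequency binning: each die is placed in the highest bin
    # whose floor its maximum stable clock reaches; anything below the lowest
    # floor is rejected.
    def bin_die(max_stable_mhz, bins=((1200, "bin 1"), (1100, "bin 2"), (1000, "bin 3"))):
        for floor_mhz, label in bins:
            if max_stable_mhz >= floor_mhz:
                return label
        return "reject"

    print(bin_die(1250))  # bin 1
    print(bin_die(1150))  # bin 2
    print(bin_die(900))   # reject
    ```
    
    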
  • Iketh - Monday, January 09, 2012 - link

    "However it’s interesting to note that temperatures under load end up being identical to the reference 7970. The BEDD is no cooler than the reference 7970 even with its radically different cooling apparatus. This is ultimately a result of the fact that the BEDD is a semi-custom card; not only is XFX using AMD’s PCB, but they’re using AMD’s aggressive fan profile. At any given temperature the BEDD’s fans ramp up to the same speed (as a percentage) as AMD’s fans, meaning that the BEDD’s fans won’t ramp up until the card hits the same temperatures that trigger a ramp-up on the reference design. As a result the BEDD is no cooler than the reference 7970, though with AMD’s aggressive cooling policy the reference 7970 would be tough to beat."

    You do realize overclocking increases temps, right? Even if voltages aren't touched.
  • NJoy - Monday, January 09, 2012 - link

    Have you tried to understand what is written in the bit you copy-pasted?
  • Iketh - Tuesday, January 10, 2012 - link

    Sorry, but it's you who isn't understanding. My point is that I believe this cooler to be 2-4C cooler than the reference design, considering it holds the same temps with an overclock. That's all.
  • Morg. - Monday, January 09, 2012 - link

    Very true...

    But this is AnandTech ;)

    -- mainstream tech news for the masses

    It's like their new config advice... every config they advise is at least 10% pure waste of cash, but it's still better than no advice for those who don't know better.

    And furthermore, the temperature points are simply a result of what XFX wanted for this card. Nothing else.

    AMD's fan profile is part of that, but you can rewrite it in the BIOS IIRC, so they just didn't bother / their fans don't run reliably below some voltage.

    I could mod that card to make it silent, and so could XFX; they just didn't want to go into such detail when they could simply slap their cooler on it, change the clocks, and ship it. That's all there is to it.
  • B3an - Monday, January 09, 2012 - link

    WTF are you on about, you stupid little kid? Many of Anand's articles on here are unrivalled for technical detail and insight.
  • Morg. - Tuesday, January 10, 2012 - link

    lol.

    And you would know... with a fake 1337speak nick. If you think they're so good, it only goes to show that your ignorance makes AnandTech a perfect fit for you, a fact I was pointing to in the post you replied to.

    At your level of understanding, AnandTech is a perfect fit.

    Good for Anand and for you, tbh. Enjoy it.
  • wifiwolf - Tuesday, January 10, 2012 - link

    I would assume it's a nice fit for you too, as you tend to persist.
  • Morg. - Tuesday, January 10, 2012 - link


    Information: good.

    Information + information about the information: better.

    The content presented here is not worthless; one just has to know what it is and how it is limited (i.e. AnandTech needs funding, they can't do all benchmarks themselves, etc.)
  • AssBall - Wednesday, January 11, 2012 - link

    Trolls like you?

    Not informative.

    Not factual.

    Not worth reading.
  • MrBunny - Monday, January 09, 2012 - link

    Point made. The formulation could be more along the lines that this cooling solution (though louder at idle) is nicely executed, given the card is overclocked and still easily beats the reference cooler in temps and noise.

    The only thing they need to fix is the idle fan PWM so it can be silent at idle as well.

    @NJoy: I think he read it just right.
  • Morg. - Tuesday, January 10, 2012 - link

    Just edit the BIOS manually when tools are available, and you can change the curve from the original (which XFX didn't bother to modify for some reason... they simply had to lower the first point in the curve to 15% or so, unless, as I said, there is a minimum voltage for the fans to start).
  • R3MF - Monday, January 09, 2012 - link

    How is GCN an architecture targeted at compute tasks when it is no more capable of DP FP than VLIW4, in that it is still only capable of doing DP tasks at 1/4 the SP rate?

    Or is the 1/4 rate only a function of crippled consumer drivers, with professional products perhaps seeing 1/2-rate DP FP?
  • Morg. - Monday, January 09, 2012 - link

    Probably the latter.

    All in all, GCN is exactly like Fermi (which is also like an older design) and the performance characteristics should be very close in the end - where it matters (i.e. not gamer products).
  • R3MF - Monday, January 09, 2012 - link

    Would be a shame if true, especially when paying $549 for the hardware!
  • Morg. - Tuesday, January 10, 2012 - link

    Are you really doing GPU-accelerated computing?
  • R3MF - Wednesday, January 11, 2012 - link

    Me? No.

    But it is going to become a very mainstream thing for performance-hungry applications, and I always dislike buying artificially disabled products.
  • Death666Angel - Monday, January 09, 2012 - link

    You should read the launch article. But in case you won't:
    "At the 7970’s core clock of 925MHz this puts Tahiti’s theoretical FP32 compute performance at 3.79TFLOPs, while its FP64 performance is ¼ that at 947GFLOPs. As GCN’s FP64 performance can be configured for 1/16, ¼, or ½ its FP32 performance it’s not clear at this time whether the 7970’s ¼ rate was a hardware design decision for Tahiti or a software cap that’s specific to the 7970. However as it’s obvious that Tahiti is destined to end up in a FireStream card we will no doubt find out soon enough."
  • R3MF - Wednesday, January 11, 2012 - link

    Many thanks, must have missed that the first time around.
  • cyrusfox - Monday, January 09, 2012 - link

    It's great that you still include StarCraft II results; you're about the only site that consistently does. And since that game has odd issues on AMD CPUs and GPUs, it's good to know this card still scales well in it. Appreciate that you still bench it, Ryan. Thanks
  • geniekid - Monday, January 09, 2012 - link


    I understand that FPSes are usually the most graphically taxing games, but SC2 and Civ 5 show that there are other genres that take advantage of graphical processing power. Plus, it's always nice to see benchmarks for games I actually play :)
  • vol7ron - Monday, January 09, 2012 - link

    Agreed with all of the above.

    I used to love that Counter-Strike was included in all the benchmarks. That game stopped seeing any true benefit from a new GPU some years ago, but it was nice to see it in the charts here on AT for posterity. BTW, I think Valve is developing a new engine for CS; maybe AT would like to do some reviews of software-version differences.
  • chizow - Monday, January 09, 2012 - link

    AMD and its fans can't really claim they're the champions of the poor and downtrodden budget enthusiast anymore with the 7970's pricing. I mean the pricing looks OK compared to last-gen parts as of today, but I don't think that's going to be the case when Nvidia releases their Kepler parts in the next few months.

    Nvidia has a great opportunity with Kepler to do what AMD did to them a few years ago with Cypress....which is make the opposition look really bad with regard to pricing and win back some of that mindshare and goodwill AMD has built up over the years. If the high-end Kepler part ends up 15-20% faster than the 7970 as many expect and is priced at $500 like the last 2 Nvidia flagship single-GPU parts, I wonder if AMD will be the one issuing rebate checks?

    I always considered the 4870 a pricing mistake on AMD's part where they failed to capitalize on a successful part. What's clear is that AMD also realized their mistake and have made steps to correct their pricing over the years:

    4870 $299
    5870 $379 (raised to ~$430)
    6970 $369
    7970 $549!!!

    In the past, even when ATI was running 2nd for the generation behind Nvidia, they provided users value at a price point that made sense against that generation's competing parts. I don't think that will hold true once Kepler is finally released, and AMD will have to suffer the consequences (similar to Nvidia and GT200).

    Will be fun to see how it shakes out either way, but it's good to see AMD trying to make a buck or two and putting the charitable spin to rest for good.
  • SlyNine - Monday, January 09, 2012 - link

    I'm really hoping they lower the price.

    I bought two 8800GT 512s (when they first came out) and the 5870 when it first came out. I knew both of those cards were exceptional value, proven by the fact that their prices went up soon after I bought them.

    My point is there can be value at the very high end; the 5870 is proof of that. This card cannot touch what the 5870, 4870, 9700 Pro, or 8800GT were in value at the time of release. If this is the card that made them switch to AMD, then they were not paying attention.
  • Morg. - Tuesday, January 10, 2012 - link

    They will lower the price, because nVidia will try to compete.

    They're just taking advantage of the 7970's current position: first 28nm GPU.
  • chizow - Tuesday, January 10, 2012 - link

    Well I think those are all valid case points and extremely impressive parts, but the reality of it is, the 5870's pricing was just a result of the fallout from the 4870.

    If you look back, as fast as the 5870 was, it was still in a similar position as the 7970 is today, only 15-25% faster than the GTX 285. The GTX 285 launched at only $380 as a die-shrink refresh of the 280. All prices around that time were badly deflated due to price wars, the economy, but most importantly, the 4870's pricing. So when the 5870 launched at the end of 2009, they couldn't price it any higher at first, but once it became clear Nvidia didn't have a 40nm response in 2009, they quickly jacked up the price.

    Overall though I think value just depends on where you are in the upgrade cycle with either Nvidia or AMD and how much of an improvement you need to see before you upgrade. If you're with Nvidia right now with a 480/580, the 7970 doesn't really look all that great for 15-25% more performance at $550. It makes more sense to wait for Kepler for that expected 50% increase at roughly the same price point.

    But it might be worth it for an AMD user who's going to see 50%+ gains from a 6970/5870. Still, one has to wonder if that performance is worth it for such a huge increase in price, which again, is the position AMD has put itself in based on their historical pricing.
  • Morg. - Tuesday, January 10, 2012 - link

    The 15-25% performance figure is wrong.

    The drivers are beta at best.

    The resolutions reviewed are typically where the 580 shines.

    The 6970 was about 5% worse than a 580 above full HD. I don't know where you get 50%, but that's great for you.

    No top card ever looked great at its top-card price; that's not the point of the top card.
  • chizow - Tuesday, January 10, 2012 - link

    15-25% is right according to this review and every other one on the internet, beta drivers or not.

    If you want to nitpick about results, you'll see the differences are actually even lower than 15-25% in the one bench that matters the most, minimum FPS.

    As for not being able to interpret or follow an argument, the 50% was in reference to the 7970 compared to last-gen AMD parts like the 6970/5870, not the 580.
  • Kjella - Monday, January 09, 2012 - link

    The 4870 was the killer? The 5000 series kicked major ass, with the 5870 flying off the charts, and the 5850 (the one I got) was a total killer at $259 MSRP, which I managed to get before the price hike. You'll still be paying $200+ for a card that beats it, and the 7970 did nothing to shake that up. I'm waiting for Ivy Bridge anyway; hopefully we'll have Kepler by then, but if they don't improve performance/$ more than AMD did, I might just sit out another generation.
  • SlyNine - Tuesday, January 10, 2012 - link

    Agreed, but I'm not really willing to spend more than I did for this 5870.

    However, the more I look at the benchmarks, the more I wonder what they will look like when you really start to push the 7970.

    Thinking back to the 9700 Pro launch, a lot of people didn't consider it to be much faster than the 4600 because they were benchmarking at settings irrelevant for the 9700 Pro. But when you really pushed each card, the 9700 Pro was more than 4x as fast.
  • Galid - Monday, January 09, 2012 - link

    Nvidia fanboy... saying something like "I don't THINK that will hold true in this case when (a video card that doesn't exist yet, made by Nvidia) is finally released" is totally useless.

    The only reason the 4870 was so cheap is that the die was SO small compared to GT200 parts... and Nvidia's policy is to make the fastest video card at the expense of a big die (lower yields) and high costs...
  • chizow - Tuesday, January 10, 2012 - link

    " Nvidia's politic is to make the fastest video card at the expense of big die(lower yields) and high costs.... "

    And let your own thoughts govern your conclusions....see how they match mine...

    It's funny that you immediately jump to the fanboy conclusion when even the AMD fanboys are coming to the same one. High-end Kepler, once released, will be faster than Tahiti; it's not a matter of if, it's a matter of when.

    AMD made a ~50% jump in performance going to 28nm; to expect anything less from Nvidia with their 28nm part would be folly. A 50% increase from the GTX 580 puts Kepler comfortably ahead of Tahiti, but given the 7970 is only 15-25% faster, it doesn't even need to increase that much.

    Also, the 4870 was so cheap because ATI badly needed to regain market share and mindshare. They stumbled horribly with the R600/2900XT debacle, and while the RV670/3870 was a massive improvement in thermals, its performance was still behind Nvidia's 3rd- or 4th-fastest part (8800GTS) and still significantly slower than Nvidia's amazing mid-range 8800GT.

    Still, I think they underpriced it by a large amount given it was half the price of a GTX 280 and only ~15% slower. Just a lost opportunity for ATI, but they felt it was more important to get mindshare and market share back at that point, and now they get to reap the windfall. The trickle-down effect becomes most obvious once you start projecting performance of the mid-range parts against last-gen parts.
  • Morg. - Tuesday, January 10, 2012 - link

    50%? Not really.

    As I said before, this is mostly a matter of TDP.

    The only reason the GTX 580 was ahead (and only at full HD and lower resolutions) was its higher TDP / bigger die size.

    nVidia may choose to release yet another high-TDP part for the 680, just like the 580, and it may give them the same edge, but they will NOT win this round, just like they did NOT win the previous one.

    The main problem for nVidia this round is being late to the 28nm party; other than that it's business as usual.
  • Morg. - Tuesday, January 10, 2012 - link

    What I mean by that is simply that perf/watt/dollar is the ONLY measure of a good GPU or CPU; the actual market position of the part does not make it "good" or "bad", just "fitting".

    The GTX 580 was the perfect fit for "biggest and baddest single-GPU card"; it however had worse perf/watt than the 6-series, and much worse perf/dollar.

    AMD could easily have decided to double the die size on the 6-series and beaten the crap out of the GTX 580, but they didn't, because they targeted a completely different market position for their products.
  • chizow - Tuesday, January 10, 2012 - link

    Huh? No.

    If you target the high-end performance segment, the only thing that matters is performance. Performance per watt isn't going to net you any more FPS in the games you're buying or upgrading a card for, and it's certainly not going to close the gap in frames per second when you're already trailing the competition.

    You're quite possibly the ONLY person who I've ever seen claim perf/watt is the leading factor when it comes to high-end GPUs. Maybe if you were referring to the Server CPU space, but even there raw performance with form factor is a major consideration over perf/watt. No one's running mission critical systems on an Atom farm because of power considerations, that's for sure.

    And yes, of course Nvidia is going to release another high-end massive GPU; that's always been their strategy. If you haven't noticed, AMD has quietly gone down this path as well, losing their small-die strategy along the way, making it harder and harder for them to maintain their TDP advantages or produce their 2x-GPU parts each generation. AMD used to crank out an X2 with no sacrifices, but lately they've had to employ the same castration/downclocking methods Nvidia has used to stay within PCI specs.

    And to set the record straight, Nvidia has won the last two generations. AMD certainly had their wins at various price points, but ultimately the GTX 280/285 were better than the 4870/4890 and the 480/580 were better than the 5870/6970. Going down from there, Nvidia was competitive in both price and performance with all of AMD's parts, and in many cases, provided amazing value at price points AMD struggled to compete with (See GTX 460, GTX 560Ti).
  • Morg. - Tuesday, January 10, 2012 - link

    Right.

    Perf/watt is everything.

    That is why the 6990 completely crushed the 590: perf/watt.

    The 6990 with minor tweaks actually matched a 6970 CF.

    The 590 with minor tweaks actually exploded, and it was more expensive...

    Your fanboyism clouds your mind, young padawan... nVidia may have taken the single-GPU crown the last two rounds, but they never had price/performance anywhere.

    Need I remind you that you could get a CF of 6950s for the price of a GTX 580? And that said CF would beat a 680 even if it were 50% faster than the 580?

    The GTX 460 and 560 Ti were failures compared to AMD's offerings, in terms of performance.

    They had the nVidia logo, the nVidia drivers (a good thing, actually), and some extras.

    But they did NOT have better performance for the same price. The 560 Ti was almost in the same price bracket as the 6950 and much slower, non-unlockable, etc.

    So yes, claim the single-GPU crown all you like...
    Best GPU / architecture?
    Well, I'd say that's best efficiency with comparable performance = AMD.

    (Again, if you're one of those who think a GTX 580 is a good card for one full HD screen, go ahead and spend $550 on a GPU and $100 on the screen. Otherwise, the reality is AMD was within 5% with the 6970.)
  • chizow - Tuesday, January 10, 2012 - link

    Perf/watt and perf/price means nothing when the concern in this segment is absolute performance. Once you start compromising and qualifying your criteria, you start down a slippery slope that you simply can't recover from. If anything, lower performance means you necessarily win the perf/watt and perf/price categories but by doing so, you lose the premium value of compromising nothing for performance.

    By your metric, an IGP or integrated GPU would be winning the GPU market because it costs nothing and uses virtually no power, but of course, it would be completely asinine to make that assumption when referencing the high-end discrete GPU market where raw performance relative to the market is the only determinant of price.

    You can go down the line all you like in price/performance segments with CF/SLI, for every example you give there's an equally if not more compelling offering from Nvidia with the GTX 460, 560, 560Ti, 570 etc. that offers price and performance points that have AMD matched or beaten. Because the deck is stacked starting at the top, and when you have the highest performing part in the segment, that sets the tone for everything else in the market.

    AMD is finally coming to grips with this which is why they are pricing this card as a halo product and not as a mid-range product.
  • chizow - Tuesday, January 10, 2012 - link

    Also to this:

    "Gtx 460 and 560 ti were failure compared to AMD's offerings - in terms of performance."

    Contradict yourself much? If this is true, then you're admitting every single GPU AMD has created since G80 is a failure compared to Nvidia's offerings, which directly contradicts the points you're trying to make with regard to price and performance.

    Sorry, you can't argue out of both sides of your mouth, the message just comes out a big jumbled mess.
  • wifiwolf - Tuesday, January 10, 2012 - link

    I think with this post you're just saying he was right all along.
    And I agree. It's just business; you just choose where you want to position your product: on profits or on brand.
  • chizow - Tuesday, January 10, 2012 - link

    No, I don't agree with any of that, because it's clearly off-base and out of place in a discussion about high-end performance parts.

    What drives pricing in this segment? This is a simple answer.

    Performance. That's all that matters.

    Performance/watt and performance/dollar are just tertiary considerations that take a back seat to secondary considerations like feature sets and application support. You win these "value" market segments not because you want to, but because you have to when you can't win without compromises.
  • wumpus - Tuesday, January 10, 2012 - link

    DPFlops/Watt matters if Fermi and Kepler were designed for GPU computing. Nvidia makes a ton of money there, and doesn't have to compete with AMD nearly as much as in graphics.

    DPFlops/Card seems to matter more. I suspect DPFlops/Card matters more (due to I/O issues) than DPFlops/$ (once known as machoflops, mostly for govt/academic epeen-waving).

    Now that both companies appear to be designing for GPU computing, it will be interesting to see how they compare (even if the 7970 seems to be missing half of its DPFlops; I wonder if they managed to de-power those transistors, assuming they're there).
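    For a sense of scale, the theoretical peaks work out roughly like this (a back-of-envelope sketch; the shader counts, clocks, and DP-rate ratios are the commonly reported launch figures, so treat them as assumptions rather than measured numbers):

    ```python
    # Rough theoretical peak GFLOPS: ALUs * clock (GHz) * 2 ops/cycle (FMA) * rate ratio
    def peak_gflops(alus, clock_ghz, ratio=1.0):
        return alus * clock_ghz * 2 * ratio

    sp_7970 = peak_gflops(2048, 0.925)        # HD 7970 (Tahiti), single precision
    dp_7970 = peak_gflops(2048, 0.925, 1/4)   # Tahiti runs DP at 1/4 the SP rate
    dp_580 = peak_gflops(512, 1.544, 1/8)     # GTX 580: GeForce DP capped at 1/8 SP

    print(round(sp_7970), round(dp_7970), round(dp_580))  # 3789 947 198
    ```

    Even at a 1/4 rate, Tahiti's theoretical DP peak is well above a DP-capped GeForce, which is why the "missing half" still leaves it competitive on paper.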
  • chizow - Tuesday, January 10, 2012 - link

    DPFLops/Watt discussions have a place, but not for desktop GPU parts branded GeForce or Radeon. If this were a Tesla part it'd be more meaningful.

    Nvidia cripples DP performance pretty badly on their GeForce parts starting with Fermi and I imagine they will do the same for Kepler. It also sounds like AMD is doing the same for Tahiti, but it will be some time before we have any idea if GCN is even directly competitive in real world GPU compute applications.

    Benchmarks and synthetic tests look great, but it's going to take quite a bit of effort for AMD to get any penetration in an HPC market Nvidia has clearly dominated. Nvidia basically had to create their own API and GPU compute market from scratch, so AMD has their work cut out for them to catch up.
  • Jediron - Wednesday, January 11, 2012 - link

    No, not the champions of the poor. But yes, champions of the single-GPU video card!
    GTX 580, you lose.

    Sure, the ball lies in Nvidia's camp now, no surprise there! The ball was in AMD's camp, and they scored; that is obvious.

    Long live the red camp :-)
  • mhampton - Monday, January 09, 2012 - link

    Page 2 of the article starts by listing the setup tested, and the CPU must be a misprint - as far as I know there is no "i7-3936". Presumably this should be 3960 or 3930 instead.
  • know of fence - Monday, January 09, 2012 - link

    The same 3936 misprint can be found here:
  • Ryan Smith - Monday, January 09, 2012 - link

    Noted and fixed. Thank you.
  • BenSides - Monday, January 09, 2012 - link

    I agree with a poster above, who is apparently content running a single 5850 paired with the latest game releases. I have two 5870s in CrossFireX, in conjunction with an old i7 950 CPU. My benchmark results (Crysis) blow this new thing (7970) to hell.

    While not a gamer, benchmarking is a hobby of mine. Looking at the results here at Anandtech, it is reassuring to know that with the elderly cards I have, it seems there is nothing, at the present time, or looming on the horizon, which would make my graphics cards obsolete.

    In short, until AMD or NVIDIA introduce some revolutionary new technology, you're fine with your current card(s), if the presented benchmarks are any indication. Seems to me that at the present time, NVIDIA and AMD are producing new graphics solutions which are just overclocked versions of *old* graphics solutions.
  • Duraz0rz - Monday, January 09, 2012 - link

    I don't think GCN, a completely new architecture, would qualify as an "old graphics solution".
  • Morg. - Tuesday, January 10, 2012 - link

    1. Yes, no reason to go further than a good 5850 right now. (I still have a 4850 and it's just gone into the 'bit too shitty' zone for me.)

    2. 6950 murders a 5850, and the new 7-series does the same to the 6-series.

    3. The architecture just changed on AMD's side, towards more of a vector processor, just like nVidia's Fermi. This is a good thing for compute and the future of IT in general (heterogeneous cores and whatnot). (By the way, GCN is not all that new; it's just an adaptation of an older design.)

    4. Obviously, all that power is becoming a bit useless to gamers, since game devs have shifted most of their focus to consoles, and the last big PC game dev (Blizzard) focuses on delivering "playable" games for everyone, thus limiting the computing requirements.

    5. This is 2012: you can expect that shading and tessellation will eventually enable games to look as good as your hardware can make them, with graphics settings limited to the minimum and average FPS you want. This is especially true since future consoles will have some of that, and the next round even more.

    I'd expect that kind of stuff to be mainstream around 2013-2014, so if you're going to keep your new box for 3 years, why not... (On the other hand, the best graphics board for the money currently is still the 6950, of course.)

    And the tiny bit of truth: yes, the GTX 580 is what the 480 was meant to be, with quite a few glitches fixed. It's obviously not meant for a 40 nm process (500+ watts with basic overclocking is a bit much), and that was part of nVidia's strategy: develop Fermi on 40 nm, then port it to its intended node, 28 nm.

    AMD waited for 28 nm, and I believe that was the best choice, seeing how they managed to deliver (much more easily than Fermi, by the way, which had major issues at its first release). Just looking at the power requirements for the 7970 tells the whole story: that thing is downclocked to hell just to remain under 300 watts.
  • theprodigalrebel - Monday, January 09, 2012 - link

    Should come with a warning: No motorboating the card
  • IceDread - Tuesday, January 10, 2012 - link

    I purchased an XFX Radeon HD 5970 a couple of years ago. XFX cards are inferior when it comes to overclocking: they were more restrictive, and values had to be re-applied on each restart, if I remember correctly.

    The result was that everyone who wanted to overclock their XFX cards flashed firmware from other vendors to be able to overclock successfully.
  • piroroadkill - Tuesday, January 10, 2012 - link

    This was true for me, too. I had an XFX 4890 (which is dying now, despite running at stock for a long time).

    It had a custom (and worse) VRM section on the board, and didn't overclock worth a single damn. Those promises of 1 GHz 4890s? Haha, not if you owned one of these.
  • Mjello - Tuesday, January 10, 2012 - link

    Is it possible to control the fan manually? That would fix the idle noise easily.
  • wifiwolf - Tuesday, January 10, 2012 - link

    The article stated it's already at its minimum rate since it's using the same controller as the reference card.
  • fausto412 - Tuesday, January 10, 2012 - link

    Can we get more detailed info on this newly added tech?

    I read the AMD whitepaper, but I am curious about its real-world impact.
  • Ryan Smith - Tuesday, January 10, 2012 - link

    Actually PowerTune is not new. It was first introduced on Cayman (6900 series); the Tahiti (7900 series) implementation is no different.
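    For what it's worth, the basic idea is simple: the chip estimates board power from activity counters and pulls the core clock down whenever the estimate exceeds the configured cap. A toy sketch of such a governor (the numbers and the proportional-scaling rule are my own illustration, not AMD's actual algorithm):

    ```python
    def capped_clock(base_clock_mhz, estimated_power_w, power_cap_w):
        """Toy PowerTune-style governor: run at full clock under the cap,
        scale the clock down proportionally once the estimate exceeds it."""
        if estimated_power_w <= power_cap_w:
            return base_clock_mhz
        return base_clock_mhz * power_cap_w / estimated_power_w

    print(capped_clock(925, 180, 250))  # light load: stays at 925 MHz
    print(capped_clock(925, 300, 250))  # power virus: throttled to ~771 MHz
    ```

    In practice this is why typical games run at full clocks while worst-case workloads like FurMark get throttled, and why raising the PowerTune slider in Overdrive can recover clocks in power-limited cases.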
  • dcompart - Tuesday, January 10, 2012 - link

    It'd be nice to see the overclocking performance of a standard-cooled card for comparison; I'd like to be able to base the price justification not only on noise but also on performance. The comparison of a regular 7970 against a factory-overclocked card doesn't persuade me, and it irritates me further to see the XFX overclocked even more, without even playing devil's advocate and showing the overclocking potential of a regular 7970 with standard cooling. It doesn't make me want to buy the overclocked card; it makes me feel deceived!
  • Cepak - Wednesday, January 11, 2012 - link

    It would be nice to see "Higher is Better" or "Lower is Better" on every chart. I've been using onboard video, but in order to play BF3 at its full potential, I'm in the market for a new video card, and seeing "Higher or Lower is Better" on every chart would help me understand the benchmarks better.

  • spambonk - Saturday, February 11, 2012 - link

    But what is the max temperature limit?
    (Or won't AMD tell you either?)
  • Zaris24 - Tuesday, April 03, 2012 - link

    And when I just played Dead Island, on either Low, Medium, or High, it went past 78°C with the fan at 37% and shut down.

    It can't handle the Dead Island game.

    I used to own a Radeon HD 5870 with 1 GB; it ran the game on High with no problem at all.

    How come this graphics card can't handle it? Nothing else in my computer gets that hot!
