
291 Comments


  • Wreckage - Thursday, December 22, 2011 - link

    That's kind of disappointing. Reply
  • atticus14 - Thursday, December 22, 2011 - link

    oh look its that guy that was banned from the forums for being an overboard nvidia zealot. Reply
  • medi01 - Tuesday, January 03, 2012 - link

    Maybe he meant "somebody @ anandtech is again pissing on AMD's cookies"?

    I mean, "oh, it's the fastest and coolest single-GPU card on the market, it's only slightly more expensive than the competitor's, but it kinda sucks since AMD didn't go the 'significantly cheaper than nVidia' route" is hard to call unbiased, eh?

    Kind of disappointing conclusion, indeed.
    Reply
  • ddarko - Thursday, December 22, 2011 - link

    To each their own but I think this is undeniable impressive:

    "Even with the same number of ROPs and a similar theoretical performance limit (29.6 vs 28.16), 7970 is pushing 51% more pixels than 6970 is" and

    "it’s clear that AMD’s tessellation efficiency improvements are quite real, and that with Tahiti AMD can deliver much better tessellation performance than Cayman even at virtually the same theoretical triangle throughput rate."
    Reply
  • Samus - Thursday, December 22, 2011 - link

    I prefer nVidia products, mostly because the games I play (the EA/DICE Battlefield series) are heavily sponsored by nVidia, giving them a development edge.

    That out of the way, nVidia has had their problems, just like this card is going to experience. Remember when Fermi came out? It was a performance joke - not because it was slow, but because it used a ridiculous amount of power to do the same thing as an ATI card while costing substantially more.

    Fermi wasn't successful until second-generation products were released, most obviously the GTX460 and GT430, reasonably priced cards with quality drivers and low power consumption. But it took over a year for nVidia to release those, and it will take over a year for ATI to make this architecture shine.
    Reply
  • kyuu - Thursday, December 22, 2011 - link

    Wat? The only thing there might be an issue with is drivers. As far as power consumption goes, this should be better than Cayman. Reply
  • CeriseCogburn - Sunday, March 11, 2012 - link

    He's saying the 28nm node will have further power improvements. Take it as an amd compliment - rather you should have. Reply
  • StriderTR - Thursday, December 22, 2011 - link

    EA/DICE are just as heavily sponsored by AMD - more, in fact. Not sure where you're getting your information, but it's ... well ... wrong. Nvidia bought the rights to advertise the game with their hardware; AMD is heavily sponsoring BF3 and related material. Example: The Controller.

    Also, the GTX 580 and HD 6970 perform within a few FPS of each other on BF3. I run dual 6970's, my buddy runs dual 580's, and we are almost always within 2 FPS of one another at any given time.

    AMD will have the new architecture "shining" in far under a year. They have been focused on it for a long time already.

    Simple bottom line, both Nvidia and AMD make world class cards these days. No matter your preference, you have cards to choose from that will rock any games on the planet for a long time to come.
    Reply
  • deaner - Thursday, December 22, 2011 - link

    Umm, yea, no. Not so much with nvidia and the EA/DICE Battlefield series giving nvidia a development edge (if it does, the results are yet to be seen).
    Facts are facts: from the 5 series to the card in today's review, the 7970, AMD continues to edge out the Nvidia lines. AMD Catalyst performance in BF3 in particular has been far superior.

    Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    "...most obviously the GTX460 and GT430, reasonably priced cards with quality drivers and low power consumption. But it took over a year for nVidia to release those"

    GTX470/480 launched March 26, 2010
    GTX460 launched July 12, 2010
    GT430 launched October 11, 2010

    Also, Fermi's performance at launch was not a joke. The GTX470 delivered performance between the HD5850 and HD5870, priced in the middle. Looking now, GTX480 ~ HD6970. So again, both of those cards did relatively well at the time. Once you consider overclocking of the 470/480, they did extremely well, both easily surpassing the 5870 in performance in overclocked states.

    Sure power consumption was high, but that's the nature of the game for highest-end GPUs.
    Reply
  • GTVic - Thursday, December 22, 2011 - link

    The first Fermi version they demo'd was a mock-up held together with wood screws. That is not a good launch... Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    And the real launch version produced tessellation performance that it took the HD7970 to pass, had compute performance that the HD7970 can barely best today, had MegaTexture support that the HD7970 just added now, 2 years later, and had a scalar SIMD architecture that took AMD 2 years to release. Reply
  • Scali - Friday, December 23, 2011 - link

    HD7970 doesn't actually surpass Fermi's tessellation, apart from tessellation factors 10 and below:
    http://www.pcgameshardware.de/aid,860536/Test-Rade...
    From factor 11 to 64, Fermi still reigns supreme.

    (This is with AMD's SubD11 sample from the DirectX 11 SDK).
    Reply
  • Scali - Friday, December 23, 2011 - link

    Uhhh no. They demo'ed a real Fermi obviously.
    It was just a development board, which didn't exactly look pretty, and was not in any way representative of the card that would be available to end-users.
    So they made a mock-up to show what a retail Fermi WOULD look like, once it hits the stores.
    Which is common practice anyway in the industry.
    Reply
  • fllib19554 - Thursday, January 12, 2012 - link

    off yourself cretin. Reply
  • futurepastnow - Thursday, December 22, 2011 - link

    You misspelled "impressive." Reply
  • slayernine - Thursday, December 22, 2011 - link

    What Wreckage really meant to say was that it was disappointing for nVidia to get pummelled so thoroughly. Reply
  • unaligned - Friday, December 23, 2011 - link

    A year old card pummeled by the newest technology? I would hope so. Reply
  • MagickMan - Thursday, December 22, 2011 - link

    Go shoot yourself in the face, troll. Reply
  • rs2 - Thursday, December 22, 2011 - link

    Yes, yes. 4+ billion transistors on a single chip is not impressive at all. Why, it's not even one transistor for every person on the planet yet. Reply
  • CeriseCogburn - Sunday, March 11, 2012 - link

    We'll have to see if amd "magically changes" that number and informs Anand it was wrong, like they did concerning their failed recent cpu.... LOL
    That's a whole YEAR of lying to everyone trying to make their cpu look better than its actual fail, and Anand shamefully chose to announce the number change "with no explanation given by amd"...
    That's why you should be cautious - we might find out the transistor count is really 33% different a year from now.
    Reply
  • piroroadkill - Thursday, December 22, 2011 - link

    Only disappointing if you:

    a) ignored the entire review
    b) looked at only the chart for noise
    c) have brain damage
    Reply
  • Finally - Thursday, December 22, 2011 - link

    In Eyefinity setups the new generation shines: http://tinyurl.com/bu3wb5c Reply
  • wicko - Thursday, December 22, 2011 - link

    I think the price is disappointing. Everything else is nice though. Reply
  • CeriseCogburn - Sunday, March 11, 2012 - link

    The drivers suck Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    Not necessarily. The other possibility is that being 37% better on average at 1080P (from this Review) over HD6970 for $320 more than an HD6950 2GB that can unlock into a 6970 just isn't impressive enough. That should be d). Reply
  • piroroadkill - Friday, December 23, 2011 - link

    Well, I of course have a 6950 2GB that unlocked, so as far as I'm concerned, that has been THE choice since the launch of the 6950, and still is today.

    But you have to ignore cost at launch, it's always high.
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    I agree RS, as these amd people are constantly screaming price-percentage increase vs performance increase... yet suddenly applying the exact combo they use as a weapon against Nvidia to themselves is forbidden, frowned upon, discounted, and called unfair....
    Worse yet, according to the same people it's all Nvidia's fault now - that amd is overpriced through the roof... LOL - I have to laugh.
    Also, the image quality page in the review was so biased toward amd that I thought I was going to puke.
    Amd is given credit for a "perfect algorithm" that this very website has often and for quite some time declared makes absolutely no real-world difference in games - and in fact, this very reviewer admitted the 1+ year long amd failure in this area as soon as they released "the fix" - yet argued everyone else was wrong for the prior year.
    The same thing appears here.
    Today we find out the GTX580 nvidia card has much superior anti-shimmering compared to all prior amd cards, and that finally, the 7000-series high-end driver has addressed the terrible amd shimmering....
    Worse yet, the decrepit amd low-quality impaired screens are allowed in every bench, with the 10% amd performance cheat this very site outlined - they merely stated "we hope Nvidia doesn't do this too" - then allowed it, since that year-plus ago...
    In the case of all the above, I certainly hope the high end 797x cards aren't CHEATING LIKE HECK still.
    For cripe's sakes, get the AA stuff going, stop the 10% IQ cheating, get our Bullet physics or pay for PhysX, and stabilize the drivers.... I am sick of seeing praise for cheating and failures - if they (amd) are so great let's GET IT UP TO EQUIVALENCY!
    Wow, I'm so mad I don't have a 7970 as supply is short and I want to believe in amd for once... FOR THE LOVE OF GOD DID THEY GET IT RIGHT THIS TIME?!!?
    Reply
  • slayernine - Thursday, December 22, 2011 - link

    Holy fan boys batman!

    This comment thread reeks of nvidia fans green with jealousy
    Reply
  • Hauk - Thursday, December 22, 2011 - link

    LOL, Wreckage first!

    Love him or hate him, he's got style..
    Reply
  • GenSozo - Thursday, December 22, 2011 - link

    Style? Another possibility is that he has no life, a heavily worn F5 key, and lots of angst. Reply
  • Blaster1618 - Monday, December 26, 2011 - link

    One request when diving into acronyms (from the "quick refresher"): the first use should be followed by a (definition in parentheses) or a hyperlink. Your site does the best job on the web of delving into and explaining the technical evolution of computing. You may even be able to teach the trolls and shills a thing or two they can regurgitate at their post-X-mas-break circle jerk. Never underestimate the importance or reach of your work. Reply
  • Concillian - Thursday, December 22, 2011 - link

    Page 1
    Power Consumption Comparison: Columns: AMD / Price / NVIDIA

    Presumably mislabeled.
    Reply
  • Anand Lal Shimpi - Thursday, December 22, 2011 - link

    Fixed, thank you!

    Take care,
    Anand
    Reply
  • Penti - Thursday, December 22, 2011 - link

    Will the new video decode engine add either software-accelerated gpu or fixed-function hardware WebM/VP8 video decode? ARM SoCs basically already have those capabilities, with Rockchip including hw decoding, TI's OMAP IVA3 DSP video processor supporting VP8/WebM, Broadcom supporting it in their video processor, and others to come. It would be odd to be able to do smooth, trouble-free 1080p WebM on a phone or tablet, but not on a desktop or laptop computer without taxing the cpu and buses like crazy. The hardware is already there in popular devices if they add software/driver support for it.

    Nice to see a new generation card any how.
    Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    It's UVD3, the same decoder that was on Cayman. So if Cayman can't do it, Tahiti can't either. Reply
  • MadMan007 - Thursday, December 22, 2011 - link

    Pretty sure the chart on the first page should be labeled Price Comparison not Power Consumption Comparison.

    Unless perhaps this was a sly way of saying money is power :)
    Reply
  • descendency - Thursday, December 22, 2011 - link

    You list the HD 6870 as 240 on the first page ("AMD GPU Specification Comparison" chart) but then list it as around 160 in the "Winter 2011 GPU Pricing Comparison" chart. 80 dollars is quite a difference. Reply
  • Anand Lal Shimpi - Thursday, December 22, 2011 - link

    Fixed, sorry those were older numbers.

    Take care,
    Anand
    Reply
  • gevorg - Thursday, December 22, 2011 - link

    37.9dB is a horrible testbed for noise testing! WTF! Reply
  • mavere - Thursday, December 22, 2011 - link

    Seriously!

    With the prevalence of practically silent PSUs, efficient tower heatsinks, and large quiet fans, I cannot fathom why the noise floor is 37.9 dB.
    Reply
  • Finally - Thursday, December 22, 2011 - link

    As usual, AT is shooting straight for the brain-dam, I mean, ENTHUSIAST crowd, feat. an unmentioned power supply that should be well around 1000W in order to drive over-priced CPUs as well as quadruple GPU setups.
    If you find that horrendous, they will suggest you read not this review but their upcoming HTPC review, where they will employ the same 1000W power supply...
    Reply
  • B3an - Thursday, December 22, 2011 - link

    *face palm*

    1: 1000+ Watt PSU's are normally more quiet if anything, as they're better equipped to deal with higher power loads. When a system like this uses nowhere near the PSU's full power, the fan often spins at a very low RPM. Some 1000+ PSU's will just shut the fan off completely when a system uses less than 30% of its power.

    2: It's totally normal for a system to be around 40 dB without including the graphics cards. Two or 3 fans alone normally cause this much noise, even if they're large low-RPM fans. Then you have noise from the surroundings, which even in a "quiet" room is normally more than 15 dB.

    3: Grow some fucking brain cells kids.
    Reply
  • andymcca - Thursday, December 22, 2011 - link

    1) If you were a quiet computing enthusiast, you would know that the statement
    "1000+ Watt PSU's are normally more quiet if anything"
    is patently false. 1000W PSUs are necessarily less efficient at realistic loads (<600W at full load in single GPU systems). This is a trade-off of optimizing for efficiency at high wattages. There is no free lunch in power electronics. Lower efficiency yields more heat, which yields more noise, all else being equal. And I assure you that a high-end silent/quiet PSU is designed for low air flow and uses components at least as high in quality as their higher-wattage (non-silent/non-quiet) competitors. Since the PSU is not described (a problem which has been brought up many times in the past concerning AT reviews), who knows?

    2) 40dB is fairly loud if you are aiming for quiet operation. Ambient noise in a quiet room can be roughly 20dB (provided there is not a lot of ambient outdoor noise). 40dB is roughly the amplitude of conversation in a quiet room (non-whispered). A computer that hums as loud as I talk is pretty loud! I'm not sure if your opinion is informed by any empirical experience, but for precise comparison of different sources the floor should be at minimum 20dB below the sources in question.

    3) You have no idea what the parent's age or background is, but your comment #3 certainly implies something about your maturity.
    Reply
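As a side note on the acoustics being argued in this thread: sound pressure levels combine logarithmically, not linearly, which is why a high noise floor masks quieter components. A minimal sketch (the 25 dB card figure is hypothetical, chosen only for illustration):

```python
import math

def combine_db(levels):
    """Combine incoherent noise sources: convert dB to power, sum, convert back."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

# A hypothetical 25 dB graphics card next to a 37.9 dB floor barely registers,
# while two equal 40 dB sources add the familiar ~3 dB.
print(round(combine_db([37.9, 25.0]), 1))  # 38.1
print(round(combine_db([40.0, 40.0]), 1))  # 43.0
```

This is why a floor only a couple of dB below the quietest cards makes fine-grained comparisons between them difficult.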
  • formulav8 - Tuesday, February 21, 2012 - link

    Seriously, grow up. You're a nasty mouth as well. Reply
  • piroroadkill - Thursday, December 22, 2011 - link

    Haha, yeah.

    Still, I guess we have to leave that work to SPCR.
    Reply
  • Kjella - Thursday, December 22, 2011 - link

    High-end graphics cards are even noisier, so who cares? A 250W card won't be quiet no matter what. Using an overclocked Intel Core i7 3960X is obviously so the benchmarks won't be CPU limited, not to make a quiet PC. Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    Our testing methodology only has us inches from the case (an open case I should add), hence the noise from our H100 closed loop radiator makes itself known. In any case these numbers aren't meant to be absolutes, we only use them on a relative basis. Reply
  • MadMan007 - Thursday, December 22, 2011 - link

    [AES chart] on page 7? Reply
  • MadMan007 - Thursday, December 22, 2011 - link

    More stuff missing on page 9:

    [AF filter test image] [download table]
    Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    Yep. Still working on it. Hold tight Reply
  • MadMan007 - Thursday, December 22, 2011 - link

    Np, just not used to seeing incomplete articles published on AnandTech that aren't clearly 'previews'... wasn't sure if you were aware of all the missing stuff. Reply
  • DoktorSleepless - Thursday, December 22, 2011 - link

    Crysis won't be defeated until we're able to play at a full 60fps with 4x super sampling. It looks ugly without the foliage AA. Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    I actually completely agree. That's even farther off than 1920 just with MSAA, but I'm looking forward to that day. Reply
  • chizow - Thursday, December 22, 2011 - link

    Honestly Crysis may be defeated once Nvidia releases its driver-level FXAA injector option. Yes, FXAA can blur textures but it also does an amazing job at reducing jaggies on both geometry and transparencies at virtually no impact on performance.

    There's leaked driver versions (R295.xx) out that allow this option now, hopefully we get them officially soon as this will be a huge boon for games like Crysis or games that don't support traditional AA modes at all (GTA4).

    Check out the results below:

    http://www.hardocp.com/image.html?image=MTMyMjQ1Mz...
    Reply
  • AnotherGuy - Thursday, December 22, 2011 - link

    If nVidia released this card tomorrow they woulda priced it easily $600... The card succeeds in almost every aspect.... except maybe noise... Reply
  • chizow - Thursday, December 22, 2011 - link

    Funny since both of Nvidia's previous flagship single-GPU cards, the GTX 480 and GTX 580, launched for $499 and were both the fastest single-GPU cards available at the time.

    I think Nvidia learned their lesson with the GTX 280, and similarly, I think AMD has learned their lesson as well with underpricing their HD 4870 and HD 5870. They've (finally) learned that in the brief period they hold the performance lead, they need to make the most of it, which is why we are seeing a $549 flagship card from them this time around.
    Reply
  • 8steve8 - Thursday, December 22, 2011 - link

    waiting for amd's 28nm 7770.

    this card is overkill in power and money.
    Reply
  • tipoo - Thursday, December 22, 2011 - link

    Same, we're not going to tax these cards at the most common resolutions until new consoles are out, such is the blessing and curse of console ports. Reply
  • CrystalBay - Thursday, December 22, 2011 - link

    Hi Ryan, were all these older GPUs (ie the 5870, gtx570, 580, 6950) rerun on the new hardware testbed? If so, GJ - lotsa work there. Reply
  • FragKrag - Thursday, December 22, 2011 - link

    The numbers would be worthless if he didn't Reply
  • Anand Lal Shimpi - Thursday, December 22, 2011 - link

    Yep they're all on the new testbed, Ryan had an insane week.

    Take care,
    Anand
    Reply
  • Lifted - Thursday, December 22, 2011 - link

    How many monitors on the market today are available at this resolution? Instead of saying the 7970 doesn't quite make 60 fps at a resolution maybe 1% of gamers are using, why not test at 1920x1080, which is available to everyone, on the cheap, and is the same resolution we all use on our TVs?

    I understand the desire (need?) to push these cards, but I think it would be better to give us results the vast majority of us can relate to.
    Reply
  • Anand Lal Shimpi - Thursday, December 22, 2011 - link

    The difference between 1920 x 1200 vs 1920 x 1080 isn't all that big (2304000 pixels vs. 2073600 pixels, about an 11% increase). You should be able to conclude 19x10 performance from looking at the 19x12 numbers for the most part.

    I don't believe 19x12 is pushing these cards significantly more than 19x10 would, the resolution is simply a remnant of many PC displays originally preferring it over 19x10.

    Take care,
    Anand
    Reply
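The pixel-count arithmetic behind Anand's comparison is easy to check:

```python
# Pixel counts for the two resolutions being compared
px_19x12 = 1920 * 1200
px_19x10 = 1920 * 1080
print(px_19x12, px_19x10)                # 2304000 2073600
print(f"{px_19x12 / px_19x10 - 1:.1%}")  # 11.1%
```

An ~11% pixel increase is small relative to the generational performance gaps in the charts, supporting the point that 19x10 results can be inferred from the 19x12 numbers.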
  • piroroadkill - Thursday, December 22, 2011 - link

    Dell U2410, which I have :3

    and Dell U2412M
    Reply
  • piroroadkill - Thursday, December 22, 2011 - link

    Oh, and my laptop is 1920x1200 too, Dell Precision M4400.
    My old laptop is 1920x1200 too, Dell Latitude D800..
    Reply
  • johnpombrio - Wednesday, December 28, 2011 - link

    Heh, I too have 3 Dell U2410 and one Dell 2710. I REALLY want a Dell 30" now. My GTX 580 seems to be able to handle any of these monitors tho Crysis High-Def does make my 580 whine on my 27 inch screen! Reply
  • mczak - Thursday, December 22, 2011 - link

    The text for that test is not really meaningful. Efficiency of ROPs has almost nothing to do at all with this test, this is (and has always been) a pure memory bandwidth test (with very few exceptions such as the ill-designed HD5830 which somehow couldn't use all its theoretical bandwidth).
    If you look at the numbers, you can see that very well actually, you can pretty much calculate the result if you know the memory bandwidth :-). 50% more memory bandwidth than HD6970? Yep, almost exactly 50% more performance in this test just as expected.
    Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    That's actually not a bad thing in this case. AMD didn't go beyond 32 ROPs because they didn't need to - what they needed was more bandwidth to feed the ROPs they already had. Reply
  • mczak - Thursday, December 22, 2011 - link

    Oh yes _for this test_ certainly 32 ROPs are sufficient (FWIW it uses FP16 render target with alpha blend). But these things have caches (which they'll never hit in the vantage fill test, but certainly not everything will have zero cache hits), and even more important than color output are the z tests ROPs are doing (which also consume bandwidth, but z buffers are highly compressed these days).
    You can't really say if 32 ROPs are sufficient, nor if they are somehow more efficient judged by this vantage test (as just about ANY card from nvidia or amd hits bandwidth constraints in that particular test long before hitting ROP limits).
    Typically it would make sense to scale ROPs along with memory bandwidth, since even while it doesn't need to be as bad as in the color fill test they are indeed a major bandwidth eater. But apparently AMD disagreed and felt 32 ROPs are enough (well for compute that's certainly true...)
    Reply
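mczak's back-of-the-envelope claim - that the color fill result simply tracks memory bandwidth - can be sketched from launch specs. The bus widths and the 5.5 Gbps effective GDDR5 data rate below are the commonly published figures, assumed here rather than taken from this thread:

```python
def bandwidth_gbps(bus_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s: bus width in bits x transfer rate / 8 bits per byte."""
    return bus_bits * data_rate_gtps / 8

hd6970 = bandwidth_gbps(256, 5.5)  # 176.0 GB/s
hd7970 = bandwidth_gbps(384, 5.5)  # 264.0 GB/s
print(hd7970 / hd6970)             # 1.5 -> the ~50% fill-test scaling described above
```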
  • cactusdog - Thursday, December 22, 2011 - link

    The card looks great, an undisputed win for AMD. Fan noise is the only negative; I was hoping for better performance out of the new-gen cooler, but there's always non-reference models for silent gaming.

    Temps are good too, so there's probably room to turn the fan speed down a little.
    Reply
  • rimscrimley - Thursday, December 22, 2011 - link

    Terrific review. Very excited about the new test. I'm happy this card pushes the envelope, but doesn't make me regret my recent 580 purchase. As long as AMD is producing competitive cards -- and when the price settles on this to parity with the 580, this will be the market winner -- the technology benefits. Cheers! Reply
  • nerfed08 - Thursday, December 22, 2011 - link

    Good read. By the way there is a typo in final words.

    faster and cooler al at once
    Reply
  • Anand Lal Shimpi - Thursday, December 22, 2011 - link

    Fixed, thank you :)

    Take care,
    Anand
    Reply
  • hechacker1 - Thursday, December 22, 2011 - link

    I think the most telling are the minimum FPS results. The 7970 is 30-45% ahead of the previous generation in a "worst case" situation where the GPU can't keep up or the program is poorly coded.

    Of course they are catching up with Nvidia's already pretty good minimum FPS, but I am glad to see the improvement, because nothing is worse than stuttering during a fast-paced FPS. I can live with 60fps, or even 30fps, as long as it's consistent.

    So I bet the micro-stutter problem will also be improved in SLI with this architecture.
    Reply
  • jgarcows - Thursday, December 22, 2011 - link

    While I know the bitcoin craze has died down, I would be interested to see it included in the compute benchmarks. In the past, AMD has consistently outperformed nVidia in bitcoin work, it would also be interesting to see Anandtech's take as to why, and to see if the new architecture changes that. Reply
  • dcollins - Thursday, December 22, 2011 - link

    This architecture will most likely be a step backwards in terms of bitcoin mining performance. In the GCN architecture article, Anand mentioned that brute-force hashing was one area where a VLIW-style architecture had an advantage over a SIMD-based chip. Bitcoin mining is based on algorithms mathematically equivalent to password hashing. With GCN, AMD is changing the very thing that made their cards better miners than Nvidia's chips.

    The old architecture is superior for "pure," mathematically well defined code while GCN is targeted at "messy," more practical and thus widely applicable code.
    Reply
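For context on the workload being discussed: Bitcoin's proof of work is double SHA-256 over an 80-byte block header, brute-forcing a 4-byte nonce. A toy CPU sketch - the zeroed header and deliberately easy target are made up for illustration, and real miners run this inner loop across thousands of GPU threads:

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target: int, max_nonce: int = 1_000_000):
    """Brute-force the nonce in the header's last 4 bytes until the
    byte-reversed hash, read as an integer, falls below the target."""
    for nonce in range(max_nonce):
        candidate = header[:76] + struct.pack('<I', nonce)
        if int.from_bytes(double_sha256(candidate)[::-1], 'big') < target:
            return nonce
    return None

# Toy run: zeroed header and an easy target (a real network target is
# astronomically smaller, which is why miners need billions of hashes).
print(mine(b'\x00' * 80, 1 << 252))
```

Each nonce attempt is an independent, branch-free hash, which is exactly the kind of "pure" math the comment above says favored AMD's VLIW designs.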
  • wifiwolf - Thursday, December 22, 2011 - link

    a bit less than expected, but not really an issue:

    http://www.tomshardware.co.uk/radeon-hd-7970-bench...
    Reply
  • dcollins - Thursday, December 22, 2011 - link

    You're looking at a 5% increase in performance for a whole new generation with 35% more compute hardware, increased clock speed and increased power consumption: that's not an improvement, it's a regression. I don't fault AMD for this because Bitcoin mining is a very niche use case, but Crossfire 68x0 cards offer much better performance/watt and performance/$. Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Interesting, amd finally copied nvidia...
    " This problem forms the basis of this benchmark, and the NQueen test proves once more that AMD's Radeon HD 7970 tremendously benefits from leaving behind the VLIW architecture in complex workloads. Both the HD 7970 and the GTX 580 are nearly twice as fast as the older Radeons. "

    When we show diversity we should also show that amd radeon has been massively crippled for a long time except when "simpleton" was the key to speed. "Superior architecture" actually means "simple and stupid" - hence "fast" at repeating simpleton nothings, but unable to handle "complex tasks".
    LOL - the dumb gpu by amd has finally "evolved".
    Reply
  • chizow - Thursday, December 22, 2011 - link

    ....unfortunately its going to be pitted against Kepler for the long haul.

    There's a lot to like about Southern Islands but I think its going to end up a very similar situation as Evergreen vs. Fermi, where Evergreen released sooner and took the early lead, but Fermi ultimately won the generation. I expect similar with Tahiti holding the lead for the next 3-6 months until Kepler arrives, but Kepler and its refresh parts winning this 28nm generation once they hit the streets.

    Overall the performance and changes AMD made with Tahiti look great compared to Northern Islands, but compared to Fermi parts, its just far less impressive. If you already owned an AMD NI or Evergreen part, there'd be a lot of reason to upgrade, but if you own a Fermi generation Nvidia card there's just far less reason to, especially at the asking price.

    I do like how AMD opened up the graphics pipeline with Tahiti though, 384-bit bus, 3GB framebuffer, although I wonder if holding steady with ROPs hurts them compared to Kepler. It would've also been interesting to see how the 3GB GTX 580 compared at 2560 since the 1.5GB model tended to struggle even against 2GB NI parts at that resolution.
    Reply
  • ravisurdhar - Thursday, December 22, 2011 - link

    My thoughts exactly. Can't wait to see what Kepler can do.

    Also...4+B transistors? mind=blown. I remember when we were ogling over 1B. Moore's law is crazy.... :D
    Reply
  • johnpombrio - Wednesday, December 28, 2011 - link

    Exactly. If you look at all the changes that AMD did on the card, I would have expected better results: the power consumption decrease with the Radeon 7970 is mainly due to the die shrink to 28nm. NVidia is planning on a die shrink of their existing Fermi architecture before Kepler is released:

    http://news.softpedia.com/news/Nvidia-Kepler-Is-On...

    Another effect of the die shrink is that clock speed usually increases as there is less heat created at the lower voltage needed with a smaller transistor.

    The third change that is not revolutionary is the bump of the 7970's memory bus to 384 bits (matching the 580) from the 6970's 256 bits, along with 3GB of GDDR5 memory vs the GTX580's 1.5GB and the 6970's 2GB.

    The final non revolutionary change is bumping the number of stream processors by 33% from 1,536 to 2,048.

    Again, breaking out my calculator, the 35% bump in the number of stream processors ALONE causes the increase in the change in the benchmark differences between the 7970 and the 6970.

    The higher benchmark, however, does not show ANY OTHER large speed bumps that SHOULD HAVE OCCURRED due to the increase in the memory bus size, the higher amount of memory, compute performance, texture fill rate, or finally the NEW ARCHITECTURE.

    If I add up all the increases in the technology, I would have expected benchmarks in excess of 50-60% over the previous generation. Perhaps I am naive in how much to expect but, hell, a doubling of transistor count should have produced a lot more than a 35% increase. Add the new architecture, smaller die size, and more memory and I am underwhelmed.
    Reply
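The percentage claims in this sub-thread can be tallied directly. The spec figures below are the commonly published launch numbers (assumed here, not taken from the article):

```python
def pct_increase(old, new):
    """Percent increase from old to new."""
    return (new - old) / old * 100

# HD 6970 -> HD 7970, commonly cited launch specs
print(round(pct_increase(1536, 2048)))      # stream processors: 33
print(round(pct_increase(176, 264)))        # memory bandwidth, GB/s: 50
print(round(pct_increase(2.64e9, 4.31e9)))  # transistor count: 63
```

Average gains near the ~33% shader increase, despite bandwidth and transistor counts growing faster, are consistent with the comment's observation that the benchmarks track the ALU count more than the other upgrades.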
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Well, we can wait for their 50%+ driver increase package + hotfixes - because after reading that it appears they are missing the boat in drivers by a wide margin.
    Hopefully a few months after Kepler blows them away, and the amd fans finally allow themselves to complain to the proper authorities and not blame it on Nvidia, they will finally come through with a "fix" like they did when the amd fans (lead site review mastas) FINALLY complained about crossfire scaling....
    Reply
  • KaarlisK - Thursday, December 22, 2011 - link

    What is the power consumption with multiple monitors? Previously, you could not downclock GDDR5, so the resulting consumption was horrible. Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    "On that note, for anyone who is curious about idle clockspeeds and power consumption with multiple monitors, it has not changed relative to the 6970. When using a TMDS-type monitor along with any other monitor, AMD has to raise their idle clockspeeds from 350MHz core and 600Mhz memory to 350MHz core and the full 5.5GHz speed for memory, with the power penalty for that being around 30W. Matched timing monitors used exclusively over DisplayPort will continue to be the only way to be able to use multiple monitors without incurring an idle penalty." Reply
  • KaarlisK - Thursday, December 22, 2011 - link

    Thank you for actually replying :)
    I am so sorry for having missed this.
    Reply
  • ltcommanderdata - Thursday, December 22, 2011 - link

    Great review.

    Here's hoping that AMD will implement 64-bit FP support across the whole GCN family and not just the top-end model. Seeing as AMD's mobile GPUs don't use the highest-end chip, settling for the 2nd highest and lower, there hasn't been 64-bit FP support in AMD mobile GPUs since the Mobility HD4800 series. I'm interested in this because I can then dabble in some 64-bit GPGPU programming on the go. It also has implications for Apple: since their iMacs stick to mobile GPUs, they would otherwise be stuck without 64-bit FP support, which presumably could be useful for some of their professional apps.

    In regards to hardware-accelerated MegaTexture, is it directly applicable to id Tech 5's OpenGL 3.2 solution? ie. Will id Tech 5 games see an immediate speed-up with no recoding needed? Or does Partially Resident Texture support require a custom AMD-specific OpenGL extension? If it's the latter, I can't see it going anywhere unless nVidia agrees to make it a multivendor EXT extension.
    Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    Games will need to be specifically coded for PRT; it won't benefit any current games. And you are correct in that it will require an AMD OpenGL extension to use (it won't be accessible from D3D at this time). Reply
  • Zingam - Thursday, December 22, 2011 - link

    And by the time it is available in D3D, AMD's implementation won't be compatible... :D That sounds familiar. So we'll have to wait for another generation to get things right. Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    As for your question about FP64, it's worth noting that of the FP64 rates AMD listed for GCN, "0" was not explicitly an option. It's quite possible that anything using GCN will have at a minimum 1/16th FP64. Reply
  • Sind - Thursday, December 22, 2011 - link

    Excellent review, thanks Ryan. Looking forward to seeing where 7950 performance and pricing end up, and to what NV has up their sleeve. Although I can't shake the feeling AMD is holding back. Reply
  • chizow - Thursday, December 22, 2011 - link

    Another great article, I really enjoyed all the state-of-the-industry commentary more than the actual benchmarks and performance numbers.

    One thing I may have missed was any coverage at all of GCN. Usually you guys have all those block diagrams and arrows explaining the changes in architecture. I know you or Anand did a write-up on GCN a while ago, but I may have missed the link to it in this article. Or maybe put a quick recap in there with a link to the full write-up.

    But with GCN, I guess we can close the book on AMD's past Vec5/VLIW4 archs as compute failures? For years ATI/AMD and their supporters have insisted it was the better compute architecture, and now we're on the 3rd major arch change since unified shaders, while Nvidia has remained remarkably consistent with their simple SP approach. I think the most striking aspect of this consistency is that you can run any CUDA or GPU-accelerated apps on GPUs as old as G80, while you noted you can't even run some of the most popular compute apps on the 7970 because of arch-specific customizations.

    I also really enjoyed the ISV and driver/support commentary. It sounds like AMD is finally serious about "getting in the game" or whatever they're branding it nowadays, as I have seen them ramp up their efforts with their logo program. I think one important thing for them to focus on is to get into more *quality* games rather than just focusing on getting their logo program into more games. Still, as long as both Nvidia and AMD are working to further the compatibility of their cards without pushing too many vendor-specific features, I think that's a win overall for gamers.

    A few other minor things:

    1) I believe Nvidia will soon be countering MLAA with a driver-enabled version of their FXAA. While FXAA is available to both AMD and Nvidia if implemented in-game, providing it driver-side will be a pretty big win for Nvidia given how much better performance and quality it offers over AMD's MLAA.

    2) When referring to the active DP adapter, shouldn't it be DL-DVI? In your blurb it said SL-DVI. It's interesting they went this route with the outputs, but providing the active adapter was definitely a smart move. Also, is there any reason GPU mfgs don't just add additional TMDS transmitters to overcome the 4x limitation? Or is it just a cost issue?

    3) The HDMI discussion is a bit fuzzy. HDMI 1.4b specs were just finalized, but haven't been released. Any idea whether or not SI or Kepler will support 1.4b? Biggest concern here is for 120Hz 1080p 3D support.

    Again, thoroughly enjoyed reading the article, great job as usual!
    Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    Thanks for the kind words.

    Quick answers:

    2) No, it's an active SL-DVI adapter. DL-DVI adapters exist, but are much more expensive and more cumbersome to use because they require an additional power source (usually USB).

    As for why you don't see video cards that support more than 2 TMDS-type displays, it's both an engineering and a cost issue. On the engineering side each TMDS source (and thus each supported TMDS display) requires its own clock generator, whereas DisplayPort only requires 1 common clock generator. On the cost side those clock generators cost money to implement, but using TMDS also requires paying royalties to Silicon Image. The royalty is on the order of cents, but AMD and NVIDIA would still rather not pay it.

    3) SI will support 1080P 120Hz frame packed S3D.
    Reply
  • ericore - Thursday, December 22, 2011 - link

    Core Next: It appears AMD is playing catchup to Nvidia's CUDA, but to an extent that halves the potential performance metrics; I see no other reason why they could not have achieved a varying 25-50% improvement in FPS. That is going to cost them: not just marginally better performance (5-25%), but they are price matching the GTX 580, which means fewer sales, though I suppose people who buy $500+ GPUs buy them no matter what. Though in this case, they may wait to see what Nvidia has to offer.

    Other new AMD GPUs: those releasing in February and April are based on the same architecture, but with two critical differences: a smaller node plus low-power-oriented silicon versus the usual performance-oriented silicon. We will see very similar performance metrics, but the table completely flips around: we will see cheaper, much more power-efficient, and therefore very quiet GPUs. I am excited, though I would hate to buy this and see Nvidia deliver where AMD failed.

    Thanks Anand, always a pleasure reading your articles.
    Reply
  • Angrybird - Thursday, December 22, 2011 - link

    any hint on the 7950? This card should go head to head with the GTX580 when it releases. Good job from AMD, great review from Ryan! Reply
  • ericore - Thursday, December 22, 2011 - link

    I should add that with over 4 billion transistors, they've added more than 35% more transistors but only squeezed out a 5-25% improvement; unacceptable. That is a complete fail in that context relative to advancement in gaming. Too much catchup with Nvidia. Reply
  • Finally - Thursday, December 22, 2011 - link

    ...that saying? It goes like this:
    If you don't show up for a race, you lose by default.
    Your favourite company lost, so their fanboys may become green with envydia :)

    Besides that - I'd never shell out more than 150€ for a petty GPU, so neither company's product would have appealed to me...
    Reply
  • piroroadkill - Thursday, December 22, 2011 - link

    Wait, catchup? In my eyes, they were already winning. 6950 with dual BIOS, unlock it to 6970.. unbelievable value.. profit??

    Already has a larger framebuffer than the GTX580, so...
    Reply
  • Esbornia - Thursday, December 22, 2011 - link

    Fan boy much? Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Finally, piroroadkill, Esbornia - the gentleman ericore merely stated the same analysis all the articles here have done, while the radeonite fans repeated it ad infinitum, screaming that Nvidia's giant core count doesn't give the percentage increase it should considering the transistor increase.
    Now, when it's amd's turn, we get ericore under 3 attacks in a row...
    So do you three all take it back concerning Fermi?
    Reply
  • maverickuw - Thursday, December 22, 2011 - link

    I want to know when the 7950 will come out and hopefully it'll come out at $400 Reply
  • duploxxx - Thursday, December 22, 2011 - link

    The mere fact that ATI is able to bring a new architecture to a new process and end up with such a performance increase at that power consumption makes it a clear winner.

    Looking at the past, Fermi's first launch and even Cayman's VLIW4 had many more issues to start with.

    Nice job. While the NV680 will probably perform better, it will take them at least a while to release that product, and it will also need to be huge in size.
    Reply
  • ecuador - Thursday, December 22, 2011 - link

    Nice review, although I really think testing at 1680x1050 for a $550 card is a big waste of time, which could perhaps have gone to multi-monitor testing etc. Reply
  • Esbornia - Thursday, December 22, 2011 - link

    It's Anand, you should expect this kind of shiet. Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    In this case the purpose of 1680 is to allow us to draw comparisons to low-end cards and older cards, which is something we consider to be important. The 8800GT and 3870 in particular do not offer meaningful performance at 1920. Reply
  • poohbear - Thursday, December 22, 2011 - link

    Why do you benchmark @ 1920x1200 resolution? According to the Steam December survey only 8% of gamers have that resolution, whereas 24% have 1920x1080 and 18% use 1680x1050 (the 2 most popular). Also, minimum FPS would be nice to know in your benchmarks, that is really useful for us! Just a heads up for next time you benchmark a video card! Otherwise nice review! Lotsa good info at the beginning! :) Reply
  • Galcobar - Thursday, December 22, 2011 - link

    Page 4, comments section. Reply
  • Esbornia - Thursday, December 22, 2011 - link

    They dont want to show the improvements on min FPS cause they hate AMD, you should know that already. Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    Since 1920x1200 has already been commented on elsewhere I'm just going to jump right to your comment on minimum FPS.

    I completely agree, and we're trying to add it where it makes sense. A lot of benchmarks are wildly inconsistent about their minimum FPS, largely thanks to the fact that minimum FPS is an instantaneous data point. When your values vary by 20%+ per run (as minimums often do), even averaging repeated trials isn't nearly accurate enough to present meaningful results.
    Reply
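Ryan's point about the minimum being an instantaneous data point can be shown with a toy simulation (made-up frame-rate numbers, not AnandTech's actual benchmark data):

```python
import random

# Toy simulation: per-run minimum FPS is far noisier than per-run average FPS,
# because the minimum is a single instantaneous value dominated by rare hitches.
random.seed(1)

def one_run(frames=2000):
    # Mostly ~60 FPS frames, with a ~1% chance per frame of a random hitch.
    return [random.uniform(20, 45) if random.random() < 0.01 else random.gauss(60, 5)
            for _ in range(frames)]

runs = [one_run() for _ in range(10)]      # ten repeated "benchmark passes"
avgs = [sum(r) / len(r) for r in runs]
mins = [min(r) for r in runs]

def spread(xs):
    # Max-to-min spread of a statistic across runs, relative to its mean.
    return (max(xs) - min(xs)) / (sum(xs) / len(xs))

print(f"average-FPS spread across runs: {spread(avgs):.1%}")  # small
print(f"minimum-FPS spread across runs: {spread(mins):.1%}")  # much larger
```

Averaging more trials tightens the average-FPS spread quickly, but the minimum stays at the mercy of whichever single worst frame each run happens to hit, which is why frame-rate-over-time charts are a more robust way to show the lows.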
  • CeriseCogburn - Thursday, March 08, 2012 - link

    HardOCP shows long in-game FPS-over-time charts where dips and bottom-outs are more than one momentary lapse, and are often extended periods of lowest frame rates, so I have to respectfully disagree.
    Perhaps the fault is that FRAPS can only show you a single instance of the lowest frame rate number, and hence it's the analysis that utterly fails - given the time constraints that were made obvious, it is also clear that the extra work it would take for an easily reasoned and reasonable result of worthy accuracy is not in the cards here.
    Reply
  • thunderising - Thursday, December 22, 2011 - link

    Okay. This card has left me thrilled, but wanting for more. Why?

    Well, for example, every reviewer has hit the CCC Core and Memory Max Limits, which turns into a healthy 10-12% performance boost, all for 10W.

    Why, Legit Reviews got it to 1165MHz core and 6550MHz memory for a 21-24% increase in performance. Now that's HUGE!

    I think AMD could have gone for something like this with the final clocks, to squeeze out every last bit of performance from this amazing card:

    Core - 1050 MHz
    Memory - 1500 MHz (6000MHz QDR)

    This was not only easily achievable, but would have placed this card at an 8-10% increase in performance, all for a mere <10W rise in load power.

    Hoping for AIBs like Sapphire to show their magic! HD7970 Toxic, MmmmmmM...

    Otherwise, fantastic card I say.
    Reply
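For reference, the headroom being discussed works out as below (the stock clocks of 925MHz core / 5500MHz effective memory are the 7970's published specs; the 1165/6550 figures are the commenter's reported overclock):

```python
# Clock gains implied by the overclocking figures quoted in the comment above.
stock_core, stock_mem = 925, 5500   # stock 7970 clocks (MHz / effective MHz)
oc_core, oc_mem = 1165, 6550        # reported overclock

core_gain = oc_core / stock_core - 1   # ~0.26
mem_gain = oc_mem / stock_mem - 1      # ~0.19

print(f"core overclock:   {core_gain:.0%}")
print(f"memory overclock: {mem_gain:.0%}")
```

A reported 21-24% performance gain from a ~26% core overclock is close to linear scaling, which suggests the stock clocks are conservative.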
  • Death666Angel - Friday, December 23, 2011 - link

    Maybe they'll do a 4870/4890 thing again? Launch the HD7970 and HD7970X2 and then launch a HD7990 with higher clocks later to counter nVidia.... Who knows. :-) Reply
  • Mishera - Sunday, December 25, 2011 - link

    They've been doing it for quite some time now. Their plan has been to release a chip balancing die size, performance, and cost. Then later to compete on high end release a dual-chip card. Anand wrote on this a while ago with the rv770 story (http://www.anandtech.com/show/2679).

    Even looking at the picture of chip sizes, the 7970 is still a reasonable size. And this really was a brilliant move, as, though Nvidia has half the marketshare and does make a lot of money from their cards, their design philosophy has been hurting them a lot from a business standpoint.

    On a side note, Amd really made a great choice by choosing to wait until now to push for general computing. Though that probably means more people to support development and drivers, which means more hiring, which is the opposite of the way Amd has been going. It will be interesting to see how this dichotomy will develop in the future. But right now kudos to Amd.
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Does that mean amd is abandoning gamers, as we heard the scream whilst Nvidia was doing thus?
    I don't quite get it - now what Nvidia did that hurt them is praiseworthy, since amd finally did it too.
    Forgive me as I scoff at the immense dichotomy...
    "Perfect ripeness at the perfect time" - sorry not buying it....
    Reply
  • privatosan - Thursday, December 22, 2011 - link

    PRT is a nice feature, but there is an error in the article:

    'For AMD’s technology each tile will be 64KB, which for an uncompressed 32bit texture would be enough room for a 4K x 4K chunk.'

    The tile would be 128 x 128 texels; 4K x 4K would be quite big for a tile.
    Reply
  • futrtrubl - Thursday, December 22, 2011 - link

    I was going to comment on that too. A 4k x 4k x 32bit (4byte) texture chunk would be around 67MB uncompressed. For a 32bit texture you could only fit a 128x128 array in a 64KB chunk. Even an 8bit/pixel texture could only be 256x256. Reply
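The arithmetic in these two comments is easy to verify; a quick sketch:

```python
import math

# How big a square texture fits in one 64KB PRT tile at a given bit depth,
# and how big an uncompressed 4K x 4K 32-bit texture actually is.
TILE_BYTES = 64 * 1024

def max_square_side(bytes_per_texel):
    # Largest N such that an N x N array of texels fits in one 64KB tile.
    return math.isqrt(TILE_BYTES // bytes_per_texel)

full_4k = 4096 * 4096 * 4     # uncompressed 32-bit 4K x 4K, in bytes
print(full_4k / 2**20)        # 64.0 MiB (~67MB in decimal units) per texture
print(max_square_side(4))     # 128 -> a 128 x 128 tile at 32bpp
print(max_square_side(1))     # 256 -> even at 8bpp a tile is only 256 x 256
```

So the article's 64KB tile corresponds to a 128 x 128 chunk of an uncompressed 32-bit texture, as privatosan says, not 4K x 4K.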
  • Stonedofmoo - Thursday, December 22, 2011 - link

    Thanks for the review. A request though...
    To the hardware sites doing these reviews: many of us in this day and age run dual monitors or more. It always frustrates me in these reviews that we get a long write-up on the power-saving techniques the new cards use, and never any mention of whether they help those of us running more than one display.

    For those not in the know: if you run more than one display, on all the current generations the cards do NOT downclock the GPU and memory nearly as much as they do in single-monitor configurations. This burns quite a lot more power and obviously kicks out more heat. No site ever mentions this, which is odd considering so many of us have more than one display these days.

    I would happily buy the card that finally overcomes this and actually finds a way of knocking back the clocks with multi-monitor setups. Is the new Radeon 7xxx series that card?
    Reply
  • Galcobar - Thursday, December 22, 2011 - link

    It's in the article, on the page entitled "Meet the Radeon 7970."

    Ryan also replied to a similar comment by quoting the paragraph addressing multi-monitor setups and power consumption at the top of page of the comments.

    That's two mentions, and the answer to your question.
    Reply
  • chiddy - Thursday, December 22, 2011 - link

    Ryan,

    Thanks for the great review. My only gripe - and I've been noticing this for a while - is the complete non-mention of drivers or driver releases for Linux/Unix and/or their problems.

    For example, Catalyst drivers exhibit graphical corruption when using the latest version (Version 3) of the Gnome Desktop Environment since its release before April. This is a major bug which required most users of AMD/ATI GPUs to either switch desktop environments, switch to Nvidia or Intel GPUs, or use the open source drivers which lack many features. A partial fix appeared in Catalyst 11.9, making Gnome 3 usable, but there are still elements of screen corruption on occasion. (Details in the "non-official" AMD-run bugzilla http://ati.cchtml.com/show_bug.cgi?id=99 ).

    AMD have numerous other issues with Linux Catalyst drivers including buggy openGL implementation, etc.

    Essentially, as a hardware review, a quick once-over with non-Microsoft OSes would help a lot, especially for products which are marketed as supporting such platforms.

    Regards,
    Reply
  • kyuu - Thursday, December 22, 2011 - link

    Why in the heck would they mention Linux drivers and their issues in an article covering the (paper) release and preliminary benchmarking of AMD's new graphics cards? It has nada to do with the subject at hand.

    Besides, hardly anyone cares, and those that do care already know.
    Reply
  • chiddy - Thursday, December 22, 2011 - link

    And I guess that AMD GPUs are sold as "Windows Only"?

    Thanks for your informative insight.
    Reply
  • MrSpadge - Thursday, December 22, 2011 - link

    There are no games for *nix and everything always depends on your distribution. The problems are so diverse and numerous.. it would take an entire article to briefly touch this field.
    Exaggerating, but I really wouldn't be interested in endless *nix troubleshooting. Hell, I can't even get nVidia 2D acceleration in CentOS..
    Reply
  • chiddy - Thursday, December 22, 2011 - link

    You have a valid point on that front and I agree, nor would I expect such an article any time soon.

    However, on the other hand, one would at the very least expect a GPU using manufacturer released drivers to load a usable desktop. This is an issue that was distro agnostic and instantly noticeable, and only affected AMD hardware, as do most *nix GPU driver issues!

    If all that was done during a new GPU review was fire it up in any *nix distribution of choice for just a few minutes (even Ubuntu as I think its the most popular at the moment) to ensure that the basics work it would still be a great help.

    I will have to accept though that there is precious little interest!
    Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    Hi Chiddy;

    It's a fair request, so I'll give you a fair answer.

    The fact of the matter is that Linux drivers are not a top priority for either NVIDIA or AMD. Neither party makes Linux drivers available for our launch reviews, so I wouldn't be able to test new cards at launch. Not to speak for either company, but how many users are shelling out $550 to run Linux? Cards like the 7970 have a very specifically defined role: Windows gaming video card, and their actions reflect this.

    At best we'd be able to look at these issues at some point after the launch when AMD or NVIDIA have added support for the new product to their respective Linux drivers. But that far after the product's launch and for such a small category of users (there just aren't many desktop Linux users these days), I'm not sure it would be worth the effort on our part.
    Reply
  • chiddy - Friday, December 23, 2011 - link

    Hi Ryan,

    Thanks very much for taking the time to respond. I fully appreciate your position, particularly as the posts above very much corroborate the lack of interest!

    Thanks again for the response, I very much appreciate the hard work yourself and the rest of the AT team are doing, and its quality speaks for itself in the steady increase in readers over the years.

    If you do however ever find the time to do a brief piece on *nix GPU support after launch of the next generation nVidia and AMD GPUs, that would be wonderful - and even though one would definitely not buy a top-level GPU for *nix, it would very much help those of us who are dual booting (in my case Windows for gaming / Scientific Linux for work), and somewhat remove the guessing game during purchase time. If not though I fully understand :-).

    Regards,
    Ali
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Nvidia consistently wins over and over again in this area, so it's "of no interest", like PhysX... Reply
  • AmdInside - Thursday, December 22, 2011 - link

    I won't be getting much sleep tonight since that article took a long time to read (can't imagine how long it must have taken to write up). Great article as usual. While it has some very nice features, all in all, it doesn't make me regret my purchase of a Geforce GTX 580 a couple of months ago. Especially since I mainly picked it up for Battlefield 3. Reply
  • ET - Thursday, December 22, 2011 - link

    The Cayman GPUs got quite a performance boost from drivers over time, gaining on NVIDIA's GPUs since their launch. The difference in architecture between the 79x0 and 69x0 is larger than between the 69x0 and 58x0, so I'm sure there's quite a bit of room for performance improvement in games.

    Have to say though that I really hope AMD stops increasing the card size each generation.
    Reply
  • haukionkannel - Thursday, December 22, 2011 - link

    Well, the 7970 and other GCN-based cards are not as driver-dependent as those older Radeons. So the improvements are not going to be as great, but surely there will be some! The gap between the 580 or 6970 and the 7970 is going to get wider, but do not expect steps as big as the 6970 got via new sets of drivers. Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    This is actually an excellent point. Drivers will still play a big part in performance, but with GCN the shader compiler in particular is now no longer the end all and be all of shader performance as the CUs can do their own scheduling. Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    I hate to say it but once you implement a 10% IQ cheat, it's tough to do it again and get away with it again in stock drivers.
    I see the 797x has finally got something to control the excessive shimmering... that's about 5 years of fail finally contained... that I've more or less been told to ignore... until the 100+ gig zip download here... to prove amd has at least finally dealt with one IQ epic fail... (of course all the reviewers claim there are no differences all the time - after pointing out the 10% cheat, then forgetting about it, having the shimmer, then "not noticing it in game" - etc).
    I'm just GLAD amd finally did something about that particular one of their problems.
    Hallelujah!
    Now some PhysX (fine, Bullet or OpenCL, but for Pete's sake nvidia is also ahead on both of those!) and AA working even when cranking it to 4X plus would be great... hopefully their new arch CAN DO.
    If I get a couple of 7970's, am I going to regret it, is my question - how much still doesn't work and/or is inferior to nvidia... I guess I'll learn to ignore it all.
    Reply
  • IceDread - Thursday, December 22, 2011 - link

    It's a good card, but for me it's not worth it to upgrade from a 5970 to a 7970. Looks like that would be about the same performance. Reply
  • Scali - Thursday, December 22, 2011 - link

    This is exactly the reason why I made Endless City available for Radeons:
    http://scalibq.wordpress.com/2010/11/25/running-nv...

    Could you run it and give some framerate numbers with FRAPS or such?
    Reply
  • Boissez - Thursday, December 22, 2011 - link

    What many seem to be missing is that it is actually CHEAPER than the current street prices on the 3GB-equipped GTX 580. IOW it offers superior performance, features, thermals, etc. at a lower price than the current gen.

    What AMD should do is get a 1.5 GB model out @ $450 ASAP.
    Reply
  • SlyNine - Thursday, December 22, 2011 - link

    Looks like I'll be sticking with my 5870. I upgraded from 2 8800GTs (which in SLI never functioned quite right because they were hitting over 100C even with aftermarket HSFs) and enjoyed over 2x the performance.

    When I upgraded from a 1900XT to the 8800GTs, same thing; likewise 800XT to 1900XT, 9700 Pro to 800XT, 4200 (Nvidia) to 9700 Pro. The list goes on back to my first GeForce 256 card.

    What's the point? My 5870 is 2(!) generations behind the 7970, yet this would be the worst $-per-performance increase yet. Bummer; I really want something to drive a new 120Hz monitor, if I ever get one. But then that's kinda dependent on whether or not a single GPU can push it.
    Reply
  • Finally - Thursday, December 22, 2011 - link

    Since when do top-of-the-line cards give you the best FPS/$?
    For the last few months the HD6870+HD6850 were leading all those comparisons by quite some margin. The HD7970 will not change that.
    Reply
  • SlyNine - Thursday, December 22, 2011 - link

    If you read my post, you will notice that I'm comparing it to the improvements I have paid for in the past.

    40-60% better than a 2-year-old 5870 is much worse than I have seen so far, considering that it's not just one generation but 2 generations beyond, and for $500+ to boot. This is the worst upgrade for the cost I have seen.....
    Reply
  • SlyNine - Thursday, December 22, 2011 - link

    The 6870 would not lead in cost per performance upgrade at all for me; it would be in the negatives. Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    I think his comment still stands. In terms of a performance leap, at 925mhz speeds at least, this is the worst improvement from 1 major generation to the next since X1950XTX -->2900XT. Going from 5870 to 6970 is not a full generation, but a refresh. So for someone with an HD5870 who wants 2x the speed increase, this card isn't it yet. Reply
  • jalexoid - Thursday, December 22, 2011 - link

    How's OpenCL on Linux/*BSD? Because I fail to see real high performance use in Windows environments for any GPGPU.

    For GPGPU the biggest target should still be Linux/*BSD, because those are the dominating platforms there....
    Reply
  • R3MF - Thursday, December 22, 2011 - link

    "Among the features added to Graphics Core Next that were explicitly for gaming, the final feature was Partially Resident Textures, which many of you are probably more familiar with in concept as Carmack’s MegaTexture technology."

    Is this feature exclusive to gaming, or is it an extension of a virtualised GPU memory feature?

    i.e. if running Blender on the GPU via the Cycles renderer, will I be able to load scenes larger than local graphics memory?
    Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    It's exclusive to graphics. Virtualized GPU memory is a completely different mechanism (even if some of the cache concepts are the same).

    With that said I see no reason it couldn't benefit Blender, but the benefits would be situational. Blender would only benefit in situations where it can't hold the full scene, but can somehow hold the visible parts of the scene by using tiles.
    Reply
  • R3MF - Friday, December 23, 2011 - link

    cheers Ryan Reply
  • Finally - Thursday, December 22, 2011 - link

    ...the 2nd generation HD8870 feat. GCN, 3W idle consumption and hopefully less load consumption than my current HD6870. Just let a company like Sapphire add a silent cooler and I'm happy. Reply
  • poohbear - Thursday, December 22, 2011 - link

    Btw why didn't Anandtech overclock this card? It overclocks like a beast according to all the other review sites! Reply
  • Esbornia - Thursday, December 22, 2011 - link

    Cause they want you to think this card sucks. Come on guys, everybody on the internet knows this site sucks for reviews that are not of Intel products. Reply
  • SlyNine - Thursday, December 22, 2011 - link

    lol troll. This site has preferred whoever had the advantage in whatever area. They will do a follow-up on its OCing; when they first show a card they show it at stock only.

    I do not OC my video cards; what's the point in adding 5% more gain in games that are running maxed anyways?
    Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    Is this comment supposed to be taken seriously? Go troll somewhere else. Reply
  • Iketh - Thursday, December 22, 2011 - link

    As mentioned several times in the article and in the comments, time was an issue. You can rest assured that follow-up articles are in the works. Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    Indeed it is. Reply
  • Malih - Thursday, December 22, 2011 - link

    dude, awesome in-depth (emphasizing on depth) review, thank you very much for the excellent work Ryan. Reply
  • Esbornia - Thursday, December 22, 2011 - link

    After reading a half ass misinforming review full of errors and typos, I think you didn't read it to say something like that. Reply
  • Iketh - Thursday, December 22, 2011 - link

    It is full of typos, but that has nothing to do with in-depth. It was certainly in-depth and a joy to read despite the typos.

    I'd like to know what you believe is misinformation though.
    Reply
  • SlyNine - Thursday, December 22, 2011 - link

    He probably couldn't understand a lot of it and thought they were all typos. Reply
  • WhoBeDaPlaya - Thursday, December 22, 2011 - link

    Sod off you wanker. Go and read Walmart reviews for this card - they're probably more at your level ;) Reply
  • Marburg U - Thursday, December 22, 2011 - link

    Does Eyefinity Technology 2.0 allow me to launch an application within Windows ON WHICHEVER MONITOR I WANT? Reply
  • NikosD - Thursday, December 22, 2011 - link

    It seems that nobody noticed, but where is the FP64 = 1/2 FP32 performance that AMD promised back in June when they first introduced the GCN architecture?

    I copy from Ryan's June article:

    "One thing that we do know is that FP64 performance has been radically improved: the GCN architecture is capable of FP64 performance up to ½ its FP32 performance. For home users this isn’t going to make a significant impact right away, but it’s going to help AMD get into professional markets where such precision is necessary."

    The truth is that FP64 ended up at 1/4 of FP32!

    A big loss for the GPGPU community, even if the 7970 is capable of 3.79 TFLOPS of FP32 compared to the 6970's 2.7 TFLOPS.
    Reply
  • R3MF - Thursday, December 22, 2011 - link

    It says 1/2 in the architecture article, but 1/4 in the consumer product review. Is this AMD taking a leaf from Nvidia's (shitty) book of using drivers to disable features in non-professional (price-tag) products? Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Well, since you brought that up, it reminded me of the PRAISE this or the 7870 article gave amd for "getting rid of all its own competing cards" and zeroing their distribution so that there was only one choice and it wasn't competing with itself.
    I didn't hear a single Nvidia basher cry foul, that amd was playing with the market just to slam dunk some dollars on a new release...
    I do wonder why when amd pulls the dirtiest of dirty, they are praised...
    Reply
  • MrSpadge - Thursday, December 22, 2011 - link

    They'd lose the entire market of Milkyway@Home crunchers if it were just 1/4. Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    FP32 = 3.79 TFlops
    FP64 = 0.948 Tflops, which is about 40% faster than HD6970.
    Reply
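Those figures follow directly from the published specs; a sketch of the arithmetic (assuming the usual convention of counting an FMA as 2 FLOPS per clock, with the 6970's 2.7 TFLOPS FP32 figure taken from the thread):

```python
# 7970 theoretical throughput: 2048 stream processors at 925MHz.
sps, clock_ghz = 2048, 0.925

fp32 = sps * 2 * clock_ghz / 1000   # TFLOPS; FMA counts as 2 FLOPS/clock
fp64 = fp32 / 4                     # the 7970 ships with a 1/4 FP64 rate
hd6970_fp64 = 2.7 / 4               # Cayman was also a 1/4-rate design

print(round(fp32, 2))                    # 3.79
print(round(fp64, 2))                    # 0.95
print(round(fp64 / hd6970_fp64 - 1, 2))  # 0.4 -> about 40% faster than 6970
```

At the 1/2 rate originally described for GCN, FP64 would have come out near 1.89 TFLOPS instead, so whether the 1/4 rate is a chip limit or a product-segmentation choice matters quite a bit for compute users.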
  • Ryan Smith - Thursday, December 22, 2011 - link

    The keyword there was "up to". GCN actually supports 3 different FP64 configurations: 1/2, 1/4, and 1/16. 1/4 may be an artificial limitation on the 7970 or it may be the native speed Tahiti was designed for; it's not clear yet which of those is the case. Reply
  • WhoBeDaPlaya - Thursday, December 22, 2011 - link

    Ryan, any chance you could try running a Bitcoin client on the card to see what kind of hash rates we'd get? Reply
  • Esbornia - Thursday, December 22, 2011 - link

    Man this site is so biased against AMD it hurts. Here we have a new architecture that beats the GTX580 in everything, and sometimes the GTX590, with almost half the power consumption, and they say the only thing the 7970 does right is compute? That it is not a great product but only mediocre? Come on AnandTech, we know Intel owns you but this is getting ridiculous; see the other neutral sites' reviews and you will clearly see what AnandTech does, if you are even a little smart. Reply
  • SlyNine - Thursday, December 22, 2011 - link

    And a product that is 2 years removed from the 5870 and only performing about 40-60% better... and on a new 28nm chip to boot.

    It's just not that great.
    Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    For almost $200 more as well.... Reply
  • MrSpadge - Thursday, December 22, 2011 - link

    Reading the entire review (apart from the conclusion) I came to the conclusion that GCN is really cool. Give it some more software support (=time), a cool BOINC project and the HD7950 and I may have found my next card. Can't say AT talked me off of it.

    MrS
    Reply
  • WhoBeDaPlaya - Thursday, December 22, 2011 - link

    WWYBYWB? Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    That's not what the review says. The review clearly explains that it's the best single GPU for gaming. There is nothing biased about not being mind-blown by a card that's only 25% faster than the GTX580 and 37% faster than the HD6970 on average, considering this is a brand new 28nm node. Name a single generation since the Radeon 8500 where AMD's next-generation card improved performance so little?

    There isn't any!
    Reply
  • SlyNine - Friday, December 23, 2011 - link

    The 2900XT? But I don't remember if that was a new node, or what the % improvement was over the X1950XT.

    But still, this is a $500 card, and I don't think it's what we have come to expect from a new node and generation of card. However, some people seem more than happy with it. Guess they don't remember the 9700 Pro days.
    Reply
  • takeulo - Thursday, December 22, 2011 - link

    As I've read the review, this is not a disappointment. In fact, it's only a single-GPU card, yet it's closely competing with dual-GPU cards like the 6990 and GTX 590...
    Imagine if the 7970 were also a dual GPU? It would totally dominate the rest... sorry for my bad English..
    Reply
  • eastyy - Thursday, December 22, 2011 - link

    Price vs performance is the most important thing for me. At the moment I have a 460 that cost me about £160 a few years ago... it seems like the cards now at the same price don't really give that much of an increase. Reply
  • Morg. - Thursday, December 22, 2011 - link

    What seems unclear to the writer here is that the 6-series AMD was in fact better in single GPU than nVidia.

    Like miles better.

    First, the stock 6970 was within 5% of the GTX580 at high resolutions (and excuse me, but if you pair a 500-buck graphics board with a 100-buck screen... not my problem).

    Second, if you put a 6970 OC'd at GTX580 TDP... the GTX580 is easily 10% slower.

    So overall... seriously... wake the f* up?

    The only thing nVidia won at with Fermi series 2 (GTX 5xx) was making the most expensive, highest-TDP single-GPU card. It wasn't faster; they just picked a price point AMD would never target... and they got it... wonderful.

    However, AMD beat nVidia all the way in perf/watt/dollar, as they did with Intel in the server CPU space since Opteron Istanbul...

    If people like you stopped spouting random crap, companies like AMD would stand a chance of getting the market share their products deserve (sure, their drivers are made of shit).
    Reply
  • Leyawiin - Thursday, December 22, 2011 - link

    The HD 7970 is a fantastic card (and I can't wait to see the rest of the line), but the GTX 580 was indisputably better than the HD 6970. Stock or OC'd (for both). Reply
  • Morg. - Friday, December 23, 2011 - link

    Considering TDP, price and all - no.

    The 6970 lost at most 5% to the GTX580 above full HD, and the bigger the resolution, the smaller the GTX advantage.

    Every benchmark is skewed, but you should try interpreting rather than just reading the conclusion --

    Keep in mind the GTX580 die size is 530mm² whereas the 6970 is 380mm².

    Factor that in, aim for the same TDP on both cards... and believe me... the GTX580 was a complete, total failure, and a total loss above full HD.

    Yes, it WAS the biggest single GPU of its time... but not the best.
    Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    Your post is ill-informed.

    When GTX580 and HD6970 are both overclocked, it's not even close. GTX580 destroyed it.

    http://www.xbitlabs.com/articles/graphics/display/...

    HD6950 was an amazing value card for AMD this generation, but HD6970 was nothing special vs. GTX570. GTX580 was overpriced for its performance advantage even over $370 factory pre-overclocked GTX570 cards (such as the almost eerily similar-performing EVGA 797MHz GTX570 for $369).

    All in all, GTX460 ~ HD6850, GTX560 ~ HD6870, GTX560 Ti ~ HD6950, GTX570 ~ HD6970. The only card that had really poor value was the GTX580. Of course, if you overclocked it, it was a good deal faster than the 6970, which scaled poorly with overclocking.
    Reply
  • Morg. - Friday, December 23, 2011 - link

    I believe you don't get what I said:

    AT THE SAME TDP, THE HD6xxx TOTALLY DESTROYED THE GTX 5xx.

    THAT MEANS: the AMD GPU was better even though AMD decided to sell it at a TDP / price point that made it cheaper and lower-performing than the GTX 5xx.

    The "destroyed it" statement holds at full HD resolution only... which is dumb... I wouldn't ever get a top graphics board just to stick with full HD and a cheap monitor.
    Reply
  • Peichen - Friday, December 23, 2011 - link

    According to your argument, all we'd ever need is an IGP, because no stand-alone card can compete with an IGP at the same TDP / price point. Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    The gentleman should get one of the high end cards, and run on the common 19x0 x 1xxx monitors that poor people have several of nowadays, then crank up a decent 3d game and get back to us.
    These cards are not as powerful as the naysayers think - there's a lot of SUCK left in them even in cheap setups, let alone high end everything X2 or X4.
    Reply
  • Ph0b0s - Thursday, December 22, 2011 - link

    This was one of the most interesting parts of the article. So the only thing that requires new hardware is "Target Independent Rasterization". What does that do, so we know how much we are missing out on with only DirectX 11 hardware? Reply
  • SlyNine - Thursday, December 22, 2011 - link

    Second that. Reply
  • Sgt. Stinger - Thursday, December 22, 2011 - link

    I would like to point out that the reviewer at www.sweclockers.com noticed a full 10°C temperature drop at full load after removing and re-mounting the card's cooler with a quality thermal paste. This is quite significant, and the card should be quieter after doing this. Reply
  • Chloiber - Thursday, December 22, 2011 - link

    Computerbase noticed the exact same thing. They said that AMD even told them there was a problem with the thermal paste (that's why they tested it specifically).
    It's still as loud as a GTX590 with new thermal paste though... :>
    Reply
  • Sgt. Stinger - Thursday, December 22, 2011 - link

    Well, yes, unless you can use custom fan profiles. I hope it is possible at least via third party software.

    If not, the cheapest way to get a reference cooler quiet is to remove the shroud and original fan, and strap on a couple of 120mm fans :D
    Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    This is the first I've heard of this. I've pinged AMD, but if they believed it was a problem for us I expect we would have heard about it by now. Reply
  • Sgt. Stinger - Friday, December 23, 2011 - link

    Well, maybe, but your load temps seem to correspond to sweclockers' load temps as they were before re-mounting the cooler (here's their temp graph, before: http://www.sweclockers.com/image/diagram/2621?k=3f... ); after remounting they got 66°C instead. Reply
  • SlyNine - Friday, December 23, 2011 - link

    Honestly, I'd rather have these benchmarks now than have them cut short because he had to remount the fan.

    Worth looking into moving forward, though.
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Oh, so another amd defect in released cards problem... so did we ever get the same treatment for the 480 or the 470 or the 580 or the 570 ?
    NO !
    Of course not silly rabbit, screw nvidia and excuse amd and blame anything and anyone but.....
    Thank you this is hilarious...
    Reply
  • nitro912gr - Thursday, December 22, 2011 - link

    I was planning a switch from AMD (4850) to an nVidia GPU for my next upgrade, because their cards perform well both in computing and in gaming, and I need both bases covered.

    But now I'm not sure about that, I will wait a bit to see how the software will welcome the new architecture first.

    I hope they work as well so I can just pick the cheapest GPU.
    Reply
  • Chloiber - Thursday, December 22, 2011 - link

    The thing I still don't like about the new AMD cards is their massive problems with anisotropic filtering. AMD promised twice (with Cayman and Tahiti) that the "AF bug" was gone. But their AF is still inferior to NV's and worse than older (pre-R600) cards.
    The bad thing is that it's easily detectable in games, not just a theoretical flaw. It got better than Cayman, but it's still worse than NV's AF.
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Yes, thank you for that. We are supposed to ignore all amd radeon issues and failures and less thans, though, so we can extoll the greatness....
    Then when they "finally catch up to nvidia years later with some fix", the reviewers can tell us and admit openly that amd radeon has a years long problem of inferiority to nvidia that they "finally solved" and then we can get a gigantic zipped up download to show what "was for years amd fail hidden and not spoken of" is gone !
    Hurrah !
    Wow it's so much fun seeing it happen, again.
    Reply
  • KoVaR - Thursday, December 22, 2011 - link

    Awesome job on power consumption and noise levels. If only AMD did so well in the CPU realm... Reply
  • alpha754293 - Thursday, December 22, 2011 - link

    Can you play a game while running a compute job?

    There's word that even the nVidia Tesla compute accelerators (based on Fermi) stutter when you try to play a game or video while they are actively computing/working on something else.

    Is that the case here too?
    Reply
  • SlyNine - Thursday, December 22, 2011 - link

    I'm sure it does; context switching still incurs a huge penalty. Reply
  • MrSpadge - Thursday, December 22, 2011 - link

    GCN won't be able to help this on its own. The software needs to catch up. It's a major concern for true GP-GPU and heterogeneous computing, though! And not even just launching a game; trying to use your desktop is enough of a problem already.

    MrS
    Reply
  • shin0bi272 - Thursday, December 22, 2011 - link

    What I'd really like to see is, when you guys bench an nvidia PhysX game, run the bench with PhysX on (maxed out if there's an option) once and off once.

    I know everyone is going to claim that PhysX is a gimmick, but a good portion of the reason is that when a game supports it, NO ONE BENCHMARKS WITH IT ON! That's like buying a big screen TV and covering half of it with duct tape. And let's not forget AMD opted not to use the tech when nvidia offered it to them... so AMD's loss is Nvidia's gain, and no one uses it in their reviews because it's not hardware neutral. That's partial favoritism IMHO.

    Also, why wasn't the GTX590 or the 6990 tested @ 16x10 DX10 HQ 16xAF in Metro 2033? The 580 and the 6970 were tested, but not the dual-chip cards. What's up with that?
    Reply
  • Finally - Thursday, December 22, 2011 - link

    Spoken like a true Nvidia viral marketing shill Reply
  • shin0bi272 - Friday, December 23, 2011 - link

    So because I prefer the extra eye candy PhysX offers, I can't ask a question about testing methodology? Sounds like someone has PhysX envy. Reply
  • SlyNine - Friday, December 23, 2011 - link

    Not really. If Nvidia didn't handicap the CPU version of PhysX so badly, then I'd be fine with it. But Nvidia purposely left the CPU version of PhysX totally gimped. Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    I agree, but that's the way it is, guy. The amd fans don't care what they and their reviewers pull, and frankly the reviewers would receive death threats if they didn't comply with amd fanboy demands....
    So when nvidia had ambient occlusion active for several generations back in a driver add, we were suddenly screamed at that shadows in games suck.... because of course amd didn't have that feature...
    That's how the whole thing is set up - amd must be the abused underdog, nvidia must be the evil mis-implementer, until of course amd gets an actual win, or even any win, even with the 10% IQ performance cheat solidly in place, and any other things like failed AA, poor tessellation performance, no PhysX, etc, etc, etc...
    We just must hate nvidia for being better and of course it's all nvidia's fault as they are keeping the poor red radeon down....
    If amd radeon has " a perfectly circular algorithm " and it does absolutely nothing and even worse in all games, it is to be praised as an advantage anyway.... and that is still happening to this very day... we ignore shimmer until now, when amd 79xx has a fix for it.... etc..
    Dude, that's the way it is man....
    Nvidia is the evil, and they're keeping the radeon down...
    They throw around money too ( that's unfair as well - and evil ...)
    See?
    So just pretend anything radeon cannot do that nvidia can doesn't count and is bad, and then make certain nvidia is cut down to radeon level: IQ cheat, no PhysX, AA not turned on, Tessellation turned down, default driver hacks left in place for amd, etc....
    Then be sure to cheer when some price perf calc ignoring all the above shows a higher and or lower and card to have a few cents advantage... no free game included, no eyefinity cables... etc.
    Just dude... amd = good / nvidia=evil ...
    Cool ?
    Reply
  • shin0bi272 - Thursday, December 22, 2011 - link

    Since I can't edit my comments, I have to post this in a second comment instead.

    According to the released info, Nvidia's next-gen flagship GK-100/GK-112 chip will feature a total of 1024 shaders (CUDA cores), 128 texture units (TMUs), 64 ROPs and a 512-bit GDDR5 memory interface. The 28nm next-gen beast would outperform the current dual-chip GeForce GTX590.
    Reply
  • shaboinkin - Thursday, December 22, 2011 - link

    Can someone tell me why GPUs tend to have many more transistors than CPUs? I never knew why. Reply
  • Boushh - Thursday, December 22, 2011 - link

    Basically it has to do with the difference between programs (= CPU instructions) and graphics (= pixels):

    A program consists of CPU instructions, and many of these instructions depend on output from the previous instruction. Therefore adding more pipelines that can work on the instructions doesn't really help.

    A picture consists of pixels, and these can be processed in parallel. So if you double the number of pipelines (= pixels you can work on at the same time), you double the performance.

    Therefore CPUs don't have that many transistors. In fact, most transistors in a CPU are in the cache memory, not in the actual CPU cores. And GPUs do.

    Of course this is hust a simple explenation, the through is much much more complex ;-)
    Reply
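    Boushh's distinction above can be sketched in a toy example (illustrative only; real instruction streams and shader workloads are far more complex):

    ```python
    from multiprocessing.dummy import Pool  # thread pool, just for illustration

    def serial_chain(x, steps):
        """CPU-style work: each step needs the previous result,
        so extra pipelines can't speed up the chain."""
        for _ in range(steps):
            x = x * 3 + 1  # depends on the previous x
        return x

    def shade(pixel):
        """GPU-style work: each pixel is independent of every other."""
        return min(255, pixel * 2)

    result = serial_chain(1, 3)           # must run step by step -> 40

    pixels = list(range(256))
    with Pool(4) as pool:                 # independent pixels map cleanly
        shaded = pool.map(shade, pixels)  # onto as many workers as you have
    ```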
  • Boushh - Thursday, December 22, 2011 - link

    That last line should read:

    'Of course this is just a simple explanation, the reality is much much more complex'

    Reminds me to yet again vote for an EDIT button !!!! Maybe as a christmas present ? PLEASE !!!
    Reply
  • shaboinkin - Thursday, December 22, 2011 - link

    Interesting...
    Do you know of a site that goes into the finer details?
    Reply
  • Mishera - Wednesday, December 28, 2011 - link

    If you're looking for something to specifically answer your question, check different tech sites. I think realworldtech addressed this to a degree. Jon Stokes at arstechnica, from what I heard, wrote some pretty good articles on chip design as well. But if it's a question on chip architecture, reading some textbooks is your best bet. I asked a similar question in the forums before and got some great responses; just check my posts.

    I'd add to what Boushh said that for the type of information GPUs process, it's beneficial to have more parallel performance (and not just for graphics). That's why Amd has been pushing to integrate the gpu into the CPU. That also to a degree shows the different philosophy right now between intel and Amd in multicore computing (or the difference between Amd's new gpu architecture and their previous one).

    What it comes down to is optimizing chip design to make use of programs, and vice versa. There really is no absolute when dealing with this.
    Reply
  • MrSpadge - Thursday, December 22, 2011 - link

    It's not like - as stated several times in the article - AMD is wrong about the power target of the HD7970, if they mean the PowerTune limit. Think of it as "the card is built to handle this much heat, and is guaranteed not to exceed it". That doesn't forbid drawing less power. And that's exactly what the HD6970 does: it's got the same "power target", but it uses less of its power budget than the HD7970.

    Like CPUs, whose real world power consumption is often much less than the TDP.

    MrS
    Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    PowerTune is a hard cap on power consumption. Given a sufficient workload (i.e. FurMark or OCCT), you can make the card try to consume more power than it is allowed, at which point PowerTune kicks in. Or to put this another way, PowerTune doesn't kick in unless the card is at its limit.

    PowerTune kicked in for both the 6970 and 7970. In which case both cards should have be limited to 250W.
    Reply
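    To illustrate the hard-cap behavior Ryan describes, a minimal sketch (hypothetical controller with made-up numbers; the real PowerTune logic estimates power from on-die activity and adjusts clocks per time-slice):

    ```python
    def powertune_clock(estimated_watts, base_mhz=925, cap_watts=250):
        """Return a core clock that keeps estimated power under the cap.

        Hypothetical model: power is treated as linear in clock speed.
        Under the cap nothing happens; over it (FurMark/OCCT-style
        loads), the clock is scaled down until the cap is met.
        """
        if estimated_watts <= cap_watts:
            return base_mhz  # typical game loads: full clocks
        return base_mhz * cap_watts / estimated_watts  # throttle

    game_clock = powertune_clock(210)     # under the cap: unchanged
    furmark_clock = powertune_clock(300)  # over the cap: reduced
    ```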
  • tw99 - Thursday, December 22, 2011 - link

    I just wanted to say thank you for including the 8800 GT in some of your benchmark charts. Even though it's dated hardware, including it in your comparisons illustrates the punch the newer hardware has and assists decision-making for people like myself looking to upgrade from their current setup, unlike most benchmarking articles on other sites that compare only the most recent generations, without considering what people have now. Reply
  • Leyawiin - Thursday, December 22, 2011 - link

    I wonder if the Arctic Cooling Twin Turbo II I have sitting in the closet (and haven't ever used) would fit on one of these? It's compatible with up to an HD 6970, so I know it can cool one of these sufficiently (if the mounting holes match their old cards). Maybe I should wait to see what the HD 7950 is like - buying the top-of-the-line card at launch usually isn't smart from a value standpoint. Reply
  • Leyawiin - Thursday, December 22, 2011 - link

    It's all a moot point anyway. Damn "soft launch" - not available for at least three weeks. Just a marketing ploy to keep people from buying Nvidia's top cards at the moment. If you aren't ready to sell your cards, keep your mouth shut. Reply
  • james.jwb - Thursday, December 22, 2011 - link

    I have an Arctic Cooling Extreme Plus II on a 6970 and wouldn't use the smaller versions. But I'm also interested to know if it'll fit the 7970. In all honesty, though, until these prices come down I won't go near this card; the performance increases just aren't worth it for most people. Reply
  • Dark Man - Thursday, December 22, 2011 - link

    It looks like page 7 and 8 got the same content ? Reply
  • Dark Man - Thursday, December 22, 2011 - link

    Sorry, page 8 and 9 Reply
  • Dark Man - Thursday, December 22, 2011 - link

    Page 13 and 14, too Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    We added a couple of pages this morning; you're probably seeing the cascade effect of the rest of the pages being pushed back. Reply
  • evilspoons - Thursday, December 22, 2011 - link

    I'd just like to say that I found this review harder to read than the usual stuff on Anandtech. Everything seemed wordy - if there was an opportunity to use a sentence instead of a word, the sentence was used.

    Good job on the comprehensive information, but trim the fat off the writing next time, please!
    Reply
  • RussianSensation - Thursday, December 22, 2011 - link

    Even if it's a 6 months lead, 2012 is so far looking like a year full of console ports. We have Syndicate (February 21, 2012), then Mass Effect 3, Max Payne 3 (both on March 6). Those games will get crushed by modern GPUs. HD7970 is an amazing buy for those who are building a new system now/soon and planned to spend $500+ on a GPU. But for current GPU owners, it's not enough of a performance boost imho. And on its own, it's still not fast enough for 2560x1600 either. It's a good card, but since modern GPU generations last 18-24 months, it's too early to call it great. Reply
  • Ananke - Thursday, December 22, 2011 - link

    "The 7970 leads the 5870 by 50-60% here and in a number of other games"...and as I see it also carries 500-600% of price premium over the 5870.

    Meh, this is so so priced for a FireGL card, but very badly placed for a consumer market. Regardless, CUDA is getting more open meanwhile. AMD is still several generations/years behind in the HPC market and marketing a product above the NVidia price targets will not help AMD to make it popular.

    Having say so, I am using ATI cards for gaming for several years already, and I am very pleased with their IQ and performance. I have always pre-purchased my ATI cards... What I am missing though is teh promised and never materialized consumer level software that can utilize its calculation ability, aka CyberLink and other video transcoders. If it was not for the naughty Nvidia power draw in the 5th series, I would've gone green to have CUDA. Hence, considering SO MUCH MONEY, I am waiting at least 6 months from now to see what the prices will be for the both new contenders in next GPU architectures :).
    Reply
  • Dangerous_Dave - Thursday, December 22, 2011 - link

    Seems like AMD can't do anything right these days. Bulldozer was designed for a world that doesn't exist, and now we have this new GPU stinking up the place. I'm sorry but @28nm you have double the transistors per area compared with @40nm, yet the performance is only 30% better for a chip that is virtually the same size! It should be at least twice as far ahead of the 6970 as that, even on immature drivers. As it stands, AMD @ 28nm is only just ahead of Nvidia @ 40nm as far as minimums go (the only thing that matters).

    I shudder to think how badly AMD is going to get destroyed when Nvidia release their 28nm GPU.
    Reply
  • Finally - Friday, December 23, 2011 - link

    I shudder to think how badly one Nvidia fanboy's ego is going to get scratched if team red released a better GPU and his favourite team has nothing to offer.

    Oh... they did?
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    We have to let amd "go first" now since they have been so on the brink of bankruptcy collapse for so long that they've had to sell off most of their assets... and refinance by AbuDhabi oil money...
    I think it's nice our laws and global economy puts pressure on the big winners to not utterly crush the underdogs...
    Really, if amd makes another fail it might be the last one before collapse and "restructuring" and frankly not many of us want to see that...
    They already made the "last move" a dying company does and slashed with the ax at their people...
    If the amd fans didn't constantly demand they be given a few dollars off all the time, amd might not be failing - I mean think about it - a near constant loss, because the excessive demand for price vs perf vs the enemy is all the radeon fans claim to care about.
    It would be better for us all if the radeon fans dropped the constant $ complaints and just manned up and supported AMD as real fans, with their pocketbooks... instead of driving their favorite toward bankruptcy and cooked books filled with red ink...
    Reply
  • Dangerous_Dave - Thursday, December 22, 2011 - link

    On reflection, this card is even worse than my initial analysis suggested. For 3.4 billion transistors AMD could have done no research at all and simply integrated two 6870s onto a single die (a la 5870 vs 4870) and ramped the clock speed to somewhere over 1GHz (since 28nm would have easily allowed that). This would have produced performance similar to a 6990, and far in excess of the 7970.

    Instead they've done a lot of research and spent 4.3 billion transistors creating a card that is far worse than a 6990!

    That's the value of AMD's creative thinking.
    Reply
  • cknobman - Thursday, December 22, 2011 - link

    The sad part is you're likely too stupid to realize just how idiotic your post sounds.

    They introduced a new architecture that facilitates much better compute performance as well as giving more gaming performance.

    Did you read the article and look at the compute benchmarks or did you just flip through the game benchmark pages and look at numbers without reading?
    Reply
  • Zingam - Thursday, December 22, 2011 - link

    Or maybe you just don't realize that they've added another 2 billion transistors for a minimal graphics performance increase over the previous generation.

    That's almost as if you buy a new-generation BMW that has 600 hp instead of 300 hp but is not able to drag a bigger trailer.
    The only benefit for you would be that you can brag that you've just got the most expensive and useless car available.
    Reply
  • Finally - Friday, December 23, 2011 - link

    Rule 1A:
    The frequency of a car pseudoanalogy to explain a technical concept increases with thread length. This will make many people chuckle, as computer people are rarely knowledgeable about vehicular mechanics.
    Reply
  • cknobman - Friday, December 23, 2011 - link

    Holy sh!t, are you not reading and understanding the article and posts here??????????

    The extra transistors and new architecture were there to increase COMPUTE PERFORMANCE as well as graphics.

    Think bigger picture here, dude, not just games. Think of Fusion and how general computing and graphics computing will merge into one.

    This architecture is much bigger than just a graphics card for games.

    This is AMD's Fermi, except they did it about 100x better than Nvidia, keeping power in check and still having amazing performance.

    Plus, you're looking at probably beta drivers (heck, maybe alpha), so there could very well be another 10+% increase in performance once this thing hits retail shelves and gets some driver improvements.
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    I see. So when nvidia did it, it was abandoning gamers for 6 months of ripping away and gnawing plus... but now, since it's amd, amd has done it 100X better... and no abandonment...
    Wow.
    I love hypocrisy in it's full raw and massive form - it's an absolute wonder to behold.
    Reply
  • Zingam - Thursday, December 22, 2011 - link

    I think this card is kind of a fail. Well, maybe it is a driver issue and they'll up the performance 20-25% in the future, but it is still not fast enough for such a huge jump - two nodes down!!!
    It smells like a graphics Bulldozer for AMD. Good ideas on paper, but in practice something doesn't work quite right. Raw performance is all that counts (of course, raw performance/$).
    If NVIDIA does better than usual this time, AMD might be in trouble. Well, we'll wait and see.
    Hopefully they'll be able to release improved CPUs and GPUs soon, because this generation does not seem to be very impressive.

    I expected at least triple the performance of the previous generation. Maybe the drivers are not that well optimized yet. After all, it is a huge architecture change.

    I don't really care that much about this GPU generation, but I'm worried that they won't be able to put something impressively new in the next generation of consoles. I really hope we are not stuck with an obsolete CPU/GPU combination for the next 7-8 years again.

    Anyway: massively parallel computing sounds tasty!
    Reply
  • B3an - Thursday, December 22, 2011 - link

    You don't seem to understand that all those extra transistors are mostly there for computing. That's mostly what this was designed for, not specifically for gaming performance. Computing is where this card will offer massive increases over the previous AMD generation.
    Look at Nvidia's Fermi: it had way more transistors than the previous generation but wasn't that much faster than AMD's cards at the time, because again, all the extra transistors were mainly for computing.

    And come on, LOL, expecting over triple the performance?? That has never happened once with any GPU release.
    Reply
  • SlyNine - Friday, December 23, 2011 - link

    The 9700 Pro was up to 4x faster than the Ti 4600 in certain situations. So yes, it has happened. Reply
  • tzhu07 - Thursday, December 22, 2011 - link

    LOL, triple the performance?

    Do you also have a standard of dating only Victoria's Secret models?
    Reply
  • eanazag - Thursday, December 22, 2011 - link

    I have a 3870 which I got in late 2007. It still does well for the main games I play: Dawn of War 2 and Starcraft 2 (25 fps has been fine for me here with settings mostly maxed). I have been eyeing a new card. I like the power usage and thermals here, but I am not spending $500+. I am thinking they are using that price to compensate for the mediocre yields they are getting on 28nm, but either way the numbers look justified. I will be looking for the best card between $150-$250, maybe $300. I am counting on this card's price coming down, but I doubt it will hit under $350-400 next year.

    No matter what, this looks like a successful soft launch of a video card. For me, anything smokes what I have in performance, though not so much in power usage. I'd really not mind the extra noise, as the heat is better than my 3870's.

    I'm in the single-card strategy camp.

    Monitor is a single 42" 1920x1200 60 Hz.
    Intel Core i5 760 at stock clocks. My first Intel since the P3 days.

    Great article.
    Reply
  • Death666Angel - Thursday, December 22, 2011 - link

    Can someone explain the different heights in the die-size comparison picture? Does that reflect process changes? I'm lost. :D Otherwise, good review. I don't see the HD7970 in Bench; am I blind or is it just missing? Reply
  • Ryan Smith - Thursday, December 22, 2011 - link

    The Y axis is the die size. The higher a GPU the bigger it is (relative to the other GPUs from that company). Reply
  • Death666Angel - Friday, December 23, 2011 - link

    Thanks! I thought the actual sizes were the sizes and the y-axis meant something else. Makes sense though how you did it! :-) Reply
  • MonkeyPaw - Thursday, December 22, 2011 - link

    As a former owner of the 3870, mine had the short-lived GDDR4. That old card has a place in my nerd heart, as it played Bioshock wonderfully. Reply
  • Peichen - Thursday, December 22, 2011 - link

    The improvement is simply not as impressive as I was led to believed. Rumor has it that a single 7970 would have the power of a 6990. In fact, if you crunch the numbers, it would be at least 50% faster than 6970 which should put it close to 6990. (63.25% increase in transistors, 40.37% in TFLOP and 50% increase in memory bandwidth.)

    What we got is a Fermi 1st gen with the price to match. Remember, this is not a half-node improvement in manufacturing process, it is a full-node and we waited two years for this.

    In any case, I am just ranting because I am waiting for something to replace my current card before GTA 5 comes out. Nvidia's GK104 in Q1 2012 should be interesting. Rumored to be slightly faster than the GTX 580 (slower than the 7970) but much cheaper. We'll see.
    Reply
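The generational deltas Peichen cites line up with the commonly quoted spec-sheet figures. A back-of-the-envelope check, assuming 2.64B vs 4.31B transistors, 2.70 vs 3.79 FP32 TFLOPs, and 176 vs 264 GB/s for the 6970 vs 7970 (public figures, not taken from this article):

```python
# Percentage increase per spec, HD 6970 -> HD 7970 (assumed public figures).
specs = {
    "transistors (B)":  (2.64, 4.31),
    "FP32 TFLOPs":      (2.70, 3.79),
    "bandwidth (GB/s)": (176.0, 264.0),
}

gains = {name: (new / old - 1) * 100 for name, (old, new) in specs.items()}
for name, gain in gains.items():
    print(f"{name}: +{gain:.2f}%")
```

This reproduces the commenter's 63.25%/40.37%/50% figures to within rounding.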
  • B3an - Thursday, December 22, 2011 - link

    Anyone with half a brain should have worked out that, since this was going to be AMD's Fermi, it would not have had a massive increase for gaming, simply because many of those extra transistors are there for compute purposes, NOT for gaming. Just as with Fermi.

    The performance of this card is pretty much exactly as i expected.
    Reply
  • Peichen - Friday, December 23, 2011 - link

    AMD has been saying for ages that GPU computing is useless and CPU is the only way to go. I guess they just have a better PR department than Nvidia.

    BTW, before suggesting I have suffered brain trauma, remember that Nvidia delivered on Fermi 2, and GK100 will be twice as powerful as GF110.
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Well it was nice to see the amd fans with half a heart admit amd has accomplished something huge by abandoning gaming, as they couldn't get enough of screaming it against nvidia... even as the 580 smoked up the top line stretch so many times...
    It's so entertaining...
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    AMD is the dumb company. Their dumb gpu shaders. Their x86 copying of intel. Now after a few years they've done enough stealing and corporate espionage to "clone" Nvidia architecture and come out with this 7k compute.
    If they're lucky Nvidia will continue doing all software groundbreaking and carry the massive load by a factor of ten or forty to one working with game developers, porting open gl and open cl to workable programs and as amd fans have demanded giving them PhysX ported out to open source "for free", at which point it will suddenly be something no gamer should live without.
    "Years behind" is the real story that should be told about amd and it's graphics - and it's cpu's as well.
    Instead we are fed worthless half truths and lies... a "tesselator" in the HD2900 (while pathetic dx11 perf is still the amd norm)... the ddr5 "groundbreaker" ( never mentioned was the sorry bit width that made cheap 128 and 256 the reason for ddr5 needs)...
    Etc.
    When you don't see the promised improvement, the radeonites see a red rocket shooting to the outer depths of the galaxy and beyond...
    Just get ready to pay some more taxes for the amd bailout coming.
    Reply
  • durinbug - Thursday, December 22, 2011 - link

    I was intrigued by the comment about driver command lists, somehow I missed all of that when it happened. I went searching and finally found this forum post from Ryan:
    http://forums.anandtech.com/showpost.php?p=3152067...

    It would be nice to link to that from the mention of DCL for those of us not familiar with it...
    Reply
  • digitalzombie - Thursday, December 22, 2011 - link

    I know I'm a minority, but I use Linux to crunch data and GPU would help a lot...

    I was wondering if you guys could try these cards on Debian/Ubuntu or Fedora? And maybe report whether 3D acceleration actually works? My current AMD card has bad drivers for Linux, with tearing and glitches, which sucks when I try to number crunch and map stuff out graphically in 3D. Hell, I tried compiling the driver's source code and it doesn't work.

    Thank you!
    Reply
  • WaltC - Thursday, December 22, 2011 - link

    Somebody pinch me and tell me I didn't just read a review of a brand-new, high-end ATi card that apparently *forgot* Eyefinity is a feature the stock nVidia 580--the card the author singles out for direct comparison with the 7970--doesn't offer in any form. Please tell me it's my eyesight that is failing, because I missed the benchmark bar charts detailing the performance of the Eyefinity 6-monitor support in the 7970 (but I do recall seeing esoteric bar-chart benchmarks for *PCIe 3.0* performance comparisons, however. I tend to think that multi-monitor support, or the lack of it, is far more an important distinction than PCIe 3.0 support benchmarks at present.)

    Oh, wait--nVidia's stock 580 doesn't do nVidia's "NV Surround triple display" and so there was no point in mentioning that "trivial fact" anywhere in the article? Why compare two cards so closely but fail to mention a major feature one of them supports that the other doesn't? Eh? Is it the author's opinion that multi-monitor gaming is not worth having on either gpu platform? If so, it would be nice to know that by way of the author's admission. Personally, I think that knowing whether a product will support multi monitors and *playable* resolutions up to 5760x1200 ROOB is *somewhat* important in a product review. (sarcasm/massive understatement)

    Aside from that glaring oversight, I thought this review was just fair, honestly--and if the author had been less interested in apologizing for nVidia--we might even have seen a better one. Reading his hastily written apologies was kind of funny and amusing, though. But leaving out Eyefinity performance comparisons by pretending the feature isn't relative to the 7970, or that it isn't a feature worth commenting on relative to nVidia's stock 580? Very odd. The author also states: "The purpose of MST hubs was so that users could use several monitors with a regular Radeon card, rather than needing an exotic all-DisplayPort “Eyefinity edition” card as they need now," as if this is an industry-standard component that only ATi customers are "asking for," when it sure seems like nVidia customers could benefit from MST even more at present.

    I seem to recall reading the following statement more than once in this review but please pardon me if it was only stated once: "... but it’s NVIDIA that makes all the money." Sorry but even a dunce can see that nVidia doesn't now and never has "made all the money." Heh...;) If nVidia "made all the money," and AMD hadn't made any money at all (which would have to be the case if nVidia "made all the money") then we wouldn't see a 7970 at all, would we? It's possible, and likely, that the author meant "nVidia made more money," which is an independent declaration I'm not inclined to check, either way. But it's for certain that in saying "nVidia made all the money" the author was--obviously--wrong.

    The 7970 is all the more impressive considering how much longer nVidia's had to shape up and polish its 580-ish driver sets. But I gather that simple observation was also too far fetched for the author to have seriously considered as pertinent. The 7970 is impressive, AFAIC, but this review is somewhat disappointing. Looks like it was thrown together in a big hurry.
    Reply
  • Finally - Friday, December 23, 2011 - link

    On AT you have to compensate for their over-steering while reading. Reply
  • Death666Angel - Thursday, December 22, 2011 - link

    "Intel implemented Quick Sync as a CPU company, but does that mean hardware H.264 encoders are a CPU feature?" << Why is that even a question. I cannot use the feature unless I am using the iGPU or use the dGPU with Lucid Virtu. As such, it is not a feature of the CPU in my book. Reply
  • Roald - Thursday, December 22, 2011 - link

    I don't agree with the conclusion. I think it's much more of a perspective thing. Coming from the 6970 to the 7970, it's not a great win in the gaming department. However, the same can be said of the change from 4870 to 5870 to 6970. The only real benefit the 5870 had over the 4870 was DX11 support, which didn't mean much for the games at the time.

    Now there is a new architecture that not only manages to increase FPS in current games, but also has growing potential and manages to excel in the compute field at the same time.

    The conclusion made in the Crysis Warhead part of this review should therefore also have been highlighted in the final words.

    Meanwhile it’s interesting to note just how much progress we’ve made since the DX10 generation though; at 1920 the 7970 is 130% faster than the GTX 285 and 170% faster than the Radeon HD 4870. Existing users who skip a generation are a huge market for AMD and NVIDIA, and with this kind of performance they’re in a good position to finally convince those users to make the jump to DX11.
    Reply
  • SlyNine - Friday, December 23, 2011 - link

    Are you nuts? The 5870 was nearly 2x as fast in DX10/9 titles, not to mention DX11 was way ahead of DX10. Sure, the 6970 isn't a great upgrade from a 5870, but neither is the 7970.

    Questionable Premise
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    That happened at the end of 2006 with the G80, Roald. That means AMD and their ATI Radeon acquisition crew are five years plus late to the party.
    FIVE YEARS LATE.
    It's nice to know that what Nvidia did years ago and recently as well is now supported by more people as amd copycats the true leader.
    Good deal.
    Reply
  • Hauk - Thursday, December 22, 2011 - link

    A stunningly comprehensive analysis of this new architecture. This is what sets Anandtech apart from its competition. Kudos Ryan, this is one of your best.. Reply
  • eastyy - Thursday, December 22, 2011 - link

    It's funny, though: when it comes to new hardware you read all this complicated technical jargon and lots of detailed specs about how cards do things differently and how much more technically complicated they are, and in the end, for me, all it means is... +15 fps, and that's about it.

    As soon as a card comes out for, say, 150 and the games I play become slow and jerky on my 460, then I will upgrade.
    Reply
  • Mockingbird - Thursday, December 22, 2011 - link

    I'd like to see some benchmarks on FX-8150 based system (990fx) Reply
  • piroroadkill - Friday, December 23, 2011 - link

    Haha, the irony is that AMD is putting out graphics cards that would be bottlenecked HARDCORE by ANY of their CPUs, overclocked as much as you like.

    It's kind of tragic...
    Reply
  • Pantsu - Friday, December 23, 2011 - link

    The performance increase was as expected, at least for me, certainly not for all those who thought this would double performance. Considering AMD had a 389mm^2 chip with Cayman, they weren't going to double the transistor count again. That would've meant the next gen after this would be an Nvidia-class huge-ass chip. So, 64% more transistors on a 365mm^2 chip. It looks like the transistor density increase took a bit of a hit at 28nm, perhaps because of the 384-bit bus? Still, I think AMD is doing better than Nvidia when it comes to density.

    As far as the chip size is concerned, the performance is OK, but I really question whether 32 ROPs is enough on this design. Fermi has 48 ROPs and about a billion fewer transistors. I think AMD is losing AA performance due to such a skimpy ROP count.

    Overall the card is good regardless, but the pricing is indeed steep. I'm sure people will buy it nonetheless, but for a 365mm^2 chip with 3GB of GDDR5 I feel like it should be $100 cheaper than it is now. I blame lack of competition. It's Nvidia's turn to drop prices. The GTX 580 is simply not worth that much compared to what the 6950/560 Ti are going for these days. And in turn that should drop the 7970/50 price.
    Reply
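Pantsu's density point can be put in numbers. A rough sketch using commonly cited (approximate) die sizes and transistor counts — ~389mm² Cayman, ~365mm² Tahiti, and ~520mm² with 3.0B transistors for GF110:

```python
# Approximate transistor density in millions of transistors per mm^2,
# using assumed public die sizes and transistor counts.
chips = {
    "Cayman (HD 6970, 40nm)": (2.64e9, 389.0),
    "Tahiti (HD 7970, 28nm)": (4.31e9, 365.0),
    "GF110 (GTX 580, 40nm)":  (3.00e9, 520.0),
}

densities = {name: t / area / 1e6 for name, (t, area) in chips.items()}
for name, d in densities.items():
    print(f"{name}: {d:.1f} M transistors/mm^2")
```

Tahiti lands around 11.8 M/mm² versus Cayman's ~6.8, a ~1.74x gain — short of the ~2x an ideal 40nm-to-28nm area shrink would suggest, which is the "hit" the comment is pointing at, while still well above GF110's ~5.8.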
  • nadavvadan - Friday, December 23, 2011 - link

    Am I really tired, or is:
    " 3.79TFLOPs, while its FP64 performance is ¼ that at 947MFLOPs"
    supposed to be:
    " 3.79TFLOPs, while its FP64 performance is ¼ that at 947-G-FLOPs"?

    Enjoyed the review as always.
    Reply
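The unit catch above checks out: at Tahiti's 1/4-rate FP64, a quarter of 3.79 TFLOPs is on the order of 947 GFLOPs, a thousand times more than 947 MFLOPs. A quick sanity check:

```python
# FP64 throughput at Tahiti's 1/4-of-FP32 rate.
fp32_tflops = 3.79
fp64_tflops = fp32_tflops / 4        # 1/4-rate FP64
fp64_gflops = fp64_tflops * 1e3      # TFLOPs -> GFLOPs
print(f"FP64: {fp64_gflops:.1f} GFLOPs")  # GFLOPs, not MFLOPs
```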
  • Death666Angel - Friday, December 23, 2011 - link

    Now that you have changed the benchmark suite, would it be possible to publish a PDF with the relevant settings for each game? I would be very interested in replicating some of the tests on my home system to better compare results. If it is not too much work, that is (and if others are interested in this as well). :D Reply
  • marc1000 - Friday, December 23, 2011 - link

    What about Juniper? Could it make its way into the 7000 series as a 7670 card? Of course, upgraded to GCN, but with the same specs as current cards. I guess that at 28nm it would be possible to abandon the PCIe power requirement, making it the go-to card for OEMs and low-power/low-noise systems.

    I would not buy it because I own one now, but I'm looking forward to the 7770 or 7870 and their Nvidia equivalents. It looks like next year will be a great time to upgrade for anyone in the mid-range card market.
    Reply
  • Scali - Saturday, December 24, 2011 - link

    I have never heard Jen-Hsun call the mock-up a working board.
    They DID however have working boards on which they demonstrated the tech-demos.
    Stop trying to make something out of nothing.
    Reply
  • Scali - Saturday, December 24, 2011 - link

    Actually, since Crysis 2 does not 'tessellate the crap' out of things (unless your definition of that is: "Doesn't run on underperforming tessellation hardware"), the 7970 is actually the fastest card in Crysis 2.
    Did you even bother to read some other reviews? Many of them tested Crysis 2, you know. Tomshardware for example.
    If you try to make smart fanboy remarks, at least make sure they're smart first.
    Reply
  • Scali - Saturday, December 24, 2011 - link

    But I know... being a fanboy must be really hard these days..
    One moment you have to spread nonsense about how Crysis 2's tessellation is totally over-the-top...
    The next moment, AMD comes out with a card that has enough of a boost in performance that it comes out on top in Crysis 2 again... So you have to get all up to date with the latest nonsense again.
    Now you know what the AMD PR department feels like... they went from "Tessellation good" to "Tessellation bad" as well, and have to move back again now...
    That is, they would, if they weren't all fired by the new management.
    Reply
  • formulav8 - Tuesday, February 21, 2012 - link

    You're worse than anything he said. Grow up. Reply
  • CeriseCogburn - Sunday, March 11, 2012 - link

    He's exactly correct. I quite understand that for amd fanboys that's forbidden; one must toe the stupid crybaby line and never deviate to the truth. Reply
  • crazzyeddie - Sunday, December 25, 2011 - link

    Page 4:

    " Traditionally the ROPs, L2 cache, and memory controllers have all been tightly integrated as ROP operations are extremely bandwidth intensive, making this a very design for AMD to use. "
    Reply
  • Scali - Monday, December 26, 2011 - link

    Of course it isn't. More polygons are better. Pixar subdivides everything on screen to sub-pixel level.
    That's where games are headed as well; that's progress.

    Only fanboys like you cry about it... even after AMD starts winning the benchmarks (which would prove that Crysis is not doing THAT much tessellation; both nVidia and new AMD hardware can deal with it adequately).
    Reply
  • Wierdo - Monday, January 02, 2012 - link

    http://techreport.com/articles.x/21404

    "Crytek's decision to deploy gratuitous amounts of tessellation in places where it doesn't make sense is frustrating, because they're essentially wasting GPU power—and they're doing so in a high-profile game that we'd hoped would be a killer showcase for the benefits of DirectX 11
    ...
    But the strange inefficiencies create problems. Why are largely flat surfaces, such as that Jersey barrier, subdivided into so many thousands of polygons, with no apparent visual benefit? Why does tessellated water roil constantly beneath the dry streets of the city, invisible to all?
    ...
    One potential answer is developer laziness or lack of time
    ...
    so they can understand why Crysis 2 may not be the most reliable indicator of comparative GPU performance"

    I'll take the word of professional reviewers.
    Reply
  • CeriseCogburn - Sunday, March 11, 2012 - link

    Give them a month or two to adjust their amd epic fail whining blame shift.
    When it occurs to them that amd is actually delivering some dx11 performance for the 1st time, they'll shift to something else they whine about and blame on nvidia.
    The big green MAN is always keeping them down.
    Reply
  • Scali - Monday, December 26, 2011 - link

    Wrong, they showed plenty of demos at the introduction. Otherwise the introduction would just have been Jen-Hsun holding up the mock card, and nothing else... which was clearly not the case.
    They demoed Endless City, among other things, which could not have run on anything other than real Fermi chips.
    And yeah, I'm really going to go to SemiAccurate to get reliable information!
    Reply
  • Scali - Monday, December 26, 2011 - link

    Lol, how's that, when I'm the one saying that AMD's cards are the best performers in Crysis 2?
    I'm neutral, a concept that is obviously alien to you. Idiots...
    Reply
  • Scali - Monday, December 26, 2011 - link

    Heck, I'm also the guy who made Endless City run on non-nVidia cards. How does that make me an nVidia fanboy? Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    That's sad when an nvidia fanboy has to help all the amd fannies with software coding so they can run a benchmark, then after all that work to help the underprivileged, nothing but attacks after the facts... finally silence them.
    It's really sad when the truth is so far from the pop culture mind that actually speaking it is nearly forbidden.
    Thank you for helping them with the benchmark. Continue to be kind in such ways to the sour whining and disgruntled, as it only helped prove how pathetic amd dx11 was...
    Reply
  • james007 - Friday, December 30, 2011 - link

    This sounded like such an awesome card and I was psyched to get it the moment it comes out -- until reading the part about dropping the 2nd DVI port. A DisplayPort-to-SL-DVI adapter doesn't do it for me, because my desktop has to drive two 30" displays. In fact, I would love to be able to drive a third display so I can have a touch screen also. My current (previous-generation) VDC does drive both displays just fine.

    This does not seem like such an infrequent requirement, especially for high-end users. Why would they drop the ability to drive the 2nd display?!!!

    Argh!
    Reply
  • The_Countess666 - Saturday, December 31, 2011 - link

    Not trying to sell you anything, but HDMI to dual-link DVI adapters do exist (see link, or google yourself for other shops).
    http://sewelldirect.com/hdmi-to-dvi-dual-link-cabl...

    and these cards do have 1 HDMI-out so that should work for you.
    Reply
  • Penti - Wednesday, January 04, 2012 - link

    It's the IHV that makes those decisions anyway; just because it's not on the reference card doesn't mean they won't show up or that you can't build a card with it. But the HDMI port on this card finally supports more than 1920x1200 anyhow. I guess they could deliver a card with the old type of DVI-to-HDMI adapters. Obviously, opting for HDMI and multi-display-capable DisplayPort 1.2 makes more sense though. It's been around for years now. Reply
  • Penti - Wednesday, January 04, 2012 - link

    Just make sure the card actually has the number of connections you need before buying; many 7970 boards only appear to support single-link DVI on the DVI connector. Reply
  • poordirtfarmer2 - Wednesday, January 04, 2012 - link

    Enjoyed the article.

    So this new 79XX architecture is about a GPU architecture that's also good for "compute work". The reference to NVIDIA's professional video cards (Quadro; Tesla) implies to me that this might mean video cards viable for use both in gaming and in engineering/video workstations.

    I’m not a pro, but do a lot of video editing, rendering and encoding. I’ve avoided dedicating a machine with an expensive special purpose QUADRO video card. Am I reading the wrong thing into this review, or might the new 79XX and the right driver give folks like me the best of both worlds?
    Reply
  • radojko - Thursday, January 05, 2012 - link

    UVD 3 in NextGen is disappointing. Nvidia is two generations ahead with PureVideo HD 5. Reply
  • psiboy - Monday, January 09, 2012 - link

    Well, Mr. Ryan Smith, I must ask why the omission of 1920 x 1080 in all benchmarks... given that almost every new monitor for quite some time has been natively 1920 x 1080. What is it with you guys and Tom's lately? You both seem to have been ignoring the reality of what most of your readers are using! Reply
  • RussianSensation - Saturday, January 14, 2012 - link

    BF3 is not a 2012 game.......

    Also, most of us have been gaming on our older cards. Who in the world with a previous high-end card is going to drop $600 for BF3 alone? No thanks.
    Reply
  • SSIV - Saturday, February 18, 2012 - link

    Since there's a new driver out for these cards, we can now regard these results with a grain of salt. Revise the benchmarks! Reply
  • DaOGGuru - Thursday, March 01, 2012 - link

    I don't know why people keep forgetting about the 560 Ti 2Win. Yes, I said 2Win = two 560 Ti processors on one card. It still kills the 7970 numbers in BF3 by 20 fps and is the same price. It also beats the 580 and is cheaper. It's a single card with a 50A minimum draw, and it will smoke anything except the 590 and the 6990...

    http://www.guru3d.com/article/evga-geforce-gtx-560...
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Oh, right, well this isn't an nvidia card review, so we won't hear from 50 posts about how some CF (would be SLI of course in this case) combo will whip the crap out of it in performance and price...
    You know?
    That's how it goes...
    Usually the article itself rages on about how some amd CF combo is really so much better and blah blah blah... then the price/perf, then the results - on and on and on...
    ---
    The angry ankle biters are swarmed up on the under red dog radeon side...
    --
    So you made a very good point. I'm just sorry it took 29 pages of reading to get to it, in its glorious singularity... you shouldn't strike out in independent thought like that; it's dangerous... not allowed unless the card being reviewed is an nvidia!!!!
    Reply
  • DaOGGuru - Thursday, March 01, 2012 - link

    Oops... forgot to say: look at the previous post's links for the 560 Ti 2Win's BF3 rating and compare to this chart's 7970 fps. The 2Win is pumping out ~20 more fps and is $50.00-$100.00 cheaper than the 7970... lame. ATI is still behind Nvidia but proud of it! lol They are just now catching up to Nvidia's tessellation, and oh, AFTER they changed to a "CUDA core copy" architecture and posted it as big news... EVGA's older 560 Ti 2Win still dusts it by 20 fps... lame. Reply
  • DaOGGuru - Thursday, March 01, 2012 - link

    sorry 10FPS not 20.. it's late. Reply
  • DaOGGuru - Thursday, March 01, 2012 - link

    I don't get what the hubbub about the 7970 is.. sure, it's the fastest single GPU; BUT, for $50.00-$100.00 less you can get the 560 Ti 2Win (dual GPU) that smokes the 7970, and the 2Win PCB does have an SLI bridge and is capable of doing SLI to a second card, but it's currently locked by Nvidia (see paragraph 3).

    Also, the 2Win draws a minimum of only 50 amps (way less than most SLI configurations). 1. It has a considerably lower noise dBA. 2. It runs cooler and with less power than almost all the high-end cards. 3. It will run 3 monitors in Nvidia 2D and 3D Surround off a single card! 4. It will beat the GTX 580 by ~23-33% (depending on review). 5. It will beat the 590 in some sample testing for TDP. And finally, 6. it will beat the 7970 by 10-20 fps in BF3, including by 10 fps in 1920x1200 4xAA/16xAF Ultra High mode. So why have people forgotten the 2Win? It's a single-card, multi-GPU, full 3D/2D Surround without a second card in SLI, $500.00 USD beast!

    OH, and for those that say you can't SLI with a second 2Win... http://www.guru3d.com/article/evga-geforce-gtx-560... (this review states on the conclusion page) > quote: "you will have noticed there is a SLI connector on the PCB. Unfortunately you can not add a second card to go for quad-SLI mode. It's not a hardware limitation, yet a limitation set by NVIDIA, the GTX 560 Ti series is only allowed in 2-way SLI mode, which this card already is."

    ... So actually, the card is capable of 2-card SLI, but Nvidia for some (gosh-awful reason) won't let the dog off the chain. Probably because it would absolutely kill the need for a GTX 580, 570, or 560 Ti SLI configuration forever!

    Resources: (pay attention to BF3 FPS and compare to 7970 FPS in this article.)
    http://www.anandtech.com/show/5048/evgas-geforce-g...
    http://www.guru3d.com/article/evga-geforce-gtx-560...
    Peace...
    Reply
  • CeriseCogburn - Thursday, March 08, 2012 - link

    Ummm.... I read you, I see your frustration with all the posts - just refer to my one above there - you really should not be dissing the new amd like that - they like are 1st and uhh... nvidia is evil... so no comparisons like that are allowed when the fanboy side content is like 100 to 1....
    Now next nvidia card review you will notice a hundred posts on how this or that CF beats the nvidia in price perf and overall perf, etc, and it will be memorized and screamed far and wide...
    Just like... your point "doesn't count", okay ?
    It's best to ignore you GREEN fanboy types... ( yes even if you point out gigantic savings, or rather especially when you do...)
    Thanks for waiting till page 30 - a wise choice.
    Reply
  • CeriseCogburn - Sunday, March 11, 2012 - link

    Southern Islands is a whole generation late. AMD promised us this SI in the last generation 6000 series. Then right before that prior release, they told us they had changed everything and 6000 was not Southern Islands anymore. LOL
    Talk about late - it's what two years late ?
    Maybe it's three years....
    In every case here, Nvidia beat them to the core architecture by two years. Now amd is merely late to the party crashing copycats....
    That's late son, that's not original, that's not innovative, that's not superior, it's tag a long tu loo little sister style.
    Reply
  • warmbit - Tuesday, April 10, 2012 - link

    Here is a link to an interesting overview of Radeon 7970 performance from 5 websites, compared against the competing GTX 580 and 6970.

    An analysis of the Radeon 7970's results in 18 games and 6 resolutions:
    http://translate.google.pl/translate?hl=pl&sl=...

    You will see the average performance relationships between these cards and find out which graphics card is better at each game and resolution.
    Reply
  • Herman_Monster - Thursday, January 03, 2013 - link

    Quite strange that AMD keeps mum about the required conditions for ZeroCore Power, such as, e.g., the OS.
    Since there are other OSes besides MS Windows 7/8... Yes.
    Reply
