130 Comments

  • mpokwsths - Tuesday, April 18, 2017 - link

    Good job Anandtech! Didn't expect it so quickly.

    P.S.: First! ;)
    Reply
  • Ryan Smith - Tuesday, April 18, 2017 - link

    Second!

    (Hey, wait a sec, isn't this my site?!)
    Reply
  • ddriver - Tuesday, April 18, 2017 - link

    You own it? Reply
  • at80eighty - Wednesday, April 19, 2017 - link

    He's the boss. you're not. do the math. Reply
  • AndrewJacksonZA - Tuesday, April 18, 2017 - link

    Hehe. :-) Reply
  • rocky12345 - Tuesday, April 18, 2017 - link

    Ryan, I think because you did the review and posted it, that makes you the first post no matter what. Good review, by the way, thank you. Reply
  • theangryintern - Tuesday, April 18, 2017 - link

    You do realize that this review has probably been done for at least a week, right? They were under NDA until this morning. Reply
  • Drumsticks - Tuesday, April 18, 2017 - link

    That doesn't mean a review was guaranteed. Anandtech, while putting out phenomenal reviews, occasionally delivered them later than launch day.

    This one was great too, by the way, thanks!
    Reply
  • Drumsticks - Tuesday, April 18, 2017 - link

    Also, I should add that they've been way more timely lately, which is great. Reply
  • Samus - Tuesday, April 18, 2017 - link

    It's amazing people will find any excuse to dismiss a launch day review... Reply
  • ddriver - Wednesday, April 19, 2017 - link

    Not as amazing as trying to make it sound like an achievement. Wake up Dorothy, every review site got a launch day review on the 580 adoy.... Reply
  • Ian Cutress - Tuesday, April 18, 2017 - link

    Actually, they're usually still being finished as the embargo lifts. :D Nowadays it's such a short time from the hardware arriving to the review deadline. Reply
  • Ryan Smith - Tuesday, April 18, 2017 - link

    Aye. Let's just say that Easter was not a holiday around here. Reply
  • Meteor2 - Wednesday, April 19, 2017 - link

    :( Reply
  • VoraciousGorak - Tuesday, April 18, 2017 - link

    It's like a reverse Intel. Every 1% increase in performance brings with it a 2% increase in power draw. Based on these tests, this refresh brought out - at best - an 8% performance increase, with typical gains in the 0-3% range, over a GPU that's been out almost a year. These gains are even worse than the Hawaii refresh, and that refresh saw prices drop by twenty percent, not ten dollars.

    Still waiting on the AMD GPU that will get me to upgrade from my R9 290....
    Reply
  • VoraciousGorak - Tuesday, April 18, 2017 - link

    I mean, I guess we get an improvement on the stock cooler (I think? Thank god if so), which was probably needed with the 25W typical use power draw increase, likely enough to overwhelm the quiet-mode on the old stock cooler.

    Also WTB comment edit feature.
    Reply
  • bill.rookard - Tuesday, April 18, 2017 - link

    That would be Vega.

    I will say it's good to see that the power consumption of the cards is down to a reasonable level (even with the increase in the 500 series) so that those wanting more performance can crossfire a pair without turning their computer case into the equivalent of a blast furnace.

    Still - the problem is that most of these results still put the 580 behind the GTX 1060, and the pricing is pretty close. Given a choice between a $240ish RX 580 and a $240ish GTX 1060, I'd still have to take the 1060. It's quieter, cooler, and faster.

    Of course, if the rumors are to be believed, Vega should finally get AMD back in the high end, and Vega can't hit soon enough...
    Reply
  • Meteor2 - Wednesday, April 19, 2017 - link

    It's all about Vega at this point. With Vega launching as the high end this year, the RX 500 series will be replaced by Vega cards next year. Of course, Nvidia should be on Volta by that point, but the Vega-Volta efficiency gap should be much smaller than the Polaris-Pascal gap. Pascal probably represents the last of the low-hanging efficiency fruit, which AMD will be picking with Vega. Reply
  • Mugur - Thursday, April 20, 2017 - link

    You think? From the review, I gather that it's the other way around: the RX 580 was cooler and quieter than the FE GTX 1060 in all the tests. Reply
  • nathanddrews - Tuesday, April 18, 2017 - link

    Those noise levels though. Damn! Nice work AMD. Reply
  • BrokenCrayons - Tuesday, April 18, 2017 - link

    The acoustics are superb! Buuuuut, the problem is that we're already looking at vendor boards so that might not be the case for every 580 and 570 out there since there'll probably be a variety of different cooling solutions in the wild. Reply
  • nathanddrews - Tuesday, April 18, 2017 - link

    Most definitely, but for an aggressively OC air-cooled model to be quieter at load than a 950 and be over 10dBA quieter than the reference 470/480, AIB partners would have to really do something wrong to make a loud card. I'm sure it will happen, but it would have to be like a 40mm, single-slot design or something... haha Reply
  • Mr Perfect - Tuesday, April 18, 2017 - link

    What I'd like to know is: why did a 6% increase in clocks cost a 23% increase in power? That seems unusually high, unless Polaris was already up against the wall in the 400 series. Reply
  • rarson - Tuesday, April 18, 2017 - link

    I think you answered your own question...

    "Polaris was up against the wall already in the 400 series."
    Reply
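For what it's worth, the 6%-clocks/23%-power gap discussed above is roughly consistent with dynamic CMOS power scaling as frequency × voltage². A back-of-the-envelope sketch (the implied voltage bump is an inference from these two numbers, not a published AMD figure):

```python
# Dynamic CMOS power scales roughly as P ~ f * V^2.
# Given the review's ~6% clock bump and ~23% power increase,
# back out the voltage increase that would explain the gap.
# Illustrative arithmetic only, not a measured spec.

clock_scale = 1.06   # +6% clocks (RX 580 vs RX 480)
power_scale = 1.23   # +23% power draw

# power_scale = clock_scale * voltage_scale^2  =>  solve for voltage_scale
voltage_scale = (power_scale / clock_scale) ** 0.5

print(f"implied voltage increase: {(voltage_scale - 1) * 100:.1f}%")
```

An extra ~7-8% voltage on top of the clock bump would account for the power figure, which fits the "up against the wall" reading.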
  • Samus - Tuesday, April 18, 2017 - link

    I think the most revolutionary inclusion in this Polaris revision is the memory state (especially since little else has changed).

    And I use the word revolutionary because this is an eye-opening development for AMD and Intel (and Nvidia) on the computing front. Imagine variable clock speeds for memory, and even an overboost/turbo mode for memory for momentary spikes in demand. The voltage savings combined with the marginal added boost performance wouldn't be something to write off in the grand scheme of things. I suspect we will see this technique adopted across the board soon.
    Reply
  • JoyTech - Tuesday, April 18, 2017 - link

    Is there any data for release of Vega GPUs? Reply
  • Ryan Smith - Tuesday, April 18, 2017 - link

    AMD's last comment was Q2 of this year. Reply
  • vladx - Wednesday, April 19, 2017 - link

    So 99% it will be released in June. Reply
  • hoohoo - Tuesday, April 18, 2017 - link

    Still should have called them 475 and 485. This is underwhelming. It seems like the performance bump is all from clocks. The process is perhaps able to handle higher voltages, thus slight clock bump. 2.5 slot coolers are not good. Reply
  • DanNeely - Tuesday, April 18, 2017 - link

    These cards are to keep the OEMs happy. Even if it's the exact same product under the hood, they need a new model year to feed their marketing/sales channels. The people in them aren't gamers and PC enthusiasts who roll their eyes at this sort of thing; they're rebooting their tablets by flipping them upside down and shaking them.

    http://dilbert.com/strip/1995-04-03

    The Vega cards we're all looking forward to will presumably be launching under 590 and Fury brandings later.
    Reply
  • Lord-Bryan - Tuesday, April 18, 2017 - link

    That was cold! Reply
  • Meteor2 - Wednesday, April 19, 2017 - link

    Vega is launching with Vega branding (replacing the Fury branding). AMD obviously knows that the people in the market for such cards are all over review sites, so know the product as Vega already. Why change a good name? Reply
  • Mugur - Thursday, April 20, 2017 - link

    Yes, they officially named the product RX Vega at the end of the Capsaicin&Cream event. Reply
  • lordken - Tuesday, April 18, 2017 - link

    @hoohoo: exactly, that's what I was thinking right from the beginning when they announced this nonsense. This was a totally lame move and it just irritates people like me.

    I would have hoped that Ryan nailed AMD a little bit more for this stupid move and for not using the 4x5 names, which they had presented in the RX 400 slides as possible revision names.

    @DanNeely: no, OEMs don't need new numbers, that's a bullshit story. Or if they do, how is 485 not a new number versus 480?
    OEMs want to sell stuff, and if AMD can't produce a good enough product, they resort to misleading customers and playing the stupid "new gen/new number" game.
    What do I mean by misleading customers? Apart from taking us for idiots, I can imagine that some 480s (higher end) will have better performance than some 580s (lower end). The average Joe is not going to notice that his 580 has 20MHz lower clocks than a 480 OC/whatever edition, because in an honest world a next-gen same-tier card could never have lower performance than the previous gen under any circumstances...
    Reply
  • tehidiot - Tuesday, April 18, 2017 - link

    Meet the cards page isn't up :)

    What are the clocks of the red devil card?
    Reply
  • HOOfan 1 - Tuesday, April 18, 2017 - link

    Well...they did refresh the R9 200 series with the R9 300 series, just before Fury launched....maybe this means Vega will be out soon... Reply
  • Meteor2 - Wednesday, April 19, 2017 - link

    Vega is coming before 1 June. Good spot -- it's exactly like the 300 series and Fury, isn't it! Reply
  • FireSnake - Tuesday, April 18, 2017 - link

    Nice review. Keep up the good work!
    And nice refresh also :)
    Reply
  • jcwagers - Tuesday, April 18, 2017 - link

    Ryan, the clockspeed chart is showing 1680 mhz for Ashes while all the rest are showing around 1380 mhz. Is that a typo? Just letting you know so you can fix it. Thanks for the article. :) Reply
  • jcwagers - Tuesday, April 18, 2017 - link

    Oops....it was for the 580 card. Reply
  • Ryan Smith - Tuesday, April 18, 2017 - link

    While 1680MHz would be fantastic, in this case it's meant to be 1360MHz. Thanks for the heads up. Reply
  • Shadowmaster625 - Tuesday, April 18, 2017 - link

    The powercolor card pulls 100 watts more than the 1060 yet gets totally destroyed by the 1060 in BF4 and GTA V. AMD is a shakespearean tragedy. Reply
  • docbones - Tuesday, April 18, 2017 - link

    So wait for Vega then. Reply
  • HomeworldFound - Tuesday, April 18, 2017 - link

    "So wait for Vega then." Who says they're going to execute Vega any better than their recent history? Reply
  • MajGenRelativity - Tuesday, April 18, 2017 - link

    I think AMD committed two blunders here.
    1. These should have been called 485/475/465/450, and not a new 500 series. OEMs still get their shiny new cards for consumers, but it doesn't look as bad to people expecting something more different from an entirely new series number.

    2. AMD completely threw out power efficiency, and their partners seem to be taking that even further. I understand that Polaris wasn't as power efficient as Pascal, but it did come fairly close. This refresh seems to completely abandon AMD's previous message of power efficiency.

    That being said, I will definitely put these cards in people's computers because of price/performance. I just feel like AMD could have done a bit better.
    Reply
  • Drumsticks - Tuesday, April 18, 2017 - link

    As somebody who has a 480 in their PC right now, I'm not sure. I was looking at a build for a friend last night, and was surprised to learn you can pick up a 6GB 1060 Zotac Mini for about $219. It's not going to win any 1060 performance awards, but it has performance in the realm of the FE (probably not lower), which makes it an impressive play from Nvidia in perf/$. It looks like a good deal compared to what you see here.

    I really hope Vega pans out. I don't see any reason for it to be as disappointing as the 500 series; I think it has a chance. At possibly 225W and on a new Arch, it should eclipse the 1080 easily (remember it's on HBM2 with the noted power savings), and maybe be at the least a value spoiler for the 1080 Ti.
    Reply
  • MajGenRelativity - Tuesday, April 18, 2017 - link

    I'll definitely be keeping an eye on the market as I hold no allegiance to either side. I do share your hopes for Vega :) Reply
  • sonicmerlin - Sunday, May 07, 2017 - link

    They were close? The 1070 uses less power than the 480 and is 50% faster. Reply
  • theangryintern - Tuesday, April 18, 2017 - link

    Anyone know the feasibility of doing CrossFire with a 480 and a 580? Reply
  • Flunk - Tuesday, April 18, 2017 - link

    Just buy a 480 when they fire-sale them and spare yourself the hassle. Reply
  • TheinsanegamerN - Tuesday, April 18, 2017 - link

    So this is ReBrandeon 2 - the search for more money?

    There is almost 0 improvement here. The XFX 480 GTR 1338 edition was already capable of hitting these performance numbers, with a better PCB and lower power consumption to boot.

    Calling this a 580 is a mistake. You'd think they would have learned from the 300 series's massive mistake.
    Reply
  • ABR - Thursday, April 20, 2017 - link

    Might mean that they are planning on rebooting the naming scheme for Vega. Reply
  • willis936 - Tuesday, April 18, 2017 - link

    *sighs audibly Reply
  • goodtofufriday - Tuesday, April 18, 2017 - link

    rx580 @ 185w - "look elsewhere for mitx"? r9 nano is 275w. 1070 strix 180w is in my ncase m1 itx with a 700w psu. please explain your reasoning. Reply
  • ajlueke - Tuesday, April 18, 2017 - link

    "r9 nano is 275w". Actually, the TDP of the R9 Fury Nano is listed at 175W. Which is an interesting point, as it still outperforms the RX 580 while using less power.

    AMD's recent architectures, like Polaris and Fiji, don't really seem to benefit much from throwing more juice at them. The Fury Nano is a great example: a 175W TDP vs 275W on the Fury X, with only about a 10% difference in performance.

    I haven't been following too closely, but perhaps there will also be a Vega "Nano" variant? That card could easily be a performance-per-watt champion. Hopefully they don't price it the same as the full-speed variant like AMD did for the Fury Nano originally.
    Reply
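The Nano-vs-Fury-X numbers in the comment above can be turned into a rough perf-per-watt comparison (illustrative arithmetic only: TDP is not measured board power, and the ~10% performance gap is the commenter's estimate):

```python
# Rough perf/W comparison from the figures quoted above:
# Fury Nano at 175W TDP delivering ~90% of a 275W Fury X's performance.

fury_x = {"perf": 1.00, "tdp_w": 275}
nano   = {"perf": 0.90, "tdp_w": 175}

ppw_x = fury_x["perf"] / fury_x["tdp_w"]
ppw_nano = nano["perf"] / nano["tdp_w"]

print(f"Nano perf/W advantage: {ppw_nano / ppw_x - 1:.0%}")
```

On those assumptions the Nano comes out roughly 40% ahead in perf/W, which is why a hypothetical Vega "Nano" is an appealing thought.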
  • goodtofufriday - Tuesday, April 18, 2017 - link

    Typo on the 175/275, but yes, I agree with all you said as well. Which begs the question of why the writer says to look elsewhere for ITX solely based on the 35W increased TDP. Reply
  • Orumus - Tuesday, April 18, 2017 - link

    I work 2 jobs and don't have a lot of time to read full reviews which is why I really appreciate the great "Final thoughts" section on most of your reviews. I can almost always count on it to give a concise yet nuanced overview of the overall review and technology in question. Thanks for making my life just that little bit easier. Reply
  • Yojimbo - Tuesday, April 18, 2017 - link

    Good review, but I have a couple comments. Firstly it would be nice to have 1060 3GB benchmarks as well as a few more 1050 Ti benchmarks. Secondly, I don't think it makes sense to clock the Powercolor and Sapphire cards to AMD stock frequencies and list them as if they are AMD stock cards going up against the NVIDIA reference designs in the "Power, Temperature, and Noise" charts. Since these results are highly board-specific, I think you should explicitly write in the charts that it's a Powercolor clocked at stock rather than "AMD Radeon RX 580". An apples to apples comparison would be some factory overclocked board partner card based on the NVIDIA GPUs similarly downclocked to reference clock speeds. Reply
  • milli - Tuesday, April 18, 2017 - link

    Isn't the game selection getting a bit old? Nothing newer than year old games.
    Crysis 3 and Battlefield 4 are (almost) 4 years old.
    Add Doom to have one Vulkan game.
    Computerbase.de tests with more and newer games. The RX580 ends up 1% faster than the GTX1060FE there.

    https://www.computerbase.de/2017-04/radeon-rx-580-...
    Reply
  • webdoctors - Tuesday, April 18, 2017 - link

    If you're looking at these cards, you're likely not going to be buying the newer games. You'll likely be buying Humble Bundle or Bundle Stars-class games for your kids, or to feed your gaming addiction. Those generally have pretty reasonable GPU requirements. Games less than a year old are gonna be $40 a pop; five of those equal the price of a 580, which is an unrealistic software spend for someone buying a 580.

    If you weren't cheap you'd go the Nvidia route, but at the sale prices the AMD cards are giving great value. Value gamers aren't buying the latest and greatest games.
    Reply
  • ToTTenTranz - Tuesday, April 18, 2017 - link

    "If you're looking at these cards, you're likely not going to be buying the newer games. You'll likely be buying humble bundle or bundle star class games for your kids or feed your gaming addiction."

    I don't even..
    Reply
  • paulemannsen - Tuesday, April 18, 2017 - link

    You might be on to something there. 1050ti owners probably only play doom 1 or games from CDs found in the trash bin. Reply
  • Icehawk - Tuesday, April 18, 2017 - link

    Tell that to my friends who buy x60 series of NV cards and tons of new, AAA, titles. They don't have QHD/UHD monitors and for most games the x60s are "enough". Reply
  • Meteor2 - Wednesday, April 19, 2017 - link

    Exactly. Lots of people play new games at 1080p and want high quality graphics at 60 fps.

    Ryan makes the point that in the last year or two not many games have pushed graphics further, especially in FPSs.
    Reply
  • Yojimbo - Tuesday, April 18, 2017 - link

    I don't think it makes sense to claim Doom is a representation of Vulkan. Add Doom to add Doom, OK, but not because it's a Vulkan game. BTW, that site has the stock clocked Radeon RX 570 outperforming the GTX 1060 FE 6GB in Witcher 3 at 1080p, which seems rather odd. I can't read German, but I don't see where they tell what settings they used to achieve that. Reply
  • milli - Tuesday, April 18, 2017 - link

    I can't tell you what settings they've used for that game. But one thing that most people don't take into account is that measured performance on a certain map of a game, doesn't automatically translate into universal/general performance of a card in that game. Often a game will require different performance characteristics just by using a different map. Computerbase seems to be using a heavier map or settings since average frame rates seem to be lower.

    One other difference is that AnandTech is testing with an Intel Core i7-4960X @ 4.2GHz and Computerbase with an Intel Core i7-6850K @ 4.3GHz. I'm pretty sure the AMD cards benefit more from the higher single-thread performance of the 6850K.
    Reply
  • Yojimbo - Tuesday, April 18, 2017 - link

    Perhaps, but I've seen Witcher 3 benchmarks comparing the 1060 with the 480 and 470 from maybe half a dozen sites and have never seen anything like that before. It's an outlier. Reply
  • HomeworldFound - Tuesday, April 18, 2017 - link

    I brought that up with the 1080ti review. I was told that they'll modernise their testing suite at some point. Reply
  • Meteor2 - Wednesday, April 19, 2017 - link

    For Vega, I think Ryan said. Reply
  • Meteor2 - Wednesday, April 19, 2017 - link

    <reads two comments down> oh. Reply
  • milli - Tuesday, April 18, 2017 - link

    I'm looking at the TPU review and the RX580 is winning by a big margin in RE7, Battlefield 1, COD, Deus Ex, Doom, F1 2016, Tomb Raider DX12 & Sniper Elite 4. All games from the past 6 months (more or less).
    Anandtech needs to urgently renew the tested games because these new games give a different view of these cards. In these games, even the Fury X often beats the GTX1070.
    Reply
  • Ryan Smith - Tuesday, April 18, 2017 - link

    The game selection gets updated roughly once a year. The current suite was rolled out for the launch of Pascal, and we'll be updating the benchmark suite for the launch of Vega a bit later this year. Reply
  • Phiro69 - Wednesday, April 19, 2017 - link

    I appreciate your measured pace in updating your benchmarks. I frequently need to justify - even if to just myself - hardware upgrades and if you are constantly tweaking/changing your benchmark suite I can't do a fruit to fruit comparison between a product you reviewed a couple years ago vs a brand new review, let alone apple to apple comparison.

    So when you do refresh your benchmark suites, I know it's a huge time sink to include older hardware in the new suite, but it's very appreciated and it's the difference between the level of detail that Anandtech has vs the other sites with launch day benchmarks.
    Reply
  • Azix - Tuesday, April 18, 2017 - link

    should include clockspeed of the 1060 cards Reply
  • Ryan Smith - Tuesday, April 18, 2017 - link

    Stock. So 1733MHz boost. Reply
  • Leyawiin - Tuesday, April 18, 2017 - link

    The TechPowerUp review is a little more pointed where power consumption is concerned, in that it removes the rest of the system to show the card only. Basically, the factory-overclocked Sapphire RX 580 is using twice as much power as the stock GTX 1060 FE in their basket of games for a 5% increase in performance (use a factory OC'd GTX 1060 and that gap is closed). That's kind of pitiful. This is like the FX-9590 of the current midrange GPUs. Reply
  • Icehawk - Tuesday, April 18, 2017 - link

    AMD seems to have a serious problem, both Ryzen and Polaris are pretty much maxed out from the factory clock & volt-wise whereas both Intel and NV have plenty of headroom. Reply
  • Lolimaster - Tuesday, April 18, 2017 - link

    I don't really feel the need to OC the 1700. In fact I disable turbo and undervolted for sub 45°C load temps :D

    You get so many CPU resources that OCing is just an inefficient way to burn energy
    Reply
  • BrokenCrayons - Wednesday, April 19, 2017 - link

    Overclocking is usually a sub-optimal and inelegant solution to the need for more computer resources. Before the experience was curated and limited to price premium parts people could realize a benefit by wringing more from low budget components. Presently, with the bulk of overclockable components residing on the high end where there's already sufficient compute power available for tasks to perform adequately, overclocking is nothing but a corporate sales gimmick that appeals to people that want to needlessly tinker or to people who feel compelled to do so in order to be braggarts. For those attempting to overclock the few curated parts in lower price brackets, they're better served simply purchasing a marginally more expensive next higher tier component and running it within spec.

    As for underclocking and undervolting, I can see a possible advantage in greater longevity, higher reliability, and lower cost of operation. It just simply no longer makes sense to bother going through the fruitless trouble of reaching for more with little practical reward to reap from the effort and resources expended along the way.
    Reply
  • Mugur - Thursday, April 20, 2017 - link

    Yes, the problem is Global Foundry and their 14nm process... Reply
  • JasonMZW20 - Thursday, April 20, 2017 - link

    Tile-based rendering does wonders for power consumption. AMD is still doing full scene rendering, which is expensive and inefficient.

    Vega will join the TBR party. We'll see if power consumption has dropped significantly then.

    But, isn't that the general rule when you OC? Throw power saving out the window. I have my old R9 280 at 1200MHz. Draws about 230W. Don't really care.
    Reply
  • Tewt - Tuesday, April 18, 2017 - link

    I was hoping to read something about FreeSync 2, more so now after reading the review and finding the performance gains weren't that exciting and not without caveats. Since there are currently no FreeSync 2 monitors, though, there is no point in reviewing the feature. It would have been a nice value-added feature to help offset the minimal gains in performance. Too bad AMD couldn't capitalize on it to make this release more exciting. Reply
  • Cygni - Tuesday, April 18, 2017 - link

    Keeping the memory specs down will help them move to undercutting the 1060 on price, which they will need to do pretty heartily to get some sales volume. Hopefully this is enough to encourage Nvidia into a price war, as they have been milking the 1060's pricepoint pretty heavily for a long time now. Other than a brief dip to the $210 range on black friday, prices have stayed pretty high on 6GB cards. Reply
  • brucek2 - Tuesday, April 18, 2017 - link

    Most of us learn as small children the peril of crying wolf. You'll get a lot of attention the first time you do it, but if there's no actual wolf you'll soon get a bad reputation and no further attention.

    AMD is crying wolf here. These are not new generation products. The new numbering is essentially a lie. I still very much want there to be a real competitor to Nvidia / Intel, but crud like this is not making it easy to root for them.
    Reply
  • fanofanand - Tuesday, April 18, 2017 - link

    New to the tech world? This is pretty standard stuff after a new arch. Nvidia has done it several times, as has AMD. Guess what, Intel and AMD do it with their CPUs too. Oh my, looks like it happens with mobile SoCs too. Reply
  • haukionkannel - Wednesday, April 19, 2017 - link

    It is the norm for both makers to do rebrands. The same is even more true of CPUs... Reply
  • tipoo - Thursday, April 20, 2017 - link

    They were pretty clear early on that it was just an overclock of the 480, and that's what it is. Not sure what people expected. Would 485 be a better name? Probably, but that's the GPU world for ya. Reply
  • AbbieHoffman - Tuesday, April 18, 2017 - link

    And yet the old R9 290X and R9 390X are still better cards. Reply
  • sonichedgehog36 - Tuesday, April 18, 2017 - link

    Typo: TBPs on page one. Reply
  • sonichedgehog36 - Tuesday, April 18, 2017 - link

    Please delete the post above. Reply
  • BrokenCrayons - Wednesday, April 19, 2017 - link

    Eh, the use of TBP did leap out as an error right away until I spent a few minutes thinking about it. I get that the industry (well the industry of all two of the world's GPU companies) is constantly trying to buzzword its way into presenting a product in the best light possible, but it seems like a stupid and pointless terminology change from over here. Reply
  • Wineohe - Tuesday, April 18, 2017 - link

    I drank the koolaid and bought a pair of RX480's when they first came out. Bad idea. I followed this up with a GTX 1080, which is what I should have bought in the first place. If my budget was a single RX480/580 then I would definitely opt for a 1060 instead. Reply
  • Lolimaster - Tuesday, April 18, 2017 - link

    The RX 570 4GB is a damn good deal for 1080p or a 1600x1200 CRT. Reply
  • Lolimaster - Tuesday, April 18, 2017 - link

    Then sell it in 2019 for a nice RX770 10nm gpu. Reply
  • Pork@III - Wednesday, April 19, 2017 - link

    Excessive power consumption for what is midrange-class performance. AMD has once again started offering boards that can fry eggs. Reply
  • SydneyBlue120d - Wednesday, April 19, 2017 - link

    Is VP9 with HDR decoding enabled with these cards? Reply
  • slickr - Wednesday, April 19, 2017 - link

    Good job? Anandtech has become the laughing stock of hardware reviews. The suite they are using is painfully outdated, with games 3-4 years old on average; the number of games is so small, and the titles are outdated. They tested Battlefield 4 instead of Battlefield 1. BF1 supports DX12 as well; it's the new engine that most EA games use and are going to use in the future, and they are testing BF4, which is a very old title with an old engine that no one is using anymore!

    Dirt Rally, Crysis 3, all old games. Even Hitman is an old game now and should be dropped. Where is For Honor, Mass Effect Andromeda, Ghost Recon Wildlands, Deus EX: MD, Watch Dogs 2, Mafia 3, Forza 3, Sniper Elite 4, BF1, etc...

    The only relevant games they use are ROTR, AOTS, The Division and Witcher 3. No minimums tested, no maximums, no frametime, no overclocking, no custom clocked cards on the Nvidia side, etc...

    This is a barebones review!
    Reply
  • milkod2001 - Wednesday, April 19, 2017 - link

    I'm afraid you are right. Anand is no longer in charge of this site; it is owned by Purch, an advertising company which only cares about ad clicks. They couldn't give the slightest sh.t about your latest-games benchmarks. Reply
  • Ryan Smith - Wednesday, April 19, 2017 - link

    True, the site is owned by Purch. But it is run by me, and when it comes to Editorial, the buck stops here.

    Every article you see posted here and every choice made in how we benchmark is my responsibility. Even on those articles I don't write, the editor in charge has spoken to me at some point to gather my feedback and to solicit my advice. This is to ensure that the articles you guys get live up to the quality that AnandTech is known for. And I do that precisely because I do care; I care about bringing you guys the information and analysis you need to see, and I care about trying to bring you the things you'd like to see.
    Reply
  • Ryan Smith - Wednesday, April 19, 2017 - link

    I commented on the game selection elsewhere in the thread: http://www.anandtech.com/comments/11278/amd-radeon...

    In short we refresh once per year, and the next refresh will be Vega (the current suite was rolled out with the Pascal launch). This ensures consistency between articles, and makes Bench more useful for you guys. There are other sites out there that do differently - and it's totally a valid way to test - but it's not how we want to do things. Our goal is apples-to-apples, and sometimes that requires being methodical and a bit slow. The benefit is that we can stand behind our data knowing full well that the results make sense, and that we have a very good understanding of the tests used.

    As for the number of games, there are 9 games here. On the one hand this is more than most sites, so I'd like to think we're doing well enough here; on the other hand there is a practical limit to how many games we can have, due to how long it takes to run all of those games. If we added more games, we'd have to give up something else. And I should note that every data point you see here was collected or validated for this article, so you're looking at 9 games, 13 video cards, and multiple resolutions per card. It adds up very quickly.

    As for custom clocked cards, this has a bit of a history to it:

    http://www.anandtech.com/show/3988/the-use-of-evga...

    The last time we included the opposition's factory overclocked cards, you guys rightfully called us out on it, and made it clear that you wanted apples-to-apples testing. Since then, this is exactly what we've delivered: reviews and their conclusions are based around stock-clocked cards/configurations. This ensures that what you see is the baseline performance of a card, and that no retail card should be slower than the card we've tested. Especially when most buyers purchase the cheapest card they can find, it's not the fastest card that matters, it's the slowest.
    Reply
  • sandman74 - Wednesday, April 19, 2017 - link

    I agree with Ryan on the benchmarks. I often look back at older reviews to find a card that compares with my current one and seeing a game like BF4 in there helps me compare easily across both spectrums of my old card (a 980) and the newer cards.

    If they just showed newer games I wouldn't have a reference point for my older card.

    Plus I still play BF4 !

    Thanks for a detailed review.
    Reply
  • cjpp78 - Wednesday, April 19, 2017 - link

    They should have given us the GPU in the Xbox Scorpio as the RX 580... of course with higher clocks. Reply
  • Sajidrh - Wednesday, April 19, 2017 - link

    I have a 450 watt 80+ PSU from Antec. Is it enough for an RX 580 8GB??? Reply
  • Sajidrh - Wednesday, April 19, 2017 - link

    It's 80+ Bronze. Reply
  • Kakti - Wednesday, April 19, 2017 - link

    You'd have to tell us a lot more about your system. The main power hogs besides a dGPU are the CPU (especially if it's overclocked) and however many disk drives/DVD readers/etc. you have. Assuming a stock i5 and at most two SATA drives, I think 450W would be OK. A 90+ watt CPU and a half dozen hard drives would probably be cutting it close at peak usage with a 580. Reply
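The back-of-the-envelope tally described in the comment above can be sketched as a quick script. All wattage figures below are illustrative assumptions (185 W is AMD's stated typical board power for the RX 580; the rest are rough guesses), not measured values:

```python
# Rough PSU headroom check: sum estimated peak component draws and
# compare against the PSU rating with a safety margin. The wattage
# numbers are illustrative assumptions, not measurements.

COMPONENT_WATTS = {
    "RX 580 (board power)": 185,  # AMD's stated typical board power
    "Core i5 (stock)": 65,
    "Motherboard + RAM": 50,
    "SATA drive": 10,
}

def peak_draw(components):
    """Sum the estimated peak wattage of every component."""
    return sum(components.values())

def fits_psu(components, psu_watts, margin=0.8):
    """True if peak draw stays under `margin` of the PSU rating
    (running a PSU near 100% load hurts efficiency and lifespan)."""
    return peak_draw(components) <= psu_watts * margin

total = peak_draw(COMPONENT_WATTS)  # 310 W for the parts above
print(total, fits_psu(COMPONENT_WATTS, 450))
```

With the assumed parts a 450 W unit has comfortable headroom; swap in a 95 W CPU and several more drives and the same check starts to fail, which matches the "cutting it close" caveat above.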
  • Haawser - Wednesday, April 19, 2017 - link

    https://www.youtube.com/watch?v=MQ9ro5pwfXY

    People so need to watch this before buying a 1060. The RX 580 + Freesync = Holy crap !
    Reply
  • Hrel - Wednesday, April 19, 2017 - link

    Wow, Nvidia is just demolishing them. That card will need to be at least $50 cheaper than the GTX 1060 just to make any sense at all, and that's for the 580. Which is a problem, since it's priced directly against the 1060. Reply
  • CiccioB - Wednesday, April 19, 2017 - link

    "we'll give you better cards, our efficiency has improved 2.5x over GCN, we will lead the revolution to new performance for games, we will end the increasing cost run, we will bla bla bla"

    "Ermmm, in reality we cannot do any of the above. Since we already delivered a more substandard GPU than expected, we decided to make even bigger fools of ourselves: take that underwhelming, power-hungry sh*t called Polaris 10 that we can't sell even when given away for free, overclock it a little to make its inefficiency even worse, and make you believe we've done a new revision with better characteristics.
    We could have done exactly the same in July 2016, but we like it when we fool you as usual and you happily cry with happiness at seeing those colored bars get a bit longer. Never mind the 240W needed to do the same work as a 1060... Remember... Look into my hypnotic eyes... We are going to deliver a product 2.5x more efficient than GCN that will close the gap with the competition's now legendary performance/watt ratio."

    If GCN was highly inefficient with respect to Maxwell, this sh*t is not even comparable to Pascal. AMD could not have done worse than this.
    The suspicion is that this way Vega will look better than expected compared to this... well... sh*t. There's no other way to describe these faked improvements.
    Reply
  • Pork@III - Thursday, April 20, 2017 - link

    Yes, but with a little fix: "2.5x >oven< GCN" :D
    I think I'll take this card to bake steaks on. I'll just remove the fans.
    Reply
  • Outlander_04 - Thursday, April 20, 2017 - link

    AMD improves its lead in DX12.
    No surprises.
    Reply
  • CiccioB - Thursday, April 20, 2017 - link

    Yes, better in the few DX12 games optimized for AMD's architecture, where it gains at most 10%... a real selling point, until real DX12 games with no ad-hoc AMD optimization are released and many users wake up from their wet dreams. Reply
  • Outlander_04 - Thursday, April 20, 2017 - link

    It's not optimization, it's asynchronous compute. The NVIDIA architecture can't do it and will never be able to keep up in DX12. Reply
  • tipoo - Thursday, April 20, 2017 - link

    Define "can't do it". Pascal does async, just not with per-clock interleaving like AMD does. Reply
  • Outlander_04 - Thursday, April 20, 2017 - link

    Then it is not asynchronous, which quite literally means "at the same time".
    AMD's compute strength is well established by the legions of people who wisely use their cards for bitcoin mining.
    Reply
  • CiccioB - Friday, April 21, 2017 - link

    Async doesn't really mean "at the same time" at all.
    Possibly, the opposite.
    Reply
  • CiccioB - Thursday, April 20, 2017 - link

    No optimizations?
    Tell me why DICE's engine runs better on AMD GPUs even in DX11 while all other engines do not.
    Async in DX11? A miracle that suddenly allowed AMD's drivers to surpass nvidia's in draw calls? Better geometry handling? Better memory and bandwidth handling?
    Come on. You AMD fanboys are all looking at the first (pseudo) DX12 games sponsored by AMD. Future ones will be different (maybe also using nvidia functionality that AMD does not support, and not biased toward AMD HW... AMD surely can't support every AAA developer in doing the extra work to use async, which is not a free feature, did you know? And tune it for every card), and by the time DX12 becomes mainstream, Volta will be old.
    But it's nice that you all go around suggesting people buy AMD HW. It should make nvidia's cheaper... in theory... but you probably don't advertise it enough, as prices keep staying high. Please suggest buying CrossFire solutions, so that AMD sells double the HW and all those new AMD customers can enjoy double performance in... ermm... well... you know, DX12 does not support CF/SLI natively, so they'll happily play DX11 games at nvidia levels with their CF configurations.

    I bet the async thing you just said was heard from an AMD friend... wasn't it?
    Reply
  • Outlander_04 - Thursday, April 20, 2017 - link

    Why is game optimization in DX11 in various game engines [which could favor either AMD or nvidia] of any relevance to me pointing out the strengths of AMD's architecture in DX12?

    Please try to address what is said, not what you want to think was said. Thanks
    Reply
  • CiccioB - Friday, April 21, 2017 - link

    It's you who is looking at what you want.
    There are two scenarios to analyze: DX11 and DX12.
    You just pick DX12, ignoring DX11, because it is what you want to advertise, and you base your conclusions only on what you want to see.
    I just pointed out that in DX11 the game is well optimized for AMD's architecture, given the performance it obtains; relative to nvidia, no other game has ever reached that performance in DX11.
    So you can't dismiss the simple and clear assertion that it is an AMD-optimized game (engine).
    It is, and DX11 demonstrates it. What you see in DX12 is what it will be like if ALL future games are optimized for AMD's architecture this way. Which won't happen. Other games (also supporting DX12) simply run better on nvidia HW, both because they don't have all that work paid for by AMD to make the game run better on AMD HW, and because not all games take advantage of async compute (which has a development cost, did you understand this, or are you living in your own world of bunnies and rainbows?).

    So extrapolating that AMD works well in DX12 from one engine that was created to run better on their HW (and, as I said, that is a fact also seen in DX11) is stupid and just demonstrates a pure lie.
    Reply
  • Mugur - Thursday, April 20, 2017 - link

    I'm sorry to be another one pointing out that the testbed is obsolete (the better approach would be two testbeds, with an i7 7700K and an R7 1800X or R5 1600X) and that a few newer games are missing (Doom, Battlefield 1, etc.).

    About the cards: they are ok-ish, in my opinion. Nothing spectacular, but it's still a refresh, at the same price or a bit lower than last year, both cool and quiet even factory overclocked. Nobody should care about a few watts more than the 1060 (which was actually warmer and noisier in the tests), as long as they have a decent PSU.

    As an owner of 2 Freesync monitors, I may go for a 580 8 GB to replace my 470, which would go into the kid's PC. After I see Vega, of course. :-)
    Reply
  • CiccioB - Thursday, April 20, 2017 - link

    "A few watts"?
    It uses double the power for the same work!
    And yes, a bit warmer and noisier... that was the FE with the blower cooler. Take a custom card: it will still be faster than this OC-over-OC sh*t, use half the power, run much cooler, and make less than half the noise.

    It is fascinating to try to understand how people can justify certain incomprehensible choices.
    Reply
  • Outlander_04 - Thursday, April 20, 2017 - link

    The power difference you are talking about is roughly equivalent to that used by a dim light bulb.

    And yes, lower is better, but in many situations it's not wasted power at all.
    If, for instance, you are heating your home, then the extra wattage is always offset by the thermostat, which will add less heat and use less power.
    Mostly this is not the huge deal that fanboys try to make it.
    Reply
  • silverblue - Sunday, April 23, 2017 - link

    The difference in rendering approaches (and, consequently, power consumption) reminds me of the GeForce 2 GTS vs. Kyro II days, when the GTS could win more than its fair share of battles, albeit with much higher power requirements and more complex hardware due to its traditional rendering engine. In a similar comparison, the 1060 would be like a Kyro II on steroids: the Kyro II was conservatively clocked, had minimal rendering pipelines, and used standard memory, because it didn't need expensive hardware to run rings around its real competitor, the GeForce 2 MX. Likewise, thanks to a more elegant and sophisticated solution, the 1060 has the power to beat the 580 more often than not in 6GB form. However, whereas the whole industry didn't change its implementations to compete back in 2001 (STMicro cancelled the Kyro III), we now know this has to happen if AMD is to compete at the very top end, meaning NVIDIA's efficiency advantage may be short-lived.

    If STMicro hadn't killed off the Kyro III, perhaps we could've seen an industry shift much sooner?

    I understand the argument about more heat from your PC offsetting heating requirements in the home, but I suppose it depends what the room is like. Additionally, if more heat is being kicked out in summer, and you live in a very warm place, surely it's going to add to the cooling requirements, not only of your PC but of the room itself? Higher energy requirements mean a more powerful PSU, of course. What's more, considering the lack of SLI on lower-tier NVIDIA products, you either have to go high-end or get two Polaris-based cards, both of which will use a lot more energy and thus produce more heat. That might help keep your room warm in winter, but I wouldn't like to be in a hot room on a hot day with those puppies at 100% load. I can imagine similar arguments about getting an FX 9 CPU: heat your room in winter, then conveniently forget about the warmer months. :)

    Maybe I'm just splitting hairs here, any thoughts?
    Reply
  • Seikent - Thursday, April 20, 2017 - link

    I expected a mention in the conclusion of the great noise improvements over the last generation, even if the new card is very similar performance-wise. Reply
  • GoMoeJoe - Thursday, April 20, 2017 - link

    Vega... or nothing. Reply
  • Mr AMD - Friday, April 21, 2017 - link

    Recommending a 1060, while it's slower under DX12 and Vulkan... Reply
  • Mr AMD - Friday, April 21, 2017 - link

    Try using more modern games with the new APIs, and your recommendation of the 1060 will be overturned. Reply
  • Outlander_04 - Wednesday, May 03, 2017 - link

    I think you hit the nail on the head. Reply
  • JRW - Friday, April 28, 2017 - link

    A video card in 2017 that can't maintain 60fps at 1080P on Very High in GTA 5? No Thanks. Reply
  • Diogo - Saturday, May 20, 2017 - link

    There are huge differences between your results and "The Guru of 3D" in some benchmarks.

    The video cards: GTX 1060 6 GB x Sapphire Radeon RX 570 Nitro+ 4GB

    Hitman (2016): the nvidia card in your test is 1.2% faster at 1080p, while in the other test the amd card is 34% faster. I can't understand why this is happening. Looking at the fps, the GTX 1060 6GB seems to behave identically, but the amd card does not.

    Ashes of the Singularity: the amd card appears to be faster by 8% in the other test, although I am not sure the two tests are the same, because yours has the word "Escalation" in it.

    In Rise of the Tomb Raider there is a difference in the DirectX version: the amd card runs faster in DX12, but it does not win over the GTX 1060 6GB (9% faster).

    Looking at the test setup, one difference I could find is the amd drivers:
    AMD Radeon Software Crimson Press Beta 17.10.1030 vs. AMD Radeon Software Crimson Driver 17.4.2.

    One thing to note: if "The Guru of 3D" results are right, by my calculation the amd card is 8% faster than the nvidia card in DX12 games (1080p), and performance is similar at 1440p.
    Reply
