As part of their 30 Years of Graphics celebration, AMD today announced a forthcoming addition to the Radeon R9 200 graphics card lineup. Launching on September 2nd will be the company’s new midrange enthusiast card, the Radeon R9 285.

The R9 285 will take up an interesting position in AMD’s lineup, being something of a refresh of a refresh that spans all the way back to Tahiti (Radeon HD 7970). Spec-wise it ends up extremely close on paper to the R9 280 (née 7950B), and it’s telling that the R9 280 is no longer being advertised by AMD as a current member of their R9 lineup. However, with a newer GPU under the hood the R9 285 stands to eclipse the 280 in features, and with sufficient efficiency gains we hope to see it eclipse the 280 in performance too.

AMD GPU Specification Comparison

                       Radeon R9 290   Radeon R9 280X   Radeon R9 285   Radeon R9 280
Stream Processors      2560            2048             1792            1792
Texture Units          160             128              112             112
ROPs                   64              32               32              32
Core Clock             662MHz          850MHz           ?               827MHz
Boost Clock            947MHz          1000MHz          918MHz          933MHz
Memory Clock           5GHz GDDR5      6GHz GDDR5       5.5GHz GDDR5    5GHz GDDR5
Memory Bus Width       512-bit         384-bit          256-bit         384-bit
VRAM                   4GB             3GB              2GB             3GB
FP64                   1/8             1/4              ?               1/4
TrueAudio              Y               N                Y               N
Typical Board Power    250W            250W             190W            250W
Manufacturing Process  TSMC 28nm       TSMC 28nm        TSMC 28nm?      TSMC 28nm
Architecture           GCN 1.1         GCN 1.0          GCN 1.1?        GCN 1.0
GPU                    Hawaii          Tahiti           Tonga?          Tahiti
Launch Date            11/05/13        10/11/13         09/02/14        03/04/14
Launch Price           $399            $299             $249            $279

Looking at the raw specifications, the R9 285 is a 1792 stream processor Graphics Core Next product. Paired with these SPs are 112 texture units (in GCN’s standard 16:1 ratio of SPs to texture units), and on the backend of the rendering pipeline are 32 ROPs. As is unfortunately consistent for AMD, they are not disclosing the product’s base clockspeed, but they have published the boost clockspeed of 918MHz.

Meanwhile, feeding the R9 285’s GPU is the card’s 2GB of GDDR5, which sits on a 256-bit bus and is clocked at 5.5GHz, for a total memory bandwidth of 176GB/sec.
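As a quick sanity check, that bandwidth figure follows directly from the effective data rate and bus width. The sketch below (our arithmetic, not AMD's) also illustrates why the narrower bus matters: despite its slower 5GHz memory, the R9 280's 384-bit bus gives it notably more raw bandwidth.

```python
# Back-of-the-envelope check of quoted GDDR5 memory bandwidth figures.
# GDDR5 "clock" specs are effective data rates, so peak bandwidth is just
# data rate (transfers/sec) x bus width (in bytes).

def gddr5_bandwidth_gbps(data_rate_ghz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s for an effective data rate and bus width."""
    return data_rate_ghz * (bus_width_bits / 8)

print(gddr5_bandwidth_gbps(5.5, 256))  # R9 285: 176.0 GB/s
print(gddr5_bandwidth_gbps(5.0, 384))  # R9 280: 240.0 GB/s
```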

The R9 285 will have a rated typical board power (AMD’s analogue for TDP) of 190W. Notably, this is only 10W higher than the Pitcairn based R9 270X despite a 40% larger SP count, and 60W lower than the Tahiti based R9 280. While we don’t have many details on the GPU at this time, the R9 270X comparison in particular makes it clear that AMD has done some work on efficiency, squeezing out more than the GCN 1.0 based Pitcairn and Tahiti parts that the R9 285 will be placed between.
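As a crude illustration of that efficiency point, we can compare stream processors per watt of typical board power, ignoring clockspeed differences (which we don't fully know for the R9 285). The R9 270X figures below (1280 SPs, 180W) are the commonly cited specs for that card rather than numbers from this announcement.

```python
# Rough perf-per-watt proxy: stream processors per watt of typical board power.
# Clockspeeds are deliberately ignored here, so treat this as illustrative only.

cards = {
    "R9 270X (Pitcairn)": (1280, 180),  # commonly cited specs, not from this announcement
    "R9 280 (Tahiti)":    (1792, 250),
    "R9 285 (Tonga?)":    (1792, 190),
}

for name, (sps, tbp) in cards.items():
    print(f"{name}: {sps / tbp:.1f} SPs/W")
```

Even by this simple yardstick the R9 285 comes out well ahead of both of its GCN 1.0 neighbors.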

The GPU itself is based on a newer version of AMD’s architecture, at least GCN 1.1 based on the presence of TrueAudio support. AMD has not formally announced the underlying GPU at this time, but given the timing and the specifications we believe it’s based on the new Tonga GPU, which was first announced for the FirePro W7100 earlier this month. In any case we don’t have much in the way of details on Tonga at this time, though we expect AMD to flesh out those details ahead of R9 285’s September 2nd launch. The biggest question right now – besides whether this is a “full” Tonga configuration – is whether Tonga is based on GCN 1.1 or something newer.

Based on some prior AMD statements and information gleaned from AMD’s CodeXL tool, there is reason to suspect (but not confirm) that this is a newer generation design. AMD has done something very similar in the past, launching GCN 1.1 with the Radeon HD 7790 but essentially hiding access to and details of GCN 1.1’s feature set until the launch of the Hawaii based R9 290X later that year. Whether AMD is doing this again remains to be seen, but it is a move they have made before and could well make again. And whether they will confirm it is another matter, as the company does not like to publicly differentiate between GCN revisions, which is why even the GCN 1.1 name is unofficial.


Sapphire's Radeon R9 285 Dual-X

Working for the moment off the assumption that the R9 285 is Tonga based and that it’s a GCN 1.1 part, we expect that performance should be a wash with the R9 280 while the R9 285 holds an advantage on features. GCN 1.1 does have some mild performance optimizations that will give the R9 285 an edge, though it remains to be seen what the impact of the narrower memory bus will be. The fact that the Tahiti based R9 280X remains in AMD’s lineup indicates that, if nothing else, it won’t match the performance of a full Tahiti configuration. Otherwise when it comes to features, being GCN 1.1 based means the R9 285 will bring with it support for TrueAudio, support for bridgeless CrossFire thanks to the XDMA engine, GCN 1.1’s superior boost mechanism, and full support for AMD’s upcoming FreeSync implementation of DisplayPort Adaptive-Sync (which GCN 1.0 GPUs cannot fully support).

As for AMD, this offers the chance to refresh some of their oldest GCN 1.0 products with a more capable GPU while also cutting costs. While we don’t have die size numbers for Tonga, it is reasonable to expect that it is smaller thanks to the narrower memory bus along with the die size optimizations we saw go into Hawaii last year, which means it should be cheaper to manufacture than Tahiti. Board costs come down as well, again due to the narrower memory bus, while the lower TDP allows for simpler power delivery circuitry.

AMD will be positioning the R9 285 to compete with NVIDIA’s GeForce GTX 760, the company’s second-tier GK104 part. The GTX 760 performs roughly on par with the R9 280, so AMD need only avoid a performance regression to remain competitive, though any lead they can squeeze out will be all the better. The GTX 760 is frequently found at $239 – a hair under the R9 285’s launch price – so NVIDIA will hold a very slight edge on price, assuming they don’t adjust prices further (the GTX 760 launched at $249 almost 14 months ago).

The R9 285 for its part will be launching at $249 on September 2nd. This will be a hard launch, and with AMD’s partners already posting product pages for their designs, we suspect it will be a pure virtual launch (no reference card). AMD also tells us that there will be both 2GB and 4GB cards; we’re going to have to see what the price premium is, as the suitability of 2GB enthusiast cards has been called into question by the large amount of RAM in the current-generation consoles, which will have a knock-on effect on console-to-PC ports.

Meanwhile, with the launch of the R9 285 and the impending discontinuation of the R9 280, buyers looking at picking up a card in the near term will want to be on the lookout for the R9 280 on clearance sale. It’s already regularly found for $220 and lower, making it $30 cheaper than the R9 285 while offering 3GB of VRAM to the R9 285’s 2GB. That will make the R9 280 a strong contender, at least until supplies run out.

Fall 2014 GPU Pricing Comparison

AMD               Price   NVIDIA
Radeon R9 290     $400
                  $310    GeForce GTX 770
Radeon R9 280X    $280
Radeon R9 285     $250
                  $240    GeForce GTX 760
Radeon R9 280     $220
Radeon R9 270X    $180
                  $160    GeForce GTX 660

Finally, coinciding with the launch of the R9 285 will be a refresh of AMD’s Never Settle bundles. The details are still murky at this time, but AMD is launching what they call the Never Settle Space Edition bundle, which will offer Alien Isolation and Star Citizen with all R9 series cards. What’s unclear is whether this replaces the existing Never Settle Forever bundle or whether these games are being added to the Never Settle Forever lineup in some fashion. AMD has said that current Silver and Gold voucher holders will be able to get the Space Edition bundle with their vouchers, which lends credence to the idea that these are new games in the NSF program rather than a different program entirely.

Both Alien Isolation and Star Citizen are still in development. Alien Isolation is a first person survival horror game and is expected in October of this year. Meanwhile the space sim Star Citizen does not yet have a release date, and as best as we can tell it won’t actually be finished until late 2015 at the earliest. In that case the inclusion here is more about access to the ongoing beta, which is the first time we’ve seen beta access used as part of a bundle in this fashion.

Comments

  • hojnikb - Sunday, August 24, 2014 - link

    Used ones obviously. The mining craze was really awesome if you think about it.
  • bwat47 - Sunday, August 24, 2014 - link

    @hojnikb

    There were some new ones on Newegg for only $249 recently (I think it was the Sapphire Dual-X), though it seems to have gone back up to $289 now
  • bwat47 - Sunday, August 24, 2014 - link

    Yeah they seem to be going for as cheap as ~249 dollars now which is a very nice deal for that card. I got mine for 299 when they first came out and am still very happy with that purchase.
  • OrphanageExplosion - Sunday, August 24, 2014 - link

    Over the longer term I can't help but feel that the 3GB 7950/R9 280 is going to be the better deal compared to a 2GB R9 285, regardless of the extra perf.

    As Ryan points out, the next-gen consoles are a game-changer. Available VRAM will trump small percentage boosts in performance - a situation that will only become more important once you scale up to 1440p and 4K.

    Adding Watch Dogs to the benchmark suite at ultra settings would be a good first step. Titanfall too - though repeatable benchmarks there are going to be quite a challenge. These two games are just going to be the beginning though.
  • B3an - Sunday, August 24, 2014 - link

    2GB is unacceptable on a card at this price. Some games already recommend 3GB and this will continue to become more standard as the consoles finally have a lot more RAM to work with.
  • Beany2013 - Sunday, August 24, 2014 - link

    Show me a game that needs more than 2gb of VRAM to keep a consistent framerate?

    Metro Last Light is one of the most graphically impressive games out there, and I've never seen it use more than a gig at any time - although it does have a very efficient way of streaming textures in. Crysis 3 seems to use about 1.5-2gb if available - it'll run happily with less, it just streams the textures in when required rather than preloading them into VRAM first. It's not like you start getting half the frame rate the moment you hit the VRAM limit.

    Bear in mind that the consoles use shared RAM - the 8gb is split between the GPU and CPU, so the VRAM has to be considered against operational CPU ram for physics, AI, sound, and anything else that can't be accelerated in pure hardware, etc.

    2gb is perfectly acceptable IMHO - but in a couple of years' time (once the XBone and PS4 start getting optimised and stretch their legs) it *might* not be, if games start requiring more than 2gb of VRAM just to draw the scene in front of/around you.
  • OrphanageExplosion - Sunday, August 24, 2014 - link

    Titanfall for one. Watch Dogs, for another.

    Titanfall has no background texture streaming. Everything is dumped into VRAM and 3GB is required just to house the highest quality artwork, regardless of resolution.

    Ubisoft has indicated that perf issues with Watch Dogs are down to a lack of unified RAM and suggest as much VRAM as possible for ultra settings.
  • tuxRoller - Sunday, August 24, 2014 - link

    At what resolution, though? Although, I do agree that it has enough ram, and if you need more, they mentioned a 4gb option.
  • _zenith - Monday, August 25, 2014 - link

    Bioshock Infinite eats up 2.5GB at 1080p with all settings maxed out (in-game UI settings only; no driver mods etc)...
  • Beany2013 - Monday, August 25, 2014 - link

    Sigh. One word, kids.

    Caching.

    Something that TF, BS:I and WD all do to varying degrees. They don't *need* 3gb VRAM otherwise you'd not see 2gb GPUs besting 3gb GPUs in the benchmarks when the 2gb card has a better GPU in it - the games are mostly GPU limited, not VRAM limited.

    To prove the point, have a look at the 260x 2gb and 7850 1gb in this benchmark
    http://www.gamersnexus.net/game-bench/1352-titanfa...

    The difference is not huge on average with insane graphics@1080p, which, if it *needed* 3gb, would be unplayable on both cards - not from an elite gamer's perspective who spits on anything less than 100fps, but from a technical perspective - it'd be sub 10fps as it constantly loads entire environments into VRAM as it fills up.

    We're talking 55fps average vs 50fps average, which is likely VRAM affected given that both run the same basic GPU, but it could also be accounted for by the modernisation the 260X got over the 7850 - higher clock speed, faster memory, better thermal management meaning more time at peak speed, etc. Either way the 7850 isn't crippled by the lack of VRAM by any stretch of the imagination.

    Oh, and Ubisoft have already demonstrated their coding incompetence by crippling the graphical fidelity from where it was in the E3 demo, along with code comments like 'is pc, who cares' that show how little of a toss they gave to the PC version. So I'll be taking no lessons from them on system requirements and optimisation, thanks.

    The fact is that no game out there *needs* 3gb VRAM, unless you are playing at 4k, in full detail and expect silky smooth framerates. I don't think anyone paying $250 for a GPU expects to get that, as you can only really get that with a truly top flight card at this stage, which is well north of $300.
