Final Thoughts

Throughout this review we’ve been calling the Radeon R9 285 a lateral move for AMD, and as our results have shown, this is for good reason. Despite all of the architectural and feature changes between the R9 285 and its R9 280 predecessor – everything from the GCN 1.2 feature set to color compression to the smaller VRAM pool – the R9 285 truly is a lateral. At the end of the day it brings a minor 3-5% performance increase over the R9 280 with virtually no change in price or power consumption. Functionally speaking, it’s just an R9 280 with more features.

To that end, laterals like the R9 285 are currently an oddity in the video card landscape, but they’re something we should expect to see more of in the future. As GPU architectures mature and the rate of progress on new manufacturing nodes continues to slow, we no longer have the same yearly or even biennial shakeup in the GPU landscape. Tahiti at this point is nearly three years old and still going strong, and the 28nm process it’s built on is going to be with us for a while yet. This means newer generations of video cards will be farther apart, creating an opening for smaller refreshes such as Tonga and GCN 1.2.

From a feature standpoint then, Tonga and the underlying GCN 1.2 architecture are a small but nonetheless impressive iteration on what AMD has already done with GCN 1.1. It’s going to take some time to really see the impact of the newer ISA, but the improvements to geometry performance and color compression are immediate and potent. The fact that AMD has been able to offset a roughly 30% bandwidth reduction purely through the use of color compression is certainly a feather in AMD’s cap, and compression is only going to become more important over time now that we have hit a wall on GDDR5 clockspeeds and memory bus widths, especially at the high-end. Meanwhile AMD’s upgrades to their video decode and encode capabilities should not go unnoticed; AMD has finally caught up to NVIDIA on video decoding – especially in 4K H.264 compatibility – and the ability to encode 4K H.264 in hardware may yet prove advantageous.
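
To put rough numbers on that compression claim, the arithmetic is simple enough to sketch in a few lines of Python. The bus widths and memory clocks below are the published specs for the two cards; note that this is only a back-of-the-envelope illustration, as compression applies to color/framebuffer traffic rather than to all memory traffic.

```python
# Back-of-the-envelope: how much effective compression does Tonga need
# to offset its narrower memory bus? Published specs: the R9 280 uses
# a 384-bit bus with 5GHz GDDR5; the R9 285 a 256-bit bus at 5.5GHz.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Raw memory bandwidth in GB/s: (bus width in bytes) * data rate."""
    return bus_width_bits / 8 * data_rate_gbps

r9_280 = bandwidth_gb_s(384, 5.0)   # 240 GB/s
r9_285 = bandwidth_gb_s(256, 5.5)   # 176 GB/s

deficit = 1 - r9_285 / r9_280       # ~27% less raw bandwidth
breakeven = r9_280 / r9_285         # ~1.36x effective compression to match

print(f"R9 280: {r9_280:.0f} GB/s, R9 285: {r9_285:.0f} GB/s")
print(f"Raw bandwidth deficit: {deficit:.0%}")
print(f"Break-even compression ratio: {breakeven:.2f}x")
# Color compression only helps framebuffer traffic, so the real
# break-even point depends on how much of a given workload's memory
# traffic is compressible color data.
```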

As for the R9 285’s customer base and its competition, AMD’s product positioning continues to be straightforward. AMD has continued to undercut NVIDIA on a price/performance basis across the entire Radeon 200 family, and the R9 285 upholds this tradition. If we’re just looking for the best performance for the price, the R9 285 solidly outperforms NVIDIA’s GTX 760 by 12-15%, and it’s no mistake that GTX 760 prices have slid in the last week in response.

The ramification of this is that AMD no longer holds a real price/performance advantage – the price gap just about matches the performance gap at this point – but it does leave the R9 285 in its own little performance niche as a more powerful but more expensive card than the GTX 760. The end result is a tossup: you could buy either and be satisfied for the price.
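
As a quick sanity check on that tossup, the math is easy to run. The $249 R9 285 launch price comes from this review; the ~$220 GTX 760 street price is an assumption based on the price slide noted above.

```python
# Price/performance sanity check. The R9 285 launch price is from this
# review; the GTX 760 street price (~$220) is an assumed post-cut figure.

r9_285_price = 249    # USD, launch MSRP
gtx_760_price = 220   # USD, assumption based on the recent price cuts
perf_gap = 0.135      # midpoint of the measured 12-15% performance lead

price_gap = r9_285_price / gtx_760_price - 1   # ~13% more expensive
print(f"R9 285 is {price_gap:.0%} pricier and ~{perf_gap:.0%} faster")
# Performance-per-dollar comes out nearly identical, hence the tossup.
```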

AMD’s lineup, on the other hand, is a bit more volatile and will remain so until R9 280 stocks run out. With AMD’s partners selling off their remaining R9 280 cards at clearance prices, the R9 280 is a very strong value at $210-$220, offering virtually identical performance to the R9 285 for $40 less. However, like all GPU discontinuation clearance sales, this situation will be fleeting; at some point the R9 280 will go away and the $250 R9 285 will be the status quo. In the meantime, however, one is left with the harder choice of picking price or features; the R9 285 has a few features that in the long run are going to make a difference, such as full support for DisplayPort Adaptive-Sync (FreeSync) and a 4K-capable video decoder, but whether those are worth a $40 premium is going to be very situational, if not outright difficult to justify.

All things considered, the R9 285 is a solid card; however, I remain unconvinced that AMD has equipped it with the right amount of memory. From a GPU performance perspective, I feel AMD is overshooting in promoting the R9 285 as a 2560x1440 card, as the raw performance to run at that resolution with high quality settings just isn’t there. But even as a 1080p card, 2GB for $250 is tough to swallow, and it is made all the worse by the 3GB R9 280. 2GB is enough for 1080p gaming today, but it seems unlikely that will still be true in 2-3 years. A 4GB R9 285 would be a much safer bet as a result, though it doesn’t necessarily follow that it would be worth a price premium at this time.
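
To illustrate why resolution puts pressure on a 2GB card, here is a rough render-target estimate. The buffer layout below is a hypothetical deferred-renderer setup of this era rather than a measurement of any particular game, and render targets are only one slice of total VRAM use.

```python
# Rough render-target footprint at a given resolution. The buffer list
# below is a hypothetical deferred-renderer layout, for illustration only.

MB = 2**20

def target_bytes(width, height, bytes_per_pixel, count=1):
    return width * height * bytes_per_pixel * count

def render_targets_mb(width, height):
    gbuffer = target_bytes(width, height, 4, count=4)  # 4x RGBA8 G-buffer
    depth   = target_bytes(width, height, 4)           # D24S8 depth/stencil
    hdr     = target_bytes(width, height, 8)           # RGBA16F HDR target
    return (gbuffer + depth + hdr) / MB

for w, h in [(1920, 1080), (2560, 1440)]:
    print(f"{w}x{h}: ~{render_targets_mb(w, h):.0f} MB of render targets")
# 1440p carries ~1.8x the pixels of 1080p, so every per-pixel buffer
# grows in step; textures, geometry, and driver overhead then have to
# fit in whatever remains of the 2GB pool.
```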

Switching gears for a moment, a second-tier card like the R9 285 is often not the strongest showing for a new GPU like Tonga. Given all the similarities between Tonga and Tahiti, it seems like only a matter of time until the R9 280X gets the Tonga treatment. And even though it would be the second Tonga card, I think it could prove to be just as interesting as the R9 285 (if not more so), as it would give us a chance to see just what an unrestricted Tonga product can do. Here’s hoping AMD doesn’t leave us waiting too long to release a fully enabled Tonga SKU.

Comments

  • chizow - Thursday, September 11, 2014 - link

    If Tonga is a referendum on Mantle, it basically proves Mantle is a failure and will never succeed. This pretty much shows most of what AMD said about Mantle is BS, that it takes LESS effort (LMAO) on the part of the devs to implement than DX.

    If Mantle requires both an application update (game patch) from devs AFTER the game has already run past its prime shelf-date AND also requires AMD to release optimized drivers every time a new GPU is released, then there is simply no way Mantle will ever succeed in a meaningful manner with that level of effort. Simply put, no one is going to put in that kind of work if it means re-tweaking every time a new ASIC or SKU is released. Look at BF4, it's already in the rear-view mirror from DICE's standpoint, and no one even cares anymore as they are already looking toward the next Battlefield#
  • TiGr1982 - Thursday, September 11, 2014 - link

    Please stop calling GPUs ASICs - this looks ridiculous.
    Please go to Wikipedia and read what "ASIC" is.
  • chizow - Thursday, September 11, 2014 - link

    Is this a joke or are you just new to the chipmaking industry? Maybe you should try re-reading the Wikipedia entry to understand GPUs are ASICs despite their more recent GPGPU functionality. GPU makers like AMD and Nvidia have been calling their chips ASICs for decades and will continue to do so, your pedantic objections notwithstanding.

    But no need to take my word for it, just look at their own internal memos and job listings:

    https://www.google.com/#q=intel+asic
    https://www.google.com/#q=amd+asic
    https://www.google.com/#q=nvidia+asic
  • TiGr1982 - Thursday, September 11, 2014 - link

    OK, I accept your arguments, but I still don't like this kind of terminology. To me, one may call things like a fixed-function video decoder an "ASIC" (for example, the UVD blocks inside Radeon GPUs), but not a GPU as a whole, because people have been doing GPGPU on GPUs for a number of years, and the "General Purpose" in GPGPU contradicts the "Application Specific" in ASIC, doesn't it?
    So, overall it's a terminology/naming issue; everyone uses whatever naming they want to use.
  • chizow - Thursday, September 11, 2014 - link

    I think you are over-analyzing things a bit. When you look at the entire circuit board for a particular device, you will see each main component or chip is considered an ASIC, because each one has a specific application.

    For example, the CPU is an ASIC even though it handles all general processing; its specific application on a PC mainboard is to serve as the central processing unit. Similarly, a southbridge chip handles I/O and communications with peripheral devices, while the northbridge handles traffic between the CPU and RAM, and so on and so forth.
  • TiGr1982 - Thursday, September 11, 2014 - link

    OK, then according to this (broad) understanding, every chip in the silicon industry may be called an ASIC :)
    Let it be.
  • chizow - Friday, September 12, 2014 - link

    Yes, that is why everyone in the silicon industry calls their chips that have specific applications ASICs. ;)

    Something like a capacitor or resistor would not be, as those are common commodity parts.
  • Sabresiberian - Thursday, September 11, 2014 - link

    I reject the notion that we should be satisfied with a slower rate of GPU performance increase. We have more use than ever before for a big jump in power. 2560x1440@144Hz. 4K@60Hz.

    Of course it's all well and good for me to say that without being a micro-architecture design engineer myself, but I think it's time for a total re-think. Or if the companies are holding anything back - bring it out now, please! :)
  • Stochastic - Thursday, September 11, 2014 - link

    Process node shrinks are getting more and more difficult, equipment costs are rising, and the benefits of moving to a smaller node are also diminishing. So sadly I think we'll have to adjust to a more sedate pace in the industry.
  • TiGr1982 - Thursday, September 11, 2014 - link

    I've been an AMD Radeon user for more than 10 years, but after reading this R9 285 review I can't help but think that, based on the results of the smaller GM107 in the 750 Ti, GM204 in the GTX 970/980 may offer much better performance per watt and per unit of die area (at least for gaming tasks) than the whole AMD GPU lineup. Soon we'll see whether or not this is the case.
