Final Words

Bringing this review to a close: for the last 14 months we’ve been pondering just what a fully enabled Tonga desktop SKU might look like, and with the Radeon R9 380X we finally have our answer. With the final 4 CUs enabled – bringing us from 28 CUs to 32 CUs – the Radeon R9 380X picks up where the R9 380 left off and adds a further 10% in performance. This is a bit less than the 14% we’d expect going by CU count alone, but then few games are purely CU-limited, so in a mixed selection of games this is a pretty reasonable outcome.
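
For anyone who wants to check that scaling math, here’s a quick back-of-the-envelope sketch (assuming performance scales perfectly linearly with CU count, which real games rarely do):

```python
# Back-of-the-envelope CU scaling, R9 380 (28 CUs) -> R9 380X (32 CUs).
# Assumes a perfectly CU-bound workload; real games are rarely so simple,
# which is why the measured gain (~10%) trails the theoretical one.
cus_380, cus_380x = 28, 32
theoretical_uplift = cus_380x / cus_380 - 1
print(f"Theoretical uplift: {theoretical_uplift:.1%}")  # ~14.3%
```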

This also means that the R9 380X essentially picks up from where AMD’s past Tahiti cards, like the 7970 and R9 280X, left off. As the successor-of-sorts to AMD’s original GCN GPU, Tahiti, Tonga brings with it some welcome feature upgrades – upgrades whose absence had left Tahiti looking dated. So within AMD’s lineup it’s now Tonga that anchors the mid-range, between the Hawaii-based 390 series and the Pitcairn-based 370 series.

This makes the R9 380X a reasonable step up from the R9 380, though on the whole it’s unremarkable. Priced at $229, the card is about $30 more expensive than the 4GB R9 380 (and the 4GB GTX 960), which means it’s not pushing the price/performance curve in any way, though in all fairness to AMD, they never said it would. Instead what we’re looking at is a small but logical stepping stone between the R9 380 and the R9 390, where, much as with factory overclocked cards, spending a bit more money gets you a bit more performance. The end result is that within AMD’s stack the R9 380X is their best 1080p gaming card, almost never having to compromise on image quality in order to get playable framerates.

Meanwhile, looking at the competition: by virtue of the GPU configurations AMD and NVIDIA went with for this generation, the R9 380X has no true competitor from NVIDIA. This doesn’t give AMD much freedom – the card is only 10% faster than the GTX 960, so they have to stay within reason on pricing – but it means that they’re the only game in town in the $200-$250 range. Otherwise the one tradeoff here (as has been the case with most of AMD’s cards this year) is power efficiency; the R9 380X doesn’t improve on AMD’s efficiency at all, and as a result it draws a lot more power for its 10% advantage over the GTX 960. We will add, however, that a 10% gap means the R9 380X’s performance isn’t outside the potential reach of factory overclocked GTX 960 cards, but that is very much a case-by-case matter, as opposed to today’s look at baseline performance for each video card series.

The challenge to the R9 380X then doesn’t come from below, but from above. The R9 390 and GTX 970 start at $289 – $60 more than the R9 380X – and each is a rather sizable 40%+ faster. Consequently both are value spoilers, offering that 40% better performance for a 26% higher price: a significantly higher cost for even more significant performance. At the end of the day budgets exist for a reason, and the R9 380X is a reasonable offering in the product range it was designed for, but if you can afford to spend more for a GTX 970 or R9 390, then right now that’s the better buy (with NVIDIA’s current game bundle as an extra kicker in favor of this).
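
To put the value spoiler effect in concrete terms, here is the simple arithmetic behind that comparison, using the review’s own round numbers (an illustrative sketch, not a benchmark):

```python
# Value spoiler math: R9 380X ($229) vs. R9 390 / GTX 970 (from $289).
# Performance figures are the review's rough averages, normalized to the 380X.
price_380x, price_390 = 229, 289
perf_380x, perf_390 = 1.00, 1.40  # R9 390 / GTX 970 ~40% faster

price_premium = price_390 / price_380x - 1  # ~26% more money...
perf_premium = perf_390 / perf_380x - 1     # ...for ~40% more performance
perf_per_dollar_gain = (perf_390 / price_390) / (perf_380x / price_380x) - 1
print(f"{price_premium:.0%} higher price, {perf_premium:.0%} higher performance")
print(f"Perf-per-dollar advantage of stepping up: {perf_per_dollar_gain:.0%}")  # ~11%
```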

Last but not least, we have the matter of the specific R9 380X card in today’s review, ASUS’s STRIX R9 380X OC. With the STRIX lineup ASUS has focused on quality and workmanship, and the STRIX R9 380X OC continues this legacy. It’s a well-built card – one of the best to have come our way all year – and it sets a very high bar for ASUS’s competition. The one drawback with the card is the price, and this goes hand-in-hand with the value spoiler effect we just covered. At $259, the STRIX R9 380X OC halves the remaining premium for an R9 390/GTX 970 to just $30, yet those cards are still 30%+ faster. It’s very hard to charge a premium price for a premium card in the current market, and while the STRIX R9 380X is a fantastic R9 380X, it’s nonetheless in a very awkward spot right below some very powerful video cards.
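
And for the curious, the premium math on the STRIX card, under the same caveats as above (the 30%+ figure reflects the STRIX’s factory overclock narrowing the stock card’s gap):

```python
# How the $259 STRIX R9 380X OC halves the premium to an R9 390 / GTX 970.
price_380x, price_strix, price_390 = 229, 259, 289
premium_stock = price_390 - price_380x   # $60 gap from a stock R9 380X
premium_strix = price_390 - price_strix  # $30 gap from the STRIX OC
print(f"Remaining premium: ${premium_strix} (vs. ${premium_stock} from stock)")
```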

Comments

  • FriendlyUser - Monday, November 23, 2015 - link

    This is not a bad product. It does have all the nice Tonga features (especially FreeSync) and good tessellation performance, for whatever that's worth. But the price is a little higher than what would make it a great deal. At $190, for example, this card would be the best card in mid-range territory, in my opinion. We'll have to see how it plays out, but I suspect this card will find its place in a few months, after a price drop.
  • Samus - Monday, November 23, 2015 - link

    Yeah, it's like every AMD GPU... overpriced for what it is. They need to drop prices across the entire line by about 15% just to become competitive. The OC versions of the 380X are selling for only dollars less than some GTX 970s, which use less power, are more efficient, are around 30% faster, and arguably have better drivers and compatibility.
  • SunnyNW - Monday, November 23, 2015 - link

    To my understanding, the most significant reason for the decreased power consumption of Maxwell 2 cards (the 950/960/970, etc.) was the lack of certain hardware in the chips themselves, specifically pertaining to double precision. Nvidia seems to recommend the Titan X for single precision but the Titan Z for DP workloads. I bring this up because so many criticize AMD for being "inefficient" in terms of power consumption, but if AMD did the same thing, would they not see similar results? Or am I simply wrong in my assumption? I do believe AMD may not be able to do this currently due to the way their hardware and architecture is configured for GCN, but I may be wrong about that as well, since I believe their 32-bit and 64-bit "blocks" are "coupled" together. Obviously I am not a chip designer or any sort of expert in this area, so please forgive my lack of total knowledge; that's why I'm asking, in hopes that someone with greater knowledge on the subject can educate me and the many others interested.
  • CrazyElf - Monday, November 23, 2015 - link

    It's more complex than that (AMD has used high-density libraries and has very aggressively clocked its GPUs), but yes, reducing DP performance could improve performance per watt. I will note, however, that this was done on the Fury X; it's just that it was bottlenecked elsewhere.
  • Samus - Tuesday, November 24, 2015 - link

    At the end of the day, is AMD making GPUs for gaming or GPUs for floating point/double precision professional applications?

    The answer is both. The problem is, they have multiple mainstream architectures, with multiple GPU designs/capabilities in each. Fury is the only card that is truly built for gaming, but I don't see any sub-$400 Fury cards, so it's mostly irrelevant, since the vast majority (90%) of GPU sales are in the $100-$300 range. Every pre-Fury GPU focused more on professional applications than it should have.

    NVidia has one mainstream architecture with three distinctly different GPU dies. The most enabled design focuses on FP64/double precision, while the others eliminate the FP64 die space in favor of more practical, mainstream applications.
  • BurntMyBacon - Tuesday, November 24, 2015 - link

    @Samus: "At the end of the day, is AMD making GPUs for gaming or GPUs for floating point/double precision professional applications?"

    Both

    @Samus: "The answer is both."

    $#1+

    @Samus: "Fury is the only card that is truly built for gaming, but I don't see any sub-$400 Fury cards, so it's mostly irrelevant, since the vast majority (90%) of GPU sales are in the $100-$300 range. Every pre-Fury GPU focused more on professional applications than it should have."

    They tried the gaming-only route with the 6xxx series. They went back to compute-oriented designs in the 7xxx series. Which of these had more success for them?

    @Samus: "NVidia has one mainstream architecture with three distinctly different GPU dies. The most enabled design focuses on FP64/double precision, while the others eliminate the FP64 die space in favor of more practical, mainstream applications."

    This would make a lot of sense save for one major issue: AMD wants the compute capability in their graphics cards to support HSA. They need most of the market to be HSA compatible to incentivize developers to make applications that use it.
  • CiccioB - Tuesday, November 24, 2015 - link

    HSA and FP64 capability have nothing in common.
    People constantly confuse GPGPU capability with FP64 support.
    NVIDIA GPUs have been perfectly GPGPU capable, and in fact they are even better than AMD's for consumer calculations (FP32).
    I would like you to name a single GPGPU application that you can use at home that makes use of 64-bit math.
  • Rexolaboy - Sunday, January 3, 2016 - link

    You asked a question that's already been answered in the post you're replying to. AMD wants to influence the market to support FP64 compute because it's ultimately more capable. The lack of consumer programs using FP64 compute is exactly why AMD is trying so hard to release cards capable of it – to influence the market.
  • FriendlyUser - Tuesday, November 24, 2015 - link

    It's not just DP, it's also a lot of bits that go towards enabling HSA: stuff for memory mapping, async compute, etc. AMD is not just building a gaming GPU; they want something that plays well in compute contexts. Nvidia is only competitive thanks to the CUDA dominance they have built and their aggressive driver tuning for pro applications.
  • BurntMyBacon - Tuesday, November 24, 2015 - link

    @FriendlyUser: "It's not just DP, it's also a lot of bits that go towards enabling HSA: stuff for memory mapping, async compute, etc. AMD is not just building a gaming GPU; they want something that plays well in compute contexts."

    This. AMD has a vision where GPUs are far more important to compute workloads than they are now. Their end goal is still Fusion. They want the graphics functions integrated into the CPU so completely that you can't draw a circle around them, and you access them with CPU commands. When this happens, they believe they'll be able to leverage the superior graphics on their APUs to close the performance gap with Intel's CPU compute capabilities. If Intel releases better GPU compute, they can still lean on discrete cards.

    Their problem is that there isn't a lot of buy-in to HSA. In general, there isn't a lot of buy-in to GPU compute on the desktop. Sure, there are a few standouts and more than a few professional applications, but nothing making the average non-gaming user start wishing for a discrete graphics card. Still, they have to include the HSA (including DP compute) capabilities in their graphics cards if they ever expect it to take off.

    HSA in and of itself is a great concept, and eventually I expect it will gain favor and come to market (perhaps by another name). However, it may be ARM chip manufacturers and phones/tablets that gain the most benefit from it. There are already some ARM manufacturers who have announced plans to build chips that are HSA compatible. If HSA does get market penetration in phones/tablets first, as it looks like it may, I have to wonder where all the innovative PC programmers went that they couldn't think of a good use for it with several years' head start.
