Final Words

Bringing this review to a close, for the last 14 months now we’ve been pondering just what a fully enabled Tonga desktop SKU might look like, and with the Radeon R9 380X we finally have our answer. With the final 4 CUs enabled – bringing us from 28 CUs to 32 CUs – the Radeon R9 380X picks up where the R9 380 left off and adds a further 10% in performance. This is a bit less than the 14% we’d expect going by CU count alone, but at the same time few games are purely CU limited, so in a mixed selection of games this is a pretty reasonable outcome.
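
As a quick back-of-the-envelope illustration of that gap – using only the CU counts and the ~10% average gain quoted above, nothing beyond the figures already in the review – the expected scaling from the extra CUs works out as follows:

```python
# Rough sanity check: theoretical shader throughput gain from the extra CUs
# versus the ~10% average gain observed across the game suite.
r9_380_cus = 28    # R9 380: cut-down Tonga
r9_380x_cus = 32   # R9 380X: fully enabled Tonga

theoretical_gain = (r9_380x_cus - r9_380_cus) / r9_380_cus  # ~0.143
observed_gain = 0.10                                        # ~10% on average

print(f"Theoretical CU scaling: {theoretical_gain:.1%}")  # 14.3%
print(f"Observed average gain:  {observed_gain:.0%}")     # 10%
# The shortfall is expected: few games are purely CU (shader) limited, so the
# extra CUs don't translate 1:1 into frame rates.
```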

This also means that the R9 380X essentially picks up from where AMD’s past Tahiti cards like the 7970 and R9 280X left off. As the successor-of-sorts to AMD’s original GCN GPU, Tahiti, Tonga brings with it some welcome feature upgrades in areas where Tahiti had grown dated. So within AMD’s lineup it’s now Tonga that anchors the mid-range, between the Hawaii-based 390 series and the Pitcairn-based 370 series.

This makes the R9 380X a reasonable step up from the R9 380, though on the whole it’s unremarkable. Priced at $229, the card is about $30 more expensive than the 4GB R9 380 (and the 4GB GTX 960), which means it’s not pushing the price/performance curve in any way, though in all fairness to AMD they never said it would. Instead what we’re looking at is a small but logical stepping stone between the R9 380 and the R9 390, where, much like with factory overclocked cards, spending a bit more money gets you a bit more performance. The end result is that within AMD’s stack the R9 380X is their best 1080p gaming card, almost never having to compromise on image quality in order to get playable framerates.

Meanwhile looking at the competition, by virtue of the GPU configurations AMD and NVIDIA went with for this generation, the R9 380X has no true competitor from NVIDIA. This doesn’t give AMD much freedom – the card is only 10% faster than the GTX 960, so they have to stay within reason on pricing – but it does mean that they’re the only game in town for a video card in the $200-$250 range. Otherwise the one tradeoff here (as has been the case with most of AMD’s cards this year) is power efficiency; the R9 380X doesn’t improve on AMD’s efficiency at all, and as a result it draws a lot more power for its 10% advantage over the GTX 960. We will add, however, that a 10% gap means the R9 380X’s performance isn’t outside the potential reach of factory overclocked GTX 960 cards, but that is very much a case-by-case matter as opposed to today’s look at baseline performance for each video card series.

The challenge to the R9 380X then doesn’t come from below, but from above. The R9 390 and GTX 970 start at $289 – $60 more than the R9 380X – and each is a rather sizable 40%+ faster than the R9 380X. Consequently both are value spoilers, offering that 40%+ performance advantage for a 26% higher price; a meaningfully higher cost that buys an even greater gain in performance. At the end of the day budgets exist for a reason and the R9 380X is a reasonable offering in the product range it was designed for, but if you can afford to spend more on a GTX 970 or R9 390 then right now that’s the better buy (with NVIDIA’s current game bundle as an extra kicker in the GTX 970’s favor).
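
To make the value-spoiler arithmetic explicit, here is a minimal sketch using only the prices and the ~40% performance delta cited above (the figures are the review’s; the script itself is merely illustrative):

```python
# Value-spoiler math: price premium vs. performance premium for the step-up
# cards, using the $229/$289 prices and ~40% delta cited in the text.
r9_380x_price = 229
step_up_price = 289      # R9 390 / GTX 970 starting price
perf_advantage = 0.40    # step-up cards are ~40%+ faster

price_premium = (step_up_price - r9_380x_price) / r9_380x_price
# Relative performance-per-dollar of the step-up cards vs. the R9 380X:
rel_perf_per_dollar = (1 + perf_advantage) / (step_up_price / r9_380x_price)

print(f"Price premium:               {price_premium:.0%}")         # ~26%
print(f"Perf-per-dollar vs. R9 380X: {rel_perf_per_dollar:.2f}x")  # ~1.11x
```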

Last but not least we have the matter of the specific R9 380X card in today’s review, ASUS’s STRIX R9 380X OC. With the STRIX lineup ASUS has focused on quality and workmanship, and the STRIX R9 380X OC continues this legacy. It’s a well-built card – one of the best to have come our way all year – and it sets a very high bar for ASUS’s competition. The one drawback with the card is the price, and this goes hand-in-hand with the value spoiler effect we just covered. At $259 the STRIX R9 380X OC closes half of the gap to an R9 390/GTX 970, yet those cards are still 30%+ faster. It’s very hard to charge a premium price for a premium card in the current market, and while the STRIX R9 380X is a fantastic R9 380X, it’s nonetheless in a very awkward spot right below some very powerful video cards.

Comments

  • Ryan Smith - Monday, November 23, 2015 - link

    The power demands on the CPU are much more significant under a game than under FurMark.

    Also, that specific GTX 960 is an EVGA model with a ton of thermal/power headroom. So it's nowhere close to being TDP limited under Crysis.

    Edit: My apologies to one of our posters. It looks like I managed to delete your post instead of replying to it...
  • The True Morbus - Monday, November 23, 2015 - link

    So after all this time, this graphics card has the same performance as the now 2-year-old GTX 760?
    Right... I'm beginning to think the 760 was the best purchase of my life.
  • RussianSensation - Monday, November 23, 2015 - link

    Same performance? You may need to re-check benchmarks across the web. R9 380X is more than 40% faster than a GTX760 2GB. TPU has it 43% faster at 1080P and 45% faster at 1440P:
    http://www.techpowerup.com/reviews/ASUS/R9_380X_St...

    If you only have a 2GB version of the 760, you are also reducing texture quality in many games like Titanfall and Shadow of Mordor, have choppiness in Watch Dogs, AC Unity, and Black Ops 3, and simply cannot even enable the highest textures in some games like Wolfenstein NWO.

    R9 380X isn't anything special when we've seen the GTX 970/290/290X/390 for $250-270, but it beats your card easily by 35-40%.
  • Laststop311 - Monday, November 23, 2015 - link

    The 380X was a pointless launch. For 50 dollars less you can just get the 380, which is only 10% slower, or for 50 dollars more you can get the 390, which blows the 380X away. This card targets a very narrow range and wasn't really needed imo.
  • Makaveli - Monday, November 23, 2015 - link

    I believe the difference in Shadow of Mordor between the 7970 and the 380X at 1080p may only be clock speed, and not a difference between Tahiti and Tonga!
  • silverblue - Monday, November 23, 2015 - link

    The 380X may come with extra features over the 7970, but has TrueAudio ever truly been tested? Its addition was to help reduce CPU usage, and it would be a shame if it went unused in favour of the motherboard sound.
  • silverblue - Monday, November 23, 2015 - link

    Slight correction, it was to provide better effects, though I imagined that it would help a little with CPU usage anyway.
  • Makaveli - Monday, November 23, 2015 - link

    The only difference between them that counts is GCN 1.0 vs. 1.2. TrueAudio has to be supported by the game, and Mordor doesn't support it.
  • Cryio - Monday, November 23, 2015 - link

    You guys REALLY need to switch to a Skylake i7 at 4.5 GHz with DDR4-3000+ for benching GPUs.

    That Ivy Bridge at 4.2 GHz is certainly holding back AMD GPUs, what with core parking issues, not-as-fancy drivers and all.
  • Ryan Smith - Monday, November 23, 2015 - link

    The GPU testbed is due for a refresh. We'll be upgrading to Broadwell-E in 2016 once that's available.
