Radeon R9 390 Series: Return To Hawaii

Last but not least among the numbered 300 series stack are the top two cards and the only cards not to exist in some form in the OEM lineup, the R9 390 and R9 390X. These two cards are based on AMD’s Hawaii GPU, previously used for the R9 290 series, and of all of the cards in this refresh, these cards may very well attract the most interest for a couple of different reasons.

The first reason is simply because as AMD’s former flagship GPU, it was Hawaii and the 290 series that took the brunt of the blow from the launch of NVIDIA’s Maxwell 2 architecture in 2014. NVIDIA undercut AMD on price while beating them at performance and acoustics, which put AMD in a difficult situation. And since AMD is not replacing Hawaii with another chip at this performance level any time soon, this means that Hawaii is what AMD has available to throw at the sub-Fury market.

Meanwhile the other factor at play here is that AMD significantly cut 290 series prices in the months following the Maxwell 2 launch in order to shore up the competitive positioning of cards that no longer held a performance edge. This led to prices around $319 and $249 for the R9 290X and R9 290 as recently as the GTX 980 Ti launch, for example. With the 300 series, however, AMD now wants to bring card prices up to $429 and $329 respectively, which means that AMD needs to be able to do something special to justify a price hike of roughly $100. Outside of RAM and NAND flash memory, where cyclical commodity pressures lead to gluts and shortages of supply, driving up the price of an existing computer component is very difficult to do, so how AMD is going to achieve this is of great interest.

Anyhow, without further ado, let’s take a look at AMD’s plans for the R9 390 series.

AMD R9 390 Series (Hawaii) Specification Comparison
                       AMD Radeon   AMD Radeon   AMD Radeon   AMD Radeon
                       R9 390X      R9 390       R9 290X      R9 290
Stream Processors      2816         2560         2816         2560
Texture Units          176          160          176          160
ROPs                   64           64           64           64
Core Clock             ?            ?            727MHz       662MHz
Boost Clock            1050MHz      1000MHz      1000MHz      947MHz
Memory Clock           6Gbps GDDR5  6Gbps GDDR5  5Gbps GDDR5  5Gbps GDDR5
Memory Bus Width       512-bit      512-bit      512-bit      512-bit
VRAM                   8GB          8GB          4GB          4GB
FP64                   1/8          1/8          1/8          1/8
TrueAudio              Y            Y            Y            Y
Transistor Count       6.2B         6.2B         6.2B         6.2B
Typical Board Power    275W         275W         250W         250W
Manufacturing Process  TSMC 28nm    TSMC 28nm    TSMC 28nm    TSMC 28nm
Architecture           GCN 1.1      GCN 1.1      GCN 1.1      GCN 1.1
GPU                    Hawaii       Hawaii       Hawaii       Hawaii
Launch Date            06/18/15     06/18/15     10/24/13     11/05/13
Launch Price           $429         $329         $549         $399

As with the 290 series, the 390 series is split between two cards, with the difference coming down to clockspeeds and the number of CUs enabled. R9 390X and R9 390 are direct successors to R9 290X and R9 290 in this respect, with the former being a higher clocked part with all 44 CUs (2816 SPs) enabled, while the latter is lower clocked with 40 CUs (2560 SPs) enabled.

The biggest change from a specifications perspective is that AMD has cranked up both the GPU and memory clockspeeds in order to further improve performance. With partners already regularly offering higher-end factory overclocked cards with boost clockspeeds at 1050MHz (or more) for the R9 290X, AMD has essentially crafted a new SKU from this mark for the 390X. Meanwhile the R9 390 sees its boost clockspeed go from 947MHz to a flat 1000MHz, a slightly larger bump up in clockspeeds on both an absolute and relative basis.
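For those keeping score at home, here's a quick sanity check of those clockspeed bumps using the figures from the spec table above (a minimal Python sketch, not anything AMD provided):

```python
# Boost clock bumps from the 290 series to the 390 series,
# in absolute (MHz) and relative (%) terms.
for name, old_mhz, new_mhz in (("R9 390X", 1000, 1050), ("R9 390", 947, 1000)):
    delta = new_mhz - old_mhz
    pct = (new_mhz / old_mhz - 1) * 100
    print(f"{name}: +{delta}MHz ({pct:.1f}%)")

# R9 390X: +50MHz (5.0%)
# R9 390: +53MHz (5.6%)
```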

More surprising is the fact that AMD has increased the memory clockspeeds from 5Gbps to 6Gbps, a full 1Gbps (20%) increase in memory clockspeeds. As you may recall from our R9 290X review, one of the architectural decisions made with Hawaii was to build its memory controllers wider and slower in order to make more efficient use of die space and to keep power consumption in check. The end result was a rather large 512-bit memory bus running at a slower 5Gbps, while on the chip itself Hawaii’s memory controller was smaller than Tahiti’s thanks to the lower memory clockspeed.

Consequently pushing 6Gbps is a surprising move from AMD. Since Hawaii’s memory controllers were not designed to be high clocking, AMD is certainly squeezing out everything they can from Hawaii in the process. The end result however is that AMD now has 20% more memory bandwidth to play with, for a total of 384GB/sec. This is more memory bandwidth than any other GDDR5 card in existence, and in memory bandwidth bottlenecked scenarios, this will definitely be to AMD’s advantage.
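That 384GB/sec figure follows directly from the standard bus width × data rate arithmetic; here's a minimal sketch for reference (nothing here is AMD-specific beyond the spec table figures):

```python
def gddr5_bandwidth_gb_per_sec(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical bandwidth: bus width (in bytes) times per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

print(gddr5_bandwidth_gb_per_sec(512, 6.0))  # R9 390 series: 384.0 GB/sec
print(gddr5_bandwidth_gb_per_sec(512, 5.0))  # R9 290 series: 320.0 GB/sec
```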

Speaking of memory, for the 390 series AMD has also made 8GB configurations the baseline for the series, whereas on the 290 series it was an optional value-added feature for board partners. While one could write a small tome on the matter of memory capacity, especially in light of the fact that the Fury series only has 4GB of memory, ultimately the fact that the 390 series has 8GB now is due to a couple of factors. The first is that 4GB Hawaii cards require 2Gb GDDR5 chips (16 of them across Hawaii's 512-bit bus), a density that is slowly going away in favor of the 4Gb chips used on the PlayStation 4 and many of the 2015 video cards. The other reason is that it allows AMD to exploit NVIDIA's traditional stinginess with VRAM; just as with the 290 series versus the GTX 780/770, this means AMD once again has a memory capacity advantage, which helps to shore up the value of their cards versus what NVIDIA offers at the same price.
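The chip count math here is worth spelling out, since it is what drives the capacity options. A minimal sketch, assuming the standard 32-bit-wide GDDR5 chips Hawaii uses:

```python
# Hawaii's 512-bit bus divided among 32-bit GDDR5 chips fixes the chip count,
# so total capacity is dictated entirely by per-chip density.
BUS_WIDTH_BITS = 512
CHIP_WIDTH_BITS = 32

chips = BUS_WIDTH_BITS // CHIP_WIDTH_BITS  # 16 chips
for density_gbit in (2, 4):
    capacity_gb = chips * density_gbit / 8  # gigabits -> gigabytes
    print(f"{density_gbit}Gb chips: {chips} x {density_gbit}Gb = {capacity_gb:.0f}GB")

# 2Gb chips: 16 x 2Gb = 4GB  (290 series baseline)
# 4Gb chips: 16 x 4Gb = 8GB  (390 series baseline)
```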

Meanwhile with the above in mind, based on comments from AMD product managers, it sounds like the use of 4Gb chips also plays a part in the memory clockspeed increases we’re seeing on the 390 series. Later generation chips don’t just get bigger, but they get faster and operate at lower voltages as well, and from what we’ve seen it looks like AMD is taking advantage of all of these factors.


Top: R9 Fury X. Bottom: R9 390X/290X

Moving on, let’s talk about power consumption. Both of the 390 series cards have been labeled with a typical board power of 275W. The fact that AMD is actually publishing a number this time around is a welcome change – the TBP for the 290 series wasn't originally published – but absent a review sample this also makes it hard to compare these cards to the 290 series. AMD’s guidance essentially suggests that total power consumption hasn’t come down at all (though perf-per-watt should have gone up), and in fact power consumption may be slightly up. Meanwhile this also happens to be identical to Fury X’s TBP, to give you an idea of how the 390 series compares to the Fury series.

In any case, this guidance gives us little reason to expect that the 390 series' perf-per-watt situation is significantly improved from the 290 series. Consequently, expect to see AMD focus on producing a powerful card and selling it for a good price, as those are the best attributes AMD has to promote the 390 series and drive card sales.

Pricing on the other hand is going to be a mixed bag depending on which direction you’re looking from. As we mentioned a bit ago, AMD’s MSRPs for the 390 series are $329 for the R9 390 and $429 for the R9 390X. These prices essentially put the 390 cards in competition with the GTX 970 and GTX 980, with the R9 390 lining up perfectly with the former, and the R9 390X undercutting the latter by some $70. With the 290 series on the other hand AMD essentially had to price the 290X at GTX 970 levels and the 290 below that, so these 390 series prices represent a significant increase over the 290 series.

Ultimately AMD is banking on the improved performance of the 390 series to justify these higher price tags and allow them to recover on margins. Unfortunately we don’t have any review samples at this time, but if the performance is right then this would be a significant coup for AMD, as it would improve their situation and at the same time put significant new pressure on NVIDIA, which is the kind of situation that AMD thrives in.


PowerColor PCS+ R9 390X

One thing AMD won’t have to worry about is the card design situation. With the 290 series AMD shot themselves in the foot with an underperforming reference cooler, and although the underlying chip hasn’t really changed from 290 to 390, leaving card designs entirely in the hands of their partners means that AMD shouldn’t have a repeat of that aspect of the 290 series launch. All of the air-cooled 390 designs will be open air coolers with 2 or 3 fans, and usually the same designs as partners already used for their 290 series cards. Open air coolers do not solve the heat dissipation problem on their own – that 275W of heat needs to go somewhere – but at the very least for AMD it’s going to make for a quieter situation.

Last but certainly not least however, we want to talk a bit more about the performance optimizations AMD has been working on for the 390 series. While we’re still tracking down more details on just what changes AMD has made, AMD has told us that there are a number of small changes from the 290 series to the 390 series that should improve performance by several percent on a clock-for-clock, apples-to-apples basis. That means that along with the 20% memory clockspeed increase and 5% GPU clockspeed increase, we should see further performance improvements from these lower-level changes, which is also why we can’t just overclock a 290X and call it a 390X.
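To put those headline numbers in context, here's a rough sketch of what the clockspeed increase alone buys on paper, using the standard shader count × 2 FLOPs × clock arithmetic (theoretical peaks only; the clock-for-clock tweaks AMD describes would come on top of this):

```python
def fp32_tflops(stream_processors: int, boost_mhz: int) -> float:
    # GCN executes up to 2 FP32 FLOPs (a fused multiply-add) per SP per clock.
    return stream_processors * 2 * boost_mhz / 1_000_000

print(f"R9 390X: {fp32_tflops(2816, 1050):.2f} TFLOPS")  # ~5.91
print(f"R9 290X: {fp32_tflops(2816, 1000):.2f} TFLOPS")  # ~5.63
```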

So what are those changes? From our discussions with AMD, we have been told that the clock-for-clock performance gains come from a multitude of small factors, things the company has learned about and been able to optimize over the last two years. AMD did not name all of those factors, but a couple of optimizations in particular were pointed out.

The first optimization is that AMD has gone back and refined their process for identifying the operating voltages of Hawaii chips, with the net outcome being that Hawaii voltages should be down a hair, reducing power and/or thermal throttling. The second optimization mentioned is that the 4Gb GDDR5 chips being used offer better timings than the 2Gb chips, which, depending on the timings, can improve various aspects of memory performance. Most likely AMD has reinvested these timing gains into improving the memory clockspeeds, but until we get our hands on a 390X card we won’t know for sure.


Sapphire R9 390X Tri-X

Shifting gears for a moment to marketing/promotion, expect to see AMD promote the 390 series on the basis of 4K gaming and VR. As far as VR goes there’s a good reason for this, as the R9 290 was the AMD card used in the Oculus Rift recommended hardware specification. So as long as developers stick to Oculus’s recommendations, the 390 series will deliver the full performance necessary for VR gaming. The 4K angle, meanwhile, is largely a rehash of AMD’s marketing from the 290 series, and not unlike the 380, I believe AMD is overshooting, especially as the Fury X should be a far better card for 4K gaming. But we’ll have to see what the performance numbers are like with retail cards.

Finally, to wrap things up we have our standard price comparison table below. Overall, while not every AMD card perfectly maps to an existing NVIDIA card, for the most part AMD is aiming the 300 series to go directly against the bulk of NVIDIA’s 900 series lineup. With the exception of the R9 390X AMD is not attempting to significantly undercut NVIDIA’s pricing, so it will be interesting to see if the performance of these refreshed cards is high enough to give the 300 series the competitive footing it needs. Otherwise AMD will need to deal with the spoiler effect of the 200 series, especially the R9 290X and R9 290.

Summer 2015 GPU Pricing Comparison
AMD                              Price   NVIDIA
Radeon R9 Fury X                 $649    GeForce GTX 980 Ti
                                 $499    GeForce GTX 980
Radeon R9 390X                   $429
Radeon R9 290X / Radeon R9 390   $329    GeForce GTX 970
Radeon R9 290                    $250
Radeon R9 380                    $200    GeForce GTX 960
Radeon R7 370 / Radeon R9 270    $150
                                 $130    GeForce GTX 750 Ti
Radeon R7 360                    $110
Comments

  • Pantsu - Sunday, June 21, 2015 - link

    It seems to me a foolish hope to think they'd be ahead of schedule, given that HBM1 actually started volume production in Q1 2015 instead of what was marked on this product map. If anything, HBM2 is further delayed.
  • colhoop - Thursday, June 18, 2015 - link

    Why do people always talk about NVIDIA's software environment as if it is some major advantage they have over AMD? It seems to me that they are both just as good, and in my experience with NVIDIA and AMD I've had fewer driver issues with AMD, believe it or not.

    But yeah, the Fury X has benchmarks released by AMD using Far Cry 4 at 4K Ultra settings, and it outperforms the Titan X by more than 10 fps on average. I know the benchmark isn't as reliable since it was released by AMD obviously, but still, it really makes you wonder. I definitely think it will outperform the 980 Ti, especially since AMD claims it can outperform the Titan X, but of course we shall see :)
  • Pantsu - Thursday, June 18, 2015 - link

    Nvidia certainly spends a lot more money and effort on their software currently.
    - They have more timely driver updates aligned with big game releases
    - SLI support is much better than AMD's sparse and late updates to CF
    - GeForce Experience works much better than AMD's third party equivalent
    - Better control panel features like DSR, adaptive V-sync. AMD's efforts tend to be like half-baked copies of these. AMD hasn't come up with anything truly new in a long while and all I can do is smh at 'new features' like FRTC that's so simple it should've been in the control panel a decade ago.

    I do think for single GPU driver performance and stability there isn't much of a difference between the two, regardless of how many driver updates Nvidia does. Actually the latest Nvidia drivers have been terrible with constant TDR crashes for a lot of people. But that's anecdotal, both sides have issues at times, and on average both have ok drivers for single GPU. It's the above mentioned things that push Nvidia to the top imo.
  • xthetenth - Thursday, June 18, 2015 - link

    I like people talking about how AMD didn't get drivers out for Witcher 3 immediately and ignore that NV's drivers were incredibly flaky and they needed to be reminded Kepler cards exist.
  • Zak - Thursday, June 18, 2015 - link

    What? Zero issues playing Witcher 3 since day one.
  • blppt - Thursday, June 18, 2015 - link

    "I like people talking about how AMD didn't get drivers out for Witcher 3 immediately and ignore that NV's drivers were incredibly flaky and they needed to be reminded Kepler cards exist."

    Pfft....to this DAY the Crossfire support in W3 is terrible, and ditto for GTA5. The former is a TWIMTBP title, the latter is not---it even has AMD CHS tech in it. I run Kepler Titan Blacks and 290X(s) and there is no question NVIDIA's drivers are far, far better in both games. Even the launch day Witcher 3 drivers are superior to AMD's half-assed May 27th 15.5 betas, which haven't been updated since.

    For single cards, I'd agree, AMD drivers are almost as good as Nvidia's, except those Gameworks titles that need to be reoptimized by AMD's driver team.

    But there isnt even a question that Nvidia gets betas out much, much quicker and more effectively than AMD.

    And if you aren't into betas, heck, AMD hasn't released an OFFICIAL R9 2xx driver since December, LOL. Which is what annoys me about this Fury launch---once again, AMD puts out this awesome piece of hardware, and they've been neglecting their current parts' drivers for months. What good is the greatest videocard on the planet (Fury X) if the drivers are rarely and poorly updated/optimized?
  • chizow - Thursday, June 18, 2015 - link

    @xthetenth - complete rubbish, while AMD fanboys were boycotting the game over PC only features, Nvidia fans were enjoying the game on day 1, courtesy of Nvidia who gave the game away to new GeForce owners.

    How is CF+AA doing in Witcher 3 btw? Oh right, still flaky and broken.
  • Yojimbo - Thursday, June 18, 2015 - link

    I am mostly referring to their growing investment in gaming library middleware, i.e., GameWorks.
  • TheJian - Thursday, June 18, 2015 - link

    Have you heard of Cuda, Gameworks or DAY1 drivers for game releases? You seem to be oblivious to the fact that cuda runs on 200+ pro apps and is taught in 500+ universities. Never mind the fact that NV releases drivers constantly for games when they ship, not 4-6 months (sometimes longer) later. You are aware the last AMD WHQL driver was Dec correct?

    http://support.amd.com/en-us/download/desktop?os=W...
    Dec 8, is NOT GOOD. They can't even afford to put out a WHQL driver every 6 months now. Get real. Nvidia releases one or two EACH MONTH. And no, I don't believe you have more problems with NV drivers ;) I say that as a radeon 5850 owner currently :)

    AMD's R&D has been dropping for 4yrs, while NV's has gained and now is more than AMD with less products. Meaning NV's R&D is GREATER and more FOCUSED on gpu/drivers. Passing on consoles was the best thing NV did in the last few years, as we see what it has done to AMD R&D and lack of profits.

    AMD needs new management. Hopefully Lisa Su is that person, and ZEN is the right direction. Focus on your CORE products! APU's don't make squat - neither do consoles at the margins they made to get the deals. There was a VERY good reason NV said exactly that. They passed because it would rob from CORE PRODUCTS. We see it has for AMD. It hasn't just robbed from hardware either. Instead of approaching companies like CD Projekt for Witcher 3 to add TressFX 2+yrs ago, they wait until the last 2 months then ask...ROFL. That is lack of funding then excuses why perf sucks and complaints about hairworks killing them. An easy fix in a config/profile for the driver solves tessellation for both sides (only maxwell can handle the load) so it's a non issue anyway, but still AMD should have approached these guys the second they saw wolves on the screen 2+yrs ago showing hairworks.

    http://www.tomshardware.com/reviews/amd-radeon-r9-...
    Check the pro results...AMD's new cards get a total smackdown, 3 of the 5 are by HUGE margins. Showcase, Maya, Catia all massive losses. Note you'd likely see the same in Adobe apps (premiere, AE, not sure about the rest) since they use Cuda. There is a good reason nobody tests Adobe and checks the cuda box for NV vs. OpenCL for AMD. ;) There is a reason Anandtech chooses Sony, which sucks on Nvidia (google it). They could just as easily test Adobe with Cuda vs. AMD with Sony vegas. But NOPE. Don't expect an AMD portal site to run either of these tests...LOL. Even toms won't touch it, or even respond to questions about why they don't do it in the forums :(
  • chizow - Thursday, June 18, 2015 - link

    @colhoop it is largely going to depend on your use cases. For example, GeForce Experience is something many Nvidia users laud because it just works, makes it easy to maximize game settings, get new drivers, record compressed video. Then you have drivers, driver-level features (FXAA, HBAO+, Vsync that works), day 1 optimizations that all just work. I've detailed above some of the special reqs I've come to expect from Nvidia drivers to control 3D, SLI, AA. And the last part is just advertised driver features that just work. G-Sync/SLI, low power mode while driving multiple monitors, VSR, Optimus all of these just work. And finally you have the Nvidia proprietary stuff, 3D Vision, GameWorks, PhysX etc. Amazing if you use them, if you don't, you're not going to see as much benefit or difference.
