Radeon R7 360, R7 370, & R9 380

Launching today is the bulk of the Radeon 300 series: the so-called “numbered” parts such as the 390, which are categorically distinct from the Fury products. As we mentioned in the introduction, from a volume standpoint these cards are the backbone of AMD’s lineup, and they will be where the majority of AMD’s sales take place. The attention may be on Fiji as AMD’s newest and fastest part, but it’s these cards that will do the bulk of the selling.

To cut right to the chase then, for better or worse all of the numbered parts – 360, 370, 380, and the 390 series – are refresh products based on existing AMD GPUs. The only new GPU AMD is launching for desktop video cards is Fiji for the Fury parts, which leads us to the current situation.

Since the launch of the original GCN 1.0 cards in 2012, AMD has gone about refreshing their lineup in an unusual, piecemeal fashion. All told, AMD has launched three new desktop GPUs in the last two years – Bonaire, Hawaii, and Tonga – and next week Fiji will make it four. So AMD hasn’t stayed idle since 2012, but because they’re only releasing one or two GPUs a year, they nonetheless end up releasing refresh products, as is the case with the 300 series.

AMD Radeon Product Evolution
| Predecessor | GPU | Successor |
|---|---|---|
| Radeon R9 290X | Hawaii | Radeon R9 390X |
| Radeon R9 285 | Tonga | Radeon R9 380 |
| Radeon R7 265 / Radeon HD 7850 | Pitcairn | Radeon R7 370 |
| Radeon R7 260 | Bonaire | Radeon R7 360 |

Along these lines, because AMD is not releasing new GPUs in this range, the company is also forgoing retail reference cards. Reference boards were built for internal testing and promotional purposes, but all of the cards being launched today will be custom partner cards tailored to the 300 series specifications, a number of which will be similar to existing 200 series cards. This means AMD’s partners are offering a significant variety of cards right off the bat, in what is sometimes called a pure virtual launch.

However, because AMD isn’t producing any retail reference cards, the company has also opted not to sample the press ahead of time for reviews. And while we’ll be looking at partner cards over the coming weeks, for today’s launch we do not have any cards or benchmarks in hand, and for the immediate future our focus is going to be on Fury.

So with that out of the way, let’s get started on AMD’s new Radeon 300 series lineup, starting from the bottom and working our way up.

AMD R7 360 (Bonaire) Specification Comparison
|  | AMD Radeon R7 360 | AMD Radeon R7 260X | AMD Radeon R7 260 | AMD Radeon HD 7790 |
|---|---|---|---|---|
| Stream Processors | 768 | 896 | 768 | 896 |
| Texture Units | 48 | 56 | 48 | 56 |
| ROPs | 16 | 16 | 16 | 16 |
| Boost Clock | 1050MHz | 1100MHz | 1000MHz | 1000MHz |
| Memory Clock | 6.5Gbps GDDR5 | 6.5Gbps GDDR5 | 6Gbps GDDR5 | 6Gbps GDDR5 |
| Memory Bus Width | 128-bit | 128-bit | 128-bit | 128-bit |
| VRAM | 2GB | 2GB | 1GB | 1GB |
| FP64 | 1/16 | 1/16 | 1/16 | 1/16 |
| TrueAudio | Y | Y | Y | Y |
| Transistor Count | 2.08B | 2.08B | 2.08B | 2.08B |
| Typical Board Power | 100W | 115W | 95W | 85W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.1 | GCN 1.1 | GCN 1.1 | GCN 1.1 |
| GPU | Bonaire | Bonaire | Bonaire | Bonaire |
| Launch Date | 06/18/15 | 10/11/13 | 01/14/14 | 03/22/13 |
| Launch Price | $109 | $139 | $109 | $149 |

At the bottom of the 300 series stack is the R7 360. This is a cut-down Bonaire-based card with 12 of Bonaire’s 14 CUs active, for a total of 768 SPs. It is essentially the replacement for the R7 260, which was similarly a cut-down Bonaire part.

Like the other refresh cards in the 300 series, the R7 360 pushes the envelope just a bit harder to offer an incremental improvement in performance over its predecessor. AMD has turned up the GPU and memory clockspeeds slightly, from 1000MHz/6Gbps on the R7 260 to 1050MHz/6.5Gbps on the R7 360. These bumps essentially close the gap between this card and AMD’s top-tier Bonaire card, the R7 260X, so everything here is within what we’ve seen the best Bonaire designs do before.
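
For a sense of what the memory clock increase buys, peak GDDR5 bandwidth is simply the effective data rate times the bus width. A quick illustrative calculation using the spec table’s figures:

```python
# Back-of-the-envelope peak memory bandwidth, using the spec table's
# figures: effective data rate (Gbps per pin) x bus width / 8 bits.
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(6.0, 128))  # R7 260:  96.0 GB/s
print(bandwidth_gbs(6.5, 128))  # R7 360: 104.0 GB/s (~8% more)
```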

At a typical board power (TBP) of 100W, the R7 360 is spec’d to draw just a bit more power than its predecessor, in line with its clockspeed increases. Meanwhile AMD is telling us that we should see cards with 2GB of VRAM, and from the product lists I’ve seen in advance I’m expecting that this will be the default, at-MSRP configuration for this product. And with an MSRP of $109, it will be a drop-in replacement for the R7 260 from a pricing standpoint.

Finally, as AMD’s entry-level video card, expect to see AMD targeting the R7 360 at budget buyers and/or MOBA/F2P gamers. Both AMD and NVIDIA are well aware of how much money DOTA 2, League of Legends, and other games in that space have been able to pull in, and they want a piece of that pie themselves. All of these games are designed to run decently on iGPUs, so AMD’s marketing focus is on being able to play these games at higher framerates and with better image quality.

AMD R7 370 (Pitcairn) Specification Comparison
|  | AMD Radeon R7 370 | AMD Radeon R9 270 | AMD Radeon R7 265 | AMD Radeon HD 7850 |
|---|---|---|---|---|
| Stream Processors | 1024 | 1280 | 1024 | 1024 |
| Texture Units | 64 | 80 | 64 | 64 |
| ROPs | 32 | 32 | 32 | 32 |
| Core Clock | ? | 900MHz | 900MHz | 860MHz |
| Boost Clock | 975MHz | 925MHz | 925MHz | N/A |
| Memory Clock | 5.6Gbps GDDR5 | 5.6Gbps GDDR5 | 5.6Gbps GDDR5 | 4.8Gbps GDDR5 |
| Memory Bus Width | 256-bit | 256-bit | 256-bit | 256-bit |
| VRAM | 2GB | 2GB | 2GB | 2GB |
| FP64 | 1/16 | 1/16 | 1/16 | 1/16 |
| TrueAudio | N | N | N | N |
| Transistor Count | 2.8B | 2.8B | 2.8B | 2.8B |
| Typical Board Power | 110W | 150W | 150W | 150W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.0 | GCN 1.0 | GCN 1.0 | GCN 1.0 |
| GPU | Pitcairn | Pitcairn | Pitcairn | Pitcairn |
| Launch Date | 06/18/15 | 11/13/13 | 02/13/14 | 03/05/12 |
| Launch Price | $149 | $179 | $149 | $249 |

Up next is the Radeon R7 370. Based on AMD’s venerable Pitcairn GPU, this card is essentially a rework of the R7 265, AMD’s sole cut-down 16 CU (1024 SP) Pitcairn card in the 200 series.

Compared to the R7 265, the R7 370 sees a 5% GPU clockspeed bump, up from the R7 265’s 925MHz boost clock to an “engine clock” (which we’re assuming is a boost clock) of 975MHz for the R7 370. Meanwhile memory speeds remain unchanged at 5.6Gbps, which, after AMD’s Pitcairn board rework for the 200 series, is about as fast as Pitcairn’s memory controllers can reliably drive.

Perhaps the most interesting change here is that AMD’s official TBP is spec’d for 110W; this is sharply lower than the official 150W TBP for the R7 265.  Truth be told I question whether this number is grounded in reality – even with some hearty BIOS-level optimizations, that’s a 26% reduction – but we’ll have to see what the retail cards are like once we can get our hands on them.
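
For what it’s worth, the arithmetic behind that figure checks out; a one-line sanity check:

```python
# Checking the claimed drop: 150W (R7 265) down to 110W (R7 370).
print((150 - 110) / 150)  # ~0.267, i.e. the ~26% reduction cited above
```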


PowerColor PCS+ R7 370

The big issue for the R7 370 right now is that Pitcairn, venerable as it is, is also simply old. It’s the one GCN 1.0 GPU in the 300 series, and that means it lacks all of the feature updates and optimizations that have come since. Consequently, compared to newer products it doesn’t offer fine-grained clockspeed domains, FreeSync or TrueAudio support, hardware decoding of video above 1080p, or performance optimizations like delta color compression.
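
To give a rough sense of why delta color compression matters for bandwidth, below is a deliberately simplified sketch of the general idea: store one full-width anchor value per tile plus small per-pixel deltas, which fit in fewer bits when the tile is smooth. This is an illustration of the concept only, not AMD’s actual hardware scheme:

```python
# Conceptual sketch of delta color compression: one 32-bit anchor
# pixel per tile plus fixed-width deltas. Smooth tiles compress;
# noisy tiles fall back to raw storage. Not AMD's real algorithm.

def compressed_tile_bits(tile, delta_bits=4):
    """Bits needed to store a tile as anchor + fixed-width deltas.

    Falls back to uncompressed (32 bits/pixel) when any delta
    doesn't fit in the delta budget."""
    anchor = tile[0]
    lo, hi = -(2 ** (delta_bits - 1)), 2 ** (delta_bits - 1) - 1
    if all(lo <= p - anchor <= hi for p in tile):
        return 32 + (len(tile) - 1) * delta_bits  # anchor + deltas
    return 32 * len(tile)  # incompressible tile stored raw

smooth_tile = [100, 101, 102, 101, 100, 99, 100, 101]  # gradient-like
noisy_tile = [100, 200, 13, 240, 7, 180, 66, 250]      # high-frequency

print(compressed_tile_bits(smooth_tile))  # 60 bits vs. 256 raw
print(compressed_tile_bits(noisy_tile))   # 256 bits, stored raw
```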

Pitcairn continues to be a workhorse for AMD in large part because it does its job so well, but the differences between it and the newer GCN GPUs become more readily apparent with each generation. Along with lacking FreeSync support, it’s also the only GPU in the 300 series lineup not to support DirectX 12 feature level 12_0 (instead it’s an 11_1 part). None of these feature deficits are deal-breakers, but they nonetheless highlight the fact that, like the workhorses that have come before it (e.g. G92), Pitcairn is on its last generation.

Anyhow, AMD’s specifications call for the R7 370 to be offered in both 2GB and 4GB configurations. Based on the early pricing we’ve seen, it looks like the 2GB card will be the common configuration, with 4GB being a less common option. Expect to see 2GB cards hit the market at $149, the same price as the R7 265 before it, with 4GB pricing at around $169.

AMD R9 380 (Tonga/Tahiti) Specification Comparison
|  | AMD Radeon R9 380 | AMD Radeon R9 285 | AMD Radeon R9 280 | AMD Radeon HD 7950 w/Boost |
|---|---|---|---|---|
| Stream Processors | 1792 | 1792 | 1792 | 1792 |
| Texture Units | 112 | 112 | 112 | 112 |
| ROPs | 32 | 32 | 32 | 32 |
| Core Clock | N/A | N/A | 827MHz | 850MHz |
| Boost Clock | 970MHz | 918MHz | 933MHz | 925MHz |
| Memory Clock | 5.5Gbps GDDR5 | 5.5Gbps GDDR5 | 5Gbps GDDR5 | 5Gbps GDDR5 |
| Memory Bus Width | 256-bit | 256-bit | 384-bit | 384-bit |
| VRAM | 2GB | 2GB | 3GB | 3GB |
| FP64 | 1/16 | 1/16 | 1/4 | 1/4 |
| TrueAudio | Y | Y | N | N |
| Transistor Count | 5.0B | 5.0B | 4.31B | 4.31B |
| Typical Board Power | 190W | 190W | 250W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.2 | GCN 1.2 | GCN 1.0 | GCN 1.0 |
| GPU | Tonga | Tonga | Tahiti | Tahiti |
| Launch Date | 06/18/15 | 09/02/14 | 03/04/14 | 08/14/12 |
| Launch Price | $199 | $249 | $279 | $329 |

Going up the ladder once more, we have the R9 380. This card is based on AMD’s Tonga GPU, their first GCN 1.2 GPU, and is the descendant of the R9 285, which was launched back in September.

Like the other cards in today’s launch, the R9 380 has seen a spec bump compared to its predecessor. The boost clock is up by about 6%, from 918MHz to 970MHz, while the memory clock is unchanged at 5.5Gbps. No other changes have been made; the number of CUs (28), ROPs (32), and memory controllers (4) all remain identical to the R9 285.

Update 06/18: We have since corrected the memory clocks for the R9 380. AMD’s marketing material lists two different values: 5.5Gbps and 5.7Gbps. The original guide we based our information on listed just 5.7Gbps, so that is the value we initially used. As we always list the minimum specifications for a product, we have corrected this to 5.5Gbps. Our apologies for the confusion.

Unfortunately this also means that we’ll be waiting for another day to see what a fully-enabled Tonga would be like. AMD has to date not shipped a fully enabled chip; it has since become clear that Tonga does in fact have 6 memory controllers (for a 384-bit bus) as opposed to the 4 we see active here, and on the desktop in particular we’ve never seen a part with all 32 CUs (2048 SPs) enabled. The closest thing to a full Tonga remains the R9 M295X in the Apple iMac. Despite being the oldest of the GCN 1.2 chips, Tonga for now also remains the most mysterious.
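
As an aside, it’s easy to see why a fully-enabled Tonga remains tantalizing: at the same data rate, the wider bus would carry 50% more bandwidth. A quick comparison (the 384-bit figure is hypothetical, since no such desktop card exists):

```python
# Peak bandwidth = data rate (Gbps per pin) x bus width / 8 bits.
print(5.5 * 256 / 8)  # R9 380 as shipped: 176.0 GB/s
print(5.5 * 384 / 8)  # fully-enabled Tonga (hypothetical): 264.0 GB/s
```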


Sapphire Nitro R9 380

Moving on, compared to the R9 285, the TBP of the R9 380 holds at 190W; whatever power optimizations AMD has been able to make have essentially been consumed by the clockspeed increases. In the 300 series lineup this makes the R9 380 AMD’s premier sub-225W card, as these cards can be driven by a pair of 6-pin connectors, or on more forward-looking models, a single 8-pin connector.
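
For those unfamiliar with why 225W is the magic number here, it falls out of the PCIe power delivery specs; a quick sketch of the arithmetic:

```python
# PCIe power budget: the x16 slot is rated for 75W, a 6-pin plug
# for 75W, and an 8-pin plug for 150W.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150
print(SLOT + 2 * SIX_PIN)  # 225W: two 6-pin connectors
print(SLOT + EIGHT_PIN)    # 225W: a single 8-pin connector
```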

As far as memory configurations and pricing goes, like the other 300 series cards, R9 380 gets an optional memory bump. The base MSRP of $199 is for the 2GB card, while you can expect to see 4GB cards for $219-$229 or so.  Since AMD’s partners are also offering 4GB R7 370s, a 4GB R9 380 is not all that surprising, and it gives buyers an option for a card that's going to be a little more future-proof than a 2GB card in 2015.

Finally, expect to see AMD pitch the R9 380 as a 1440p card. While we haven’t benchmarked this card yet, based on what we’ve seen with the R9 285 I’m expecting similar results, in which case, as with the R9 285, AMD is likely overshooting in its expectations. In 2015 Tonga is a good GPU for 1080p gaming, but even with 4GB I’m not sure the performance is there to keep 1440p with high quality settings playable over the next 2-3 years.
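
Some quick math on why 1440p is a meaningfully heavier workload than 1080p:

```python
# Pixel count scales with both dimensions, so the step from
# 1920x1080 to 2560x1440 is ~78% more pixels per frame.
print(1920 * 1080)                    # 2,073,600 pixels
print(2560 * 1440)                    # 3,686,400 pixels
print((2560 * 1440) / (1920 * 1080))  # ~1.78
```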

Overall it’s interesting to note that all of the first three 300 series cards have ended up being close (if not identical) to the 300 series OEM cards AMD quietly launched back at the start of May. The 360 and 370 are identical to their OEM counterparts, while the retail 380 is clocked a bit higher than its OEM counterpart. The upside here, at least, is that the OEM cards don’t significantly deviate from the retail cards, which is a great relief given what we’ve seen in some previous generations.

But like the OEM cards, this also means that there are some obvious gaps in AMD’s current lineup from a hardware standpoint. None of these first three parts uses a fully-enabled GPU; each and every card is cut down in some fashion. From a pricing standpoint this is likely a good thing for AMD and its partners since it ensures clear tiers of cards, but when a GPU gets only a single entry in a lineup, that entry is typically the fully-enabled configuration, not a cut-down one. I will be shocked if we don’t eventually see X-suffixed cards for some of these series, but then again we never did get an R9 285X…

In any case, this means that the coexistence of the 200 and 300 series will be an odd one to begin with. Along with the generally lower pricing of the 200 series, you can find fully-enabled Pitcairn and Bonaire cards there that don’t yet exist in the 300 series lineup, so the relative performance of the two series is a bit of a mess for the moment.

Comments

  • Pantsu - Sunday, June 21, 2015

    It seems to me a foolish hope to think they'd be ahead of schedule given HBM1 actually started volume production in Q1 2015 instead of what was marked on this product map. If anything, HBM2 is further delayed.
  • colhoop - Thursday, June 18, 2015

    Why do people always talk about NVIDIA's software environment as if it is some major advantage they have over AMD? It seems to me that they are both just as good, and in my experience with NVIDIA and AMD, I've had fewer driver issues with AMD, believe it or not.

    But yeah the Fury X has benchmarks released by AMD using Far Cry 4 4k Ultra Settings and it outperforms the Titan X by more than 10 average fps. I know the benchmark isn't as reliable since it was released by AMD obviously but still, it really makes you wonder. I definitely think it will outperform the 980ti especially if AMD claims it can outperform the Titan X but of course we shall see :)
  • Pantsu - Thursday, June 18, 2015

    Nvidia certainly spends a lot more money and effort on their software currently.
    - They have more timely driver updates aligned with big game releases
    - SLI support is much better than AMD's sparse and late updates to CF
    - GeForce Experience works much better than AMD's third party equivalent
    - Better control panel features like DSR, adaptive V-sync. AMD's efforts tend to be like half-baked copies of these. AMD hasn't come up with anything truly new in a long while and all I can do is smh at 'new features' like FRTC that's so simple it should've been in the control panel a decade ago.

    I do think for single GPU driver performance and stability there isn't much of a difference between the two, regardless of how many driver updates Nvidia does. Actually the latest Nvidia drivers have been terrible with constant TDR crashes for a lot of people. But that's anecdotal, both sides have issues at times, and on average both have ok drivers for single GPU. It's the above mentioned things that push Nvidia to the top imo.
  • xthetenth - Thursday, June 18, 2015

    I like people talking about how AMD didn't get drivers out for Witcher 3 immediately and ignore that NV's drivers were incredibly flaky and they needed to be reminded Kepler cards exist.
  • Zak - Thursday, June 18, 2015

    What? Zero issues playing Witcher 3 since day one.
  • blppt - Thursday, June 18, 2015

    "I like people talking about how AMD didn't get drivers out for Witcher 3 immediately and ignore that NV's drivers were incredibly flaky and they needed to be reminded Kepler cards exist."

    Pfft....to this DAY the Crossfire support in W3 is terrible, and ditto for GTA5. The former is a TWIMTBP title, the latter is not---it even has AMD CHS tech in it. I run Kepler Titan Blacks and 290X(s) and there is no question NVIDIA's drivers are far, far better in both games. Even the launch day Witcher 3 drivers are superior to AMD's half-assed May 27th 15.5 betas, which haven't been updated since.

    For single cards, I'd agree, AMD drivers are almost as good as Nvidia's, except those Gameworks titles that need to be reoptimized by AMD's driver team.

    But there isn't even a question that Nvidia gets betas out much, much quicker and more effectively than AMD.

    And if you aren't into betas, heck, AMD hasn't released an OFFICIAL R9 2xx driver since December, LOL. Which is what annoys me about this Fury launch---once again, AMD puts out this awesome piece of hardware, and they've been neglecting their current parts' drivers for months. What good is the greatest videocard on the planet (Fury X) if the drivers are rarely and poorly updated/optimized?
  • chizow - Thursday, June 18, 2015

    @xthetenth - complete rubbish, while AMD fanboys were boycotting the game over PC only features, Nvidia fans were enjoying the game on day 1, courtesy of Nvidia who gave the game away to new GeForce owners.

    How is CF+AA doing in Witcher 3 btw? Oh right, still flaky and broken.
  • Yojimbo - Thursday, June 18, 2015

    I am mostly referring to their growing investment in gaming library middleware, i.e., GameWorks.
  • TheJian - Thursday, June 18, 2015

    Have you heard of Cuda, Gameworks or DAY1 drivers for game releases? You seem to be oblivious to the fact that cuda runs on 200+ pro apps and is taught in 500+ universities. Never mind the fact that NV releases drivers constantly for games when they ship, not 4-6 months (sometimes longer) later. You are aware the last AMD WHQL driver was Dec correct?

    http://support.amd.com/en-us/download/desktop?os=W...
    Dec 8, is NOT GOOD. They can't even afford to put out a WHQL driver every 6 months now. Get real. Nvidia releases one or two EACH MONTH. And no, I don't believe you have more problems with NV drivers ;) I say that as a radeon 5850 owner currently :)

    AMD's R&D has been dropping for 4yrs, while NV's has gained and now is more than AMD with less products. Meaning NV's R&D is GREATER and more FOCUSED on gpu/drivers. Passing on consoles was the best thing NV did in the last few years, as we see what it has done to AMD R&D and lack of profits.

    AMD needs new management. Hopefully Lisa Su is that person, and ZEN is the right direction. Focus on your CORE products! APU's don't make squat - neither do consoles at the margins they made to get the deals. There was a VERY good reason NV said exactly that. They passed because it would rob from CORE PRODUCTS. We see it has for AMD. It hasn't just robbed from hardware either. Instead of approaching companies like CD Projekt for Witcher 3 to add TressFX 2+yrs ago, they wait until the last 2 months then ask...ROFL. That is lack of funding then excuses why perf sucks and complaints about hairworks killing them. An easy fix in a config/profile for the driver solves tessellation for both sides (only maxwell can handle the load) so it's a non issue anyway, but still AMD should have approached these guys the second they saw wolves on the screen 2+yrs ago showing hairworks.

    http://www.tomshardware.com/reviews/amd-radeon-r9-...
    Check the pro results...AMD's new cards get a total smackdown, 3 of the 5 are by HUGE margins. Showcase, Maya, Catia all massive losses. Note you'd likely see the same in Adobe apps (premiere, AE, not sure about the rest) since they use Cuda. There is a good reason nobody tests Adobe and checks the cuda box for NV vs. OpenCL for AMD. ;) There is reason Anandtech chooses Sony, which sucks on Nvidia (google it). They could just as easily test Adobe with Cuda vs. AMD with Sony vegas. But NOPE. Don't expect an AMD portal site to run either of these tests...LOL. Even toms won't touch it, or even respond to questions about why they don't do it in the forums :(
  • chizow - Thursday, June 18, 2015

    @colhoop it is largely going to depend on your use cases. For example, GeForce Experience is something many Nvidia users laud because it just works, makes it easy to maximize game settings, get new drivers, record compressed video. Then you have drivers, driver-level features (FXAA, HBAO+, Vsync that works), day 1 optimizations that all just work. I've detailed above some of the special reqs I've come to expect from Nvidia drivers to control 3D, SLI, AA. And the last part is just advertised driver features that just work. G-Sync/SLI, low power mode while driving multiple monitors, VSR, Optimus all of these just work. And finally you have the Nvidia proprietary stuff, 3D Vision, GameWorks, PhysX etc. Amazing if you use them, if you don't, you're not going to see as much benefit or difference.
