Launching This Week: Radeon R9 280X

The highest performing part of today’s group of launches will be AMD’s Radeon R9 280X. Based on the venerable Tahiti GPU, the R9 280X is the 6th Tahiti SKU overall, and the 3rd to use a fully enabled version of the chip.

AMD GPU Specification Comparison

| | Asus Radeon R9 280X DCU II TOP | XFX Radeon R9 280X DD (Ref. Clocked) | AMD Radeon HD 7970 GHz Edition | AMD Radeon HD 7970 |
|---|---|---|---|---|
| Stream Processors | 2048 | 2048 | 2048 | 2048 |
| Texture Units | 128 | 128 | 128 | 128 |
| ROPs | 32 | 32 | 32 | 32 |
| Core Clock | 970MHz | 850MHz | 1000MHz | 925MHz |
| Boost Clock | 1070MHz | 1000MHz | 1050MHz | N/A |
| Memory Clock | 6.4GHz GDDR5 | 6GHz GDDR5 | 6GHz GDDR5 | 5.5GHz GDDR5 |
| VRAM | 3GB | 3GB | 3GB | 3GB |
| Typical Board Power | >250W? | 250W | 250W | 250W |
| Width | Double Slot | Double Slot | Double Slot | Double Slot |
| Length | 11.25" | 11" | N/A | N/A |
| Warranty | 3 Years | Lifetime | N/A | N/A |
| Launch Date | 10/11/13 | 10/11/13 | 06/22/12 | 01/09/12 |
| Launch Price | $309 | $329? | $499 | $549 |

In a nutshell, the R9 280X is designed to sit somewhere in between the original 7970 and the 7970 GHz Edition. For memory it has the same 3GB of 6GHz GDDR5 as the 7970GE, while on the GPU side it has PowerTune Boost functionality like the 7970GE, but at lower clockspeeds. At its peak we’re looking at 1000MHz for the boost clock on R9 280X versus 1050MHz on the 7970GE. Stranger yet is the base clock, which is set at just 850MHz, 75MHz lower than the 7970’s sole GPU clock of 925MHz and 150MHz lower than the 7970GE’s base clock. AMD wasn’t able to give us a reason for this unusual change, but we believe it’s based on some kind of balance between voltages, yields, and intended power consumption.

With that in mind, even with the lower base clock the R9 280X is a boost-enabled part, so it will have no problem outperforming the original 7970, as we’ll see in our performance section. Between the higher memory clock and boost being virtually always active, real world performance is going to be clearly and consistently above the 7970. At the same time, however, performance will be below the 7970GE, and as the latter is slowly phased out it looks like AMD will let its fastest Tahiti configuration go into full retirement, leaving the R9 280X as the fastest Tahiti card on the market.

As an aside, starting with the R9 280X and applicable to all of AMD’s video cards, AMD is no longer advertising the base GPU clockspeed of its parts. The 7970GE for example, one of the only prior boost-enabled parts, was advertised as “1GHz Engine Clock (up to 1.05GHz with boost)”, whereas the 280X and other cards are simply advertised as “Up to 1GHz” or whatever the boost clock may be.

As of press time AMD hasn’t gotten back to us on why this is. There’s really little to say until we have a formal answer, but since these cards are rarely going to reach their highest boost clockspeed (the fact that we can’t see the real clockspeed only further muddles matters) we believe it’s important that both the base clock and boost clock are published side-by-side, the same way as AMD has done it in the past and NVIDIA does it in the present. In that respect at least some of AMD’s partners have been more straightforward, as we’ve seen product fliers that list both clocks.

Getting back to the matter of 280X, let’s put the theoretical performance of the card in perspective. As R9 280X is utilizing a fully enabled Tahiti GPU we’re looking at a full 2048 stream processors organized over 8 CU arrays, paired with 32 ROPs. Compared to the original 7970 this gives R9 280X between 92% and 108% of the 7970’s shader/ROP/texture throughput, and 109% of the memory bandwidth. Or compared to the 7970GE we’re looking at 85% to 95% of the shader/ROP/texture throughput and 100% of the memory bandwidth.
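Those percentages fall directly out of the clockspeed ratios. The sketch below reproduces the arithmetic, assuming (as the article does) that shader/ROP/texture throughput scales linearly with core clock and memory bandwidth with memory clock; real-world performance also depends on how often boost is actually sustained.

```python
# Rough throughput ratios for the R9 280X vs. the 7970 and 7970GE,
# assuming peak rates scale linearly with clockspeed.

def ratio(clock: float, ref_clock: float) -> float:
    """Return clock as a fraction of a reference clock."""
    return clock / ref_clock

# Core clocks (MHz)
r9_base, r9_boost = 850, 1000
hd7970 = 925                          # single clock, no boost
hd7970ge_base, hd7970ge_boost = 1000, 1050

# Shader/ROP/texture throughput tracks the core clock
print(f"vs 7970:   {ratio(r9_base, hd7970):.0%} - {ratio(r9_boost, hd7970):.0%}")
print(f"vs 7970GE: {ratio(r9_base, hd7970ge_base):.0%} - {ratio(r9_boost, hd7970ge_boost):.0%}")

# Memory bandwidth tracks the memory clock (bus width is identical)
print(f"memory vs 7970:   {ratio(6.0, 5.5):.0%}")   # 6GHz vs 5.5GHz GDDR5
print(f"memory vs 7970GE: {ratio(6.0, 6.0):.0%}")   # 6GHz vs 6GHz GDDR5
```

This prints the 92%–108% and 85%–95% ranges quoted above, along with the 109% and 100% memory bandwidth figures.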

Since this is another Tahiti part, TDP hasn’t officially changed from the 7970GE. The official TDP is 250W, and the use of boost should keep actual power consumption rather close to that point, though the lower clockspeeds and lower voltages mean that in practice power draw will be somewhat lower than the 7970GE’s. For idle power AMD isn’t giving out an official number, but it should be in the 10W-15W range.

Moving on, the MSRP on the R9 280X will be $300. This puts the card roughly in the middle of the gulf between NVIDIA’s GeForce GTX 760 and GTX 770, with no direct competition outside of a handful of heavily customized GTX 760 cards. Against AMD’s lineup it will be going up against the outgoing 7970 cards; depending on the card the R9 280X will be anywhere from equal to them to clearly faster, but unlike the 7970s the R9 280X won’t have the Never Settle Forever game bundle attached.

Finally, because the R9 280X is based on the existing Tahiti GPU, this is going to be a purely virtual launch. AMD’s partners will be launching custom designs right out of the gate, and while we don’t have a product list we don’t expect any two cards to be identical. AMD has put together some reference boards utilizing a newly restyled cooler for testing and photo opportunities, but these reference boards will not be sampled or sold. Instead they’ve sent us a pair of retail boards which we’ll go over in the following sections: the XFX Radeon R9 280X Double Dissipation, and the Asus Radeon R9 280X DirectCU II TOP.

Please note that for all practical purposes we’ll be treating the XFX R9 280X DD as our reference 280X board, as it ships at the 280X reference clocks of 850MHz base, 1000MHz boost, 6000MHz VRAM. We expect other retail cards to be similar to the XFX card, although there’s still some outstanding confusion from XFX on whether their card will be a $299 card or not.

Fall 2013 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| | $650 | GeForce GTX 780 |
| | $400 | GeForce GTX 770 |
| Radeon R9 280X | $300 | |
| | $250 | GeForce GTX 760 |
| Radeon R9 270X | $200 | |
| | $180 | GeForce GTX 660 |
| | $150 | GeForce GTX 650 Ti Boost |
| Radeon R7 260X | $140 | |

151 Comments

  • Galidou - Saturday, October 12, 2013 - link

If he really has to whine about high end video card prices, he's new to this, because many generations before, the top of the line was often sold for $800. Anyway, if you simply run 1080p (which most of us do) you can be totally satisfied with a $150-300 video card and two or three graphical options not maxed (you won't notice it unless you stop playing to just look at the graphics), which is quite different from older generations where you had to pay big bucks to run at higher resolution/graphical settings.

I bought a 660ti for $300 when it came out a year ago and I still run everything at very high/max settings in 1080p PERFECTLY. No reason to whine at all nowadays unless you're a kiddo that is new to the gaming industry and pc gaming gear.
  • Etern205 - Monday, October 14, 2013 - link

Many generations before, top-of-the-line graphics cards like the ATi Radeon 9700 Pro, the best card of its time, cost $300, and the 2nd best, the ATi Radeon 9500 Pro, cost just $200.
    High-end cards in the past didn't cost an arm and a leg; you could get one and still have enough to feed yourself for a week. Now they cost an arm and a leg, where you have to starve yourself for a month just to have enough to buy one.
  • hansmuff - Tuesday, July 14, 2015 - link

    The Matrox MGA Millennium 4MB was $549 at launch and became a somewhat legendary performer in DOS VESA modes. That's 1995.
  • bwat47 - Monday, December 9, 2013 - link

    Yeah, a 280x was a steal for 299, excellent card.
  • Anand Lal Shimpi - Tuesday, October 8, 2013 - link

    We're working on it :) AMD gave Ryan very little time to go through four new cards, it's being added in real time here.
  • Sunrise089 - Tuesday, October 8, 2013 - link

    No disrespect Anand, but 'special relationship' with AMD notwithstanding, if they're asking you to have your article up at midnight for a launch but you can't even have product specs available by then I worry the advertising side of things is encroaching a bit into the editorial side.
  • zanon - Tuesday, October 8, 2013 - link

    Have to agree. I've always appreciated in the past that Anandtech would take the time to do reviews right, even if it very, very often meant that they'd come in days or more after the early rush. We've already got plenty of early rush stuff on the net that is of poor quality, please do not go that route. Just do a pipeline piece with early conclusions as you have before. You've got this going up across all twitter/rss/whatever feeds, everyone sees it and comes in, and it's a really poor showing.

    If AMD tells you to hit a certain launch window please kindly tell them to get stuffed or get your hardware earlier next time. If you're letting them rush you to their own schedule that feels like a really bad sign.
  • Anand Lal Shimpi - Tuesday, October 8, 2013 - link

    See the above response, but I'd add: you don't have to worry about us going down the path of lowest common denominator. I hardly think that what was posted here at midnight was even close to fitting that description.
  • Anand Lal Shimpi - Tuesday, October 8, 2013 - link

    er below response :)
  • chizow - Tuesday, October 8, 2013 - link

    Easy guys, it's happened with other non-AMD reviews too in the past, I know other staff writers will often chip in and help with some aspects of the reviews, like tables and graphs, and sometimes the entire piece comes together online in real-time. It's like a big group project or presentation, sometimes it just doesn't go off perfectly on such short deadlines.
