Original Link: http://www.anandtech.com/show/7481/the-amd-radeon-r9-290-review
The AMD Radeon R9 290 Review
by Ryan Smith on November 5, 2013 12:01 AM EST
With the launch of AMD’s Radeon R9 290X less than 2 weeks ago, the video card marketplace has become very active very quickly. The 290X not only reasserted AMD’s right to fight for the video card performance crown, but in doing so it has triggered an avalanche of pricing and positioning changes that have affected both NVIDIA and AMD.
NVIDIA for their part cut the price of the GTX 780 and GTX 770 to $500 and $330 respectively, repositioning the cards and giving them their first official price cuts since their spring launches. Meanwhile AMD has also made some changes, and although 290X is unaffected for the moment, 290 was affected before it even launched, receiving an arguably significant specification adjustment. Consequently with GTX 780’s price cut being NVIDIA’s counter to 290X, 290 has gone from just being a lower tier Hawaii card to also being AMD’s counter-counter, and in the process has become a somewhat different card than what it was going to be just one week ago.
But before we get ahead of ourselves, let’s start at the beginning. With the successful launch of the 290X behind them, and the equally successful launch of their new flagship GPU Hawaii, AMD is ready to make their next move. Launching today will be the Radeon R9 290, the obligatory lower-tier part for AMD’s new flagship lineup. Making the usual tradeoffs for a lower-tier part, AMD is cutting down on both the number of functional units and the clockspeeds, the typical methods for die harvesting, in exchange for a lower price. Now officially AMD has not announced the Radeon R9 290 in advance, but with listings for it having already gone up on the same day as the 290X, it’s something that everyone has been expecting.
As always we’ll offer a full breakdown of performance and other attributes in the following pages, but before we even begin with that we want to point out that the 290 is going to be one of AMD’s most controversial and hotly debated launches in at least a couple of years. The merits of 290X were already hotly debated in some gaming circles for its noise relative to its performance and competition, and unfortunately 290 is going to be significantly worse in that respect. We’ll have a full rundown in the following pages, but in a nutshell AMD has thrown caution to the wind in the name of maximizing performance.
| AMD GPU Specification Comparison | AMD Radeon R9 290X | AMD Radeon R9 290 | AMD Radeon R9 280X | AMD Radeon HD 7970 |
|---|---|---|---|---|
| Memory Clock | 5GHz GDDR5 | 5GHz GDDR5 | 6GHz GDDR5 | 5.5GHz GDDR5 |
| Memory Bus Width | 512-bit | 512-bit | 384-bit | 384-bit |
| Typical Board Power | ~300W (Unofficial) | ~300W (Unofficial) | 250W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.1 | GCN 1.1 | GCN 1.0 | GCN 1.0 |
Diving right into the hardware specifications, Radeon R9 290 is a bit more powerful than usual for a lower-tier part. AMD has cut the number of CUs from 44 to 40 – disabling 1 CU per Shader Engine – while adjusting the base and boost GPU clockspeeds down from 727MHz and 1000MHz to 662MHz and 947MHz respectively. However AMD has not cut the amount of memory, the memory clockspeed, the memory bus width, or the number of ROPs, leaving those at 4GB of 5GHz GDDR5 on a 512-bit memory bus, with all 64 ROPs intact on the back-end.
As a result the differences between the 290 and 290X are on paper limited entirely to the clockspeed differences and the reduced number of CUs. At their top boost bins this gives 290 95% the clockspeed of 290X, and 91% of the shader hardware, giving 290 100% of 290X’s memory performance, 95% of 290X’s ROP and geometry performance, and 86% of 290X’s shading/texturing performance.
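Those percentages fall straight out of the published specifications. As a quick back-of-the-envelope sketch (using only the spec-sheet numbers quoted above, rounded to whole percentages as in the text):

```python
# Spec-sheet numbers for both cards: boost clock (MHz), CU count, memory bandwidth (GB/sec)
r9_290x = {"boost": 1000, "cus": 44, "bw": 320}
r9_290 = {"boost": 947, "cus": 40, "bw": 320}

clock = r9_290["boost"] / r9_290x["boost"]  # ROP/geometry throughput scales with clock
shading = (r9_290["boost"] * r9_290["cus"]) / (r9_290x["boost"] * r9_290x["cus"])
memory = r9_290["bw"] / r9_290x["bw"]

print(f"ROP/geometry: {clock:.0%}, shading/texturing: {shading:.0%}, memory: {memory:.0%}")
# ROP/geometry: 95%, shading/texturing: 86%, memory: 100%
```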
Compared to AMD’s last generation offerings, the 290 is going to be closer to the 290X than the 7950 was to the 7970. The 290 retains a larger percentage of the 290X’s shader and ROP performance, never mind the fact that the full 320GB/sec of memory bandwidth is being retained. As such, despite the wider price difference this time around, performance on paper is going to be notably closer. Paper will of course be the key word here, as in the case of the 290, more so than for any other card we’ve looked at in recent history, theory and practice will not line up – and as we’ll see, practice favors the 290 by far.
Moving on to power consumption, perhaps because of AMD’s more aggressive specifications for their lower-tier card this time around, power consumption is not dropping at all. AMD is still not throwing us any useful hard numbers, but based on our performance data we estimate the 290 to have a nearly identical TDP to the 290X, leading us to keep it at an unofficial 300W. Lower-tier parts typically trade performance for power consumption, but that will not be the case here. Power consumption will be identical while performance will be down, so efficiency will be slipping and 290 will have all the same power/cooling requirements as 290X.
Meanwhile like the 290X launch, the 290 launch is going to be a hard launch, and a full reference launch at that. As such we’ll be seeing 290 cards go up for sale at the usual retailers today, with all of those cards using AMD’s reference cooler and reference board, itself unchanged from the 290X.
As for pricing and competitive positioning, AMD will be launching the 290 at what we consider to be a very aggressive price of $399. Based on the initial specifications, the performance, and the competition, we had been expecting AMD to launch this at $449, mirroring the launch of the 7950 in the process. But AMD has gone one step further by significantly undercutting both themselves and NVIDIA.
290’s immediate competition on the AMD side will be the $549 290X above it and the $299 280X below it, while on the NVIDIA side the competition will be the $499 GTX 780 above it and the $329 GTX 770 below it. Pricing wise this puts 290 as closer competition to 280X/GTX 770 than it does the high-tier cards, but as we’ll see in our benchmarks AMD is aiming for the top with regards to performance, which will make price/performance comparisons both interesting and frustrating at the same time.
NVIDIA for their part will have their 3 game Holiday GeForce Bundle on the GTX 780 and GTX 770, presenting the same wildcard factor for overall value that we saw with the 290X launch. As always, the value of bundles are ultimately up to the buyer, especially in this case since we’re looking at a rather significant $100 price gap between the 290 and the GTX 780.
Fall 2013 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon R9 290X | $550 | |
| | $500 | GeForce GTX 780 |
| Radeon R9 290 | $400 | |
| | $330 | GeForce GTX 770 |
| Radeon R9 280X | $300 | |
| | $250 | GeForce GTX 760 |
| Radeon R9 270X | $200 | |
| | $180 | GeForce GTX 660 |
| | $150 | GeForce GTX 650 Ti Boost |
| Radeon R7 260X | $140 | |
AMD's Last Minute 290 Revision
As we alluded to at the start of this review, the launch of the 290 series has put both AMD and NVIDIA in a state of rapid response. NVIDIA has needed to make adjustments in response to AMD’s new products, and in turn AMD has needed to make adjustments to their own products to take into account NVIDIA’s adjustments. In a more typical launch cycle this process may be more spread out – and in most cases, all adjustments will happen before the first new video card is even launched – but like AMD’s launch schedule itself all of these adjustments have become compressed.
Nowhere across AMD’s product lineup is this more evident than with the Radeon R9 290, which even by video card launch standards received a very late specification adjustment. Review samples started arriving on Friday, October 25th (the day after the 290X launch) with a planned launch date of Thursday the 31st, only for AMD to push back the launch late on Monday the 28th, just 48 hours before the 290 was meant to go on sale. With hardware already in our hands and many benchmarks already complete, AMD issued a new launch date of today (November 5th), and alongside it changed their competitive positioning, giving the 290 a specification adjustment in the process.
The end result is this: while the 290 was originally slated to go up against the $400 GTX 770, in response to NVIDIA’s price cuts AMD decided to make a run at the newly repriced $500 GTX 780 instead. To do so and to be able to meaningfully challenge the GTX 780, AMD would have to make the 290 faster than its original configuration, which in turn necessitated the specification change. As a result this has been one of the wildest video card launches in some time, easily rivaling the launch of the Radeon HD 4870.
So what’s changed between the original 290 and the new 290 as we know it? With 290 hardware already in the hands of reviewers and being shipped out to retailers for the original launch date, it’s already too late for AMD to change the clockspeeds or CU configurations; all of that was validated and burnt into GPUs and BIOSes long ago. Never mind the fact that AMD has already binned these chips for the existing 290 clockspeeds and voltages, so higher clockspeeds would require a new binning and reduce yields in the process. As a result what hasn’t changed are the formal clockspeeds; 290 was and remains a 947MHz boost clock product with 40 active CUs and 4GB of 5GHz GDDR5.
| Radeon R9 290 Specification Changes | AMD Radeon R9 290 (Revised) | AMD Radeon R9 290 (Original) |
|---|---|---|
| Memory Clock | 5GHz GDDR5 | 5GHz GDDR5 |
| Memory Bus Width | 512-bit | 512-bit |
| Typical Board Power | ~300W (Unofficial) | ~300W (Unofficial) |
| Max Fan Speed | 47% | 40% |
| Intended Competitor | GeForce GTX 780 | GeForce GTX 770 |
What has changed is the default fan speed. As you might recall from our 290X review, the 290X can’t actually sustain its 1000MHz boost clock at its default fan limit of 40%. The amount of heat generated at those clockspeeds and voltages is just too great for the cooler, and as a result the card has to pull back, significantly at times, in order to keep itself within tolerances with the amount of cooling provided at a 40% fan speed. Like the 290X, the 290 as originally specified would also have a default fan speed of 40%, and like the 290X it too would throttle under just about all sustained workloads. Or as AMD likes to put it, the 40% fan speed on the original 290 would have left “untapped performance headroom.”
So for the new 290 as will be reviewed and shipping, AMD has turned up the default fan speed from 40% to 47%, essentially making uber mode the default mode on the 290. Consequently with improved cooling performance the 290 throttles less (if at all), thereby improving its performance despite the other specifications technically remaining the same. Or to put this another way, AMD was able to significantly increase their performance merely by turning up the fan speed and reducing the thermal throttling that was holding back the card’s performance.
Of course there are some very clear, very important tradeoffs for doing this. As this is the same cooler that was on the 290X, the acoustic profile on the 290 is identical to the 290X. That means that at the original 40% default the amount of noise coming from the card is just north of 53dB, a level that’s louder than NVIDIA’s competing cards and just on the edge of reasonable noise levels overall. Going above 40% further improves the cooling performance of the card, but it moves the noise levels into “unreasonable” territory, with the increase to 47% causing an equally significant increase in noise.
To give you an idea of both the performance improvement and the noise increase from AMD’s last minute specification change, we’ve run a selection of our games at 2560x1440 both on the originally specified 290, and the 290 with its new shipping specification.
(Chart: Radeon R9 290 average clockspeeds per game – TW: Rome 2 and others – at the revised 47% fan speed versus the original 40% fan speed.)
AMD’s fan speed adjustment had a significant impact on gaming performance. At 47% the fan speed on the 290 is now fast enough – and just so – to eliminate thermal throttling on the card under any gaming workload. As a result sustained clockspeeds that were anywhere between 870MHz and 662MHz (the base clock) have become 947MHz across the board. Consequently the overall performance increase from doing this is 14%, which is larger than the performance gap between some cards. At 40% the 290 was getting throttled so badly that just by increasing the fan speed AMD was able to essentially reinvent the 290 as a higher performance SKU without changing the hardware.
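To put rough numbers on the clockspeed side of that, a minimal sketch based on the sustained clocks quoted above (this is the clockspeed-only uplift; real framerate gains depend on how GPU-bound each game is):

```python
# At a 40% fan the 290's sustained clocks ranged from 662MHz (base) to 870MHz;
# at 47% it holds the full 947MHz boost clock in every game we tested.
boost = 947
worst, best = 662, 870

uplift_low = boost / best - 1    # least-throttled case
uplift_high = boost / worst - 1  # worst-throttled case
print(f"Clockspeed uplift from the fan change: {uplift_low:.0%} to {uplift_high:.0%}")
# Clockspeed uplift from the fan change: 9% to 43%
```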
But the acoustic costs are equally significant.
We’ll have a full breakdown of the matter later on in our article, but briefly the acoustic cost of increasing the 290’s fan speed from 40% to 47% is a 4dB rise in noise. 290X was just shy of being 6dB louder than GTX 780, and with 290 that gap is now just shy of 10dB, twice the loudness on a perceptual basis. It’s very much a pyrrhic victory for AMD; 290 is now an incredibly fast card for the price, but the noise levels are unreasonable and border on the absurd.
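The “twice the loudness” claim rests on the common psychoacoustic rule of thumb that perceived loudness doubles roughly every +10dB, i.e. a loudness ratio of about 2^(ΔdB/10). A quick sketch:

```python
def loudness_ratio(delta_db: float) -> float:
    """Approximate perceived-loudness ratio for a given dB difference
    (rule of thumb: +10dB is perceived as roughly twice as loud)."""
    return 2 ** (delta_db / 10)

print(f"{loudness_ratio(4):.2f}x")   # 40% -> 47% fan speed bump: ~1.32x as loud
print(f"{loudness_ratio(10):.2f}x")  # 290 vs. GTX 780 gap: ~2x as loud
```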
On a final note on the matter, how AMD is distributing the revised fan speed specification is also a little different than anything we’ve seen them do before. With it being too late to reprogram the first wave of cards – remember, at this point they’re already boxed up for retailers – AMD has programmed the change into their drivers instead. So for the 290 with Catalyst 13.11 Beta v8 and later, AMD will override the BIOS default of 40% and put it at 47%, making this the rumored "AMD performance driver" that some have been expecting to be released alongside the 290. Meanwhile the next wave of cards will presumably come pre-programmed with the new 47% specification.
Meet The Radeon R9 290
Having gone over AMD’s last minute 290 revision, let’s briefly go over the hardware itself. Since the 290 is based on the exact same reference design as last month’s 290X, the hardware is identical with the exception of the changes in the GPU configuration and the aforementioned fan speed changes. As such, compared to the 290X there are no changes on a physical basis.
Starting as always from the top, the 290 measures in at 10.95”. The PCB itself is a bit shorter at 10.5”, but like the 7970 the metal frame/baseplate that is affixed to the board adds a bit of length to the complete card. Meanwhile AMD’s shroud sports a new design, one which is shared across the 200 series. Functionally it’s identical to the 7900 series, being made of similar material and ventilating in the same manner.
Pulling off the top of the shroud, we can see in full detail AMD’s cooling assembly, including the heatsink, radial fan, and the metal baseplate. AMD is still using a covered aluminum block heatsink designed specifically for use in blower designs, which runs most of the length of the card between the fan and PCIe bracket. Connecting the heatsink to the GPU is an equally large vapor chamber cooler, which is in turn mounted to the GPU using AMD’s screen printed, high performance phase change TIM. The radial fan providing airflow is the same 75mm diameter fan we’ve seen in earlier AMD designs, and consequently the total heat capacity of this cooler will be similar, but not identical to earlier designs. With AMD running the 290 at a hotter 95C versus the 80C average of the 7900 series, this same cooler is actually able to move more heat despite being otherwise no more advanced.
Meanwhile for power delivery AMD is using a traditional 5+1 power phase setup, with power delivery being driven by their newly acquired IR 3567B controller. This will be plenty to drive the card at stock, but hardcore overclockers looking to attach the card to water or other exotic cooling will likely want to wait for something with a more robust power delivery system. As for memory, despite the 5GHz memory clockspeed for the 290, AMD has actually equipped the card with everyone’s favorite 6GHz Hynix R0C modules. 16 of these modules are located around the GPU on the front side of the PCB, with thermal pads connecting them to the metal baseplate for cooling.
As for display connectivity, the 290 utilizes AMD’s new reference design of 2x DL-DVI-D, 1x HDMI, and 1x DisplayPort. Compared to the 7900 series AMD has dropped the two Mini DisplayPorts for a single full-size DisplayPort, and brought back the second DVI port. Note that unlike some of AMD’s more recent cards these are both physically and electrically DL-DVI ports, so the card can drive 2 DL-DVI monitors out of the box; the second DL-DVI port isn’t just for show. But as a compromise of this design – specifically, making the second DVI port full DL-DVI – AMD had to give up the second DisplayPort, which is why the full sized DisplayPort is back.
Moving on, AMD’s dual BIOS functionality is back once more, but unlike the 290X the second BIOS will not be serving any defined purpose. Both BIOSes are identical as AMD doesn’t have an uber mode for the 290, so switching between the two will not change the card’s performance in any way. In this setup the second BIOS is reduced to serving as a safety net for end-user BIOS flashing.
Finally, let’s wrap things up by talking about miscellaneous power and data connectors. With AMD having gone with bridgeless (XDMA) Crossfire for the 290 series, the Crossfire connectors that have adorned high-end AMD cards for years are now gone. Other than the BIOS switch, the only thing you will find at the top of the card are the traditional PCIe power sockets. AMD is using the traditional 6pin + 8pin setup here, and not the 6pin + 6pin setup seen in the first pictures of the card. These sockets, when combined with the PCIe slot power, are good for delivering 300W to the card, which is what we estimate to be the card’s TDP limit. Consequently overclocking boards are all but sure to go the 8pin + 8pin route once those eventually arrive.
AMD's Gaming Evolved Application
During AMD’s “partner time” block at the 2014 GPU Product Showcase, one of the projects presented was the Raptr social networking and instant messaging application. Put together by the company of the same name, AMD would be partnering with Raptr to produce an AMD branded version of the utility called the “AMD Gaming Evolved App, Powered By Raptr”.
In a nutshell, the Gaming Evolved App (GEA) is AMD’s attempt to bring another value add feature to the Radeon brand. And although AMD will never explicitly say this, to be more specific the GEA is clearly intended to counter NVIDIA’s successful GeForce Experience utility, which exited beta back in May and has been continuing to add features since.
Raptr/GEA contains a wealth of functionality, with the application being several years old at this point, but the key feature as a video card utility and the reason AMD has picked it up is its latest feature addition, the game optimization service. Just launched last month in beta, the optimization service is a functional clone of GeForce Experience’s optimization service. Designed with the same goals in mind, the GEA optimization service is intended to offer the ability for gamers disinterested in configuring their games – or even just looking for a place to start – a way to simply download a suitable collection of settings for their games and hardware and apply those settings to their games.
The concept is in practice very similar to the recommended settings that most games apply today, but driven by the GPU manufacturer instead of the game developer, and kept up to date with hardware/software changes as opposed to being set in stone when the game went gold. Even for someone like a professional GPU reviewer, it’s a very nifty thing to have when turning up every setting isn’t going to be practical.
To get right to the point then, while we’re big fans of the concept it’s clear that this is a case of AMD tripping over themselves in trying to react to something NVIDIA has done, by trying to find the fastest way of achieving the same thing. Like GeForce Experience, AMD has started bundling GEA with their drivers and installing it by default, but unlike GFE it’s still in beta at this point, and a very rough beta at that. And not to take an unnecessary shot at AMD, but even in beta GeForce Experience wasn’t this raw or this incomplete.
So why are we so down on GEA? There are a few reasons, but the most basic of which is that the Raptr service lacks enough performance data for GEA to offer meaningful recommendations. Even on a fairly old card like a Radeon HD 7950, GEA was only able to find settings for 5 of the 11 games we have installed on our GPU testbed, failing to include settings for a number of games that are months (if not years) old. To be fair every service has to start out somewhere, and GFE certainly didn’t launch with a massive library of games, but 5 games, none newer than March, is a particularly bad showing.
Now a lot of this has to do with how Raptr collects the performance data it uses for recommendations. NVIDIA for their part decided to do everything in house, relying on their driver validation GPU farms to benchmark games across multiple settings to find a good balance based on parameters picked by the GFE development team. Raptr, though backed by AMD, does not have anything resembling NVIDIA’s GPU farms and as such is going the crowdsourced route, relying on telemetry taken from Raptr users’ computers. Raptr’s data acquisition method is not necessarily wrong, but it means there’s no one to bootstrap the service with data, which means the service has started out with essentially nothing.
Raptr for their part is aware of the problem they’re faced with, and in time the distribution of the GEA along with their own Raptr application will hopefully ensure that there are enough users playing enough games out there to collect the necessary data. Even so, they did have to implement what amounts to a solution to the tragedy of the commons problem to make sure that data gets collected; users cannot receive settings from the Raptr service unless they provide data in return. Turning off the telemetry service will also turn off the client’s ability to pull down settings, full stop. Given the service’s requirements for data collection it’s likely the best solution to the problem, but regardless we have to point out that Raptr is alone in this requirement. NVIDIA can offer GFE without requiring performance telemetry from users.
Moving on then, the other showstopper with GEA’s current optimization service is that it’s obvious the UI has been an afterthought. The GEA UI lists settings by the values used in a game’s settings file, rather than the name of that value. E.g. “Ultra” texture quality in Bioshock Infinite is labeled as texture detail “4”, or worse. Without sufficient labeling it’s impossible to tell just what those settings mean, let alone what they may do. As such applying GEA settings right now is something of a shot in the dark, as you don’t know what you’re going to get.
Finally, presumably as a holdover from the fact that Raptr is free, GEA runs what can only be described as ads. These aren’t straight up advertisements, rather directing users towards other services Raptr/GEA provides, such as Free-2-Play games and a rewards service. But the end game is the same, as these services are paid for by Raptr’s sponsors and are intended to drive users towards purchasing games and merchandise from those sponsors. Far be it from us to look down upon advertisements – after all, AnandTech is ad supported – but bundling ad supported software into a driver download is another matter. We're at something of a loss for explaining why AMD doesn't just foot the complete bill on their customized version of the Raptr client and have the ads removed entirely.
At any rate we do have some faith that in time these issues can be dealt with and the GEA can essentially be fixed, but right now the GEA is far too raw for distribution. It needs to go back into development for another few months or so (and the service bootstrapped with many more computer configurations and games) before it’s going to be of suitable quality for inclusion in AMD’s drivers. Otherwise AMD is doing their users a disservice by distributing inferior, ad supported software alongside the software required to use their products.
For the launch of the Radeon R9 290, the press drivers and the launch drivers will be AMD’s recently released Catalyst 13.11 Beta v8 drivers. Along with containing support for the 290 and the 47% fan speed override, the only other changes in these drivers involve Batman: Arkham Origins and Battlefield 4, games which we aren’t using for this review. So the results will be consistent with past drivers. Meanwhile for NVIDIA’s cards we’re continuing to use their release 331.58 drivers.
| CPU: | Intel Core i7-4960X @ 4.2GHz |
|---|---|
| Motherboard: | ASRock Fatal1ty X79 Professional |
| Power Supply: | Corsair AX1200i |
| Hard Disk: | Samsung SSD 840 EVO (750GB) |
| Memory: | G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26) |
| Case: | NZXT Phantom 630 Windowed Edition |
| Video Cards: | AMD Radeon R9 290X, AMD Radeon R9 290, XFX Radeon R9 280X Double Dissipation, AMD Radeon HD 7970 GHz Edition, AMD Radeon HD 7970, AMD Radeon HD 6970, AMD Radeon HD 5870, NVIDIA GeForce GTX Titan, NVIDIA GeForce GTX 780, NVIDIA GeForce GTX 770 |
| Video Drivers: | NVIDIA Release 331.58, AMD Catalyst 13.11 Beta v1, AMD Catalyst 13.11 Beta v5, AMD Catalyst 13.11 Beta v8 |
| OS: | Windows 8.1 Pro |
Metro: Last Light
As always, kicking off our look at performance is 4A Games’ latest entry in their Metro series of subterranean shooters, Metro: Last Light. The original Metro 2033 was a graphically punishing game for its time, and Metro: Last Light is just as punishing in its own right. On the other hand it scales well with resolution and quality settings, so it’s still playable on lower end hardware.
For the bulk of our analysis we’re going to be focusing on our 2560x1440 results, as monitors at this resolution will be what we expect the 290 to be primarily used with. A single 290 may have the horsepower to drive 4K in at least some situations, but given the current costs of 4K monitors that’s going to be a much different usage scenario. The significant quality tradeoff for making 4K playable on a single card means that it makes far more sense to double up on GPUs, given the fact that even a pair of 290Xs would still be a fraction of the cost of a 4K, 60Hz monitor.
With that said, there are a couple of things that should be immediately obvious when looking at the performance of the 290.
- It’s incredibly fast for the price.
- Its performance is at times extremely close to the 290X's.
To get right to the point, because of AMD’s fan speed modification the 290 doesn’t throttle in any of our games, not even Metro or Crysis 3. The 290X in comparison sees significant throttling in both of those games, and as a result once fully warmed up the 290X is operating at clockspeeds well below its 1000MHz boost clock, or even the 290’s 947MHz boost clock. As a result rather than having a 5% clockspeed deficit as the official specs for these cards would indicate, the 290 for all intents and purposes clocks higher than the 290X. Which means that its clockspeed advantage is now offsetting the loss of shader/texturing performance due to the CU reduction, while providing a clockspeed greater than the 290X for the equally configured front-end and back-end. In practice this means that 290 has over 100% of 290X’s ROP/geometry performance, 100% of the memory bandwidth, and at least 91% of the shading performance.
So in games where we’re not significantly shader bound, and Metro at 2560 appears to be one such case, the 290 can trade blows with the 290X despite its inherent disadvantage. Now as we’ll see this is not going to be the case in every game, as not every game is GPU bound in the same manner and not every game throttles the 290X to the same degree, but it sets up a very interesting performance scenario. By pushing the 290 this hard, and by throwing any noise considerations out the window, AMD has created a card that can not only threaten the GTX 780, but can threaten the 290X too. As we’ll see by the end of our benchmarks, the 290 is only going to trail the 290X by an average of 3% at 2560x1440.
Anyhow, looking at Metro it’s a very strong start for the 290. At 55.5fps it’s essentially tied with the 290X and 12% ahead of the GTX 780. Or to make a comparison against the cards it’s actually priced closer to, the 290 is 34% faster than the GTX 770 and 31% faster than the 280X. AMD’s performance advantage will come crashing down once we revisit the power and noise aspects of the card, but looking at raw performance it’s going to look very good for the 290.
Company of Heroes 2
The second benchmark in our suite is Relic Games’ Company of Heroes 2, the developer’s World War II Eastern Front themed RTS. For Company of Heroes 2 Relic was kind enough to put together a very strenuous built-in benchmark that was captured from one of the most demanding, snow-bound maps in the game, giving us a great look at CoH2’s performance at its worst. Consequently if a card can do well here then it should have no trouble throughout the rest of the game.
Unlike Metro, Company of Heroes 2 isn’t a title where the 290X gets throttled nearly as much in our benchmarking, but it once again demonstrates just how close the 290 gets to the 290X. The 290 trails the 290X by just 5%, a far cry from the $150 difference in price tags. Meanwhile, because this is a game that AMD cards do so well in, the 290 also fares extremely well against the GTX 780, surpassing it by 23%. The performance gaps versus the 280X and GTX 770 are even larger yet, at 34% and 55% respectively.
Minimum framerates are similarly in AMD’s favor. On a relative basis the 290 falls behind the 290X by a little more here – by about 7% – due to the shader heavy workload of this benchmark’s most difficult scene, but that’s still only 7% behind a card 38% more expensive. Or to once again draw a GTX 780 comparison, it’s 33% faster.
Bioshock Infinite

Bioshock Infinite is Irrational Games’ latest entry in the Bioshock franchise. Though it’s based on Unreal Engine 3 – making it our obligatory UE3 game – Irrational has added a number of effects that make the game rather GPU-intensive on its highest settings. As an added bonus it includes a built-in benchmark composed of several scenes, a rarity for UE3 engine games, so we can easily get a good representation of what Bioshock’s performance is like.
With Bioshock we once again see the 290 trailing the 290X by a small margin, this time of 5%. It’s the difference between technically sustaining a 60fps average at 2560 or just falling short, but only just. Meanwhile compared to the GTX 780 the 290 is handed its first loss, though by an even narrower margin of only 3%. More to the point, on a pure price/performance basis, the 290 would need to lose by quite a bit more to offset the $100 price difference.
Meanwhile, it’s interesting to note not only how much faster the 290 is than the 280X or the GTX 770, but even the 7950B. The 290 series is not necessarily intended to be an upgrade for existing 7900 series, but because the 7950’s performance was set so much lower than the 7970/280X’s, and because 290 performs so closely to the top-end 290X, it creates a sizable gap between the 7950 and its official replacement. With a performance difference just shy of 50%, the 290 is reaching the point where it’s going to be a practical upgrade for 7950 owners, particularly those who purchased it in early 2012 and who paid the full $450 price tag it launched at. It’s nowhere near a full generational jump, but it’s certainly a lot more than we’d expect to see for a GPU that’s manufactured on the same process as 7950’s GPU, Tahiti.
Battlefield 3

The major multiplayer action game of our benchmark suite is Battlefield 3, DICE’s 2011 multiplayer military shooter. Its ability to pose a significant challenge to GPUs has been dulled some by time and drivers, but it’s still a challenge if you want to hit the highest settings at the highest resolutions with the highest anti-aliasing levels. Furthermore while we can crack 60fps in single player mode, our rule of thumb here is that multiplayer framerates will dip to half our single player framerates, so framerates that look high here may not be high enough.
With Battlefield 3 generally favoring NVIDIA GPUs the 290X fell just short of the GTX 780, and consequently the 290 will fall back a bit further. As such the 290 trails the GTX 780 by 7% while trailing the 290X by a narrower 5%. Furthermore in this case the 290 just hits the cutoff for a 60fps average at 2560, which means the card should have no problem sustaining minimum framerates above 30fps in even the most hectic firefights.
Elsewhere the 290 doesn’t get to enjoy quite the massive performance advantages over the 280X and GTX 770 that it enjoyed earlier, but it’s still ahead of its cheaper competitors. Against the 280X the 290 is 23% faster, while against the GTX 770 it’s a narrower 12%.
Still one of our most punishing benchmarks, Crysis 3 needs no introduction. With Crysis 3, Crytek has gone back to trying to kill computers, and it still holds the "most punishing shooter" title in our benchmark suite. Only a handful of setups can even run Crysis 3 at its highest (Very High) settings, and that's still without AA. Crysis 1 was an excellent template for the kind of performance required to drive games for the next few years, and Crysis 3 looks to be much the same for 2013.
Crysis 3 happens to be another game where the 290X sees significant throttling, and as such this is another game where the 290X and 290 are neck and neck. With all of a 0.4fps difference between the two, the two cards are essentially tied, once more showcasing how the 290X is held back in order to deliver reasonable acoustics, and how fast the 290 can go when it does the opposite and lets loose.
This also ends up being a very close matchup between the 290 and the GTX 780, with the 290 losing to the GTX 780 by just 1%, making for another practical tie. Which coincidentally will make our power and noise tests all the more meaningful, since this is the game we use for those tests.
Meanwhile compared to the GTX 770 and 280X, this is actually the narrowest victory for the 290. Despite the solid performance of the 290 and 290X, it beats the GTX 770 by just 11%. The margin of victory over the 280X however is closer to normal at 29%.
Up next is our legacy title for 2013/2014, Crysis: Warhead. The stand-alone expansion to 2007’s Crysis, at over 5 years old Crysis: Warhead can still beat most systems down. Crysis was intended to be future-looking as far as performance and visual quality goes, and it has clearly achieved that. We’ve only finally reached the point where single-GPU cards have come out that can hit 60fps at 1920 with 4xAA, never mind 2560 and beyond.
Unlike games such as Battlefield 3, AMD's GCN cards have always excelled at Crysis: Warhead, and as a result it's a good game for the 290 right off the bat. Furthermore because the 290X throttles so much here, coupled with this game's love of ROP performance, the 290 actually beats the 290X, if only marginally so. 0.5fps is within our experimental variation (even though this benchmark is looped multiple times), but it just goes to show how close the 290 and 290X can be, and furthermore how powerful higher average clockspeeds can be in ROP or geometry bound scenarios. Graphics rendering may be embarrassingly parallel in general, but sometimes a bit narrower and a bit higher clocked can be the path to better performance.
Meanwhile because the 290 does so well here, it makes for another sizable victory over the GTX 780, beating it by 16%. Further down the line the GTX 770 is beaten by 46%, and the 280X by 27%.
Moving on to our minimum framerates, the 290 actually extends its lead over the 290X. Now minimum framerates aren’t as reliable as average framerates, even in Crysis, so our experimental variation is going to be higher here, but it does once again show the advantages the 290 enjoys being clocked higher than the 290X under a sustained workload. Though on the other hand the GTX 780 catches up slightly, closing the gap to 10%.
Total War: Rome 2
The second strategy game in our benchmark suite, Total War: Rome 2 is the latest game in the Total War franchise. Total War games have traditionally been a mix of CPU and GPU bottlenecks, so it takes a good system on both ends of the equation to do well here. In this case the game comes with a built-in benchmark that plays out over a forested area with a large number of units, definitely stressing the GPU in particular.
For this game in particular we’ve also gone and turned down the shadows to medium. Rome’s shadows are extremely CPU intensive (as opposed to GPU intensive), so this keeps us from CPU bottlenecking nearly as easily.
Rome is another game that sees the 290X significantly throttle, and as such it’s another game the 290 has little trouble catching up in. At 2560 the two cards are essentially tied, each enjoying a 5% lead over the GTX 780. Elsewhere the 290 beats the 280X by 27% and the GTX 770 by 30%. Even the 7950B gets left behind to a significant extent, with the 290 beating it by 58%.
The second-to-last game in our lineup is Hitman: Absolution. The latest game in Square Enix’s stealth-action series, Hitman: Absolution is a DirectX 11 based title that though a bit heavy on the CPU, can give most GPUs a run for their money. Furthermore it has a built-in benchmark, which gives it a level of standardization that fewer and fewer benchmarks possess.
With Hitman we finally see the 290X and 290 pull apart, but once more it’s to a fairly small degree. With the 290X not being as significantly throttled here the 290 trails by 4%, reducing it in status to the second fastest single-GPU card in this test. GTX 780 for its part isn’t too far behind and does pass 60fps, but we still see 290 beat it by 12%.
This also ends up being another situation where the 290 does well for itself compared to the GTX 770 and 280X. There it beats the GTX 770 by 42% and the 280X by 31%.
Moving on to minimum framerates, the performance situation shifts even more towards the 290’s favor. Now it’s once again ahead of the 290X, this time by 2fps or 3%, and the performance advantage over the GTX 780 grows to 23%.
The final game in our benchmark suite is also our racing entry, Codemasters’ GRID 2. Codemasters continues to set the bar for graphical fidelity in racing games, and with GRID 2 they’ve gone back to racing on the pavement, bringing to life cities and highways alike. Based on their in-house EGO engine, GRID 2 includes a DirectCompute based advanced lighting system in its highest quality settings, which incurs a significant performance penalty but does a good job of emulating more realistic lighting within the game world.
Our final benchmark has the 290X and 290 once again closely matched; the 290 trails its higher-tier sibling by just 2%, putting up 78fps at 2560. The GTX 780 for its part does close in on the 290, and although we're admittedly getting a bit academic since all of these cards are well above 60fps, the 290 ultimately beats the GTX 780 by 7%.
Meanwhile the performance gains against the 290’s lower priced competition are uneven. Against the GTX 770 it puts up a very average 27% performance advantage, but with Tahiti cards holding up so well in this game the advantage over the 280X is just 18%.
As always we’ll also take a quick look at synthetic performance. The 290 shouldn’t pack any great surprises here since it’s still GCN, and as such bound to the same general rules for efficiency, but we do have the additional geometry processors and additional ROPs to occupy our attention.
Hawaii performance on TessMark continues to underwhelm. The 290 should have plenty of geometry hardware to create triangles, but it’s apparently having a problem fully utilizing that hardware, a problem NVIDIA does not have.
Moving on, we have our 3DMark Vantage texture and pixel fillrate tests, which present our cards with massive amounts of texturing and color blending work. These aren’t results we suggest comparing across different vendors, but they’re good for tracking improvements and changes within a single product family.
3DMark’s texel fill test is one of a handful of tests where the 290 and 290X are actually separated by as much as the theoretical performance difference implies they should be. Clocked lower and with fewer CUs than 290X, 290 delivers 87% of the texturing performance.
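As a quick sanity check, the theoretical gap can be worked out from the paper specs. The configuration below (40 CUs at a 947MHz boost for the 290, 44 CUs at 1000MHz for the 290X, 4 texture units per CU) is our assumption about Hawaii's layout, not a figure quoted in this review:

```python
# Rough theoretical texel fillrate ratio between the 290 and 290X.
# CU counts, texture units per CU, and boost clocks are assumed
# Hawaii figures, not measured values from this review.
def texel_rate(cus, mhz, tex_per_cu=4):
    """Peak texel throughput in arbitrary units: texture units x clock."""
    return cus * tex_per_cu * mhz

r9_290 = texel_rate(40, 947)     # 160 texture units at 947MHz
r9_290x = texel_rate(44, 1000)   # 176 texture units at 1000MHz

print(f"290 / 290X theoretical texel rate: {r9_290 / r9_290x:.1%}")
```

Under those assumptions the theoretical ratio comes out to roughly 86%, in line with the ~87% measured in the texel fill test.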
Meanwhile the pixel fill rates show just how close the 290 and 290X can be even under best case circumstances for each card, due to the fact that they have an equal number of ROPs and equal memory bandwidth. Consequently the 290 loses almost nothing for pixel pushing power as compared to its bigger sibling.
Jumping into pure compute performance, this is another scenario where the 290X shouldn’t throttle as much, and as such the performance differences between the 290 and 290X should be closer to what they are on paper. With compute workloads the ROPs aren’t being hit hard, so that’s power and thermal savings that lets both cards operate at close to their maximum boost clocks.
As always we'll start with our DirectCompute game example, Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. While DirectCompute is used in many games, this is one of the only games with a benchmark that can isolate the use of DirectCompute and its resulting performance.
As with the 290X, Civ V can’t tell us much of value due to the fact that we’re running into CPU bottlenecks, not to mention increasingly absurd frame rates. The 290 is marginally slower than the 290X due to the lower clockspeeds and missing CUs, but minimally so.
Our next benchmark is LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years as ray tracing maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
With both cards unthrottled and bound solely by shader performance, it’s an outright foot race for the Radeon cards. 290 trails 290X by around 9%, closely mirroring the difference in the CU count between the two cards. Though 290 is being very closely chased by the 280X, as Hawaii in general seems to have trouble getting the most out of its shader hardware on this benchmark.
Our 3rd compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.
There’s not enough of a GPU performance difference between the two cards to matter with this test. Both tie at 22 seconds.
Our 4th benchmark set comes from CLBenchmark 1.1. CLBenchmark contains a number of subtests; we’re focusing on the most practical of them, the computer vision test and the fluid simulation test. The former being a useful proxy for computer imaging tasks where systems are required to parse images and identify features (e.g. humans), while fluid simulations are common in professional graphics work and games alike.
In the CLBenchmark fluid simulation the 290X and 290 take the top spots as expected, with the 290 trailing once more by 9%. However both Hawaii cards are still struggling with the computer vision benchmark, leading to the 290 being edged out by the 7970 of all things.
Moving on, our 5th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, as Folding @ Home has moved exclusively to OpenCL this year with FAHCore 17.
Generally Tahiti and Hawaii are strong performers in the GPU compute arena, but that isn’t of particular help to the 290 here, as it loses out to the GTX 780 in every mode. In single precision FAHBench has trouble putting Hawaii to good use at times, while double precision tests have the 1/8th DP rate 290 and 290X falling behind due to their lower than Tahiti DP throughput.
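The double precision deficit can be illustrated with ballpark figures. The single precision rates below (~5.6 TFLOPS for the 290X, ~3.8 TFLOPS for the 7970) are approximate launch-spec numbers we're assuming for illustration, not figures from this review:

```python
# Ballpark double precision throughput from the SP rate and DP ratio.
# SP TFLOPS figures are approximate assumptions, not from this review.
def dp_tflops(sp_tflops, dp_divisor):
    """Peak DP throughput given SP throughput and the DP rate divisor."""
    return sp_tflops / dp_divisor

hawaii_dp = dp_tflops(5.6, 8)   # 290X: ~5.6 SP TFLOPS at a 1/8 DP rate
tahiti_dp = dp_tflops(3.8, 4)   # 7970: ~3.8 SP TFLOPS at a 1/4 DP rate

print(f"Hawaii DP ~{hawaii_dp:.2f} TFLOPS vs Tahiti DP ~{tahiti_dp:.2f} TFLOPS")
```

Despite the much larger GPU, the 1/8 rate leaves Hawaii's peak DP throughput behind Tahiti's, which is consistent with the double precision results we're seeing.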
Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, as described in this previous article, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.
SystemCompute is another benchmark where 290 and 290X do not experience meaningful throttling, and as such are separated by more than what happens in our gaming benchmarks. In this case 290 yet again trails 290X by 9%, though it still enjoys a considerable lead over the GTX 780 and all other NVIDIA cards.
Power, Temperature, & Noise
As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.
As we alluded to in our look at the 290’s build quality and AMD’s last minute specification change, while the 290 delivers great performance across the complete range of our gaming benchmarks, it’s with power, temperature, and noise that it has to pay the piper for that performance. There’s no getting around the fact that a 47% fan speed on the 290 series reference cooler is going to be loud, and in this section we’ll break down those numbers and attempt to explain why that is.
First, let’s start with voltages. Ideally we’d use VIDs here, but due to the fact that none of our regular tools can read AMD’s VIDs for the 290 series, we’re left with what works. And what works is GPU-Z, which can read the VDDC coming off of the IR 3567B controller. These aren’t “perfect” numbers as we don’t have the ability to factor out fluctuations due to temperature or the impact of vDroop, but they’ll work in a pinch.
|Radeon R9 290 Series Voltages (VDDC/GPU-Z)|
|Ref. 290X Boost Voltage||Ref. 290 Boost Voltage||Ref. 290 Base Voltage|
To that end you can immediately see that the 290 starts off in a weakened position relative to the 290X. Second tier products are a mixed bag in this regard as sometimes they’ll be composed solely of chips with damaged functional units that can be shut off and then downclocked to operate at a lower voltage, while in other cases they’ll also include chips that have worse leakage and power consumption characteristics. In the case of the 290 we have the latter.
As such the 290 is operating at a higher voltage than the 290X at both the base GPU clockspeed of 662MHz and the boost GPU clockspeed of 947MHz. This means at any given clockspeed the GPU on the 290 is going to be drawing more power – likely more than enough to offset the reduction from the disabled CUs – and furthermore we’re seeing that the voltage reduction from operating at lower clockspeeds is not very significant. If these results are reasonably accurate then this means that the power costs of ramping up the clockspeed are relatively cheap, but the power savings from throttling down are relatively sparse. The GTX Titan, by comparison, sees a full 100mv decrease going from 940MHz to 836MHz.
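As a first-order illustration of why this matters, dynamic power scales roughly with frequency times voltage squared (P ∝ f·V²). The sketch below uses hypothetical voltages of our own choosing to show how a Titan-style 100mV drop buys far more savings than a clock reduction alone:

```python
# First-order dynamic power model: P is proportional to f * V^2.
# The voltages used here are hypothetical examples, not measured values.
def relative_power(f_mhz, v, f_ref_mhz, v_ref):
    """Modeled power relative to a reference operating point."""
    return (f_mhz / f_ref_mhz) * (v / v_ref) ** 2

# A GTX Titan-style drop from 940MHz to 836MHz with a full 100mV
# voltage reduction (assuming, say, 1.16V -> 1.06V):
print(f"With 100mV drop: {relative_power(836, 1.06, 940, 1.16):.0%} of reference power")

# If voltage barely falls with clockspeed, as our 290 readings suggest,
# downclocking alone saves considerably less:
print(f"With 10mV drop:  {relative_power(836, 1.15, 940, 1.16):.0%} of reference power")
```

The model is crude – it ignores leakage, which is significant on a chip this large – but it captures why a flat voltage curve makes throttling down such an ineffective power-saving lever for the 290.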
Having established that, we can see why AMD’s 7% fan speed increase had such a large impact on performance. Even a bit more cooling allows the card to jump to far higher clockspeeds, which significantly improves performance. With a fan speed of 47% the 290 has enough cooling to sustain 947MHz across everything except the TDP limited FurMark and Company of Heroes 2.
|Radeon R9 290 Average Clockspeeds (47% Fan, New Default)|
With that out of the way, let’s dive into power, temperatures, and noise.
Idle power is essentially unchanged from the 290X. The 16 GDDR5 memory chips aren’t doing AMD any favors, but more significantly they appear to still have a power leak in their drivers at idle. Until they fix that, the 290 series will draw several watts more than any other modern single-GPU card.
Moving on to power consumption under Crysis 3, we can see just how little AMD’s TDP has changed compared to the 290X. In fact because these cards are effectively tied in performance in this game, we can even see at least some of the impact of the 290’s higher voltages. By operating at higher voltages in general, and furthermore at higher clockspeeds (requiring higher voltages), the 290 draws just a wee bit more power than the 290X under our gaming workload. Power efficiency wasn’t AMD’s strongest hand to begin with on the 290X, and the 290 makes it just a bit worse.
This also means that the 290 isn’t competitive with the GTX 780 on the matter of power consumption and power efficiency in general. A 54W difference at the wall for identical performance in Crysis 3 – or extrapolated over our complete benchmark suite a performance advantage of 6% – is very difficult to swallow. As with everything else to come for power, temp, and noise, the GTX 780 has a very real advantage here.
Moving on to FurMark, despite the fact that we should be TDP limited the 290 actually draws more power than the 290X. To be frank we’re at a bit of a loss on this one; 290 bottoms out at 662MHz here, so it may be that we’re seeing one of the things the card can do to try to maintain its base clockspeed. Alternatively this may be the voltage effect amplified. Regardless of the reason though it’s a very repeatable scenario, and it’s a scenario that has 290 drawing 34W more at the wall than 290X.
Given the fact that the 290 and 290X are built on identical boards, the idle temperatures are consistent, if not a bit more spread out than usual. Until AMD gets their power leak under control, Hawaii isn’t going to come down below 40C at idle with the reference cooler.
Due to the mechanisms of PowerTune on the 290 series, the sustained load temperatures for the 290 and 290X are a very consistent 94C. As we laid out in our review of the 290X these temperatures are not a problem so long as AMD properly accounts for them in their power consumption projections and longevity projections. But coming from earlier cards it does take some getting used to.
At last we have our look at noise. Starting with idle noise, we can see that the 290 actually outperforms the 290X to a meaningful degree, squeaking under the 40dB mark. The fact that these cards utilize the same cooler operating at the same fan speed means that these results caught us off guard at first, but our 290 sample for whatever reason seems to be slightly better built than our 290X sample. These results match what our ears say, which is that the 290X has a bit of a grind to it that’s not present on the 290, and consequently the 290 is that much quieter.
Our Crysis 3 noise chart is something that words almost don’t do justice for. It’s something that needs to be looked at and allowed to sink in for a moment.
With the 290 AMD has thrown out any kind of reasonable noise parameters, instead choosing to chase performance and price above everything else. As a result at 57.2dB the 290 is the loudest single-GPU card in our current collection of results. It’s louder than 290X (a card that was already at the limit for reasonable), it’s louder than the 7970 Boost (an odd card that operated at too high a voltage and thankfully never saw a retail reference release), and it’s 2.5dB louder than the GTX 480, the benchmark for loud cards. Even GTX 700 series SLI setups aren’t this loud, and that’s a pair of cards.
At the end of the day the 290 is 9.7dB louder than its intended competition, the GTX 780. With a 10dB difference representing a two-fold increase in noise on a human perceptual basis, the 290 is essentially twice as loud as the GTX 780. It’s $100 cheaper and 6% faster, but all of that comes at the very high price of twice the noise.
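The rule of thumb behind that statement – every +10dB reads as roughly twice as loud to the human ear – can be expressed directly. This is the standard psychoacoustic approximation, not a formula specific to our testing:

```python
# Perceived loudness ratio from a sound level difference, using the
# common psychoacoustic rule of thumb that +10dB ~= 2x perceived loudness.
def perceived_loudness_ratio(delta_db):
    return 2 ** (delta_db / 10)

# The 290's 9.7dB gap over the GTX 780 under load:
print(f"{perceived_loudness_ratio(9.7):.2f}x as loud")  # just under 2x
```

Note this models subjective loudness; in terms of raw sound power, 10dB is a ten-fold increase, which makes the gap look even worse.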
Everyone’s cutoff for a reasonable noise level for a single-GPU card is going to be different. Ours happens to be the 7970, which on our latest testbed measures in at 53.5dB. To that end the 290 is 3.7dB louder, putting it well past what we’d consider to be reasonable for the 290. It’s a powerful card, but these noise levels are unreasonable and unacceptable for any scenario that involves being able to hear the card while it’s working.
Finally we’ll look at noise under FurMark. As loud as the 290 is under Crysis 3, the 290 was only pushed to 45% fan speed under that workload. Under FurMark the 290 ratchets up to 47% and to its peak noise level of 58.5dB. Now to the credit of the 290 this does end up being better than the 5870 and GTX 480, but as neither of those cards implements modern power throttling technology it’s at best an unfair fight. Compared to any card with power throttling, the 290 ends up once more being the worst.
Wrapping things up, the power/temp/noise situation for the 290 is rather straightforward, but unfortunately for AMD it’s not going to be in their favor. 290 is at least marginally more power hungry and quite a bit louder than 290X, never mind GTX 780. As we’ve seen in previous pages the performance is quite good, but it comes at what will in most cases be a very high cost.
Finally, since we had a bit more time to prepare for the 290 review than we did the 290X review, we used part of our time to investigate something we didn’t get to do with the 290X: what would performance look like if we equalized for noise? Earlier in this article we took a brief look at performance if we equalized for noise against the 290X – the performance hit is about 12% – but how would the 290 fare if it were equalized to the GTX 780?
The answer is not well, likely for the voltage matters we discovered earlier in this article. To get a 290 down to ~48dB requires reducing the maximum fan speed to 34%, which is something AMD’s software allows. The problem is that at 34% the effective cooling level on the 290 is so poor that even after dropping to the base GPU clockspeed of 662MHz it still generates too much heat, requiring it to ramp up the fan speed to compensate. In other words it’s simply impossible to get the 290 to match the GTX 780’s noise levels under load. Based on our data the 290 requires a minimum fan speed of 38% to maintain its base clockspeed under sustained load, which pushes noise levels out from a GTX 780-like 48dB to a GTX Titan-like 50.9dB.
With that in mind, we went ahead and ran a selection of our benchmarks with the 34% maximum fan speed. The performance hit, as you’d expect, is significant.
|Radeon R9 290 Average Clockspeeds (47% Fan / 40% Fan / 34% Fan)|
|Radeon R9 290 Relative Performance (47% Fan Default / 40% Fan / ~34% Fan)|
To get down to the 34%-38% fan speed range, the 290 has to shed an average of 22% of its performance, peaking in a few titles at 25%. This makes the card much quieter, to be sure – though still not as quiet as a GTX 780 – but it also sacrifices much of the 290’s performance advantage in the process. At this point we’ve essentially reduced it to a 280X.
Looking at the resulting noise levels, you can see the full outcome of our tweaks. If we could sustain 34% we’d have a noise level consistently close to that of the GTX 780, but instead under Crysis 3 and a couple other games fan speeds level out at 38%, pushing noise levels to 50.9dB and placing them a bit higher than GTX Titan.
Based on this data it’s safe to say that the performance cost of using the fan control function to reduce fan noise on the 290 will be moderate to severe. Matching the GTX 780 simply isn’t possible, and even getting down to GTX Titan noise levels will reduce performance to that of a 280X. 40% on the other hand is more viable, but keep in mind we’re then at 290X noise levels for roughly 85% of the 290X’s performance, which isn’t a great outcome either.
Finally, let’s spend a bit of time looking at the overclocking prospects for the 290. Without any voltage adjustment capabilities, and with AMD binning chips for clockspeeds and power consumption, we’re not necessarily expecting a lot of headroom here, but nonetheless it’s worth checking out to see how much more we can squeeze out of the card.
Even though we’re officially limited to AMD’s Overdrive utility for the moment for overclocking, Overdrive offers a wide enough range of values that we shouldn’t have any problem maxing out the card. In fact we’ll be limited by the card first.
| Radeon R9 290 Overclocking | Reference Radeon R9 290 |
|---|---|
| Shipping Core Clock | 662MHz |
| Shipping Boost Clock | 947MHz |
| Shipping Memory Clock | 5GHz |
| Shipping Boost Voltage | ~1.18v |
| Overclock Core Clock | 790MHz |
| Overclock Boost Clock | 1075MHz |
| Overclock Memory Clock | 5.6GHz |
| Overclock Max Boost Voltage | ~1.18v |
Despite the lack of voltage control, when it comes to overclocking the 290 we were able to achieve solid overclocks on both the GPU and the memory. On a boost clock basis we were able to push the 290 from 947MHz to 1075MHz, an increase of 128MHz (14%). Meanwhile we were able to push the memory from 5GHz to 5.6GHz before artifacting set in, representing a 600MHz (12%) memory overclock. Being able to increase both clockspeeds to such a similar degree means that no matter what the video bottleneck is – be it GPU or memory – we should see some kind of performance increase out of overclocking.
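The quoted percentages fall straight out of the clockspeeds in the table above:

```python
# Overclocking headroom as a percentage of shipping clocks.
def oc_percent(stock, overclocked):
    return (overclocked - stock) / stock * 100

core_oc = oc_percent(947, 1075)    # boost clock: 947MHz -> 1075MHz
mem_oc = oc_percent(5000, 5600)    # memory: 5GHz -> 5.6GHz effective

print(f"Core: +{core_oc:.0f}%, Memory: +{mem_oc:.0f}%")  # Core: +14%, Memory: +12%
```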
On a side note, for overclocking the 290 we stuck with moderate increases to both the maximum fan speed and the PowerTune limit. In the case of the former we used a 65% maximum fan speed (which actually proved to be more than what’s necessary), while for the latter we went with a 20% increase in the PowerTune limit, as at this point in time we don’t have a good idea for what the safe power limits are for the reference 290/290X board. Though in either case only FurMark could push the overclocked card to its power limit, and nothing could push the card to its fan speed limit. Similarly we didn’t encounter any throttling issues with our overclocked settings, with every game (including CoH2) running at 1075MHz sustained.
Taking a brief look at power, temp, and noise before jumping into our gaming performance results, we can see that overclocking the card has a measurable impact on power consumption under both Crysis 3 and FurMark. With Crysis 3 we’re clockspeed limited before we’re power limited, leading to an increase in power consumption of 27W, while under FurMark where we were power limited it’s a much more academic increase of 87W.
Since the 290 already ships at the highest temperature limit it allows – 95C – our sustained temperatures are unchanged even after overclocking.
The 290 is already an unreasonably loud card at stock, and unfortunately the fan speed increases needed to handle the greater heat load from overclocking only make this worse. Under Crysis 3 we peaked at 59.7dB, or 49% fan speed. While under FurMark we peaked at 65.3dB, or 59% fan speed. For these noise levels to be bearable the 290 really needs to be fully isolated (e.g. in another room) or put under water, as otherwise 59.7dB sustained is immensely loud for a video card.
Finally getting to the matter of game performance, we’re seeing consistently strong scaling across every game in our collection. The specific performance increase depends on the game as always, but a 14% core overclock and 12% memory overclock has netted us anywhere between 9% in Metro up to the full 14% in Total War: Rome II. At this performance level the 290 OC exceeds the performance of any other single-GPU card at stock, and comes very close to delivering 60fps in every action game in our benchmark suite.
Bringing this review to a close, it’s admittedly not very often that we write a negative video card review, especially for a major SKU launch from NVIDIA or AMD. Both companies have competitive analysis teams to do benchmarking and performance comparisons, and as a result know roughly where they stand long before we get their cards. Consequently they have plenty of time to tweak their cards and/or their pricing (the latter of which is typically announced only a day or two in advance) in order to make a place in the market for their cards. So it’s with a great deal of confusion and a tinge of sadness that we’re seeing AMD miss their mark and their market, and not by a small degree.
To get the positive aspects covered first, with the Radeon R9 290 AMD has completely blown the roof off of the high-end video card market. The 290 is so fast and so cheap that on a pure price/performance basis you won’t find anything quite like it. At $400 AMD is delivering 106% of the $500 GeForce GTX 780’s performance, or 97% of the $550 Radeon R9 290X’s performance. The high-end market has never been for value seekers – the fastest cards have always commanded high premiums – but the 290 completely blows that model apart. On a pure price/performance basis the GTX 780 and even the 290X are rendered completely redundant by the 290, which delivers similar-to-better performance for $100 less if not more.
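Taking the relative performance figures above at face value, the price/performance gap can be made concrete with a quick calculation:

```python
# Performance-per-dollar comparison using this review's figures:
# the 290 delivers ~106% of GTX 780 performance (GTX 780 = 100).
def perf_per_dollar(relative_perf, price_usd):
    return relative_perf / price_usd

r9_290 = perf_per_dollar(106, 400)
gtx_780 = perf_per_dollar(100, 500)

advantage = (r9_290 / gtx_780 - 1) * 100
print(f"290 offers ~{advantage:.1f}% more performance per dollar than the GTX 780")
```

By this simple metric the 290 comes out roughly a third ahead of the GTX 780, which is the kind of gap that normally only exists between different market segments, not between two cards competing at the high end.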
The problem is that while the 290 is a fantastic card and a fantastic story on a price/performance basis, in chasing that victory AMD has thrown caution to the wind and abandoned any kind of balance between performance and noise. At 57.2dB the 290 is a loud card. A very loud card. An unreasonably loud card. AMD has quite simply prioritized performance over noise, and these high noise levels are the price of doing so.
To get right to the point then, this is one of a handful of cards we’ve ever had to recommend against. The performance for the price is stunning, but we cannot in good faith recommend a card this loud when any other card is going to be significantly quieter. There comes a point where a video card is simply too loud for what it does, and with the 290 AMD has reached it.
Ultimately there will be scenarios where this is acceptable – namely, anything where you don’t have to hear the 290, such as putting it in another room or putting it under water – but on a grand scale those are few and far between. For most buyers who will simply purchase the card and drop it into their computers as-is, this represents an unreasonable level of noise.
As a result for most buyers the competitive landscape in the video card market will remain unchanged, even with today’s launch of the 290. With the reference 290 untenable as a purchase, this leaves the GTX 780 at $500, the 290X at $550, or the GTX 770 and 280X at the $300-$330 range, leaving a large hole in the market in the short term. In the long term it will be up to AMD’s partners to try to salvage the 290 with custom designs, enhanced coolers, and other modifications. The 290 still has quite a bit of potential both as a product and as a competitor in the larger video card marketplace, but that potential is wasted so long as it’s paired with AMD’s reference cooler and the need to run it so loudly.
On a final note, with the launch of the 290 and AMD’s promotional efforts we can’t help but feel that AMD is trying to play both sides of the performance/noise argument by shipping the card in a high performance configuration, and using its adjustability to simultaneously justify its noise as something that can be mitigated. This is technically correct (ed: the best kind of correct), but it misses the point that most users are going to install a video card and use it as it’s configured out of the box. To that end adjustability is a great feature and we’re happy to see such great efforts made to offer it, but adjustability is no substitute for shipping a more reasonable product in the first place.
Had the 290 shipped in its original 40% fan configuration, it wouldn’t be knocking on the GTX 780’s door any longer, but it would have been in a spot where its balance of price, performance, and noise would have made for an attractive product. Instead AMD has shipped the 290 with the equivalent of uber mode as the default, and in the process has failed to meet the needs of the majority of their customers.
In this week’s article I flat out avoided recommending the 290 because of its acoustic profile. When faced with the tradeoff of noise vs. performance, AMD clearly chose the latter and ended up with a card that delivers a ridiculous amount of performance for $399 but exceeds our ideas of comfortable noise levels in doing so.
I personally value acoustics very highly and stand by my original position that the reference R9 290 is too loud. When I game I use open-back headphones so I can listen for phone calls or the door for shipments, and as a result acoustics do matter to me. In the review I assumed everyone else valued acoustics at least as highly as I do, but based on your reaction it looks like I was mistaken. While a good number of AnandTech readers agreed the R9 290 was too loud, an equally important segment of the audience felt that the performance delivered was more than enough to offset the loud cooling solution. We want our conclusions not only to reflect our own data, but also to be useful to all segments of our audience. In the case of the 290 review, I believe we accomplished the former but let some of you down on the latter.
Part of my motivation here is to make sure that we send the right message to AMD that we don’t want louder cards, and from what I understand that message has been received loud and clear. It’s very important to me that we don’t signal to AMD or NVIDIA that it’s ok to engage in a loudness war in the pursuit of performance; we have seen a lot of progress in acoustics and cooler quality since the mid-to-late 2000s, and we’d hate to see that progress undone. A good solution delivers both performance and a great user experience, and I do believe it’s important that we argue for both (which is why we include performance, power, and noise level data in our reviews).
The Radeon R9 290 does offer a tremendous value, and if you’re a gamer who can isolate yourself from the card’s acoustics (or otherwise doesn’t care) it’s easily the best buy at $399. If acoustics are important to you, then you’re in a tougher position today, as there really isn’t an alternative that offers R9 290 performance at the same price. The best recommendation I have there is to either pony up more cash for a quieter card, accept the noise as-is, or wait and see what some of the customized partner 290 cards look like once those arrive. I suspect we’ll have an answer to that problem in the not too distant future as well.
Note that this won’t be the last time performance vs. acoustics is a tradeoff. AMD pointed out to us that the 290/290X is the first time it has controlled fan speed by targeting an RPM rather than by manipulating the PWM duty cycle directly. In the past it didn’t really matter, since performance didn’t scale all that much with fan speed. Given the current realities of semiconductor design and manufacturing, the 290/290X situation, where fan speed significantly impacts performance, is going to continue to be the case going forward. We’ve already made the case to AMD for better reference cooling designs, and it sounds like everyone is on the same page there.
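The difference between a fixed PWM duty cycle and an RPM target can be sketched as a simple closed-loop controller. This is purely illustrative (the fan model, gain, and constants below are invented for the sketch, not AMD firmware behavior): an open-loop PWM setting yields whatever speed a particular fan sample happens to spin at, while an RPM target adjusts the duty cycle until the measured speed matches.

```python
def fixed_pwm_rpm(duty, rpm_per_duty):
    """Open-loop: resulting RPM depends on the individual fan's response."""
    return duty * rpm_per_duty

def rpm_target_step(current_rpm, target_rpm, duty, gain=0.01):
    """Closed-loop: nudge the duty cycle toward the RPM target."""
    error = target_rpm - current_rpm
    return min(100.0, max(0.0, duty + gain * error))

# A fan sample that spins slower than nominal for a given duty cycle
actual_rpm_per_duty = 40.0
duty, target = 50.0, 2200.0

for _ in range(200):  # iterate the control loop until it settles
    rpm = fixed_pwm_rpm(duty, actual_rpm_per_duty)
    duty = rpm_target_step(rpm, target, duty)

print(f"settled at duty={duty:.1f}%, rpm={duty * actual_rpm_per_duty:.0f}")
```

The upshot is consistency: a fixed 50% duty on this fan would deliver only 2000 RPM, while the RPM-targeting loop settles at whatever duty cycle (here ~55%) actually produces 2200 RPM, regardless of fan-to-fan variance. That consistency is also why fan speed now maps so directly onto performance.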