Grand Theft Auto V

The final game in our review of the R9 Fury is our most recent addition, Grand Theft Auto V. The latest edition of Rockstar’s venerable series of open world action games, Grand Theft Auto V was originally released for the last-gen consoles back in 2013. However, thanks to a rather significant facelift for the current-gen consoles and PCs, along with the ability to greatly turn up rendering distances and add other features like MSAA and more realistic shadows, the end result is a game that is still among the most stressful of our benchmarks when all of its features are turned up. Furthermore, in a move rather uncharacteristic of most open world action games, Grand Theft Auto V also includes a very comprehensive benchmark mode, giving us a great chance to look into the performance of the genre.

A quick note on settings: as Grand Theft Auto V doesn't have pre-defined settings tiers, I want to outline what settings we're using. For "Very High" quality we have all of the primary graphics settings turned up to their highest setting, with the exception of grass, which is at its own very high setting. Meanwhile 4x MSAA is enabled for direct views and reflections. This setting also involves turning on some of the advanced rendering features - the game's long shadows, high resolution shadows, and high detail streaming while flying - but not increasing the view distance any further.

Otherwise for "High" quality we take the same basic settings but turn off all MSAA, which significantly reduces the GPU rendering load and VRAM requirements.

Grand Theft Auto V - 3840x2160 - Very High Quality

Grand Theft Auto V - 3840x2160 - High Quality

Grand Theft Auto V - 2560x1440 - Very High Quality

Closing out our gaming benchmarks, the R9 Fury is once again in the lead, besting the GTX 980 by as much as 15%. However, GTA V also serves as a reminder that the R9 Fury doesn’t have quite enough power to game at 4K without compromises. And if we shift back to 1440p, a more comfortable resolution for this card, AMD’s lead is down to just 5%. At that point the R9 Fury isn’t quite justifying its price premium.

Meanwhile compared to the R9 Fury X, we close out roughly where we started. The R9 Fury trails the more powerful R9 Fury X by 5-7% depending on the resolution, a difference that has more to do with GPU clockspeeds than the cut-down CU count. Overall the gap between the two cards has been remarkably consistent and surprisingly narrow.
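
To put that price/performance point in rough numbers, here is a minimal sketch. The card prices below are assumptions used purely for illustration (roughly $549 for the R9 Fury and $499 for the GTX 980 at the time of writing); only the 15% and 5% performance leads come from the results above.

```python
# Minimal price/performance sketch. The prices are assumed (approximate
# mid-2015 street prices) and are illustrative only; the performance
# leads come from the GTA V results above.
fury_price = 549.0    # assumed R9 Fury price (USD)
gtx980_price = 499.0  # assumed GTX 980 price (USD)

price_premium = fury_price / gtx980_price - 1.0  # ~10% more expensive

for resolution, perf_lead in [("3840x2160", 0.15), ("2560x1440", 0.05)]:
    # Relative performance-per-dollar of the Fury vs. the GTX 980:
    # values above 1.0 mean the lead more than covers the higher price.
    relative_value = (1.0 + perf_lead) / (1.0 + price_premium)
    print(f"{resolution}: lead {perf_lead:.0%}, premium {price_premium:.0%}, "
          f"relative perf-per-dollar {relative_value:.2f}")
```

Under those assumed prices, a 15% lead at 4K outruns the ~10% premium, while a 5% lead at 1440p does not, which is the sense in which the card isn't quite justifying its premium at the lower resolution.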

Grand Theft Auto V - 99th Percentile Framerate - 3840x2160 - Very High Quality

Grand Theft Auto V - 99th Percentile Framerate - 3840x2160 - High Quality

Grand Theft Auto V - 99th Percentile Framerate - 2560x1440 - Very High Quality

99th percentile framerates, however, are simply not in AMD’s favor here. Despite AMD’s driver optimizations and the fact that the GTX 980 only has 4GB of VRAM, the R9 Fury X could not pull ahead of the GTX 980, so the R9 Fury understandably fares worse. Even at 1440p the R9 Fury cards can’t quite muster 30fps, though in all fairness the GTX 980 falls just short of this mark as well.
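
For reference, the 99th percentile framerate numbers above are derived from per-frame render times rather than the average FPS: take the frame time that 99% of frames come in under and convert it to a framerate. Below is a minimal sketch of the idea, using a made-up frame-time log; it illustrates the metric rather than reproducing our exact benchmark tooling.

```python
import numpy as np

def percentile_99_framerate(frame_times_ms):
    """99th percentile framerate: the framerate corresponding to the
    frame time that 99% of rendered frames come in under."""
    slow_boundary_ms = np.percentile(frame_times_ms, 99)  # 99% of frames are faster than this
    return 1000.0 / slow_boundary_ms

# Hypothetical per-frame render times (ms) from a benchmark run.
frame_times = [28.0, 30.5, 27.2, 41.0, 29.8, 33.3, 55.1, 31.0]
print(f"99th percentile framerate: {percentile_99_framerate(frame_times):.1f} fps")
```

Because it keys on the slowest frames, this metric punishes stuttering and VRAM-induced hitches that a plain average would hide.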

Comments

  • CiccioB - Monday, July 13, 2015 - link

    The myth, here again!
    Let's see the numbers for this miraculous vs. crippling driver.
    And I mean I WANT NUMBERS!
    Otherwise what you are talking about is just junk you are repeating because you can't back it up yourself.
    Come on, the numbers!!!
  • FlushedBubblyJock - Thursday, July 16, 2015 - link

    So you lied, loguerto, but the sad truth is AMD bails on its cards and drivers for them FAR FAR FAR sooner than Nvidia does.
    YEARS SOONER.

    Get with it bub.
  • Count Vladimir - Thursday, July 16, 2015 - link

    Hard evidence or gtfo.
  • Roboyt0 - Sunday, July 12, 2015 - link

    I am very interested to see how much of a difference ASUS' power delivery system will make for (real) overclocking in general once voltage control is available. If these cards act the same as the 290s did, then AMD's default VRM setup could very likely be capable of overclocks in the 25%-or-more range. I'm basing that figure off of my experience with a half dozen reference-based R9 290s, with a default 947MHz core, that would reach a 1200MHz core clock with ~100mV of additional voltage. And if you received a capable card then you could surpass those clocks with more voltage.

    It appears AMD has followed the EXACT same path they did with the 290 and 290X. The 290X always held a slight lead in performance, but the number of GPU components disabled didn't hinder the 290 as much as many expected. This is exactly what we see now with the Fury vs. the Fury X: overclock the Fury and it's the better buy, while the Fury X is there for those who want that little bit of extra performance for a premium, and this time you're getting water cooling. It seems like a pretty good deal to me.

    Once 3rd party programmers (not AMD) figure out voltage control for these cards, history will likely repeat itself for AMD. Yes, these will run hotter and use more power than their Nvidia counterparts... I don't see why this is a shock to anyone since this is still 28nm and similar enough to Hawaii. What no one seems to mention is the amount of performance increase compared to Hawaii in the same power/thermal envelope... it's a very significant jump.

    Who in the enthusiast PC world really cares about the additional power draw? We're looking at 60-90W under normal load conditions; FurMark is NOT a normal load. Unless electricity where you hail from is very expensive, it isn't actually costing you that much more in the long run. If you're in the market for a ~$550 GPU, then the cost of a good PSU probably isn't much of a concern either. What the FurMark power draw of the Fury X/Sapphire Fury really tells us is that the reference PCB is capable of handling 385W+ of draw. This should give an idea of what the card can do once we are able to control the voltage.

    These cards are enthusiast grade and plenty of those users will remove the included cooler for maximum performance. A full cover waterblock is going to be the key to releasing the full potential of Fury(X) just like it was for 290(X). It is a definite plus to see board partners with solid air cooling solutions out of the gate though...Sapphire's cooling solution fares better in temperature AND noise during FurMark than ASUS' when it's pulling 130W additional power! Way to go Sapphire!

    My rant will continue concerning drivers. Nvidia has mature hardware with mature drivers. The fact AMD is keeping up, or winning in some instances, is a solid achievement. Go back to a 290(X) review from when their primary competition was the 780 Ti, where the 780 Ti was usually winning. Now the 390(X), which so many are calling a rebranded POS, easily bests the 780 Ti and competes with the GTX 980. Nvidia changed architecture, but AMD is still competitive? Another commenter said it best: "An AMD GPU is like a fine wine, and gets better with age."

    This tells me 3 things...

    1) Once drivers mature, AMD stands to gain solid performance improvements.
    2) Adding voltage control to enable actual overclocking will show the true potential of these cards.
    3) Add these two factors together and AMD has another winning product.

    Lastly, we still have DX12 to factor into all of this. Sure, you can say DX12 is too far away, but in actuality it is not. I know there are those people who MUST HAVE the latest and greatest hardware every ~9 months when something new comes around. However, there are plenty more of us who wait a few generations of GPUs to upgrade. If DX12 brings even half of the anticipated performance gains and you're in the market, then purchasing this card now, or in the coming months, will be a solid investment for the coming years.
  • Peichen - Monday, July 13, 2015 - link

    Whatever floats your boat. There are still some people like you who believe FX CPUs are faster than i7s, and they are what keeps AMD afloat. The rest of us... we actually consider everything and go Intel & Nvidia.

    There are 3 fails in your assumptions:
    1. Fiji is a much bigger core tied to 4 HBM modules. OC will likely not be as "smooth" as on the 290X.
    2. 60-90W is not just a cost in electricity. It also means getting a PSU that will supply the additional draw, plus more fan(s) and a better case to get the heat out. Or suffer the heat and noise. The $15-45 a year in additional electricity also means you will be in the red in a couple of years (see the rough cost sketch below).
    3. You assume the AMD/ATI driver team is still around and will be around a couple of years in the future.
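
As an aside, here is a rough sanity check on the electricity numbers being tossed around in this thread. The 60-90W delta comes from the power-draw discussion above; the gaming hours per day and electricity rates below are assumptions picked purely for illustration.

```python
# Back-of-envelope check of the "$15-45 a year" electricity estimate.
# The 60-90W delta comes from the thread above; hours of gaming per day
# and electricity rates are assumed values for illustration.
def yearly_cost(extra_watts, hours_per_day, usd_per_kwh):
    kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

for watts in (60, 90):
    for hours, rate in [(3, 0.12), (6, 0.20)]:  # light vs. heavy use, cheap vs. pricey power
        print(f"{watts}W extra at {hours}h/day and ${rate}/kWh: "
              f"~${yearly_cost(watts, hours, rate):.0f}/year")
```

Under those assumptions the delta works out to roughly $8-40 a year depending on usage and rates, which is in the same ballpark as the figure quoted above.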
  • silverblue - Tuesday, July 14, 2015 - link

    3. Unless the driver work has been completely outsourced and there's proof of this happening, I'm not sure you can use this as a "fail".

    Fiji isn't a brand new version of GCN, so I don't expect the huge gains in performance that are being touted; however, whatever they do bring to the table should benefit Tonga as well, which will (hopefully) distance itself from Tahiti and perhaps improve sales further down the stack.
  • Count Vladimir - Thursday, July 16, 2015 - link

    Honestly, driver outsourcing might be for the best in the case of AMD.
  • Oxford Guy - Wednesday, July 15, 2015 - link

    The most electrically efficient 3D computer gaming is via an ARM chip, right? Think of all the wasted watts for these big fancy GPUs. Even more efficient are text-based games.
  • FlushedBubblyJock - Thursday, July 16, 2015 - link

    You forgot he said to spend a hundred and a half on a waterblock... for the AMD card, for "full potential"...

    ROFL - once again the future that never comes is very bright and very expensive.
  • beck2050 - Monday, July 13, 2015 - link

    A bit disingenuous, as custom-cooled, overclocked 980s are the norm these days and easily match or exceed the Fury, while running cooler with much less power, and can be found cheaper. AMD HAS its work cut out.
