Grand Theft Auto V

The final game in our review of the R9 Fury X is our most recent addition, Grand Theft Auto V. The latest entry in Rockstar’s venerable series of open world action games, Grand Theft Auto V was originally released on the last-gen consoles back in 2013. However, thanks to a rather significant facelift for the current-gen consoles and PCs, along with the ability to greatly turn up rendering distances and add features like MSAA and more realistic shadows, the end result is a game that is still among the most stressful of our benchmarks when all of its features are turned up. Furthermore, in a move rather uncharacteristic of the genre, Grand Theft Auto V also includes a very comprehensive benchmark mode, giving us a great chance to look into the performance of an open world action game.

As Grand Theft Auto V doesn't have pre-defined settings tiers, a quick note on the settings we're using. For "Very High" quality we have all of the primary graphics settings turned up to their highest setting, with the exception of grass, which is at its own Very High setting. Meanwhile 4x MSAA is enabled for direct views and reflections. This tier also involves turning on some of the advanced rendering features - the game's long shadows, high resolution shadows, and high definition flight streaming - but not increasing the view distance any further.

Otherwise for "High" quality we take the same basic settings but turn off all MSAA, which significantly reduces the GPU rendering and VRAM requirements.
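For reference, the sketch below summarizes the two tiers as a simple Python structure; the key names are descriptive stand-ins for the options described above, not the game's actual configuration entries.

```python
# Illustrative summary of the settings tiers used for GTA V in this review.
# The keys below are descriptive stand-ins, not the game's actual config entries.
VERY_HIGH = {
    "primary_settings": "maximum",              # all primary graphics options maxed out
    "grass_quality": "very high",               # grass left at its own Very High setting
    "msaa": "4x",                               # 4x MSAA for direct views...
    "reflection_msaa": "4x",                    # ...and for reflections
    "long_shadows": True,                       # advanced rendering options enabled
    "high_resolution_shadows": True,
    "high_definition_flight_streaming": True,
    "extended_view_distance": False,            # view distance not increased any further
}

# "High" reuses the same settings but disables all MSAA, which significantly
# reduces both the GPU rendering load and the VRAM requirements.
HIGH = {**VERY_HIGH, "msaa": "off", "reflection_msaa": "off"}
```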

Grand Theft Auto V - 3840x2160 - Very High Quality

Grand Theft Auto V - 3840x2160 - High Quality

Grand Theft Auto V - 2560x1440 - Very High Quality

Our final game sees the R9 Fury X go out on either an average or a slightly worse than average note, depending on the settings and resolution we are looking at. At our highest 4K settings the R9 Fury X trails the GTX 980 Ti once again, this time by 10%; worse, at 1440p the gap grows to 15%. On the other hand, if we run at our lower, more playable 4K settings, then the gap is only 5%, roughly in line with the overall average 4K performance gap between the GTX 980 Ti and R9 Fury X.

In this case it’s probably to AMD’s benefit that our highest 4K settings aren’t actually playable on a single GPU card, as the necessary drop in quality gets them closer to NVIDIA’s performance. On the other hand this does reiterate the fact that right now many games will force a tradeoff between resolution and quality if you wish to pursue 4K gaming.

Finally, the performance gains relative to the R9 290X are pretty good: 29% at 1440p, and 44% at our lower quality, playable 4K settings.
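For anyone wondering how these relative figures are derived, the short sketch below runs the same arithmetic on a pair of hypothetical average framerates; the FPS values are placeholders purely for illustration, not our measured results.

```python
def relative_performance(card_fps, reference_fps):
    """How far card_fps trails (negative) or leads (positive) reference_fps, in percent."""
    return (card_fps / reference_fps - 1.0) * 100.0

# Hypothetical averages purely to illustrate the arithmetic; see the charts for real data.
fury_x_fps, gtx_980_ti_fps, r9_290x_fps = 36.0, 40.0, 28.0

print(f"R9 Fury X vs. GTX 980 Ti: {relative_performance(fury_x_fps, gtx_980_ti_fps):+.0f}%")
print(f"R9 Fury X vs. R9 290X:    {relative_performance(fury_x_fps, r9_290x_fps):+.0f}%")
```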

Grand Theft Auto V - 99th Percentile Framerate - 3840x2160 - Very High Quality

Grand Theft Auto V - 99th Percentile Framerate - 3840x2160 - High Quality

Grand Theft Auto V - 99th Percentile Framerate - 2560x1440 - Very High Quality

Shifting gears to 99th percentile framerates however – a much welcome feature of the game’s built-in benchmark – we find that AMD doesn’t fare nearly as well. At the 99th percentile the R9 Fury X trails the GTX 980 Ti at all times, and significantly so. The deficit is anywhere from 26% at 1440p to 40% at 4K Very High.

What’s happening here is a combination of multiple factors. First and foremost, next to Shadow of Mordor, GTAV is our other VRAM-busting game. This, I believe, is why 99th percentile performance dives so hard at 4K Very High for the R9 Fury X, as it only has 4GB of VRAM compared to 6GB on the GTX 980 Ti. But considering where the GTX 980 places – above the R9 Fury X – I also believe there’s more than just VRAM bottlenecking occurring here. The GTX 980 sees at least marginally better framerates with the same size VRAM pool (and a lot less of almost everything else), which leads me to believe that AMD’s drivers may be holding them back here. Certainly the R9 290X comparison lends some credence to that, as the 99th percentile gains are under 20%. Regardless, one wouldn’t expect to be VRAM limited at 1440p or 4K without MSAA, especially as this test was not originally designed to bust 4GB cards.
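As a quick refresher on the metric itself, the 99th percentile framerate is derived from individual frame times rather than a straight average: it is the framerate corresponding to the slowest 1% of frames, which is why a card can post a respectable average yet still stumble here. The sketch below, using made-up frame times, shows the basic calculation.

```python
def percentile_framerate(frame_times_ms, pct=99.0):
    """Framerate (FPS) corresponding to the pct-th percentile (slowest) frame time."""
    ordered = sorted(frame_times_ms)                          # fastest to slowest
    index = min(int(len(ordered) * pct / 100.0), len(ordered) - 1)
    return 1000.0 / ordered[index]                            # ms per frame -> FPS

# Made-up frame times: 95 smooth frames plus a handful of slow ones.
frame_times = [16.7] * 95 + [25.0, 30.0, 35.0, 45.0, 60.0]

average_fps = 1000.0 * len(frame_times) / sum(frame_times)
print(f"Average FPS:         {average_fps:.1f}")                        # ~56 FPS, looks fine
print(f"99th percentile FPS: {percentile_framerate(frame_times):.1f}")  # ~17 FPS, visible stutter
```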

Comments

  • bennyg - Saturday, July 4, 2015 - link

    Marketing performance. Exactly.

    Except efficiency was not good enough across the generations of 28nm GCN in an era where efficiency plus thermal/power limits constrain performance, and look what Nvidia did over a similar era from Fermi (which was at market when GCN 1.0 was released) to Kepler to Maxwell. Plus efficiency is kind of the ultimate marketing buzzword in all areas of tech, and not having any ability to mention it (plus having generally inferior products) hamstrung their marketing all along.
  • xenol - Monday, July 6, 2015 - link

    Efficiency is important because of three things:

    1. If your TDP is through the roof, you'll have issues with your cooling setup. Any time you introduce a bigger cooling setup because your cards run that hot, you're going to be mocked for it and people are going to be wary of it. With 22nm or 20nm nowhere in sight for GPUs, efficiency had to be a priority, otherwise you're going to ship cards that take up three slots or ship with water coolers.

    2. You also can't just play to the desktop market. Laptops are still the preferred computing platform, and even if people are going for a desktop, AIOs are looking much more appealing than a monitor/tower combo. So if you want to have any shot in either market, you have to build an efficient chip. And you have to convince people they "need" this chip, because Intel's iGPUs do what most people want just fine anyway.

    3. Businesses and such with "always on" computers would like it if their computers ate less power. Even if you only save a handful of watts per machine, multiply that by thousands and it adds up to an appreciable amount of savings.
  • xenol - Monday, July 6, 2015 - link

    (Also by "computing platform" I mean the platform people choose when they want a computer)
  • medi03 - Sunday, July 5, 2015 - link

    ATI is the reason both Microsoft and Sony use AMD's APUs to power their consoles.
    It might be the reason why APUs even exist.
  • tipoo - Thursday, July 2, 2015 - link

    That was then, this is now. Now AMD, together with the acquisition, has a lower market cap than Nvidia.
  • Murloc - Thursday, July 2, 2015 - link

    yeah, no.
  • ddriver - Thursday, July 2, 2015 - link

    ATI wasn't bigger, AMD just paid a preposterous and entirely unrealistic amount of money for it. Soon after the merger, AMD + ATI was worth less than what they paid for the latter, ultimately leading to the loss of its foundries and putting it in an even worse position. Let's face it, AMD was, and historically has always been, betrayed; its sole purpose is to create the illusion of competition so that the big boys don't look bad for running unopposed, even if that is what happens in practice.

    Just when AMD got lucky with Athlon, a mole was sent to make sure AMD stays down.
  • testbug00 - Sunday, July 5, 2015 - link

    The foundries didn't go because AMD bought ATI. That might have accelerated it by a few years, however.

    The foundry issue and its cost to AMD date back to the 1990s and 2000-2001.
  • 5150Joker - Thursday, July 2, 2015 - link

    True, AMD was in a much better position in 2006 vs. NVIDIA; they just got owned.
  • 3DVagabond - Friday, July 3, 2015 - link

    When was Intel the underdog? Because that's who's knocked them down. (They aren't out yet.)
