Battlefield 4

Kicking off our benchmark suite is Battlefield 4, DICE’s 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has since become a challenging game in its own right and a showcase title for low-level graphics APIs. As these benchmarks are taken from single-player mode, our rule of thumb, based on our experience, is that multiplayer framerates will dip to roughly half of our single-player framerates, which means a card needs to average at least 60fps in single player if it’s to hold up in multiplayer.
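
As a quick illustration of that rule of thumb (a minimal sketch in Python; the 50% multiplayer dip and the 60fps threshold come from the paragraph above, while the sample framerates are made up):

```python
# Estimate multiplayer performance from a single-player benchmark average,
# using the ~50% rule of thumb described above.
def estimated_multiplayer_fps(single_player_avg_fps: float) -> float:
    return single_player_avg_fps / 2

for sp_fps in (45, 60, 75):  # illustrative single-player averages
    mp_fps = estimated_multiplayer_fps(sp_fps)
    verdict = "should hold up" if sp_fps >= 60 else "likely too slow"
    print(f"{sp_fps} fps single player -> ~{mp_fps:.0f} fps multiplayer ({verdict})")
```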

Battlefield 4 - 3840x2160 - Ultra Quality - 0x MSAA

Battlefield 4 - 3840x2160 - Medium Quality

Battlefield 4 - 2560x1440 - Ultra Quality

When the R9 Fury X launched, one of the games it struggled with was Battlefield 4, where the GTX 980 Ti took a clear lead. However, for the launch of the R9 Fury, things are much more in AMD’s favor. The two R9 Fury cards hold a lead just shy of 10% over the GTX 980, roughly in line with the difference in their price tags. Given that price difference, AMD needs to win by roughly 10% in more or less every game to justify the R9 Fury’s higher price, and we’re starting things off exactly where AMD needs to be for price/performance parity.
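
As a rough sanity check on that price/performance math (a minimal sketch; the $499 and $549 prices are assumed launch-window prices rather than figures from this article, and performance is normalized to the GTX 980):

```python
# Compare relative performance against relative price (figures are illustrative).
cards = {
    "GTX 980": {"price": 499, "rel_perf": 1.00},
    "R9 Fury": {"price": 549, "rel_perf": 1.09},  # lead just shy of 10% in this game
}

premium = cards["R9 Fury"]["price"] / cards["GTX 980"]["price"] - 1
lead = cards["R9 Fury"]["rel_perf"] / cards["GTX 980"]["rel_perf"] - 1

print(f"Price premium: {premium:.1%}")   # ~10%
print(f"Performance lead: {lead:.1%}")   # ~9-10% -> roughly price/perf parity
```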

Looking at the absolute numbers, we’re going to see AMD promote the R9 Fury as a 4K card, but even with Battlefield 4 I feel this is a good example of why it’s better suited for high quality 1440p gaming. The only way the R9 Fury can maintain an average framerate over 50fps (and thereby reasonable minimums) at 4K is to drop to a lower quality setting. Otherwise, at just over 60fps at 1440p, it’s in great shape as a 1440p card.

As for the R9 Fury X comparison, it’s interesting how close the R9 Fury gets. The cut-down card is never more than 7% behind the R9 Fury X. Make no mistake, the R9 Fury X is meaningfully faster, but scenarios such as this call into question whether it’s worth the extra $100.
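
Putting that premium in rough numbers (a sketch; the $549/$649 prices are assumptions for illustration, and the 7% gap is the worst case cited above):

```python
# What the R9 Fury X premium buys in this game (illustrative figures).
fury_price, fury_x_price = 549, 649   # assumed launch prices
max_perf_gap = 0.07                   # Fury trails Fury X by at most ~7% here

extra_cost = fury_x_price - fury_price
cost_per_point = extra_cost / (max_perf_gap * 100)
print(f"${extra_cost} extra buys at most {max_perf_gap:.0%} more performance "
      f"(~${cost_per_point:.0f} per percentage point)")
```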

Comments

  • FlushedBubblyJock - Wednesday, July 15, 2015 - link

    Oh, gee, forgot, it's not amd's fault ... it was "developers and access" which is not amd's fault, either... of course...

    OMFG
  • redraider89 - Monday, July 20, 2015 - link

    What's your excuse for being such an idiotic, despicable and ugly intel/nvidia fanboy? I don't know, maybe your parents? Somewhere you went wrong.
  • OldSchoolKiller1977 - Sunday, July 26, 2015 - link

    I am sorry, and NVIDIA fan boys resort to name calling... what was it that you said, and I quote: "Hypocrite"? :)
  • redraider89 - Monday, July 20, 2015 - link

    Your problem is deeper than just that you like intel/nvidia since you apparently hate people who don't like those, and ONLY because they like something different than you do.
  • ant6n - Saturday, July 11, 2015 - link

    A third way to look at it is that maybe AMD did it right.

    Let's say the chip is built from 80% stream processors (by area), the most redundant elements. If some of those functional elements fail during manufacture, they can disable them and sell it as the cheaper card. If something in the other 20% of the chip fails, the whole chip may be garbage. So basically you want a card such that if all the stream processors are functional, the other 20% become the bottleneck, whereas if some of the stream processors fail and they have to sell it as a simple Fury, then the stream processors become the bottleneck.
  • thomascheng - Saturday, July 11, 2015 - link

    That is probably AMD's smart play. Fury was always the intended card. Perfect cards will be the X, and perhaps the less perfect cards will be the Nano.
  • FlushedBubblyJock - Thursday, July 16, 2015 - link

    "fury was always the intended card"
    ROFL
    amd fanboy out much ?
    I mean it is unbelievable, what you said, and that you said it.
  • theduckofdeath - Friday, July 24, 2015 - link

    Just shut up, Bubby.
  • akamateau - Tuesday, July 14, 2015 - link

    Anand was running DX12 benchmarks last spring. When they compared the Radeon 290X to the GTX 980 Ti, NVIDIA ordered them to stop. That is why no more DX12 benchmarks have been run.

    Intel and nVidia are at a huge disadvantage with DX12 and Mantle.

    The reason:

    AMD IP: Asynchronous Shader Pipelines and Asynchronous Compute Engines.
  • FlushedBubblyJock - Wednesday, July 15, 2015 - link

    We saw mantle benchmarks so your fantasy is a bad amd fanboy delusion.
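
To put ant6n's die-harvesting argument above into rough numbers (a minimal sketch in Python; the 80% stream-processor area share comes from that comment, while the defect rate and the uniform-defect assumption are purely illustrative, not anything from AMD):

```python
import numpy as np

# Toy yield model for the binning argument: defects that land in the redundant
# stream-processor area can be fused off (cut-down R9 Fury), while a defect in
# the remaining 20% of the die scraps the chip. All numbers are illustrative.
rng = np.random.default_rng(0)
SP_AREA = 0.80        # fraction of die area that is redundant stream processors
DEFECT_RATE = 0.7     # assumed mean random defects per die
N = 1_000_000         # simulated dies

defects = rng.poisson(DEFECT_RATE, N)           # defect count per die
critical = rng.binomial(defects, 1 - SP_AREA)   # defects in the non-redundant 20%

fury_x = np.mean(defects == 0)                      # fully working: sell as R9 Fury X
fury   = np.mean((defects > 0) & (critical == 0))   # SP-only defects: salvage as R9 Fury
scrap  = np.mean(critical > 0)                      # critical-logic defect: discard

print(f"Fury X candidates: {fury_x:.1%}")
print(f"R9 Fury candidates: {fury:.1%}")
print(f"Scrap: {scrap:.1%}")
```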
