Battlefield 4

Kicking off our benchmark suite is Battlefield 4, DICE's 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has since become a challenging game in its own right and a showcase title for low-level graphics APIs. Since these benchmarks are taken from single-player mode, our rule of thumb, based on experience, is that multiplayer framerates will dip to roughly half of our single-player framerates. That means a card needs to average at least 60fps here if it's to hold up in multiplayer.
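To make that rule of thumb explicit, here is a minimal sketch; the 30fps multiplayer floor is an assumption implied by, but not stated in, the text above:

```python
# Rule of thumb: Battlefield 4 multiplayer framerates dip to roughly
# half of the single-player framerates measured in these benchmarks.
MP_TO_SP_RATIO = 0.5

def required_sp_average(mp_floor_fps: float = 30.0) -> float:
    """Single-player average needed to keep multiplayer at or above the floor."""
    return mp_floor_fps / MP_TO_SP_RATIO

print(required_sp_average(30.0))  # 60.0 -> the 60fps bar cited above
```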

Battlefield 4 - 3840x2160 - Ultra Quality - 0x MSAA

Battlefield 4 - 3840x2160 - Medium Quality

Battlefield 4 - 2560x1440 - Ultra Quality

When the R9 Fury X launched, one of the games it struggled with was Battlefield 4, where the GTX 980 Ti took a clear lead. For the launch of the R9 Fury, however, things are much more in AMD's favor. The two R9 Fury cards hold a lead just shy of 10% over the GTX 980, roughly in line with the difference in their price tags. Given that gap, AMD needs to win more or less every game by 10% to justify the R9 Fury's higher price, and we're starting things off exactly where AMD needs to be for price/performance parity.
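To put numbers to the parity argument, here is a quick sketch; the $549 and $499 figures are the launch MSRPs of the R9 Fury and GTX 980 respectively, assumptions on my part rather than values taken from the charts:

```python
# Price/performance parity: the Fury's framerate lead needs to match
# its price premium over the GTX 980 for the two cards to break even.
fury_price, gtx980_price = 549.0, 499.0

premium = fury_price / gtx980_price - 1
print(f"R9 Fury price premium: {premium:.1%}")  # ~10.0%
# A lead just shy of 10% in Battlefield 4 therefore puts the Fury
# right at price/performance parity, as noted above.
```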

Looking at the absolute numbers, AMD is going to promote the R9 Fury as a 4K card, but Battlefield 4 is a good example of why I feel it's better suited for high-quality 1440p gaming. The only way the R9 Fury can maintain an average framerate over 50fps (and thereby reasonable minimums) at 4K is to drop to a lower quality setting. At just over 60fps at 1440p, on the other hand, it's in great shape.

As for the R9 Fury X comparison, it's interesting how close the R9 Fury gets. The cut-down card is never more than 7% behind the R9 Fury X. Make no mistake, the R9 Fury X is meaningfully faster, but scenarios such as these call into question whether it's worth the extra $100.

Comments

  • CiccioB - Monday, July 13, 2015

    The myth, here again!
    Let's see the numbers for this miraculous-versus-crippling driver.
    And I mean I WANT NUMBERS!
    Or is what you're talking about just junk you're repeating because you can't work it out yourself?
    Come on, the numbers!
  • FlushedBubblyJock - Thursday, July 16, 2015

    So you lied, loguerto, but the sad truth is AMD bails on its cards and their drivers FAR sooner than Nvidia does.
    YEARS SOONER.

    Get with it, bub.
  • Count Vladimir - Thursday, July 16, 2015

    Hard evidence or gtfo.
  • Roboyt0 - Sunday, July 12, 2015

    I am very interested to see how much of a difference ASUS' power delivery system makes for (real) overclocking once voltage control is available. If these cards behave the same way the 290s did, then AMD's default VRM setup is very likely capable of overclocks in the range of 25% or more. I'm basing that figure on my experience with a half dozen reference-based R9 290s, with a default 947MHz core, that would reach a 1200MHz core clock with ~100mV additional. And if you received a capable card, you could surpass those clocks with more voltage.
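    (A quick sanity check on that headroom figure, using the clocks quoted above:)

    ```python
    stock_mhz, achieved_mhz = 947, 1200   # reference R9 290 clocks from above
    headroom = achieved_mhz / stock_mhz - 1
    print(f"Overclock headroom: {headroom:.1%}")  # ~26.7% -> "25% or more"
    ```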

    It appears AMD has followed the EXACT same path it did with the 290 and 290X. The 290X always held a slight lead in performance, but the number of GPU components disabled didn't hinder the 290 as much as anyone expected. That is exactly what we see now with the Fury vs. the Fury X: overclock the Fury and it's the better buy, while the Fury X is there for those who want that little bit of extra performance at a premium, and this time you're getting water cooling. It seems like a pretty good deal to me.

    Once third-party programmers (not AMD) figure out voltage control for these cards, history will likely repeat itself for AMD. Yes, these will run hotter and use more power than their Nvidia counterparts; I don't see why this is a shock to anyone, since this is still 28nm and similar enough to Hawaii. What no one seems to mention is the amount of performance gained over Hawaii within the same power/thermal envelope; it's a very significant jump.

    Who in the enthusiast PC world really cares about the additional power draw? We're looking at 60-90W under normal load conditions; FurMark is NOT a normal load. Unless electricity where you live is unusually expensive, it isn't actually costing you much more in the long run, and if you're in the market for a ~$550 GPU, the cost of a good PSU probably isn't a concern either. What the FurMark power draw of the Fury X/Sapphire Fury really tells us is that the reference PCB is capable of handling 385W+ of draw. That should give an idea of what the card can do once we're able to control the voltage.
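    (For scale, a rough cost estimate; the 3 hours/day gaming load and $0.12/kWh rate are my own assumptions:)

    ```python
    def annual_cost_usd(extra_watts, hours_per_day=3, usd_per_kwh=0.12):
        """Yearly cost of the extra power drawn while gaming."""
        return extra_watts / 1000 * hours_per_day * 365 * usd_per_kwh

    for watts in (60, 90):
        print(f"{watts}W extra: ${annual_cost_usd(watts):.2f}/yr")
    # ~$7.88-$11.83/yr -- heavier use or pricier power pushes it higher.
    ```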

    These cards are enthusiast grade, and plenty of those users will remove the included cooler for maximum performance. A full-cover waterblock is going to be the key to unlocking the full potential of the Fury(X), just like it was for the 290(X). It's a definite plus to see board partners with solid air cooling solutions out of the gate, though. Sapphire's cooling solution fares better in both temperature AND noise during FurMark than ASUS', even while pulling 130W of additional power! Way to go, Sapphire!

    My rant continues with drivers. Nvidia has mature hardware with mature drivers. The fact that AMD is keeping up, or winning in some instances, is a solid achievement. Go back to a 290(X) review from when its primary competition was the 780 Ti, and the 780 Ti was usually winning. Now the 390(X), which so many are calling a rebranded POS, easily bests the 780 Ti and competes with the GTX 980. Nvidia changed architectures, but AMD is still competitive. Another commenter said it best: "An AMD GPU is like a fine wine, and gets better with age."

    This tells me 3 things...

    1) Once drivers mature, AMD stands to gain solid performance improvements.
    2) Adding voltage control to enable actual overclocking will show the true potential of these cards.
    3) Add these two factors together and AMD has another winning product.

    Lastly, we still have DX12 to factor into all of this. Sure, you can say DX12 is too far away, but in actuality it is not. I know there are people who MUST HAVE the latest and greatest hardware every time something new comes around, roughly every ~9 months. However, there are plenty more of us who wait a few GPU generations to upgrade. If DX12 brings even half of the anticipated performance gains and you're in the market, then purchasing this card now, or in the coming months, will be a solid investment for the coming years.
  • Peichen - Monday, July 13, 2015

    Whatever floats your boat. There are still some people like you who believe FX CPUs are faster than i7s, and they are what keeps AMD afloat. The rest of us... we actually consider everything and go Intel & Nvidia.

    There are 3 fails in your assumptions:
    1. Fiji is a much bigger core tied to 4 HBM modules. OC will likely not be as "smooth" as the 290X.
    2. 60-90W is not just a cost in electricity. It also means a PSU that can supply the additional draw, plus more fan(s) and a better case to get the heat out. Or suffer the heat and noise. The $15-45 a year in additional electricity also means you will be in the red in a couple of years.
    3. You assume the AMD/ATI driver team is still around and will be around a couple of years in the future.
  • silverblue - Tuesday, July 14, 2015

    3. Unless the driver work has been completely outsourced and there's proof of this happening, I'm not sure you can use this as a "fail".

    Fiji isn't a brand new version of GCN, so I don't expect the huge performance gains that are being touted; however, whatever they do bring to the table should benefit Tonga as well, which will (hopefully) distance itself from Tahiti and perhaps improve sales further down the stack.
  • Count Vladimir - Thursday, July 16, 2015

    Honestly, driver outsourcing might be for the best in AMD's case.
  • Oxford Guy - Wednesday, July 15, 2015

    The most electrically efficient 3D gaming is done on an ARM chip, right? Think of all the wasted watts for these big fancy GPUs. Even more efficient are text-based games.
  • FlushedBubblyJock - Thursday, July 16, 2015

    You forgot he said to spend a hundred and a half on a waterblock... for the AMD card, for "full potential".

    ROFL - once again the future that never comes is very bright and very expensive.
  • beck2050 - Monday, July 13, 2015

    A bit disingenuous, as custom-cooled, overclocked 980s are the norm these days and easily match or exceed the Fury while running cooler with much less power, and they can be found cheaper. AMD has its work cut out for it.
