Battlefield 4

Kicking off our benchmark suite is Battlefield 4, DICE's 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has since become a challenging game in its own right and a showcase title for low-level graphics APIs. As these benchmarks are from single-player mode, our rule of thumb, based on our experience, is that multiplayer framerates will dip to roughly half of our single-player framerates, which means a card needs to average at least 60fps here if it's to hold up in multiplayer.
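To make that rule of thumb concrete, here's a minimal sketch in Python; the 0.5 ratio and the framerates fed into it are illustrative assumptions drawn from the description above, not measured data.

```python
# A minimal sketch of the rule of thumb above: multiplayer framerates
# tend to dip to roughly half of our single-player results, so a card
# should average ~60fps in single player to stay playable online.
# All numbers here are illustrative, not measured results.

SP_TO_MP_FACTOR = 0.5   # assumed single-player -> multiplayer ratio
MP_FLOOR_FPS = 30.0     # implied multiplayer floor (half of 60fps)

def estimated_multiplayer_fps(single_player_avg: float) -> float:
    """Estimate the multiplayer framerate from a single-player average."""
    return single_player_avg * SP_TO_MP_FACTOR

def holds_up_in_multiplayer(single_player_avg: float) -> bool:
    """True if the estimated multiplayer framerate clears the floor."""
    return estimated_multiplayer_fps(single_player_avg) >= MP_FLOOR_FPS

print(holds_up_in_multiplayer(62.0))  # True  (hypothetical 1440p average)
print(holds_up_in_multiplayer(48.0))  # False (hypothetical 4K average)
```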

[Benchmark graphs: Battlefield 4 - 3840x2160 Ultra Quality (0x MSAA); 3840x2160 Medium Quality; 2560x1440 Ultra Quality]

When the R9 Fury X launched, one of the games it struggled with was Battlefield 4, where the GTX 980 Ti took a clear lead. For the launch of the R9 Fury, however, things are much more in AMD's favor. The two R9 Fury cards hold a lead just shy of 10% over the GTX 980, roughly in line with the difference in their price tags. Because of that price difference, AMD needs to win in more or less every game by about 10% to justify the R9 Fury's higher price, and we're starting things off exactly where AMD needs to be for price/performance parity.
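As a rough back-of-the-envelope check on that price/performance argument, the following sketch assumes launch-window prices of roughly $500 for the GTX 980 and $550 for the R9 Fury (assumptions for illustration, not figures from this review) alongside the ~10% performance lead described above.

```python
# Assumed launch-window prices (illustrative assumptions only).
GTX_980_PRICE = 500.0
R9_FURY_PRICE = 550.0

# Relative performance, normalized to the GTX 980; the 1.10 figure
# reflects the "just shy of 10%" lead described above.
GTX_980_PERF = 1.00
R9_FURY_PERF = 1.10

def perf_per_dollar(perf: float, price: float) -> float:
    return perf / price

print(f"GTX 980: {perf_per_dollar(GTX_980_PERF, GTX_980_PRICE):.5f} perf/$")
print(f"R9 Fury: {perf_per_dollar(R9_FURY_PERF, R9_FURY_PRICE):.5f} perf/$")
# A ~10% price premium paired with a ~10% performance lead works out to
# essentially identical performance per dollar.
```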

Looking at the absolute numbers, AMD is going to promote the R9 Fury as a 4K card, but even with Battlefield 4 I feel this is a good example of why it's better suited to high-quality 1440p gaming. The only way the R9 Fury can maintain an average framerate over 50fps (and thereby reasonable minimums) at 4K is to drop to a lower quality setting. Otherwise, at just over 60fps, it's in great shape as a 1440p card.

As for the R9 Fury X comparison, it's interesting how close the R9 Fury gets. The cut-down card is never more than 7% behind the R9 Fury X. Make no mistake, the R9 Fury X is meaningfully faster, but scenarios such as this one call into question whether it's worth the extra $100.
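The same back-of-the-envelope math applies to the Fury X question; the sketch below assumes launch prices of roughly $550 for the R9 Fury and $650 for the R9 Fury X (again, illustrative assumptions rather than chart data).

```python
# Assumed launch prices (illustrative assumptions only).
R9_FURY_PRICE = 550.0
R9_FURY_X_PRICE = 650.0

# Relative performance normalized to the R9 Fury X; the R9 Fury is
# "never more than 7% behind" in this test, so 0.93 is the worst case.
R9_FURY_X_PERF = 1.00
R9_FURY_PERF = 0.93

price_premium = R9_FURY_X_PRICE / R9_FURY_PRICE - 1.0  # ~18% more money
perf_premium = R9_FURY_X_PERF / R9_FURY_PERF - 1.0     # ~7.5% more speed

print(f"Fury X: {price_premium:.0%} higher price for {perf_premium:.0%} more performance")
```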

288 Comments

  • nightbringer57 - Friday, July 10, 2015 - link

    Intel kept it in stock for a while but it didn't sell. So management decided to get rid of it and gave it away to a few colleagues (Dell, HP, and many other OEMs used BTX for quite a while, both because it was a good user lock-down solution and because the drawbacks of BTX didn't matter in OEM computers, while the advantages were still there), and no one ever heard of it on the retail market again.
  • nightbringer57 - Friday, July 10, 2015 - link

    Damn those not-editable comments...
    I forgot to add: with the switch from the NetBurst/Prescott architecture to Conroe (and its followers), CPU cooling became much less of a hassle for mainstream models, so Intel did not have anything left to gain from the effort put into BTX.
  • xenol - Friday, July 10, 2015 - link

    It survived in OEMs. I remember cracking open Dell computers in the latter half of the 2000s and finding out they were BTX.
  • yuhong - Friday, July 10, 2015 - link

    I wonder if a BTX2 standard that fixes the problems of the original BTX would be a good idea.
  • onewingedangel - Friday, July 10, 2015 - link

    With the introduction of HBM, perhaps it's time to move to socketed GPUs.

    It seems ridiculous for the industry-standard spec to devote so much space to the comparatively low-power CPU whilst the high-power GPU has to fit within the confines of (multiple) PCIe expansion slots.

    Is it not time to move beyond the confines of ATX?
  • DanNeely - Friday, July 10, 2015 - link

    Even with the smaller PCB footprint allowed by HBM, filling up the area currently taken by expansion cards would only give you room for a single GPU plus support components on an mATX-sized board (most of the space between the PCIe slots and the edge of the mobo is used for other stuff that would need to be kept, not replaced with GPU bits), and the tower cooler on top of it would be a major obstruction for any non-GPU PCIe cards you might want to put into the system.
  • soccerballtux - Friday, July 10, 2015 - link

    Man, the convenience of the socketed GPU is great, but just think of how much power we could have if it had its own dedicated card!
  • meacupla - Friday, July 10, 2015 - link

    The clever design trend, or at least what I think is clever, is connecting the GPU and CPU heatsinks together so that, instead of many smaller heatsinks each trying to cool one chip, you have one giant heatsink doing all the work, which can result in less space (as opposed to volume) being occupied by the heatsink.

    You can see this sort of design in high-end gaming laptops, the Mac Pro, and custom water-cooling builds. The only catch is that they're all expensive. Laptops and the Mac Pro are pretty much completely proprietary, while custom water cooling requires time and effort.

    If all ATX mobos and GPUs had their core and heatsink mounting holes in the exact same spot, it would be much easier to design a 'universal multi-core heatsink' that you could just attach to everything that needs it.
  • Peichen - Saturday, July 11, 2015 - link

    That's quite a good idea. With heat pipes, distance doesn't really matter, so if there were a CPU heatsink that could extend 4x 8mm/10mm heatpipes over the video card to cool the GPU, it would be far quieter than the 3x 90mm fan coolers on video cards now.
  • FlushedBubblyJock - Wednesday, July 15, 2015 - link

    330 watts transferred to the low-lying motherboard, with PINS attached to AMD's core failure next...
    Slap that monster heat onto the motherboard, then you can have a giant green plastic enclosure like Dell towers to try to move that heat outside the case... oh, plus a whole 'nother giant VRM setup on the motherboard... yeah, they sure will be doing that soon... just lay down that extra 50 bucks on every motherboard for some 6X VRMs in case an AMD fanboy decides he wants to buy the megawatt rebranded AMD chip...

    Yep, NOT HAPPENING!
