Battlefield 4

Kicking off our 2015 benchmark suite is Battlefield 4, DICE’s 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has since become a challenging game in its own right and a showcase title for low-level graphics APIs. Because these benchmarks are from single player mode, our rule of thumb, based on our experiences, is that multiplayer framerates will dip to roughly half of our single player framerates, which means a card needs to average at least 60fps here if it’s going to hold up in multiplayer.

Battlefield 4 - 3840x2160 - Ultra Quality - 0x MSAA

Battlefield 4 - 3840x2160 - Medium Quality

Battlefield 4 - 2560x1440 - Ultra Quality

After stripping away the Frostbite engine’s expensive (and not wholly effective) MSAA, BF4 at 4K with Ultra quality puts the GTX Titan X in a pretty good light. At 58.3fps it’s not quite up to the 60fps mark, but it comes very close; close enough that the GTX Titan X should be able to stay above 30fps virtually the entire time, and never drop too far below 30fps in even the worst case scenario. Alternatively, dropping to Medium quality gives the GTX Titan X plenty of headroom, with an average framerate of 94.8fps and minimums that never drop below 45fps.

From a benchmarking perspective, Battlefield 4 at this point is a well optimized title and a pretty good microcosm of overall GPU performance. In this case we find that the GTX Titan X performs around 33% better than the GTX 980, which is almost exactly in line with our earlier performance predictions. Keep in mind that while the GTX Titan X has 50% more execution units than the GTX 980, it’s also clocked at around 88% of the GTX 980’s clockspeed, so 33% is right where we should expect to land in a GPU-bound scenario.
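
As a quick sanity check on that estimate, here is a minimal back-of-the-envelope sketch, assuming performance is simply proportional to execution units times clockspeed (a simplification; memory bandwidth and other factors also matter):

    # Back-of-the-envelope GPU-bound scaling estimate, using the figures
    # quoted above. Assumes performance scales with (execution units x
    # clockspeed); real results also depend on memory bandwidth, drivers,
    # and the workload itself.
    unit_ratio = 1.50   # GTX Titan X has 50% more execution units than GTX 980
    clock_ratio = 0.88  # ...but runs at roughly 88% of the GTX 980's clockspeed

    expected = unit_ratio * clock_ratio
    print(f"Expected speedup: {expected:.2f}x (~{(expected - 1) * 100:.0f}%)")
    # Prints: Expected speedup: 1.32x (~32%), in line with the ~33% we measured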

Otherwise, compared to the GTX 780 Ti and the original GTX Titan, the performance advantage at 4K stands at around 50% and 66% respectively. GTX Titan X is not going to double the original Titan’s performance – there’s only so much you can do without a die shrink – but it continues to be amazing just how much extra performance NVIDIA has been able to wring out without increasing power consumption and with only a minimal increase in die size.

On the broader competitive landscape, this is far from the Radeon R9 290X/290XU’s best title, with the GTX Titan X leading by 50-60%. However, this is also a showcase title for when AFR goes right, as the R9 295X2 and GTX 980 SLI both shoot well past the GTX Titan X, demonstrating the performance/consistency tradeoff inherent in multi-GPU setups.

Finally, shifting gears for a moment, gamers looking for the ultimate 1440p card will not be disappointed. GTX Titan X will not get to 120fps here (it won’t even come close), but at 78.7fps it’s still the best single-GPU option for driving a 1440p144 display; in fact it’s the only single-GPU card to average better than 60fps at this resolution.

Comments

  • Dug - Thursday, March 19, 2015 - link

    Thank you for pointing this out.
  • chizow - Monday, March 23, 2015 - link

    Uh, they absolutely do push 4GB, it's not all for the framebuffer but they use it as a texture cache, which absolutely leads to a smoother gaming experience. I've seen SoM, FC4, and AC:Unity all use the entire 4GB on my 980 at 1440p Ultra settings (textures most important ofc) even without MSAA.

    You can optimize as much as you like, but if you can keep textures buffered locally it is going to result in a better gaming experience.

    And as for 780Ti owners not being happy, believe what you like, but these are the folks jumping to upgrade even to a 980 because that 3GB has crippled the card, especially at higher resolutions like 4K. The 780Ti beats the 290X at every setting and every resolution, until 4K.

    https://www.google.com/?gws_rd=ssl#q=780+ti+3gb+no...
  • FlushedBubblyJock - Thursday, April 2, 2015 - link

    Funny how 3.5GB was just recently a kick to the groin, a gigantic and terrible lie, and worth a lawsuit due to performance issues... as 4GB was sorely needed; now 4GB isn't used....

    Yes, 4GB isn't needed. It was just 970 seconds ago, but not now!
  • DominionSeraph - Tuesday, March 17, 2015 - link

    You always pay extra for the privilege of owning a halo product.
    Nvidia already rewrote the pricing structure in the consumer's favor when they released the GTX 970 -- a card with $650 performance -- at $329. You can't complain too much that they don't give you the GTX 980 for $400. If you want above the 970 you're going to pay for it. And Nvidia has hit it out of the ballpark with the Titan X. If Nvidia brought the high end of Maxwell down in price AMD would pretty much be out of business considering they'd have to sell housefire Hawaii at $150 instead of being able to find a trickle of pity buyers at $250.
  • MapRef41N93W - Tuesday, March 17, 2015 - link

    Maxwell architecture is not designed for FP64. Even the Quadro doesn't have it. It's one of the ways NVIDIA saved so much power on the same node.
  • shing3232 - Tuesday, March 17, 2015 - link

    I believe they could have put FP64 into it if they wanted to, but power efficiency makes for good advertising.
  • MapRef41N93W - Tuesday, March 17, 2015 - link

    It would have required a ~650mm^2 die, which would have been at the limits of what can be done on TSMC's 28nm node. It would have also meant a $1200 card.
  • MapRef41N93W - Tuesday, March 17, 2015 - link

    And the Quadro, a $4000 card, doesn't have it, so why would a $999 gaming card have it?
  • testbug00 - Tuesday, March 17, 2015 - link

    Would it have? No. They could have given it FP64. Could they have given it FP64 without pushing the power and heat up a lot? Nope.

    The 390X silicon will be capable of over 3 TFLOPS of FP64 (the 390X itself will probably be locked to 1/8 rate, however) and will be a smaller chip than this. The price to pay will be heat and power. How much? Good question.
  • dragonsqrrl - Tuesday, March 17, 2015 - link

    Yes, it would've required a lot more transistors and die area with Maxwell's architecture, which relies on separate FP64 and FP32 cores. Comparing the costs associated with double precision performance directly to GCN is inaccurate.
