Battlefield 4

Kicking off our benchmark suite is Battlefield 4, DICE’s 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has since become a challenging game in its own right and a showcase title for low-level graphics APIs. As these benchmarks are from single player mode, our rule of thumb, based on past experience, is that multiplayer framerates will dip to half of our single player framerates, which means a card needs to average at least 60fps here if it’s to hold up in multiplayer.
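To make the arithmetic behind that rule of thumb explicit, here is a minimal sketch; the 0.5 dip factor and the 30fps multiplayer floor are our reading of the rule as stated, not measured values:

```python
# Minimal sketch of the multiplayer rule of thumb above. The 0.5 dip
# factor and the 30fps playability floor are assumptions drawn from the
# text, not measurements.
MULTIPLAYER_DIP = 0.5    # multiplayer runs at roughly half of single player fps
PLAYABLE_FLOOR = 30.0    # implied minimum acceptable multiplayer framerate

def holds_up_in_multiplayer(single_player_avg_fps: float) -> bool:
    """Return True if the single player average leaves enough headroom."""
    return single_player_avg_fps * MULTIPLAYER_DIP >= PLAYABLE_FLOOR

print(holds_up_in_multiplayer(60.0))   # True: exactly at the 60fps bar
print(holds_up_in_multiplayer(56.5))   # False: just short, as at 4K Ultra
```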

[Benchmark charts: Battlefield 4 - 3840x2160 - Ultra Quality - 0x MSAA; Battlefield 4 - 3840x2160 - Medium Quality; Battlefield 4 - 2560x1440 - Ultra Quality]

Battlefield 4 is going to set the pace for the rest of this review. In our introduction we talked about how the GTX 980 Ti may as well be the GTX Titan X, and this is one example of why. With a framerate deficit of no more than 3% in this benchmark, the difference between the two cards is only just outside the range of standard run-to-run experimental variation that we see in our benchmarking process. So yes, it really is that fast.

In any case, after stripping away the Frostbite engine’s expensive (and not wholly effective) MSAA, what we’re left with for BF4 at 4K with Ultra quality puts the 980 Ti in a pretty good light. At 56.5fps it’s not quite up to the 60fps mark, but it comes very close, close enough that the GTX 980 Ti should be able to stay above 30fps virtually the entire time, and never drop too far below 30fps in even the worst case scenario. Alternatively, dropping to Medium quality should give the card plenty of headroom, with an average framerate of 91.8fps meaning even the lowest framerate never drops below 45fps.

Meanwhile our other significant comparison here is the GTX 980, which just saw its price cut by $50 to $499 to make room for the GTX 980 Ti. At $649 the GTX 980 Ti ideally should be 30% faster to justify its 30% higher price tag; here it’s almost exactly on that mark, fluctuating between a 28% and 32% lead depending on the resolution and settings.
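As a quick sanity check on that price/performance argument, using only the prices and leads quoted above:

```python
# Back-of-the-envelope check of the value argument above, using the
# prices and performance leads quoted in the text.
gtx_980_price = 499       # after the $50 cut
gtx_980_ti_price = 649

premium = gtx_980_ti_price / gtx_980_price - 1
print(f"Price premium: {premium:.1%}")           # ~30.1%

for lead in (0.28, 0.32):                        # measured range in BF4
    status = "covers" if lead >= premium else "falls just short of"
    print(f"A {lead:.0%} performance lead {status} the premium")
```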

Finally, shifting gears for a moment, gamers looking for the ultimate 1440p card will not be disappointed. The GTX 980 Ti will not get to 120fps here (it won’t even come close), but at 77.7fps it’s well suited for driving 1440p144 displays. In fact, it and the GTX Titan X are the only single-GPU cards to do better than 60fps at this resolution.

290 Comments

  • FlushedBubblyJock - Wednesday, June 10, 2015 - link

    I bought a bunch of G80, G92, G92b, and G94 nvidia cards because you could purchase any memory size, bandwidth, bit width, power connector config, essentially any speed at any price point for a gamer's rig, install the same driver, change the cards easily, and upgrade for your customers without hassles...

    IT WAS A GOLD MINE OF FLEXIBILITY

    What happened was, the amd fanboys got very angry over the IMMENSE SUCCESS of the initial G80 and its reworked cores and totally fluid memory, card size, bit width, and pricing configurations... so they HAD TO TRY TO BRING IT DOWN...

    Thus AMD launched their PR war, and the clueless amd fan launched their endless lies.

    I'll tell you this much: no one would trade me a 9800GTX for a 9800GT.

    I couldn't get the 92-bit width cards for the same price as the 128-bit ones.

    DDR2 and DDR3 also differentiated the stack massively.

    What we had wasn't rebranding, but an amazingly flexible GPU core that stood roaring at the top and could be CUT down to the middle and the low gaming end, and configured successfully with loads of different bit widths and memory configs....

    64 bit width, 92, 128, 256, 384, 192, ETC...

    That was and is an awesome core, period.
  • BillyONeal - Sunday, May 31, 2015 - link

    And people have been bent out of shape about it. For "YEARS" :)
  • dragonsqrrl - Sunday, May 31, 2015 - link

    Their highest-end rebadge, the 390X, will likely compete with the 980, not the 980 Ti. The 980 Ti will be closer to Fiji's performance profile.
  • austinsguitar - Sunday, May 31, 2015 - link

    I don't think you realize how much more efficient this card is even compared to past cards for its nm and performance. This is a feat. Just calm down and enjoy. I am very happy that the card's price is perfect. :) thanks nvidia
  • MapRef41N93W - Sunday, May 31, 2015 - link

    Maybe you aren't aware of how silicon works, but this is a 601mm^2 die, which costs a boatload to produce, especially with the rising costs of crystalline silicon dies. Being on 28nm this long just means the yields are higher (which is why a 601mm^2 die is even possible).

    You aren't going to see a 14nm card that outperforms this by much until 2017 at the earliest, which, following recent NVIDIA trends, should bring the Titan XYZ (whatever they want to call it), and that should be a pretty huge jump at a pretty high price.
  • Thomas_K - Monday, June 1, 2015 - link

    Actually, AMD is doing 14nm starting next year:

    http://www.guru3d.com/news-story/it-is-official-am...
    "Although this was a rumor for a long time now we now know that AMD skips 20nm and jumps onto a 14nm fabrication node for their 2016 GPUs."
  • dragonsqrrl - Sunday, May 31, 2015 - link

    Not sure I understand your comment; 28nm is precisely why we're paying this much for this level of performance in 2015... But it's also pretty impressive for the same reason.
  • Azix - Sunday, May 31, 2015 - link

    14/16nm might cost more. 28nm should have better yields and lower cost. These chips do not cost much to make at all (retail price could be 2-3 times the chip cost).
  • dragonsqrrl - Sunday, May 31, 2015 - link

    I think you misinterpreted my comment. I was responding to someone who seemed shocked by the fact that price/performance ratios aren't improving dramatically despite the fact that we're on a very mature process. In response I said the fact that we're on the same process is precisely why we aren't seeing dramatic improvements in price/performance ratios.

    "28nm should have better yields and lower cost. These chips do not cost much to make at all (retail price could be 2-3 times the chip cost)"
    Yields are just one part of the equation. Die size also plays a significant role in manufacturing costs. The fact that you're trying to say with a straight face that GM200 does not cost much to make says more than your written comment itself.
  • zepi - Monday, June 1, 2015 - link

    Assuming perfect scaling, a 600mm2 28nm chip would shrink to 150mm2 at 14nm.

    GM107 is a 148mm2 chip, so basically this "monster" with just a die shrink would find a nice place for itself at the bottom end of Nvidia's lineup after the transition to 14nm.

    This does not take into account the fact that at 14nm and 150mm2 they couldn't give it enough memory bandwidth so easily, but it tells you something about how significant the reduction in size and manufacturing cost is after the initial ramp-up of yields.
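The scaling arithmetic in that last comment, and the die-cost point raised earlier in the thread, can be sketched in a few lines. Only the 28nm-to-14nm area scaling follows directly from the comments; the wafer price and defect density below are made-up illustrative numbers (not foundry figures), using a standard gross-die approximation and a simple Poisson yield model:

```python
import math

def scaled_area(area_mm2: float, old_nm: float, new_nm: float) -> float:
    """Perfect linear scaling: area shrinks with the square of the node ratio."""
    return area_mm2 * (new_nm / old_nm) ** 2

print(scaled_area(600, 28, 14))   # 150.0 mm^2, matching the comment above

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic gross-die estimate with an edge-loss correction term."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2: float, wafer_cost: float,
                      defects_per_mm2: float) -> float:
    """Poisson yield model: cost per *good* die rises steeply with area."""
    yield_rate = math.exp(-defects_per_mm2 * die_area_mm2)
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_rate)

# Same (made-up) wafer cost and defect density, two die sizes: a 4x area
# difference produces roughly a 7x difference in cost per good die.
for area in (150, 601):
    print(f"{area} mm^2 -> ${cost_per_good_die(area, 5000, 0.001):.0f} per good die")
```

This is only a sketch, but it illustrates why both sides of the exchange above have a point: mature-process yields help, yet a 601mm^2 die is still disproportionately expensive per good chip compared to a small one.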
