Battlefield 4

One of the older games in our benchmark suite, DICE’s Battlefield 4 remains a staple of multiplayer gaming. Even at its age, Battlefield 4 remains a challenging game in its own right, as very few mass market multiplayer shooters push the envelope on graphics quality right now. As these benchmarks are from single player mode, our rule of thumb, based on our experience, is that multiplayer framerates will dip to half of our single player framerates, which means a card needs to average at least 60fps if it’s to hold up in multiplayer.

[Benchmark charts: Battlefield 4 - 3840x2160 - Ultra Quality (0x MSAA); 2560x1440 - Ultra Quality; 1920x1080 - Ultra Quality]

As a game that has traditionally favored NVIDIA, Battlefield 4 makes for a very clean sweep of the field. The GTX 1080 takes top honors with the GTX 1070 some distance behind it. Notably, the two Pascal cards become the first cards to cross 60fps at 4K, which means that they’re the first cards we can be reasonably sure won’t have framerate dips below 30fps in multiplayer.

Looking at our standard generational comparisons, both the GTX 1080 and GTX 1070 improve upon their predecessors by about what we’d expect: 67% and 58% respectively. As for how the GTX 1080 and GTX 1070 compare to each other, we find that the GTX 1080 leads its cut-down sibling by between 20% and 25%, with the gap increasing with resolution. This is consistent with what we know about the GTX 1080: its memory bandwidth advantage means that it’s going to have an easier time pushing pixels at 4K, as is the case here.

Finally, checking in on the GTX 680, we find that the GTX 1080 has improved performance by only 2.8x, which is actually a bit less of a gain than the average. Nonetheless, we’ve gone from a card that can’t quite muster 1080p with 4x MSAA to a card that can easily handle 4K without any MSAA.

Comments

  • TestKing123 - Wednesday, July 20, 2016 - link

    Then you're woefully behind the times since other sites can do this better. If you're not able to re-run a benchmark for a game with a pretty significant patch like Tomb Raider, or a high profile game like Doom with a significant performance patch like Vulkan that's been out for over a week, then your workflow is flawed and this site won't stand a chance against the other crop. I'm pretty sure you're seeing this already if you have any sort of metrics tracking in place.
  • TheinsanegamerN - Wednesday, July 20, 2016 - link

    So question: if you started this article on May 14th, was there no time in the over 2 months to add one game to that benchmark list?
  • nathanddrews - Wednesday, July 20, 2016 - link

    Seems like an official addendum is necessary at some point. Doom on Vulkan is amazing. Dota 2 on Vulkan is great, too (and would be useful in reviews of low end to mainstream GPUs especially). Talos... not so much.
  • Eden-K121D - Thursday, July 21, 2016 - link

    Talos Principle was a proof of concept
  • ajlueke - Friday, July 22, 2016 - link

    http://www.pcgamer.com/doom-benchmarks-return-vulk...

    Addendum complete.
  • mczak - Wednesday, July 20, 2016 - link

    The table with the native FP throughput rates isn't correct on page 5. Either it's in terms of FLOPs, in which case GP104 FP16 would be 1:64, or it's in terms of hardware instruction throughput, in which case GP100 would be 1:1. (Interestingly, the Sandra numbers for half-float are indeed 1:128, suggesting it didn't make any use of FP16 packing at all.)
  • Ryan Smith - Wednesday, July 20, 2016 - link

    Ahh, right you are. I was going for the FLOPs rate, but wrote down the wrong value. Thanks!

    As for the Sandra numbers, they're not super precise. But they're an obvious indication of what's going on under the hood. When the same CUDA 7.5 code path gives you wildly different results on Pascal, then you know something has changed... (see the packed vs. scalar FP16 sketch after the comments).
  • BurntMyBacon - Thursday, July 21, 2016 - link

    Did NVIDIA somehow limit the ability to promote FP16 operations to FP32? If not, I don't see the point in creating such a slow-performing FP16 mode in the first place. Why waste die space when an intelligent designer can just promote the commands to get normal speeds out of the chip anyway? Sure, you miss out on speed doubling through packing, but that is still much better than the 1/128 (1/64) rate you get using the provided FP16 mode.
  • Scali - Thursday, July 21, 2016 - link

    I think they can just do that in the shader compiler. Any FP16 operation gets replaced by an FP32 one.
    Only reading from buffers and writing to buffers with FP16 content should remain FP16. Then again, if their driver is smart enough, it can even promote all buffers to FP32 as well (as long as the GPU is the only one accessing the data, the actual representation doesn't matter; only when the CPU also accesses the data does it actually need to be FP16).
  • owan - Wednesday, July 20, 2016 - link

    Only 2 months late and published the day after a different major GPU release. What happened to this place?
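
To make the FLOPs-versus-instruction-throughput distinction from the comments above concrete, below is a minimal CUDA sketch. It is not taken from the review or from Sandra; the kernel names are purely illustrative, while the __hfma and __hfma2 intrinsics come from cuda_fp16.h and require a GPU with native FP16 arithmetic (compute capability 5.3 or later). The packed kernel performs two FP16 FMAs per hardware instruction, which is why the FP16 FLOP rate can be twice the FP16 instruction rate, and why a test that never packs values into __half2 only ever sees the scalar rate.

    #include <cuda_fp16.h>

    // Scalar FP16 path: one FMA per instruction, so the FLOP rate equals the
    // instruction rate. (Native FP16 arithmetic needs compute capability 5.3+.)
    __global__ void fma_fp16_scalar(__half* out, const __half* a, const __half* b, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = __hfma(a[i], b[i], out[i]);   // 1 FP16 FMA per instruction
    }

    // Packed FP16x2 path: each __half2 holds two FP16 values, so one __hfma2
    // instruction performs two FMAs, making the FLOP rate twice the instruction rate.
    __global__ void fma_fp16_packed(__half2* out, const __half2* a, const __half2* b, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = __hfma2(a[i], b[i], out[i]);  // 2 FP16 FMAs per instruction
    }

On GP104, where the dedicated FP16x2 path runs at only a tiny fraction of the FP32 rate, either version would still be far slower than simply promoting the math to FP32, which is the point raised in the comments above.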
