Battlefield 4

The latest addition to our benchmark suite, and its current major multiplayer action game, is Battlefield 4, DICE’s 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has finally reached a point where it’s stable enough for benchmark use, giving us the ability to profile one of the most popular and strenuous shooters out there. Since these benchmarks are taken from single player mode, our experience-based rule of thumb is that multiplayer framerates will dip to roughly half of our single player framerates, which means a card needs to average at least 60fps here if it’s to hold up in multiplayer.
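To make that rule of thumb concrete, here is a minimal Python sketch (a hypothetical helper, not part of our test suite) that applies the halving assumption to a single player average; the 30 fps multiplayer floor is simply the target implied by the 60 fps guideline above, and the example averages are illustrative, not measured results.

MULTIPLAYER_DIP_FACTOR = 0.5    # assumed: multiplayer runs at roughly half the single player average
MULTIPLAYER_TARGET_FPS = 30.0   # implied playability floor behind the 60 fps single player guideline

def holds_up_in_multiplayer(single_player_avg_fps: float) -> bool:
    """Estimate whether a single player average clears the implied multiplayer bar."""
    estimated_multiplayer_fps = single_player_avg_fps * MULTIPLAYER_DIP_FACTOR
    return estimated_multiplayer_fps >= MULTIPLAYER_TARGET_FPS

if __name__ == "__main__":
    # Hypothetical single player averages, not figures from this review.
    for fps in (45.0, 60.0, 75.0):
        verdict = "should hold up" if holds_up_in_multiplayer(fps) else "likely too slow"
        print(f"{fps:.0f} fps single player -> ~{fps * MULTIPLAYER_DIP_FACTOR:.0f} fps multiplayer: {verdict}")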

Battlefield 4 - 1920x1080 - High Quality

Battlefield 4 - 1920x1080 - Medium Quality

Battlefield 4 - 1920x1080 - Low Quality


181 Comments

  • MrSpadge - Tuesday, February 18, 2014 - link

    To be fair, the GTX 650 Ti Boost consumes ~100 W in the real world. Still a huge improvement!
  • NikosD - Tuesday, February 18, 2014 - link

    Hello.

    I have a few questions regarding HTPC and video decoding.

    Can we say that we have a new video processor from Nvidia, with a new name like VP6, or is it more of a VP5.x?

    What is Nvidia calling the new video decoder?

    Why don't you add a 4K 60 fps clip in order to test the soon-to-be-released HDMI 2.0 output?

    If you run a benchmark using DXVA Checker comparing VP5 and VP6 (?), how much faster is VP6 on H.264 1080p and 4K clips?

    Thanks!
  • Ryan Smith - Thursday, February 20, 2014 - link

    NVIDIA doesn't have a name for it; at least not one they're sharing with us.
  • NikosD - Thursday, February 20, 2014 - link

    Thanks.
    Is it possible to try a 4K 60 fps clip with Maxwell?

    I wonder if it can decode it in real time...
  • Flunk - Tuesday, February 18, 2014 - link

    I think these will be a lot more exciting in laptops, even if they're nowhere near Nvidia's claimed 2x efficiency per watt over Kepler. On the desktop it's not really that big a deal. The top-end chip will probably be ~40% faster than the 780 Ti, but that will be a while.
  • dylan522p - Tuesday, February 18, 2014 - link

    The 880 will be much more powerful than the 780 Ti, more than 40% even. They could literally just do a die shrink, throw in a few more SMXs, and the 40% would be achieved. I would imagine either they are going to have a HUGE jump (80%+), or they are going to do what they did with Kepler: release a 200 W SKU that is about 50% faster, and when 20 nm yields are good enough, have the 900 series come with 250 W SKUs.
  • npz - Tuesday, February 18, 2014 - link

    For HTPC or general video offloading support, you'd be better off with a powerful CPU. The article notes rendering issues in certain circumstances when using the GPU, and there is no mention of reference frames, 10-bit, or colorspace support beyond YUV 4:2:0, so I will assume these remain unaddressed by the fixed-function decoders/encoders (at least until HEVC, since those are part of its higher profiles).
  • npz - Tuesday, February 18, 2014 - link

    I also don't trust the quality of NVENC at all, especially at lower bitrates vs. x264 (and especially with x264's psychovisual optimizations). And lower bitrates are where such hardware encoders would be most useful for real-time streaming. x264's fast preset can already encode faster than real time and produces better results.
  • Egg - Tuesday, February 18, 2014 - link

    I'm pretty sure that on HTPCs the idea is to use madVR, which is GPU-assisted.
  • npz - Tuesday, February 18, 2014 - link

    madVR only helps with rendering, and from these tests, fully GPU-offloaded rendering (scaling, colorspace conversion to RGB, deinterlacing, post-processing, video mixing, etc.) is broken for some specs. It's useful when it works, though. But I'd rather have a good CPU and a basic GPU that's good enough for the final portion of the rendering pipeline, which is guaranteed to play anything in any format/spec.
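As an aside on the question above about comparing VP5 and VP6 decode throughput: a rough alternative to DXVA Checker is to time a decode-only pass with and without ffmpeg's DXVA2 hardware acceleration. The sketch below is only an illustration, not the methodology used in this review; it assumes a Windows ffmpeg build with dxva2 support on the PATH, and sample_1080p_h264.mp4 is a hypothetical local H.264 test clip.

import subprocess
import time

def time_decode(input_file: str, use_dxva2: bool) -> float:
    """Return wall-clock seconds for a decode-only pass, discarding the output."""
    cmd = ["ffmpeg", "-v", "error"]
    if use_dxva2:
        cmd += ["-hwaccel", "dxva2"]                      # fixed-function GPU decode path
    cmd += ["-i", input_file, "-an", "-f", "null", "-"]   # decode only, write no output file
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    clip = "sample_1080p_h264.mp4"  # hypothetical local test clip
    hw_seconds = time_decode(clip, use_dxva2=True)
    sw_seconds = time_decode(clip, use_dxva2=False)
    print(f"DXVA2 decode: {hw_seconds:.2f} s, software decode: {sw_seconds:.2f} s")

Running the same clip through both paths gives a crude wall-clock comparison of decoder throughput between GPU generations.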
