Battlefield 4

The latest addition to our benchmark suite, and our current major multiplayer action game, is Battlefield 4, DICE’s 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has finally reached a point where it’s stable enough for benchmark use, giving us the ability to profile one of the most popular and strenuous shooters out there. As these benchmarks are taken from single player mode, our experience-based rule of thumb is that multiplayer framerates will dip to roughly half of our single player framerates, which means a card needs to average at least 60fps here if it’s to hold up in multiplayer.
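To make the math behind that rule of thumb explicit, here is a minimal sketch; the 50% multiplayer scaling factor and the implied 30fps playability floor are working assumptions drawn from the text above, not measured values.

```python
# Rule-of-thumb check: multiplayer framerates tend to dip to roughly half of
# single player framerates, so a card should average ~60fps in our single
# player benchmark to stay above a ~30fps floor in multiplayer.
MP_SCALING = 0.5        # assumed multiplayer/single-player framerate ratio
MP_FLOOR_FPS = 30.0     # assumed minimum acceptable multiplayer framerate

def holds_up_in_multiplayer(single_player_avg_fps: float) -> bool:
    """Estimate whether a single player average implies playable multiplayer."""
    return single_player_avg_fps * MP_SCALING >= MP_FLOOR_FPS

print(holds_up_in_multiplayer(60.0))  # True  - just clears the bar
print(holds_up_in_multiplayer(45.0))  # False - likely to dip too low online
```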

Battlefield 4 - 2560x1440 - Ultra Quality

Battlefield 4 - 1920x1080 - Ultra Quality

Battlefield 4 - 1920x1080 - High Quality

Our first Mantle-enabled game, Battlefield 4 puts the R9 285’s current Mantle performance regressions front and center. At every resolution the R9 285 loses performance under Mantle, sometimes remarkably so; as a result it is limited to Direct3D here.

Regressions aside, I feel like Battlefield 4 is a good case for why the R9 285 needs more VRAM, or at the very least it’s not a good choice for 2560x1440. The sustained performance at 2560 is too low for this game, and the performance loss compared to the 3GB R9 280 appears to be a direct result of VRAM pressure. If the R9 285 had more VRAM, I suspect it would reach parity with the R9 280, especially given what happens at 1080p with High settings.

In any case, this is also the first game where the R9 285 trades blows with the GTX 760 rather than taking a distinct lead. With both cards limited to Direct3D, they return similar performance, which for the R9 285 and its higher price tag is essentially a loss.

Comments

  • mczak - Wednesday, September 10, 2014 - link

    This is only partly true. AMD cards nowadays can stay at the same clocks in multi-monitor mode as in single-monitor mode, though it's a bit more limited than on GeForces. Hawaii and Tonga can keep the same low clocks (and thus idle power consumption) with up to 3 monitors, as long as they are all identical (or, more accurately, as long as they all use the same display timings). But if they have different timings (even if it's just 2 monitors), they will always clock the memory to its maximum (this is where NVIDIA's Kepler chips have an advantage - they will stay at low clocks even with 2, but not 3, different monitors).
    Actually, I believe that with 3 identical monitors current Kepler GeForces won't be able to stick to the low clocks, but Hawaii and Tonga can, though unfortunately I wasn't able to find the numbers for the GeForces. The ht4u.net R9 285 review has the numbers for it (sorry, I can't post the link as it won't get past the AnandTech forum spam detector, which is lame).
  • Solid State Brain - Thursday, September 11, 2014 - link

    A twin-monitor configuration where the secondary display is smaller / has a lower resolution than the primary one is a very common (and logical) usage scenario nowadays, and that's what AMD should sort out first. I'm positively surprised that on the newer Tonga GPUs frequencies remain low if both displays are identical (according to the review you pointed out), but I'm not going to purchase a different display (or limit my selection) to take advantage of that when there's no need to with equivalent NVIDIA GPUs.
  • mczak - Thursday, September 11, 2014 - link

    Fixing this is probably not quite trivial. The problem is that if you reclock the memory you can't honor memory requests for display scan-out for some time. So, for a single monitor, what you do is reclock during the vertical blank. But if you have several displays with different timings this won't work, for obvious reasons, whereas if they have identical timings you can run them essentially in sync, so their vertical blanks line up.
    I don't know how NVIDIA does it. One possibility would be a large enough display buffer (but I think it would need to be on the order of ~100kB or so, so not quite free in terms of hardware cost - see the rough sketch after this comment thread).
  • PEJUman - Thursday, September 11, 2014 - link

    I've used multi-monitor setups with both AMD & NVIDIA cards, and I would take that 30W hit if it means things work well.
    NVIDIA is too aggressive with its low-power mode: if you have video on one screen & a game on the other, it will remain at the clock speed of whatever started first (if you start the video before loading the game, it will be stuck at the video clocks).

    I currently use a 780 Ti; the R9 290X I had previously worked better, as it would always clock up...
  • hulu - Wednesday, September 10, 2014 - link

    The conclusions section of Crysis: Warhead seems to be copy-pasted from Crysis 3. R9 285 does not in fact trail GTX 760.
  • thepaleobiker - Wednesday, September 10, 2014 - link

    @Ryan - A small typo on the last page, last line of first paragraph - "Functionally speaking it’s just an R9 285 with more features"

    It should be R9 280, not 285. Just wanted to call it out for you! :)

    Bring on more Tonga, AMD!
  • FriendlyUser - Wednesday, September 10, 2014 - link

    I would like to note that if memory compression is effective, it should not only improve bandwidth but also reduce the need for texture memory. Maybe 2GB with compression is closer to 3GB in practice, at least if the ~40% compression advantage is true.

    Obviously, there is no way to predict the future, but I think your conclusion concerning 2GB boards should take compression into account.
  • Spirall - Wednesday, September 10, 2014 - link

    If GCN 1.2 (instead of a GCN 2.0) is all AMD has to offer as the new architecture for next year's cards, then Maxwell (judging by 750 Ti vs. 260X tests) will hit AMD hard in terms of performance per watt and production cost (not price), and therefore their net income.
  • shing3232 - Wednesday, September 10, 2014 - link

    The 750 Ti uses a better 28nm process called HPM while the rest of the 200 series uses HPL; that's the reason why Maxwell is so efficient.
  • Spirall - Wednesday, September 10, 2014 - link

    I'm afraid this won't be enough (but I hope it is). Anyway, as NVIDIA is expected to launch their 256-bit Maxwell card soon, we'll have the answer before long.
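
Following up on mczak's display buffer estimate a few comments up, here is a rough back-of-the-envelope sketch of where a figure on the order of ~100kB could come from; the ~100 microsecond reclock stall and the 4 bytes/pixel scan-out format are illustrative assumptions, not figures from either vendor.

```python
# How much buffering would scan-out need to ride out a memory reclock stall?
WIDTH, HEIGHT, REFRESH_HZ = 2560, 1440, 60   # example single-monitor mode
BYTES_PER_PIXEL = 4                          # assumed 32-bit scan-out format
RECLOCK_STALL_S = 100e-6                     # assumed stall while memory reclocks

scanout_bytes_per_s = WIDTH * HEIGHT * REFRESH_HZ * BYTES_PER_PIXEL
buffer_bytes = scanout_bytes_per_s * RECLOCK_STALL_S

print(f"scan-out rate: {scanout_bytes_per_s / 1e6:.0f} MB/s")  # ~885 MB/s
print(f"buffer needed: {buffer_bytes / 1024:.0f} kB")          # ~86 kB, i.e. on the order of ~100kB
```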
