Battlefield 3

The major multiplayer action game of our benchmark suite is Battlefield 3, DICE's 2011 multiplayer military shooter. Its ability to pose a significant challenge to GPUs has been dulled somewhat by time and driver improvements, but it's still a challenge if you want to hit the highest settings at the highest resolutions at the highest anti-aliasing levels. Furthermore, while we can crack 60fps in single player mode, our rule of thumb is that multiplayer framerates will dip to half of our single player framerates, so a framerate that looks high in single player may still not be high enough for multiplayer.

NVIDIA cards have consistently been the top performers in our Battlefield 3 benchmark over the years, and as a result this is one of the hardest fights for any AMD card. So how does the 290X fare? Very well, as it turns out. Even in its weakest game relative to the GTX 780, the 290X trails by just 2%, effectively tying NVIDIA's closest competing card. Not only is the 290X once again the first single-GPU AMD card that can break a 60fps average at 2560 – thereby ensuring good framerates even in heavy firefights – but it's fully competitive with NVIDIA in doing so in what has traditionally been AMD's worst game. The worst that can be said for AMD is that they can't claim to be competitive with GTX Titan in this one.

Moving on to 4K gaming, none of these single-GPU cards are going to cut it at Ultra quality; the averages are decent, but the minimums drop to 20fps and below. This means we either drop down to Medium quality, where the 290X is now performance competitive with GTX Titan, or we double up on GPUs, which sees the 290X CF in uber mode take top honors. This game is another good example of how the 290X scales into 4K better than the GTX 780 and other NVIDIA cards do: not only does AMD's relative positioning versus NVIDIA improve, but in heading to 4K AMD picks up a 13% lead over the GTX 780. The only weak spot for AMD is multi-GPU performance scaling; while the 290X enjoys a 94% scaling factor at 2560, that drops to 60% at 4K, where NVIDIA's scaling factor is 76%. The 290X's lead is enough for the 290X CF to hold out over the GTX 780 SLI, but the difference in scaling factors makes it a close call.
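
To put numbers on why it's close, here's a quick back-of-the-envelope sketch in Python, using only the figures quoted above (the variable names are our own):

    # Back-of-the-envelope: combine the 290X's single-GPU lead at 4K
    # with the multi-GPU scaling factors quoted above. Illustrative only.
    lead_290x = 1.13         # 290X vs. GTX 780, single GPU at 4K (+13%)
    scaling_290x_cf = 1.60   # 290X CF gains 60% over one card at 4K
    scaling_780_sli = 1.76   # GTX 780 SLI gains 76% over one card at 4K

    cf_vs_sli = lead_290x * scaling_290x_cf / scaling_780_sli
    print(f"290X CF vs. GTX 780 SLI at 4K: {cf_vs_sli:.2f}x")  # ~1.03x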

Meanwhile, in an inter-AMD comparison, this is the first game in our benchmark suite where the 290X doesn't beat the 280X by at least 30%. Falling just short at 29.5%, it's a reminder that despite the architectural similarities between the 290X (Hawaii) and 280X (Tahiti), the performance gap between the two will not be consistent from game to game.

Looking at our delta percentages, this is another strong showing for the 290X CF, especially as compared to the 280X CF. AMD has once again halved their variance relative to the 280X CF, bringing it down to sub-10% levels, and this despite the theoretical advantage that the dedicated CFBI should give the 280X. However, AMD can't claim to have the lowest variance of any multi-GPU setup, as this is NVIDIA's best game, with the GTX 780 SLI seeing a variance of only 6%. It's a shame not all games can be like this (for either vendor), since there would be little reason not to go with a multi-GPU setup if this were the typical AFR experience rather than the best AFR experience.

Finally, looking at delta percentages under 4K shows that AMD’s variance has once again risen slightly compared to the variance at 2560x1440, but not significantly so. The 290X CF still holds under 10% here.
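
For readers curious what these delta percentages measure, below is a minimal sketch of one plausible formulation; we assume here that the metric is the mean absolute frame-to-frame time difference normalized by the mean frame time, and the frame time samples are invented for illustration.

    # Hypothetical sketch of a frame time "delta percentage" metric.
    # Assumption: mean absolute frame-to-frame delta / mean frame time.
    def delta_percentage(frame_times_ms):
        deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
        mean_delta = sum(deltas) / len(deltas)
        mean_frame = sum(frame_times_ms) / len(frame_times_ms)
        return 100.0 * mean_delta / mean_frame

    well_paced = [16.6, 16.8, 16.5, 16.9, 16.6, 16.7]  # consistent pacing
    stuttering = [10.0, 24.0, 11.0, 23.0, 10.5, 23.5]  # AFR micro-stutter
    print(f"well paced: {delta_percentage(well_paced):.1f}%")  # ~1.6%
    print(f"stuttering: {delta_percentage(stuttering):.1f}%")  # ~76.5%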

Comments

  • ninjaquick - Thursday, October 24, 2013 - link

    so 4-5% faster than Titan?
  • Drumsticks - Thursday, October 24, 2013 - link

    If the 780Ti is $599, then that means the 780 should see at least a $150 (nearly 25%!) price drop, which is good with me.
  • DMCalloway - Thursday, October 24, 2013 - link

    So, what you are telling me is Nvidia is going to stop laughing all the way to the bank and price the 780ti for less than current 780 prices? Current 780 owners are going to get HOT and flood the market with used 780's.
  • dragonsqrrl - Thursday, October 24, 2013 - link

    Why is it that this is only ever the case when Nvidia performs a massive price drop? Nvidia price drop = early adopters getting screwed (even though 780 has been out for ~6 months now). AMD price drop = great value for enthusiasts, go AMD! ... lolz.
  • Minion4Hire - Thursday, October 24, 2013 - link

    Titan is a COMPUTE card. A poor man's (relatively speaking) proper compute solution. The fact that it is also a great gaming card is almost incidental. No one needs a 6GB frame buffer for gaming right now. The Titan comparisons are nearly meaningless.

    The "nearly" part is the unknown 780 TI. Nvidia could enable the remaining CUs on 780 to at least give the TI comparable performance to Titan. But who cares that Titan is $1000? It isn't really relevant.
  • ddriver - Thursday, October 24, 2013 - link

    Even much cheaper radeons completely destroy the titan, as well as every other nvidia gpu, in compute. Do not be fooled by a single, poorly implemented test; the nvidia architecture plainly sucks in double precision performance.
  • ShieTar - Thursday, October 24, 2013 - link

    Since "much cheaper" Radeons tend to deliver 1/16th DP performance, you seem to not really know what you are talking about. Go read up on a relevant benchmark suite on professional and compute cards, e.g. http://www.tomshardware.com/reviews/best-workstati... The only tasks where AMD cards shine are those implemented in OpenCL.
  • ddriver - Thursday, October 24, 2013 - link

    "Much cheaper" relative to the price of the titan, not entry level radeons... You clutched onto a straw and drowned...

    OpenCL is THE open and portable industry standard for parallel computing, did you expect radeons to shine at .. CUDA workloads LOL, I'd say OpenCL performance is all I really need, it has been a while since I played or cared about games.
  • Pontius - Tuesday, October 29, 2013 - link

    I'm in the same boat as you ddriver, all I care about is OpenCL in these articles. I go straight to that section usually =)
  • TheJian - Friday, October 25, 2013 - link

    You're neglecting the fact that everything you can do professionally in openCL you can already do faster in cuda. Cuda is taught in 600+ universities for a reason. It is in over 200 pro apps and has been funded for 7+ yrs, unlike opencl, which is funded by a broke company hoping people will catch on one day :) Anandtech refuses to show cuda (gee, they do have an AMD portal after all...LOL) but it exists and is ultra fast. You really can't name a pro app that doesn't have direct support, or support via plugin, for Cuda. And if you're buying NV and running opencl instead of cuda (like anand shows, calling it compute crap) you're an idiot. Why don't they run Premiere instead of Sony crap for video editing? Because Cuda has worked great in it for years. Same with Photoshop etc...

    You didn't look at the folding@home DP benchmark here in this review either, I guess. 2.5x faster than the 290X. As you can see, it depends on what you do and the app you use. I consider F@H a stupid use of electricity, but that's just me...LOL. Find anything where OpenCL (or any AMD stuff, directx, opengl) beats CUDA. Compute doesn't just mean OpenCL, it means CUDA too! Dumb sites just push openCL because it's OPEN...LOL. People making money use CUDA and generally buy quadro or tesla (they own 90% of the market for a reason, or people would just buy radeons, right?).
    http://www.anandtech.com/show/7457/the-radeon-r9-2...
    DP in F@H here. Titan sort of wins, right? 2.5x or so over the 290X :) It's comical that both here and at Tom's they use a bunch of junk synthetic crap (bitmining, which ASICs do now; basemark junk; F@H; etc.) to show how good AMD is, but forget you can do real work with Cuda (heck, even bitmining can be done with cuda).

    When you say compute, I think CUDA, not opencl on NV. As soon as you toss in Cuda the compute story changes completely. Unfortunately even Toms refuses to pit OpenCL vs. Cuda, just like here at anandtech (but that's because both love OpenCL and hate proprietary stuff). But at least they show you in ShieTar's link (which craps out; remove the . at the end of the link) that Titan kills even the top quadro cards (it's a Tesla, remember, for $1500 off). It's 2x+ faster than quadros in almost everything they tested. So yeah, Titan is very worth it for people who do PRO stuff AND game.
    http://www.tomshardware.com/reviews/best-workstati...
    For the lazy, fixed ShieTar's link.

    All these sites need to do is fire up 3dsmax, cinema4d, Blender, or adobe (pick your app: After Effects, Premiere, Photoshop) and pit Cuda vs. OpenCL. Just pick an opencl plugin for AMD (luxrender) and Octane/furryball etc. for NV, then run the tests. Does AMD pay all these sites to NOT do this? I comment and ask on every workstation/vid card article at toms; they never respond...LOL. They run pure cuda, then pure opencl, but act like the two never meet. They run crap like basemark for photo/video editing opencl junk (you can't make money on that), instead of running adobe and choosing opencl (or directx/opengl) for AMD and Cuda for NV. Anandtech runs Sony Vegas, which a quick google shows has tons of problems with NV. Heck, pit Sony/AMD vs. Adobe/NV. You can run the same tests in both on video, though it would be better to just use adobe for both, but they won't do that until AMD gets done optimizing for the next rev...ROFL. Can't show AMD in a bad light here...LOL. OpenCL sucks compared to Cuda (proprietary or not...just the truth).
