Bioshock Infinite

Bioshock Infinite is Irrational Games’ latest entry in the Bioshock franchise. Though it’s based on Unreal Engine 3 – making it our obligatory UE3 game – Irrational has added a number of effects that make the game rather GPU-intensive on its highest settings. As an added bonus it includes a built-in benchmark composed of several scenes, a rarity for UE3 games, so we can easily get a good representation of what Bioshock’s performance is like.

Bioshock Infinite - 5760x1200 - Ultra Quality

Bioshock Infinite - 2560x1440 - Ultra Quality + DDoF

Bioshock Infinite - 1920x1080 - Ultra Quality + DDoF

We knew from the moment the first reviews came out that we wanted to use Bioshock Infinite as one of our new games, but we weren’t quite sure what to expect. In the end there’s almost a clean split between the relative performance of single-GPU and multi-GPU cards. So while the 7990 and 7970GE CF are clearly in the lead over their NVIDIA counterparts at 2560, among the single-GPU cards the GTX 680 slightly edges out the 7970GE.

In this case AMD is seeing better multi-GPU scaling than NVIDIA, giving the 7990 the edge at every resolution; it bests the GTX 690 by 6% at 2560 and by a very sizable and unexpected 20% at 5760. It will be interesting to see the FCAT results once we have those back, to confirm whether Bioshock is as smooth on AMD cards as it appears at first glance.

Comments

  • HisDivineOrder - Wednesday, April 24, 2013 - link

    They bring a fantastic cooler that prioritizes silence, plus the convenience of having SLI in a system that doesn't have two PCIe slots available. And you always had the option of quad-SLI, which is a little harder to do with four 680s.

    That said, I think anyone buying a 690 over a Titan now is pretty stupid. It's not about the speed difference. It's that if you're in the market for a $1k GPU, go for the one that won't be running out of memory with next year's PS4/next Xbox ports.
  • extremesheep - Wednesday, April 24, 2013 - link

    Table typo...should the first be "7990"?
  • extremesheep - Wednesday, April 24, 2013 - link

    Err...should page 1, table 1, column 1 be "7990" instead of "7970"?
  • Ryan Smith - Wednesday, April 24, 2013 - link

    You may be seeing an old, cached copy. That was fixed about 25 minutes ago.
  • code65536 - Wednesday, April 24, 2013 - link

    Any chance we could get Tomb Raider in future benchmark tests?
  • Ryan Smith - Wednesday, April 24, 2013 - link

    In the desktop tests? No. We keep the tests capped at 10 so that it's a manageable load when we need to redo everything, such as with the 7990 launch. At this point the desktop benchmark suite is set for at least the immediate future.
  • VulgarDisplay - Wednesday, April 24, 2013 - link

    4th paragraph: Incorrectly states that Tahiti has 48 ROPs.
  • Flamencor - Wednesday, April 24, 2013 - link

    What a mediocre review! In your conclusions, you mention nothing about how AMD absolutely spanked NVIDIA in compute performance and synthetics! It is 75 watts more power hungry, and in exchange you get substantially more memory and a total win on compute and synthetics! I know synthetics aren't actual gaming numbers, but they're indicative of how the card will stand up to future games. The fact that the card has far better synthetics says a lot about its longevity. The card looks like a great card (although quite late)! I'm no fanboy, but why can't people just write a legitimately upbeat and positive review about an amazing part?
  • Warren21 - Wednesday, April 24, 2013 - link

    Ryan typically has a slight undertone of NVIDIA bias; it can be found in most of his articles. That being said, the GTX 600 series are some amazing cards. I'd love to have a GK104-based card to replace my aged 6870 1GB.
  • CiccioB - Wednesday, April 24, 2013 - link

    This kind of compute benchmark based on OpenCL is quite useless. No professional applications use OpenCL, and nvidia doesn't really put all its effort into optimizing its OpenCL drivers.
    You may be surprised to know that REAL applications that really need GPU-assisted computation use CUDA. And thus you have the option to use nvidia GPU computation or nothing else.
    That's how good OpenCL is. It may be open, it may be something AMD needs to show good (useless) graphs, but in the real world no one is going to use it for serious stuff.
    3D renderers are a meaningful example: apart from the useless SmallLuxMark benchmark, professional engines use CUDA. AMD is not there with whatever "devastating" computational solution you may believe they have. That's why nvidia holds more than 80% of the professional market and is the only one offering GPU solutions for HPC, while AMD just struggles to sell consumer products.

    By the way, good review, though a double Titan solution could have been added to make it more interesting (especially for power consumption) :)
