Crysis 3

Still one of our most punishing benchmarks three years later, Crysis 3 needs no introduction. Crytek's DX11 masterpiece still punishes even the best of video cards at its Very High settings, never mind the rest. Along with its high performance requirements, Crysis 3 is a rather balanced game in terms of power consumption and vendor optimizations. As a result it can give us a good look at how our video cards stack up on average, and, later in this article, how power consumption plays out.

Crysis 3 - 3840x2160 - Very High Quality + FXAA

Crysis 3 - 2560x1440 - Very High Quality + FXAA

Crysis 3 - 1920x1080 - Very High Quality + FXAA

This being the first cycle we've used the Very High settings, it's humorous to see a $700 video card getting 35fps in a 3-year-old game. Very High settings give Crysis 3 a level of visual quality many games still can't match, but the tradeoff is that they obliterate most video cards. We're probably still 3-4 years out from a video card that can run the game at 4K at 60fps, never mind accomplishing that with 4x MSAA.

The GTX 1080 does, however, at least earn the distinction of being the one and only card to crack 30fps at 4K. And while we wouldn't suggest playing Crysis 3 at 30fps, the card can legitimately claim to be the only one that can handle the game at 4K with a playable framerate at this time. Otherwise, if we turn down the resolution, the GTX 1080 is the only card to crack 60fps at 1440p. Very close to that mark, though, is the GTX 1070, which at 58.1fps is a small overclock away from 60fps.

Looking at the generational comparisons, the GTX 1080 and GTX 1070 lead their predecessors by a bit less than usual, at 62% and 51% respectively. The GTX 1080/GTX 1070 gap, on the other hand, is pretty typical, with the GTX 1080 leading by 27% at 4K, 23% at 1440p, and 21% at 1080p.
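For readers curious how these lead figures are derived, they're simply the ratio of average framerates minus one. A minimal sketch follows; the GTX 1070's 58.1fps at 1440p is the only figure quoted above, and the GTX 1080 value is back-calculated from the 23% lead purely for illustration, not taken from the charts:

```python
def pct_lead(faster_fps: float, slower_fps: float) -> float:
    """Percentage by which the faster card leads the slower one."""
    return (faster_fps / slower_fps - 1) * 100

# A 23% lead over the GTX 1070's 58.1fps implies roughly
# 58.1 * 1.23 ~= 71.5fps for the GTX 1080 at 1440p (illustrative).
gtx1080_1440p = 58.1 * 1.23

print(round(pct_lead(gtx1080_1440p, 58.1)))  # 23

# The overclock needed to push the GTX 1070 from 58.1fps to 60fps:
print(round(pct_lead(60.0, 58.1), 1))  # 3.3
```

The same ratio also shows why we call 58.1fps "a small overclock away" from 60fps: a roughly 3% bump closes the gap, assuming performance scales linearly with clockspeed at these settings.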

200 Comments

  • Ryan Smith - Wednesday, July 20, 2016

    Thanks.
  • Eden-K121D - Wednesday, July 20, 2016

    Finally the GTX 1080 review
  • guidryp - Wednesday, July 20, 2016

    This echoes what I have been saying about this generation. It is really all about clock speed increases. IPC is essentially the same.

    This is where AMD lost out. Possibly in part the issue was going with GloFo instead of TSMC like NVidia.

    Maybe AMD will move Vega to TSMC...
  • nathanddrews - Wednesday, July 20, 2016

    Curious... how did AMD lose out? Have you seen Vega benchmarks?
  • TheinsanegamerN - Wednesday, July 20, 2016

    It's all about clock speed for Nvidia, but not for AMD. AMD focused more on IPC, according to them.
  • tarqsharq - Wednesday, July 20, 2016

    It feels a lot like the P4 vs Athlon XP days almost.
  • stereopticon - Wednesday, July 20, 2016

    My favorite era of being a nerd!!! Poppin' Opterons into S939 and pumpin' the OC to Athlon FX levels for a fraction of the price, all while stompin' on Pentium. It was a good (although expensive) time to be a nerd... besides paying 100 dollars for 1GB of DDR500. 6800GS budget-friendly cards, and ATi X1800/X1900 super beasts... how I miss those days.
  • eddman - Thursday, July 21, 2016

    Not really. Pascal has pretty much the same IPC as Maxwell and its performance increases accordingly with the clockspeed.

    Pentium 4, on the other hand, had a terrible IPC compared to Athlon and even Pentium 3 and even jacking its clockspeed to the sky didn't help it.
  • guidryp - Wednesday, July 20, 2016

    No one really improved the IPC of their units.

    AMD was instead forced to increase the unit count and chip size; the 480's chip is bigger than the 1060's and uses a larger bus. Both increase the chip's cost.

    AMD loses because they are selling a more expensive chip for less money. That squeezes their unit profit on both ends.
  • retrospooty - Wednesday, July 20, 2016

    "This echoes what I have been saying about this generation. It is really all about clock speed increases. IPC is essentially the same."
    - This is a good thing. Stuck on 28nm for 4 years, moving to 16nm is exactly what Nvidia's architecture needed.
