Crysis 3

Still one of our most punishing benchmarks 3 years later, Crysis 3 needs no introduction. Crytek’s DX11 masterpiece still punishes even the best video cards at its Very High settings, never mind the rest. Along with its high performance requirements, Crysis 3 is a rather balanced game in terms of power consumption and vendor optimizations. As a result it gives us a good look at how our video cards stack up on average, and, later in this article, how power consumption plays out.

Crysis 3 - 3840x2160 - Very High Quality + FXAA

Crysis 3 - 2560x1440 - Very High Quality + FXAA

Crysis 3 - 1920x1080 - Very High Quality + FXAA

This being the first cycle we’ve used the Very High settings, it’s humorous to see a $700 video card getting 35fps in a 3-year-old game. Very High settings give Crysis 3 a level of visual quality many games still can’t match, but the tradeoff is that it obliterates most video cards. We’re probably still 3-4 years out from a video card that can run it at 4K at 60fps even without MSAA, never mind accomplishing that with 4x MSAA.

The GTX 1080 does, however, at least get the distinction of being the one and only card to crack 30fps at 4K. Though 30fps is hardly an ideal framerate for Crysis 3, the GTX 1080 can legitimately claim to be the only card that can handle the game at 4K at a playable framerate at this time. Otherwise, if we turn down the resolution, the GTX 1080 is also the only card to crack 60fps at 1440p. Very close to that mark, though, is the GTX 1070, which at 58.1fps is only a small overclock away from 60fps.

Looking at the generational comparisons, the GTX 1080 and GTX 1070 lead their predecessors by a bit less than usual, at 62% and 51% respectively. The GTX 1080/1070 gap, on the other hand, is pretty typical, with the GTX 1080 leading by 27% at 4K, 23% at 1440p, and 21% at 1080p.

Comments

  • Ranger1065 - Thursday, July 21, 2016 - link

    Your unwavering support for Anandtech is impressive.

    I too have a job that keeps me busy, yet oddly enough I find the time to browse (I prefer that word to "trawl") a number of sites.

    I find it helps to form objective opinions.

    I don't believe in early adoption, but I do believe in getting the job done on time. However, if you are comfortable with a 2 month delay, so be it :)

    Interesting to note that architectural deep dives concern your art and media departments so closely in their purchasing decisions. Who would have guessed?

    It's true (God knows it's been stated here often enough) that Anandtech goes into detail like no other; I don't dispute that. But is it worth the wait? A significant number seem to think not.

    Allow me to leave one last issue for you to ponder (assuming you have the time in your extremely busy schedule).

    Is it good for Anandtech?
  • catavalon21 - Thursday, July 21, 2016 - link

    Impatient as I was at first for benchmarks (yes, I'm a numbers junkie), since it's evident precious few of us will have had a chance to buy one of these cards yet (or the 480), I doubt the delay has caused anyone to buy the wrong card. I can't speak for the smartphone review that folks are complaining about being absent, but as it turns out, what I'm initially looking for is usually done early on in Bench. The rest of this, yeah, it can wait.
  • mkaibear - Saturday, July 23, 2016 - link

    Job, house, kids, church... more than enough to keep me sufficiently busy that I don't have the time to browse more than a few sites. I pick them quite carefully.

    Given the lifespan of a typical system is >5 years I think that a 2 month delay is perfectly reasonable. It can often take that long to get purchasing signoff once I've decided what they need to purchase anyway (one of the many reasons that architectural deep dives are useful - so I can explain why the purchase is worthwhile). Do you actually spend someone else's money at any point or are you just having to justify it to yourself?

    Whether or not it's worth the wait to you is one thing - but it's clearly worth the wait to both Anandtech and to Purch.
  • razvan.uruc@gmail.com - Thursday, July 21, 2016 - link

    Excellent article, well deserved the wait!
  • giggs - Thursday, July 21, 2016 - link

    While this is a very thorough and well written review, it makes me wonder about sponsored content and product placement.
    The PG279Q is the only monitor mentioned, making sure the brand appears, and nothing about competing products. It felt unnecessary.
    I hope it's just a coincidence, but considering there has been quite a lot of coverage of Asus in the last few months, I'm starting to doubt some of the stuff I read here.
  • Ryan Smith - Thursday, July 21, 2016 - link

    "The PG279Q is the only monitor mentionned, making sure the brand appears, and nothing about competing products."

    There's no product placement or the like (and if there was, it would be disclosed). I just wanted to name a popular 1440p G-Sync monitor to give some real-world connection to the results. We've had cards for a bit that can drive 1440p monitors at around 60fps, but GTX 1080 is really the first card that is going to make good use of higher refresh rate monitors.
  • giggs - Thursday, July 21, 2016 - link

    Fair enough, thank you for responding promptly. Keep up the good work!
  • arh2o - Thursday, July 21, 2016 - link

    This is really the gold standard of reviews. More in-depth than any site on the internet. Great job Ryan, keep up the good work.
  • Ranger1065 - Thursday, July 21, 2016 - link

    This is a quality article.
  • timchen - Thursday, July 21, 2016 - link

    Great article. It is pleasant to read more about technology instead of testing results. Some questions though:

    1. Higher frequency: I am kind of skeptical that the overall higher frequency is mostly enabled by FinFET. Maybe it is the case, but for example when Intel moved to FinFET we did not see such an improvement. The RX 480 is not showing it either. It seems pretty evident the situation is different from the 8800 GTX, though, where we first got frequency doubling/tripling only in the shader domain. (Wow, DX10 was 10 years ago... and computation throughput has improved by 20x.)

    2. The Fast Sync comparison graph looks pretty suspicious. How can vsync have such high latency? The most latency I can see in a double-buffered scenario with vsync is when the screen refresh happens just a tiny bit earlier than the completion of a buffer. That gives a delay of two frame times, which is about 33 ms (remember, we are talking about a case where GPU fps > 60). This is unless, of course, they are testing vsync at 20Hz or something.
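
As an aside on the arithmetic in point 2 above: the 33 ms figure follows directly from the double-buffer reasoning the commenter describes. Below is a minimal sketch of that calculation, assuming a fixed refresh rate and a GPU that always renders faster than the refresh interval; the function name and printed values are illustrative, not taken from the review.

```python
# Illustrative sketch (not from the article): worst-case display latency for
# classic double-buffered vsync, assuming the GPU finishes each frame faster
# than one refresh interval, so the GPU itself is never the bottleneck.

def worst_case_vsync_latency_ms(refresh_hz: float) -> float:
    """Worst case: the back buffer completes just after a refresh, so the frame
    waits almost one full refresh interval before it is flipped, then spends
    another interval on screen before the next frame replaces it."""
    frame_time_ms = 1000.0 / refresh_hz
    return 2 * frame_time_ms

print(worst_case_vsync_latency_ms(60))  # ~33.3 ms, the figure cited above
print(worst_case_vsync_latency_ms(20))  # ~100 ms, only if vsync really ran at 20 Hz
```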
