Crysis 3

Still one of our most punishing benchmarks, Crysis 3 needs no introduction. With Crysis 3, Crytek has gone back to trying to kill computers, and the game still holds the "most punishing shooter" title in our benchmark suite. Only a handful of setups can even run Crysis 3 at its highest (Very High) settings, and that's still without AA. Crysis 1 was an excellent template for the kind of performance required to drive games for the next few years, and Crysis 3 looks to be much the same for 2014.

Crysis 3 - 3840x2160 - High Quality + FXAA

Crysis 3 - 3840x2160 - Low Quality + FXAA

Crysis 3 - 2560x1440 - High Quality + FXAA

Crysis 3 - 1920x1080 - High Quality + FXAA

Always a punishing game, Crysis 3 ends up being one of the only games in which the GTX 980 doesn't take a meaningful lead over the GTX 780 Ti. To be clear, the GTX 980 wins in most of these benchmarks, but not in all of them, and even when it does win the GTX 780 Ti is never far behind. As a result the GTX 980's lead over the GTX 780 Ti and the rest of our single-GPU video cards is never more than a few percent, even at 4K. At 1440p the tables are turned outright, with the GTX 980 taking a 3% deficit; this is the only time the GTX 980 will lose to NVIDIA's previous-generation consumer flagship.

As for the comparison against AMD's cards, NVIDIA has traditionally done well in Crysis 3, and that extends to the GTX 980 as well. The GTX 980 takes a 10-20% lead over the R9 290XU depending on the resolution, with its advantage shrinking as the resolution grows. During the launch of the R9 290 series we saw that AMD tended to do better than NVIDIA at higher resolutions, and while this gap has narrowed some, it has not gone away. AMD remains the most likely to pull even with the GTX 980 at 4K, despite the additional ROPs available to the GTX 980.

This will also be the worst showing for the GTX 980 relative to the GTX 680. GTX 980 is still well in the lead, but below 4K that lead is just 44%. NVIDIA can’t even do 50% better than the GTX 680 in this game until we finally push the GTX 680 out of its comfort zone at 4K.

All of this points to Crysis 3 being heavily shader limited at these settings. NVIDIA has significantly improved their CUDA core occupancy with Maxwell, but in these extreme situations the GTX 980 still struggles with its CUDA core deficit versus GK110, and with its limited 33% increase in CUDA cores versus the GTX 680. If anything this is a feather in Kepler's cap, showing that it's not entirely outclassed when given a workload that maps well to its more ILP-sensitive shader architecture.
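
To put a finer point on what "ILP-sensitive" means here, below is a minimal CUDA C++ sketch of our own (illustrative only, and no relation to Crytek's actual shader code): two kernels performing the same amount of math, one as a single dependent FMA chain and one as four independent chains. Kepler's SMX has more CUDA cores per warp scheduler than thread-level parallelism alone can feed, so it leans on patterns like the latter to approach peak throughput, while Maxwell's reorganized SMM needs less help from ILP to keep its cores busy.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// One long FMA chain: every operation depends on the previous result,
// so within a thread there is no instruction-level parallelism to extract.
__global__ void dependent_chain(const float* in, float* out, int iters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float a = in[i];
    for (int k = 0; k < iters; ++k)
        a = a * 1.0001f + 0.5f;  // each FMA must wait for the last one
    out[i] = a;
}

// Same arithmetic volume, restructured into four independent chains.
// The scheduler can issue these back to back (ILP), which is the kind
// of workload that maps well onto Kepler's 192-core SMX.
__global__ void independent_chains(const float* in, float* out, int iters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float a = in[i], b = a + 1.0f, c = a + 2.0f, d = a + 3.0f;
    for (int k = 0; k < iters; ++k) {
        a = a * 1.0001f + 0.5f;  // no dependencies between the
        b = b * 1.0001f + 0.5f;  // four chains, so up to four
        c = c * 1.0001f + 0.5f;  // FMAs can be in flight per
        d = d * 1.0001f + 0.5f;  // thread at once
    }
    out[i] = a + b + c + d;
}

int main() {
    const int N = 1 << 20, iters = 4096;
    float *in = nullptr, *out = nullptr;
    cudaMallocManaged(&in, N * sizeof(float));
    cudaMallocManaged(&out, N * sizeof(float));
    for (int i = 0; i < N; ++i) in[i] = 1.0f;

    dependent_chain<<<N / 256, 256>>>(in, out, iters);
    cudaDeviceSynchronize();
    independent_chains<<<N / 256, 256>>>(in, out, iters);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

If the hardware behaves as described, the independent version should land much closer to peak FLOPS on GK104/GK110, while on GM204 the gap between the two kernels should be noticeably smaller.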

Crysis 3 - Delta Percentages

Crysis 3 - Surround/4K - Delta Percentages

The delta percentage story continues to be unremarkable with Crysis 3. GTX 980 does technically fare a bit worse, but it’s still well under 3%. Keep in mind that delta percentages do become more sensitive at higher framerates (there is less absolute time to pace frames), so a slight increase here is not unexpected.
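
For reference, here is roughly how such a metric can be computed, assuming the usual deltas-over-mean definition (the average frame-to-frame change in frame time, expressed as a percentage of the average frame time); the frame times below are made up for illustration:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Delta percentage, assuming the deltas-over-mean definition: the mean
// absolute difference between consecutive frame times, expressed as a
// percentage of the mean frame time. Lower means more consistent pacing.
double delta_percentage(const std::vector<double>& frame_times_ms) {
    if (frame_times_ms.size() < 2) return 0.0;

    double delta_sum = 0.0;
    double time_sum = frame_times_ms[0];
    for (size_t i = 1; i < frame_times_ms.size(); ++i) {
        delta_sum += std::fabs(frame_times_ms[i] - frame_times_ms[i - 1]);
        time_sum += frame_times_ms[i];
    }
    double mean_delta = delta_sum / (frame_times_ms.size() - 1);
    double mean_time = time_sum / frame_times_ms.size();
    return 100.0 * mean_delta / mean_time;
}

int main() {
    // Two hypothetical runs, both averaging 25ms (40fps): one well paced,
    // one alternating 20ms/30ms. Same average framerate, very different
    // consistency.
    std::vector<double> smooth = {25.0, 25.5, 24.5, 25.0, 25.2, 24.8};
    std::vector<double> jittery = {20.0, 30.0, 20.0, 30.0, 20.0, 30.0};
    printf("smooth : %.2f%%\n", delta_percentage(smooth));   // ~2%
    printf("jittery: %.2f%%\n", delta_percentage(jittery));  // 40%
    return 0;
}
```

This is also why the metric gets touchier as framerates climb: a fixed 1ms swing is 4% of a 25ms frame but 10% of a 10ms frame.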

274 Comments

  • jmunjr - Friday, September 19, 2014 - link

    Wish you had done a GTX 970 review as well, like many other sites. Way more of us care about that card than the 980, since it is cheaper.
  • Gonemad - Friday, September 19, 2014 - link

    Apparently, if I want to run anything under the sun at 1080p cranked to full at 60fps, I will need to get myself a GTX 980 and a suitable system to run with it, and forget mid-range priced cards.

    That should put a huge hole in my wallet.

    Oh yes, the others can run stuff at 1080p, but you have to keep tweaking drivers, turning AA on, turning AA off, what a chore. And the age-old joke: yes, it RUNS Crysis, at the resolution I'd like.

    Didn't the card, by any chance, actually benefit from being fabricated at 28nm, by spreading its heat over a larger area? If the whole thing, hypothetically, just shrunk to 14nm, wouldn't all that 165W of power be dissipated over a smaller area (1/4 the area?), and wouldn't this thing hit its thermal throttle and stay there?

    Or, by being made smaller, would it actually dissipate even less heat and still get faster?
  • Yojimbo - Friday, September 19, 2014 - link

    I think that it depends on the process. If Dennard scaling were still in effect, then it should dissipate proportionally less heat. But to my understanding, Dennard scaling has broken down somewhat in recent years, so I think heat density could be a concern. However, I don't know if it would be accurate to say that the chip benefited from the 28nm process, since I think it was originally designed with the 20nm process in mind, and the problem with putting the chip on that process had to do with cost and yields. So, presumably, the heat dissipation issues were already worked out for that process?
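
    To make that concrete, my own back-of-the-envelope version (not from the article): take an ideal linear shrink by a factor k, so 28nm to 14nm is roughly k = 2.

    ```latex
    % Dynamic power per transistor: P \propto C V^2 f
    % Ideal Dennard scaling: C \to C/k, \quad V \to V/k, \quad f \to kf
    P' \propto \frac{C}{k} \cdot \frac{V^2}{k^2} \cdot kf = \frac{P}{k^2},
    \qquad A' = \frac{A}{k^2}
    \quad\Rightarrow\quad \frac{P'}{A'} = \frac{P}{A} \quad \text{(power density constant)}

    % Post-Dennard, with V roughly fixed:
    P' \propto \frac{C}{k} \cdot V^2 \cdot kf = P
    \quad\Rightarrow\quad \frac{P'}{A'} = k^2 \, \frac{P}{A} \quad \text{(about 4x for } k = 2\text{)}
    ```

    So if all 165W stayed put after a straight shrink onto roughly a quarter of the area, the power density would roughly quadruple, which is exactly the heat density worry.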
  • AnnonymousCoward - Friday, September 26, 2014 - link

    The die size doesn't really matter for heat dissipation when the external heat sink is the same size; the thermal resistance from die to heat sink would be similar.
  • danjw - Friday, September 19, 2014 - link

    I would love to see these built on Intel's 14nm process or even the 22nm. I think both Nvidia and AMD aren't comfortable letting Intel look at their technology, despite NDAs and firewalls that would be a part of any such agreement.

    Anyway, thanks for the great review Ryan.
  • Yojimbo - Friday, September 19, 2014 - link

    Well, if one goes by comments Jen-Hsun Huang (Nvidia's CEO) made a year or two ago, Nvidia would have liked Intel to manufacture their SOCs for them, but it seems Intel was unwilling. I don't see why Intel would be willing to manufacture SOCs but not GPUs, given that at that time Nvidia must have already planned to put their desktop GPU technology into their SOCs, unless the one year delay between the parts makes a difference.
  • r13j13r13 - Friday, September 19, 2014 - link

    Until the AMD 300 series comes out with native DirectX 12 support.
  • Arakageeta - Friday, September 19, 2014 - link

    No interpretation of the compute graphs whatsoever? Could you at least report the output of CUDA's deviceQuery tool?
  • texasti89 - Friday, September 19, 2014 - link

    I'm truly impressed with this new line of GPUs. To be able to achieve this leap in efficiency using the same transistor feature size is a great incremental achievement. Bravo TSMC & Nvidia. I feel comfortable thinking that we will soon get this amazing 980-level performance in gaming laptops once we scale technology to the 10nm process. Keep up the great work.
  • stateofstatic - Friday, September 19, 2014 - link

    Spoiler alert: Intel is building a new fab in Hillsboro, OR specifically for this purpose...
