Crysis 3

Still one of our most punishing benchmarks, Crysis 3 needs no introduction. With it, Crytek went back to trying to kill computers, and the game still holds the "most punishing shooter" title in our benchmark suite. Only a handful of setups can run Crysis 3 at its highest (Very High) settings, and that's still without AA. Crysis 1 was an excellent template for the kind of performance required to drive games for the next few years, and Crysis 3 looks to be much the same for 2015.

Crysis 3 - 3840x2160 - High Quality + FXAA

Crysis 3 - 3840x2160 - Low Quality + FXAA

Crysis 3 - 2560x1440 - High Quality + FXAA

Under Crysis 3 the R9 Fury once again has the lead, though the size of that lead varies clearly with resolution. At 4K it's 14% or so, but at 1440p it's just 5%. This is consistent with the general trend for AMD and NVIDIA cards, which is that AMD sees better performance scaling at higher resolutions, and it is a big part of the reason why AMD is pushing 4K for the R9 Fury X and R9 Fury. Still, based on absolute performance, the R9 Fury is probably better suited for 1440p.

Meanwhile the R9 Fury cards once again consistently trail the R9 Fury X by no more than 7%. Crysis 3 is generally more sensitive to changes in shader throughput, so it’s interesting to see that the performance gap is as narrow as it is here. These kinds of results imply that the R9 Fury X’s last 512 stream processors aren’t being put to very good use, since most of the performance difference can be accounted for in the clockspeed difference.
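As a rough sanity check of that reasoning, the arithmetic can be sketched out directly. This is an illustration only, using AMD's published reference specifications (1050MHz and 4096 stream processors for the R9 Fury X, 1000MHz and 3584 stream processors for the R9 Fury); real scaling is never perfectly linear in either clockspeed or shader count.

```python
# Reference specifications for the two cards (AMD's published specs).
fury_x_clock, fury_clock = 1050, 1000  # boost clock, MHz
fury_x_sps, fury_sps = 4096, 3584      # stream processors

# Theoretical advantages of the Fury X over the Fury, taken in isolation.
clock_gap = fury_x_clock / fury_clock - 1   # clockspeed advantage
shader_gap = fury_x_sps / fury_sps - 1      # shader hardware advantage

print(f"Clockspeed advantage: {clock_gap:.1%}")   # 5.0%
print(f"Shader count advantage: {shader_gap:.1%}")  # 14.3%

# The observed gap is no more than ~7%: only slightly beyond the 5%
# clockspeed difference, and well short of the ~14% extra shader
# hardware, which is why the last 512 SPs appear to be underutilized.
```

With perfect shader scaling the Fury X's lead would approach 20% (1.05 × 8/7); the fact that it tops out around 7% is what points the finger at the extra stream processors rather than the clocks.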

288 Comments

  • Midwayman - Friday, July 10, 2015 - link

    I'd love to see these two go at it again once dx12 games start showing up.
  • Mugur - Saturday, July 11, 2015 - link

    Bingo... :-). I bet the whole Fury lineup will gain a lot with DX12, especially the X2 part (4 + 4 GB won't equal 4 as in current CF). They are clearly CPU limited at this point.
  • squngy - Saturday, July 11, 2015 - link

    I don't know...

    Getting dx12 performance at the cost of dx11 performance sounds like a stupid idea this soon before dx12 games even come out.

    By the time a good amount of dx12 games come out there will probably be new graphics cards available.
  • thomascheng - Saturday, July 11, 2015 - link

    They will probably circle around and optimize things for 1080p and dx11, once dx12 and 4k is at a good place.
  • akamateau - Tuesday, July 14, 2015 - link

    DX12 games are out now. DX12 does not degrade DX11 performance. In fact Radeon 290x is 33% faster than 980 Ti in DX12. Fury X just CRUSHES ALL nVIDIA silicon with DX12 and there is a reason for it.

    DX11 can ONLY feed data to the GPU serially and sequentially. DX12 can feed data asynchronously; the CPU sends the data down the shader pipeline WHEN it is processed. Only AMD has this IP.
  • @DoUL - Sunday, July 19, 2015 - link

    Kindly provide link to a single DX12 game that is "out now".

    In every single review of the GTX 980 Ti there is this slide of the DX12 feature set that the GTX 980 Ti supports, and in that slide in all the reviews "Async Compute" is right there sitting in the open, so I'm not really sure what you mean by "Only AMD has this IP"!

    I'd strongly recommend that you hold your horses till DX12 games start to roll out, and even then, don't forget the rocky start of DX11 titles!

    Regarding the comparison you're referring to, that guy is known for his obsession with mathematical calculations and synthetic benchmarking. Given the differences between real-world applications and numbers based on mathematical calculations, you shouldn't be taking his numbers as a factual baseline for what's to come.
  • @DoUL - Sunday, July 19, 2015 - link

    My comment was intended as a reply to @akamateau
  • OldSchoolKiller1977 - Sunday, July 26, 2015 - link

    You are an idiotic person, wishful thinking and dreams don't make you correct. As stated, please provide a link to these so-called DX12 games and your wonderful "Fury X just CRUSHES ALL NVidia" statement.
  • Michael Bay - Sunday, July 12, 2015 - link

    As long as there is separate RAM in PCs, the memory argument is moot, as contents are still copied to and operated on in two places.
  • akamateau - Tuesday, July 14, 2015 - link

    Negative. Once graphics data is processed and sent to the shaders it next goes to VRAM, or video RAM.

    System RAM is what the CPU uses to process object draws. Once the objects are in the GPU pipes, system RAM is irrelevant.

    In fact that is one of AMD's stacked memory patents. AMD will be putting HBM on APUs to not only act as CPU cache but HBM video RAM as well. They have patents for programmable HBM using FPGAs and reconfigurable cache memory HBM as well.

    Stacked memory HBM can also be on the CPU package as a replacement for system RAM. Can you imagine how your system would fly with 8-16 GB of HBM instead of system RAM?
