Crysis 3

Still one of our most punishing benchmarks, Crysis 3 needs no introduction. With it, Crytek went back to trying to kill computers, and the game continues to hold the “most punishing shooter” title in our benchmark suite. Only a handful of setups can even run Crysis 3 at its highest (Very High) settings, and that’s still without AA. Crysis 1 was an excellent template for the kind of performance required to drive games for the next few years, and Crysis 3 looks to be much the same for 2015.

Crysis 3 - 3840x2160 - High Quality + FXAA

Crysis 3 - 3840x2160 - Low Quality + FXAA

Crysis 3 - 2560x1440 - High Quality + FXAA

A pure and strenuous DirectX 11 test, Crysis 3 is a pretty decent bellwether for the overall state of the R9 Fury X. Once again the card trails the GTX 980 Ti, this time by a bit more than we saw in Battlefield 4: the gap is 6-7% at 4K and 12% at 1440p, versus 4% and 10% respectively there. This test hits the shaders pretty hard, so of our tried and true benchmarks I expected this to be one of the better games for AMD; in that sense the results end up being somewhat surprising.

In any case, on an absolute basis this is also a good example of the 4K quality tradeoff. The R9 Fury X is fast enough to deliver over 60fps at 1440p with high quality settings, or over 60fps at 4K with reduced quality settings. If you want 4K with high quality settings, however, the performance hit means an average framerate in just the 30s.

Otherwise the gains over the R9 290X are quite good. The R9 Fury X picks up 38-40% at 4K and 36% at 1440p. This trends relatively close to our ~40% performance expectations for the card, reinforcing just how big of a leap it is for AMD.
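
As a side note, the relative performance percentages quoted here and throughout the review are simple ratios of the average framerates from the charts above. A minimal sketch of that arithmetic is below; the framerate values in it are illustrative placeholders, not our measured results.

    # Sketch of how "trails by X%" / "picks up Y%" figures are derived from
    # average framerates. The fps values are hypothetical examples only.

    def percent_difference(card_fps, reference_fps):
        """How far card_fps sits above (+) or below (-) reference_fps, in percent."""
        return (card_fps / reference_fps - 1.0) * 100.0

    fury_x    = 36.0   # hypothetical 4K average fps
    gtx_980ti = 38.5   # hypothetical 4K average fps
    r9_290x   = 26.0   # hypothetical 4K average fps

    print(percent_difference(fury_x, gtx_980ti))  # negative: Fury X trails the 980 Ti
    print(percent_difference(fury_x, r9_290x))    # positive: Fury X's gain over the 290X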

458 Comments

  • looncraz - Friday, July 3, 2015 - link

    75MHz on a factory low-volted GPU is actually to be expected. If the voltage scaled automatically, like nVidia's, there is no telling where it would go. Hopefully someone cracks the voltage lock and gets to cranking up the hertz.
  • chizow - Friday, July 3, 2015 - link

    North of 400W is probably where we'll go, but I look forward to AMD exposing these voltage controls. It makes you wonder why they didn't release them from the outset, given that they claimed the card was an "Overclocker's Dream" when it is anything but.
  • Refuge - Friday, July 3, 2015 - link

    It isn't unlocked yet, so nobody has overclocked it yet.
  • chizow - Monday, July 6, 2015 - link

    But but...AMD claimed it was an Overclocker's Dream??? Just another good example of the gap between what AMD says and reality.
  • Thatguy97 - Thursday, July 2, 2015 - link

    would you say amd is now the "geforce fx 5800"
  • sabrewings - Thursday, July 2, 2015 - link

    That wasn't so much due to ATI's excellence. It had a lot to do with NVIDIA dropping the ball horribly, off a cliff, into a black hole.

    They learned their lessons and turned it around. I don't think either company "lost" necessarily, but I will say NVIDIA won. They do more with less: more performance with less power, fewer transistors, fewer SPs, and less bandwidth. Both cards perform admirably, but we all know the Fury X would've been more expensive had the 980 Ti not launched where it did. So, to perform arguably on par, AMD is living with smaller margins on probably smaller volume, while NVIDIA has plenty of volume with the 980 Ti and their base cost is lower since they're essentially using Titan X throw away chips.
  • looncraz - Thursday, July 2, 2015 - link

    They still had to pay for those "Titan X throw away chips", and they cost more per chip to produce than AMD's Fiji GPU. Also, nVidia apparently couldn't cut down the GPU as much as they were planning, as a response to AMD's anticipated performance. Consumers win, of course, but it isn't like nVidia did something magical; they simply bit the bullet and undercut their own offerings by barely cutting down the Titan X to make the 980 Ti.

    That said, it is very telling that the GCN architecture is less well balanced for modern games than the nVidia architecture; however, GCN also has far more features that are going unused. That is one long-standing habit ATi and, now, AMD engineers have had: planning for the future in their current chips. It's actually a bad habit, as it leaves silicon and transistors sitting around sucking up power and wasting space for, usually, years before the features finally become useful... and by that time, the performance level delivered by those dormant bits is intentionally outdone by the competition to make AMD look inferior.

    AMD had tessellation years before nVidia, but it went unused until DX11, by which time nVidia knew AMD's capabilities and intentionally designed a way to stay ahead in tessellation. AMD's own technology was used against it only because it was released so early. HBM, I fear, will be another example of this: AMD helped to develop HBM and interposer technologies and used them first, but I bet nVidia will benefit most from them.

    AMD's one possible upcoming saving grace is that they might be on Samsung's 14nm LPP FinFET tech at GloFo while nVidia will be on TSMC's 16nm FinFET tech. If AMD plays it right they can keep this advantage for a couple of generations and maximize the benefits that could bring.
  • vladx - Thursday, July 2, 2015 - link

    Afaik, even though TSMC's FinFET will be 16nm, it's a superior process overall to GloFo's 14nm FF, so I doubt AMD will gain any advantage.
  • testbug00 - Sunday, July 5, 2015 - link

    TSMC's FinFET 16nm process might be better than GloFo's own canceled 14XM or whatever they called it.

    Better than Samsung's 14nm? Dubious. Very unlikely.
  • chizow - Sunday, July 5, 2015 - link

    Why is it dubious? What's the biggest chip Samsung has fabbed? If they start producing chips bigger than the ~100mm^2 chips for Apple, then we can talk, but as much flak as TSMC gets over delays/problems, they still produce what are arguably the world's most advanced semiconductors, right there next to Intel's biggest chips in size and complexity.
