Crysis: Warhead

Up next is our legacy title for 2014, Crysis: Warhead. Released in 2008 as the stand-alone expansion to 2007’s Crysis, Warhead is now six years old and can still beat most systems down. Crysis was intended to be forward-looking as far as performance and visual quality go, and it has clearly achieved that. Only now have single-GPU cards arrived that can hit 60fps at 1920 with 4xAA, never mind 2560 and beyond.

Crysis: Warhead - 3840x2160 - Gamer Quality

Crysis: Warhead - 2560x1440 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - 1920x1080 - Enthusiast Quality + 4x MSAA

At the launch of the GTX 680, Crysis: Warhead was rather punishing of the GTX 680’s decreased memory bandwidth relative to the GTX 580. The GTX 680 was faster than the GTX 580, but the gains weren’t as great as what we saw elsewhere. For this reason, the fact that the GTX 980 can hold a 60% lead over the GTX 680 is particularly important: it means that NVIDIA’s 3rd generation delta color compression is working, and working well. It has allowed NVIDIA to overcome quite a bit of memory bandwidth bottlenecking in this game and push performance higher.
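The intuition behind delta color compression is that neighboring framebuffer pixels are usually similar, so storing one anchor value plus small per-pixel differences takes fewer bits than storing every pixel in full. The toy sketch below illustrates the principle only; NVIDIA's actual scheme is proprietary and far more sophisticated (fixed block sizes, multiple compression modes, lossless fallback).

```python
def delta_compress(pixels):
    """Split a run of 8-bit channel values into an anchor plus deltas."""
    anchor = pixels[0]
    deltas = [p - prev for prev, p in zip(pixels, pixels[1:])]
    return anchor, deltas

def bits_needed(values):
    """Bits to store every signed delta at a uniform width."""
    widest = max(abs(v) for v in values) if values else 0
    width = widest.bit_length() + 1  # +1 for the sign bit
    return width * len(values)

row = [118, 120, 121, 121, 119, 118, 120, 122]  # a smooth gradient
anchor, deltas = delta_compress(row)

raw_bits = 8 * len(row)                # 64 bits stored uncompressed
packed_bits = 8 + bits_needed(deltas)  # 8-bit anchor + 3-bit deltas = 29 bits
```

Here the deltas never exceed ±2, so 3 bits each suffice and the row shrinks from 64 bits to 29 — under half the memory traffic for the same (lossless) data. Smooth regions like skies and gradients compress best, which is why the benefit varies by game.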

That said, since the GTX 780 Ti has a full 50% more memory bandwidth, it’s telling that the GTX 780 Ti and GTX 980 are virtually tied in this benchmark. Crysis: Warhead will still gladly take whatever memory bandwidth it can get from NVIDIA cards.
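The 50% figure follows directly from the published specs: both cards run 7Gbps GDDR5, but the GTX 780 Ti pairs it with a 384-bit bus against the GTX 980’s 256-bit bus. A quick back-of-the-envelope check (the helper function is just for illustration):

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak theoretical memory bandwidth in GB/s:
    (bus width in bytes) x (effective data rate per pin)."""
    return bus_width_bits / 8 * data_rate_gbps

gtx_780_ti = bandwidth_gbs(384, 7)  # 336 GB/s
gtx_980    = bandwidth_gbs(256, 7)  # 224 GB/s
ratio = gtx_780_ti / gtx_980        # 1.5 -> the "50% more" in the text
```

That the GTX 980 ties a card with 336GB/s while having only 224GB/s of its own is the strongest evidence in this review that the compression improvements are doing real work.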

Otherwise, against AMD cards this is the other game where the GTX 980 can’t cleanly defeat the R9 290XU. These cards are virtually tied, with AMD edging out NVIDIA in two of the three tests. Given their differing architectures I’m hesitant to attribute this to memory bandwidth as well, but if it were a factor, the R9 290XU has a very large memory bandwidth advantage going into this matchup.

Crysis: Warhead - Min. Frame Rate - 3840x2160 - Gamer Quality

Crysis: Warhead - Min. Frame Rate - 2560x1440 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - Min. Frame Rate - 1920x1080 - Enthusiast Quality + 4x MSAA

When it comes to minimum framerates the story is much the same, with the GTX 980 and AMD trading places. It’s interesting to note that the GTX 980 is doing rather well against the GTX 680 here; that memory bandwidth advantage appears to really be paying off in minimum framerates.

Comments

  • garadante - Thursday, September 25, 2014 - link

    Yeah. To be honest, nobody except ardent Nvidia fanboys would've believed Nvidia would release cards as performance- and price-competitive as they did, especially the 970. The 980 is honestly a little overpriced compared to a few generations ago, as they'll slap a $200 premium on it for Big Maxwell, but $330 MSRP for the 970 (if I remember correctly) wasn't bad at all for what is generally 290/780/290X performance.
  • tuxRoller - Friday, September 26, 2014 - link

    It's not too surprising as we saw what the 750ti was like.
    What is disappointing, though, is that I thought nvidia had made some fundamental breakthrough in their designs where, instead, it looks as though they "simply" enabled a better governor.
  • garadante - Friday, September 26, 2014 - link

    It'll be interesting to see how the efficiency suffers once Nvidia releases a proper compute die with area dedicated to double-precision FP. I have to keep in mind that when factoring in the stripped-down die compared to AMD's 290/290X cards, the results aren't as competition-blowing as they first seem. But if AMD can't counter these cards with their own stripped-down, gaming-only cards, then Nvidia took the win this generation.
  • tuxRoller - Friday, September 26, 2014 - link

    That's an excellent point. I take it you already read the tomshardware review? Their compute performance/W is still good, but not as unbelievable as their gaming performance, and I'm not sure it's b/c this is a gaming-only card. Regardless, though, AMD needs to offer something better than what's currently available. Unfortunately, I don't think they will be able to do it. There was a lot of driver work that went into making these Maxwell cards hum.
  • garadante - Friday, September 26, 2014 - link

    One thing that really bothers me though is how Anandtech keeps testing the 290/290X with reference cards. Those cards run at 95 C due to the fan control profile in the BIOS and I remember seeing that when people ran those cards with decent nonreference cooling in the 70 C range that power consumption was 15-20+ watts lower. So an AMD die that sacrifices FP64 performance to focus on FP32(gaming, some compute) performance as well as decreasing die size due to the lack of FP64 resources seems like it could be a lot more competitive with Maxwell than people are making it out to be. I have this feeling that the people saying how badly Maxwell trounces AMD's efficiency and that AMD can't possibly hope to catch up are too biased in their thinking.
  • tuxRoller - Saturday, September 27, 2014 - link

    Do you have a link to those reviews that show non-reference fans make GPUs more efficient? I don't know how that could be possible. Given the temps we're looking at, the effects on the conductors should be very, very small.
    Regarding the reduction in FP performance and gaming efficiency, that's a good point. That may indeed be part of the reason why Nvidia has the gaming/compute split (aside from the prices they can charge).
  • garadante - Sunday, September 28, 2014 - link

    Here's an example of a card with liquid cooling. Factor in the overclock that the nonreference card has and that it draws something like 20 watts less in Furmark and the same in 3Dmark. I could be mistaken on the improved power usage but I do recall seeing shortly after the 290X launch that nonreference coolers helped immensely, and power usage dropped as well. Sadly I don't believe Anandtech ever reviewed a nonreference 290X... which is mind boggling to consider, considering how much nonreference cooling helped that card, even outside of any potential power usage decreases.
  • garadante - Sunday, September 28, 2014 - link

    http://www.tomshardware.com/reviews/lcs-axr9-290x-... Whoops, forgot the link.
  • jman9295 - Friday, September 26, 2014 - link

    I wonder why they still give these cards these boring numbered names like GTX 980. Except for the Titan, these names kinda suck. Why not at least name it the Maxwell 980, or for AMD's R9 290 series the Hawaii 290? That sounds a lot cooler than GTX or R9. Also, for the last several generations, AMD and Nvidia's numbering systems seemed similar until AMD ended that with the R9/R7 200 series. Before that, they had the GTX 700 and HD 7000 series, the GTX 600 and HD 6000 series, and so on. Then, as soon as AMD changed it up, Nvidia decided to skip the GTX 800s for retail desktop GPUs and jump right up to the 900 series. Maybe they will come up with a fancier name for their next-gen cards besides the GTX 1000s.
  • AnnonymousCoward - Saturday, September 27, 2014 - link

    Naw, names are much harder to keep track of than numbers that inherently describe relative performance.
