Crysis: Warhead

Up next is our legacy title for 2014, Crysis: Warhead. A stand-alone expansion to 2007's Crysis, Warhead is now over five years old and can still beat most systems down. Crysis was intended to be forward-looking in both its performance demands and its visual quality, and it has clearly achieved that: we have only recently reached the point where single-GPU cards can hit 60fps at 1920x1080 with 4x MSAA, never mind 2560x1440 and beyond.

Crysis: Warhead - 2560x1440 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - 1920x1080 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - 1920x1080 - Enthusiast Shaders/Gamer Quality

Crysis: Warhead is one of the few cases where the R9 285 regresses, with the R9 280 leading the R9 285 by around 5% at our highest settings. Interestingly, the tables turn at the lower quality settings and the R9 285 takes the lead, but overall this is mostly a wash.

Crysis: Warhead - Min. Frame Rate - 2560x1440 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - Min. Frame Rate - 1920x1080 - Enthusiast Quality + 4x MSAA

Crysis: Warhead - Min. Frame Rate - 1920x1080 - Enthusiast Shaders/Gamer Quality

Comments

  • chizow - Thursday, September 11, 2014 - link

    If Tonga is a referendum on Mantle, it basically proves Mantle is a failure and will never succeed. This pretty much shows that most of what AMD said about Mantle is BS, namely that it takes LESS effort (LMAO) on the part of the devs to implement than DX.

    If Mantle requires both an application update (game patch) from devs AFTER the game has already run past its prime shelf-date AND also requires AMD to release optimized drivers every time a new GPU is released, then there is simply no way Mantle will ever succeed in a meaningful manner with that level of effort. Simply put, no one is going to put in that kind of work if it means re-tweaking every time a new ASIC or SKU is released. Look at BF4: it's already in the rear-view mirror from DICE's standpoint, and no one even cares anymore as they are already looking toward the next Battlefield#
  • TiGr1982 - Thursday, September 11, 2014 - link

    Please stop calling GPUs ASICs - this looks ridiculous.
    Please go to Wikipedia and read what "ASIC" is.
  • chizow - Thursday, September 11, 2014 - link

    Is this a joke or are you just new to the chipmaking industry? Maybe you should try re-reading the Wikipedia entry to understand GPUs are ASICs despite their more recent GPGPU functionality. GPU makers like AMD and Nvidia have been calling their chips ASICs for decades and will continue to do so, your pedantic objections notwithstanding.

    But no need to take my word for it, just look at their own internal memos and job listings:

    https://www.google.com/#q=intel+asic
    https://www.google.com/#q=amd+asic
    https://www.google.com/#q=nvidia+asic
  • TiGr1982 - Thursday, September 11, 2014 - link

    OK, I accept your arguments, but I still don't like this kind of terminology. To me, one may call something like a fixed-function video decoder an "ASIC" (for example, the UVD blocks inside Radeon GPUs), but not the GPU as a whole, because people have been doing GPGPU on GPUs for a number of years now, and the "General Purpose" in GPGPU contradicts the "Application Specific" in ASIC, doesn't it?
    So, overall it's a terminology/naming issue; everyone uses whatever naming they prefer.
  • chizow - Thursday, September 11, 2014 - link

    I think you are over-analyzing things a bit. When you look at the entire circuit board for a particular device, you will see each main component or chip is considered an ASIC, because each one has a specific application.

    For example, the CPU is an ASIC even though it handles general processing: its specific application on a PC mainboard is to serve as the central processing unit. Similarly, a southbridge chip handles I/O and communication with peripheral devices, a northbridge handles traffic to/from the CPU and RAM, and so on and so forth.
  • TiGr1982 - Thursday, September 11, 2014 - link

    OK, then according to this (broad) understanding, every chip in the silicon industry may be called an ASIC :)
    Let it be.
  • chizow - Friday, September 12, 2014 - link

    Yes, that is why everyone in the silicon industry calls their chips that have specific applications ASICs. ;)

    Something like a capacitor or resistor would not be, as those are common commodity parts.
  • Sabresiberian - Thursday, September 11, 2014 - link

    I reject the notion that we should be satisfied with a slower rate of GPU performance increase. We have more use than ever before for a big jump in power. 2560x1440@144Hz. 4K@60Hz.

    Of course, that's easy for me to say without being a micro-architecture design engineer myself, but I think it's time for a total re-think. Or, if the companies are holding anything back - bring it out now, please! :)
  • Stochastic - Thursday, September 11, 2014 - link

    Process node shrinks are getting more and more difficult, equipment costs are rising, and the benefits of moving to a smaller node are also diminishing. So sadly I think we'll have to adjust to a more sedate pace in the industry.
  • TiGr1982 - Thursday, September 11, 2014 - link

    I've been an AMD Radeon user for more than 10 years, but after reading this R9 285 review I can't help but think that, based on the results of the smaller GM107 in the 750 Ti, GM204 in the GTX 970/980 may offer much better performance per watt and per unit of die area (at least for gaming tasks) than the whole AMD GPU lineup. Soon we'll see whether or not this will be the case.
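
To make the performance-per-watt and per-die-area comparison raised in the comments concrete, here is a minimal sketch of how such ratios could be computed. The frame rates are hypothetical placeholders, and the power and die-area figures are approximate public specs, so this only illustrates the metric rather than reproducing any benchmark result.

```python
# Minimal sketch (not from the review): comparing GPUs on performance per watt
# and per mm^2 of die area. The fps values are hypothetical placeholders; the
# power and die-size numbers are approximate public specs, for illustration only.

cards = {
    "GTX 750 Ti (GM107)": {"fps": 30.0, "watts": 60.0, "die_mm2": 148.0},
    "R9 285 (Tonga)":     {"fps": 55.0, "watts": 190.0, "die_mm2": 359.0},
}

for name, c in cards.items():
    perf_per_watt = c["fps"] / c["watts"]    # frames per second per watt
    perf_per_mm2 = c["fps"] / c["die_mm2"]   # frames per second per mm^2 of die
    print(f"{name}: {perf_per_watt:.3f} fps/W, {perf_per_mm2:.3f} fps/mm^2")
```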
