Crysis: Warhead

It’s been over 2 years since the release of the original Crysis, and “but can it run Crysis?” is still a common question. With its mix of dense foliage, long draw distances, and high-quality textures, Crysis remains a game that can bring any machine to its knees, and it’s our first choice for testing any video card.

[Charts: Crysis Warhead – average framerates]

As far as this first game is concerned, things are looking so-so for NVIDIA. The GTX 480 holds a solid 10-12% lead over the 5870, while unsurprisingly losing to the 5970. For the GTX 470 things are less rosy; it’s basically breaking even with the 5850. Furthermore there’s an interesting pattern in our averages: the gap between the GTX 400 series and the Radeon 5000 series shrinks as the resolution increases. Keep an eye on this; it’s going to be a repeating pattern.
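As a quick aside on how we talk about these numbers: the percentage lead we quote is just the ratio of the two cards’ average framerates. A minimal sketch, using made-up framerates purely for illustration (not our actual benchmark data):

```python
def percent_lead(fps_a: float, fps_b: float) -> float:
    """How far card A leads card B, as a percentage of B's framerate."""
    return (fps_a / fps_b - 1.0) * 100.0

# Hypothetical averages, for illustration only:
gtx480_avg, hd5870_avg = 44.0, 40.0
print(f"Lead: {percent_lead(gtx480_avg, hd5870_avg):.0f}%")  # prints "Lead: 10%"
```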

[Charts: Crysis Warhead – minimum framerates]

We’ve also gone ahead and recorded the minimum framerates for Crysis, as in our testing we’ve found the minimums to be very reliable. And in doing so, we have some data even more interesting than the averages. The GTX 400 series completely tramples the 5000 series when it comes to minimum framerates, far more than we would have expected. At 2560 Crysis is approaching a video RAM limitation on our cards with 1GB of memory or less, which gives the GTX 480 a clear lead at that resolution. But even at lower resolutions where we’re not video RAM limited, the GTX 480 still enjoys a 33% lead in the minimum framerate, and the GTX 470 is well ahead of the 5850 and even slightly ahead of the 5870.

For whatever reason AMD can’t seem to keep up with NVIDIA when it comes to the minimum framerate, even at lower resolutions. Certainly it’s obvious why when the 1GB cards are video RAM limited at 2560, but if we didn’t have this data we would never have guessed the minimum framerates were this different at lower resolutions.
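This is also why averages alone can hide so much: a single long frame drags the minimum down while barely moving the average. A simple sketch of how both figures fall out of a frametime trace, using fabricated frametimes rather than anything from our benchmark harness:

```python
def fps_stats(frame_times_ms):
    """Compute average and minimum FPS from per-frame render times (ms)."""
    total_seconds = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_seconds  # frames / elapsed time
    min_fps = 1000.0 / max(frame_times_ms)         # worst single frame
    return avg_fps, min_fps

# A mostly smooth run with one stall (say, a texture swap into video RAM):
trace = [20.0] * 99 + [100.0]  # 99 frames at 20 ms, one at 100 ms
avg_fps, min_fps = fps_stats(trace)
print(f"avg {avg_fps:.1f} fps, min {min_fps:.1f} fps")  # avg 48.1 fps, min 10.0 fps
```

One 100 ms hitch in an otherwise 50 fps run costs the average only a couple of frames per second, but cuts the reported minimum to 10 fps; that asymmetry is exactly what a VRAM limitation looks like in these charts.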

Finally we have a quick look at SLI/CF performance. CF seems to exacerbate the video RAM limitations of the 5000 series, resulting in the GTX 480 SLI coming in even farther ahead of the 5870 CF. Even at lower resolutions, SLI seems to be scaling better than CF.
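When we say one multi-GPU setup “scales better,” we mean the ratio of dual-card to single-card performance, where 2.0x would be perfect doubling. A sketch with hypothetical framerates, for illustration only:

```python
def scaling_factor(dual_fps: float, single_fps: float) -> float:
    """Multi-GPU scaling: 2.0 means a second card perfectly doubles performance."""
    return dual_fps / single_fps

# Hypothetical numbers, not our benchmark results:
print(f"SLI scaling: {scaling_factor(80.0, 44.0):.2f}x")  # SLI scaling: 1.82x
print(f"CF scaling:  {scaling_factor(66.0, 40.0):.2f}x")  # CF scaling:  1.65x
```

A VRAM-limited card drags this ratio down directly: the second GPU duplicates the same framebuffer contents rather than adding usable memory, so the stalls don’t scale away.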

196 Comments

  • palladium - Saturday, March 27, 2010 - link

Clock for clock, the 920 is faster than the 860 thanks to its triple-channel memory; the 860 only comes out ahead because of its aggressive turbo mode. X58 is definitely the route to go, especially if you're benchmarking SLI/CF setups (dual PCIe x16).
  • randfee - Sunday, March 28, 2010 - link

Go ahead and try Crysis at 3.33GHz and 4.xGHz; minimum fps scale strangely with the CPU.
  • palladium - Saturday, March 27, 2010 - link

    shit double post, sry
  • palladium - Saturday, March 27, 2010 - link

Clock for clock, the 920 is faster than the 860 (the 860 is faster overall because of its aggressive turbo mode). Using the P55/860 would limit cards to PCIe x8 bandwidth when benchmarking SLI/CF (unless of course you get a board with the nF200 chip), which can be more significant (especially with high-end cards) than overclocking a CPU from 3.33GHz to 4GHz.
  • Roland00 - Saturday, March 27, 2010 - link

    It doesn't really add to the framerates, and having a 4ghz cpu could in theory bring stability issues.

http://www.legionhardware.com/articles_pages/cpu_s...
  • B3an - Friday, March 26, 2010 - link

    You're good at making yourself look stupid.

    A 920 will reach 4GHz easy. I've got one to 4.6GHz. And a 920 is for the superior X58 platform and will have Tri-Channel memory.
  • Makaveli - Friday, March 26, 2010 - link

    I have to agree with that guy.

Your post is silly; everyone knows the X58 platform is the superior chipset in the Intel lineup. Secondly, do you honestly think 3.33GHz vs 4GHz is going to make that much of a difference at those high resolutions?

  • randfee - Friday, March 26, 2010 - link

Sorry guys, but I know what I'm talking about. Using Crysis, for instance, I found that minimum fps scale quite nicely with CPU clock, whereas the difference a quad core makes is not so big (only 2 threads in the game afaik). FarCry 2: huge improvements with higher-end (= higher-clocked) CPUs. The Core i7 platform has a clear advantage, yes, but the clock counts quite a bit.

    As I said... no offense intended and no, not arguing against my favorite site anandtech ;). Just stating what I and others have observed. I'd just always try and minimize other possible bottlenecks.
  • randfee - Friday, March 26, 2010 - link

Well... why not test using the 920 @ 4.xGHz? Why possibly bottleneck the system at the CPU by using "only" 3.33GHz?

    No offense intended but I find it a valid question. Some games really are CPU bound, even at high settings.
  • Ph0b0s - Friday, March 26, 2010 - link

These new cards from ATI and NVIDIA are very nice, and for a new PC build it's a no-brainer to pick up one of them. But for those like me with decent cards from the last generation (GTX 285 SLI), I don't really feel a lot of pressure to upgrade.

Most current PC games are DirectX 9 Xbox 360 ports that last-gen cards can handle quite well. Even DirectX 10 games are not too slow. The real driver for these cards is DirectX 11 games, of which I can count the current crop on one hand, with not very many upcoming.

    Those that are out don't really bring much over DX10 so I don't really feel like I am missing anything yet. I think Crysis 2 may change this, but by it's release date there will probably be updated / shrunk versions of these new GPU's avaliable.

Hence NVIDIA and ATI need really ecstatic reviews to convince us to buy their new cards when there is not a lot of software that (in my opinion) really needs them.
