Crysis: Warhead

It’s been over two years since the release of the original Crysis, and “but can it run Crysis?” remains a common question even today. With its mix of dense foliage, long draw distances, and high-quality textures, Crysis can still bring any machine to its knees, and it’s our first choice for testing any video card.

Crysis Warhead

As far as this first game is concerned, things are looking so-so for NVIDIA. The GTX 480 holds a solid 10-12% lead over the 5870, and unsurprisingly loses to the 5970. For the GTX 470 things are less rosy; it’s basically breaking even with the 5850. Furthermore, there’s an interesting pattern in our averages: the gap between the GTX 400 series and the Radeon 5000 series shrinks as the resolution increases. Keep an eye on this; it’s going to be a repeating pattern.

Crysis Warhead - Minimum Frame Rate

We’ve also gone ahead and recorded the minimum framerates for Crysis, as in our testing we’ve found the minimums to be very reliable. In doing so, we have some data even more interesting than the averages. The GTX 400 series completely tramples the 5000 series when it comes to minimum framerates, far more than we would have expected. At 2560 Crysis approaches a video RAM limitation on cards with 1GB or less, which gives the GTX 480 a clear lead at that resolution. But even at lower resolutions, where we’re not video RAM limited, the GTX 480 still enjoys a 33% lead in the minimum framerate, and the GTX 470 is well ahead of the 5850 and even slightly ahead of the 5870.

For whatever reason, AMD can’t seem to keep up with NVIDIA when it comes to the minimum framerate, even at lower resolutions. Certainly it’s obvious when the 1GB cards are video RAM limited at 2560, but if we didn’t have this data we would never have guessed the minimum framerates were this different at lower resolutions.

Finally, we have a quick look at SLI/CF performance. CF seems to exacerbate the video RAM limitations of the 5000 series, resulting in GTX 480 SLI coming in even further ahead of 5870 CF. Even at lower resolutions, SLI seems to scale better than CF.

196 Comments

  • arjunp2085 - Friday, March 26, 2010 - link

    For dealing with suck fake geometry, Fermi has several new tricks.

    is that supposed to be such??

    850 Watts for SLI.. man Air Conditioning for my room does not consume that much electricity

    Might have to go for industrial connections to use such high Electricity consumptions lol

    Green Team NOT GREEN....
  • Leyawiin - Friday, March 26, 2010 - link

    Guess I'll keep my GTX 260 for a year or so more and hope for better days.
  • hangfirew8 - Friday, March 26, 2010 - link

    Launch FAIL.

All this waiting, and a paper launch. They couldn't even manage the half-dozen cards per vendor at Newegg that some previous soft launches had.

All this waiting, and a small incremental increase over existing card performance. High power draw and temps. High prices; at least they had the sense not to price it like the 8800 Ultra, which was a game changer: it brought a big leap in performance plus a new DX level, DX10.

    I've been holding off buying until this launch, I really wanted nVidia to pull something off here. Oh, well.

  • softdrinkviking - Friday, March 26, 2010 - link

so by the time a "full" gf100 is available, how close will we be to the next-gen AMD card?
    and how low will prices on the 58XX series be?

    this article never made an explicit buying recommendation, but how many people out there are still waiting to buy a gf100?
    6 months is a long time.
after xmas and the post-holiday season, anybody on the fence about it (i.e. not loyal nvidia fans) probably just went for an amd card.
    so the question (for a majority of potential buyers?) isn't "which card do i buy?", it's "do i need/want to upgrade from my 58xx amd card to a gf100?"


    also, i'm curious to find out if fermi can be scaled down into a low profile card and offer superior performance in a form factor that relies so heavily on low temps and low power consumption.
    the htpc market is a big money maker, and a bad showing for nvidia there could really hurt them.
    maybe they won't even try?

  • shin0bi272 - Friday, March 26, 2010 - link

great review as usual here at Anandtech. I would have thought that in your conclusions you would have mentioned that, in light of the rather lackluster 5% performance crown they now hold, it wasn't the best idea for them to disable 6% of their cores after all.

Why make a 512-core gpu, then disable 32 of them and end up with poorer performance when you're already 6 months behind the competition, sucking up more juice, with higher temps, more fan noise, and a higher price tag? That's like making the Bugatti Veyron and then disabling 2 of its 16 cylinders!

That will probably be what nvidia does when amd releases their super Cypress to beat the 480. They'll release the 485 with all 512 cores and better I/O for the RAM.
  • blyndy - Saturday, March 27, 2010 - link

    "Fermi is arranged as 16 clusters of 32 shaders, and given that it is turning off 64 shaders, it looks like the minimum granularity it can fuse off is a single cluster of 32. This means it is having problems getting less than two unrecoverable errors per die, not a good sign."

from: http://www.semiaccurate.com/2009/12/21/nvidia-cast...
  • shin0bi272 - Saturday, March 27, 2010 - link

don't quote SemiAccurate to me. If you want to call 1 in 100 claims being correct "semi accurate" then fine, you can... me, I call it a smear. Especially since the guy who wrote that article is a known liar and hack. If you google for gtx480, click on the news results, and click on SemiAccurate, you will see it's listed as satire.
  • Jamahl - Friday, March 26, 2010 - link

    the same Ryan Smith who panned the 5830 for being a "paper launch" even though it was available one day later?

What's wrong this time, Ryan? Maybe there are so many bad things to say about Fermi that being "paper launched" was well down the pecking order of complaints?
  • AnandThenMan - Friday, March 26, 2010 - link

    I was thinking the same thing. The 5830 got slammed for being a paper launch even though it wasn't, but Fermi gets a pass? Why? This isn't even a launch at all despite what Nvidia says. Actual cards will be available in what, 17 days? That's assuming the date doesn't change again.
  • jeffrey - Saturday, March 27, 2010 - link

    I'll third that notion.

Even though Ryan Smith mentioned that Fermi was paper launched today, the tone and the way the article read were much harsher on AMD/ATI. That's ridiculous considering that Ryan had to eat his own words with an "Update" on the 5830's availability.

Being tougher on AMD/ATI, when they did in fact launch the 5830 that day and have hard-launched the entire 5XX0 stack to the best of their ability, gives an impression of bias.

    A paper launch with availability at least two and a half weeks out for a product six months late is absurd!
