Crysis: Warhead

Kicking things off as always is Crysis: Warhead, still one of the toughest games in our benchmark suite. Even 3 years since the release of the original Crysis, “but can it run Crysis?” is still an important question, and for 3 years the answer was “no.” However, as we’ll see, the 6990 changes that: full Enthusiast settings at a playable framerate is finally within the grasp of a single card.

It should come as no surprise that the 6990 hits a few important marks on Crysis for a single card, thanks to its near-6970CF performance. As far as our traditional 2560 benchmark goes, the 6990 cracks 60fps, meaning we can finally play Crysis at a perfectly smooth framerate at 2560 with our tweaked settings on what is more or less a single video card. Perhaps more importantly, performance is now to the point where Crysis in full Enthusiast mode is a practical benchmark. Thanks in large part to the extra VRAM, the 6990 tops the 5970 by nearly 30% here, coming in at 42.8fps. This is still a bit low for a completely smooth framerate, but it is in fact playable, which is more than we can say for the 5970.

Overall Crysis does a good job setting the stage here for most of our benchmark suite: the performance of the card is consistently between the 6950CF and 6970CF, hovering much closer to the former. Compared to NVIDIA’s offerings the 6990 is solidly between the GTX 580 and GTX 580SLI, owing to the fact that NVIDIA doesn’t have a comparable card. The GTX 580SLI is faster, but the 580 is also still the fastest single-GPU card on the market, meaning it commands a significant price premium.

Overclocking to uber mode, however, shows only minimal gains: the theoretical maximum improvement is just 6%, and the real-world benefit is less. Uber mode alone will never have a big payoff.

As far as minimum framerates are concerned the story is similar. For some reason the 6990 underperforms the 6950CF here by a frame or two per second, which, given the 6990’s mostly superior specs, leads us to believe that it’s a limitation of PCIe bus bandwidth. Meanwhile we can clearly see the benefits of more than 1GB of VRAM per GPU here: the 6990 walks all over the 5970.


130 Comments


  • Spazweasel - Tuesday, March 8, 2011 - link

    I've always viewed single-card dual-GPU cards as more of a packaging stunt than a product.

    They are invariably clocked a little lower than the single-GPU cards they’re based upon, and short of a liquid cooling system they are extremely noisy (unavoidable when you have twice as much heat to dissipate with the same-sized cooler as a single-GPU card). They also tend not to be a bargain price-wise; compare a dual-GPU card against two of the single-GPU cards with the same GPU.

    Personally, I would much rather have discrete GPUs and be able to cool them without the noise. I'll spend a little more for a full-sized case and a motherboard with the necessary layout (two slots between PCI-16x slots) rather than deal with the compromises of the extra-dense packaging. If someone else needs quad SLI or quad Crossfire, well, fine... to each their own. But if dual GPUs is the goal, I truly don't see any advantage of a dual-GPU card over dual single-GPU cards, and plenty of disadvantages.

    Like I said... more of a stunt than a product. Cool that it exists, but less useful than advertised except for extremely narrow niches.
  • mino - Tuesday, March 8, 2011 - link

    Even -2- years since the release of the original Crysis, “but can it run Crysis?” is still an important question, and for -3.5- years the answer was “no.”

    Umm, you sure bout both those time values?
    :)

    Nice review, BTW.
  • MrSpadge - Tuesday, March 8, 2011 - link

    "With a 375W TDP the 6990 should consume less power than 2x200W 6950CF, but in practice the 6950CF setup consumes 21W less. Part of this comes down to the greater CPU load the 6990 can create by allowing for higher framerates, but this doesn’t completely explain the disparity."

    If it hasn't been mentioned before: guys, this is simple. The 200W TDP for the HD6950 is just the PowerTune limit. The "power draw under gaming" is specified at ~150 W, which is just what you'll find during gaming tests.

    Furthermore, Cayman runs at a lower voltage (1.10 V) and lower clocks on the HD6950, and with fewer units enabled, so it's only natural for two of these to consume less power than an HD6990. Summing it up, one would expect (1.10/1.12)^2 * 800/830 * 22/24 = 85.2% of the power consumption of a Cayman on the HD6990.
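    The back-of-the-envelope estimate above can be sketched in code (a rough approximation assuming power scales with voltage squared, clock, and enabled SIMD count; the figures are the ones quoted in this comment):

```python
# Rough power-scaling estimate from the comment above: P ~ V^2 * f * units.
# HD6950 Cayman: 1.10 V, 800 MHz, 22 SIMDs enabled.
# HD6990 Cayman: 1.12 V, 830 MHz, 24 SIMDs enabled.
v_6950, v_6990 = 1.10, 1.12
mhz_6950, mhz_6990 = 800, 830
simds_6950, simds_6990 = 22, 24

ratio = (v_6950 / v_6990) ** 2 * (mhz_6950 / mhz_6990) * (simds_6950 / simds_6990)
print(f"{ratio:.1%}")  # prints "85.2%"
```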

    MrS
  • mino - Tuesday, March 8, 2011 - link

    You shouldn't hit them so hard next time. :)

    Numbers tend to hurt one's ego badly if properly thrown.
  • geok1ng - Tuesday, March 8, 2011 - link

    The article points that the 6990 runs much closer to 6950CF than 6970CF.

    I assume the author is talking about the 2GB 6950, which can be shader-unlocked in a process much safer than flashing the card with a 6970 BIOS.

    It would be interesting to see CF numbers for unlocked 6950s.

    As it stands the 6990 is not a great product: it requires an expensive PSU and a big case full of fans, at a price point higher than similar CF setups.

    Considering that there are ZERO enthusiast mobos that won't accept CF, the 6990 becomes a very hard sell.

    Even more troubling is the lack of a DL-DVI adapter in the bundle, scaring away 30" owners, precisely the group of buyers most interested in this video card.

    Why should a 30" owner step away from a 580 or SLI 580s, if the 6990 with the same expensive PSU, the same BIG case full of fans, and a DL-DVI adapter costs more than the price gap to an SLI mobo?
  • Thanny - Tuesday, March 8, 2011 - link

    This card looks very much like the XFX 4GB 5970 card. The GPU position and cooling setup is identical.

    I'd be very interested to see a performance comparison with that card, which operates at 5870 clock speeds and has the same amount of graphics memory (which is not "frame buffer", for those who keep misusing that term).
  • JumpingJack - Wednesday, March 9, 2011 - link

    "(which is not "frame buffer", for those who keep misusing that term)"


    :) Yep, I wish they would actually get it right.

    The frame buffer is the memory that stores the pixel data for a renderable frame, whereas graphics memory (or VRAM) is the total memory available to the card, which consequently holds the frame buffer, command buffers, textures, etc. The frame buffer is just a small portion of the VRAM set aside as the output target for the GPU. The frame buffer size is the same for every modern video card at the same resolution. I.e. a 1920x1200 res with 32-bit color depth has a frame buffer of ~9.2 MB (1920 x 1200 x 32 / 8); if double- or triple-buffered, multiply by 2 or 3.
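    The arithmetic above can be sketched in code (a minimal illustration of the definition in this comment; the function name is made up for the example):

```python
# Size of a frame buffer: width * height * (bits per pixel / 8), times the
# number of buffered frames (2 for double buffering, 3 for triple).
def frame_buffer_bytes(width, height, bpp=32, buffers=1):
    return width * height * (bpp // 8) * buffers

print(frame_buffer_bytes(1920, 1200))             # prints 9216000 (~9.2 MB)
print(frame_buffer_bytes(1920, 1200, buffers=3))  # prints 27648000, triple-buffered
```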

    Nearly every tech site misapplies the term "frame buffer": AnandTech, PCPer (a big abuser), TechReport... nearly everyone.
  • Hrel - Wednesday, March 9, 2011 - link

    Anyone wanting to play at resolutions above 1080p should just buy two GTX 560s for 500 bucks. Why waste the extra 200? There's no such thing as future-proofing at these levels.
  • wellortech - Wednesday, March 9, 2011 - link

    If the 560s are as noisy as the 570, I think I would rather try a pair of 6950s.
  • HangFire - Wednesday, March 9, 2011 - link

    And you can't even bring yourself to mention Linux (non) support?

    You do realize there are high end Linux workstation users, with CAD, custom software, and OpenCL development projects that need this information?
