Metro 2033

The next game on our list is 4A Games’ Metro 2033, their tunnel shooter released last year. In September the game finally received a major patch resolving some outstanding image quality issues, making it suitable for use in our benchmark suite. At the same time a dedicated benchmark mode was added to the game, giving us the ability to reliably benchmark much more stressful situations than we could with FRAPS. If Crysis is a tropical GPU killer, then Metro would be its underground counterpart.

With single-GPU cards the Metro scores are rather close, but with these dual-GPU cards scaling becomes a factor. As a result the GTX 590 falls well behind the 6990 here, facing a sizable 15% gap in performance. The overclocked GTX 590 can just close that gap, but then the 6990 OC opens it back up just as quickly. Meanwhile, since shading performance is often the most critical factor in this benchmark, it's easy to see why overclocking was so effective.
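As a quick aside, the "15% gap" figure above is the slower card's deficit relative to its own framerate. A minimal sketch of that arithmetic, using made-up framerates purely for illustration (these are not the article's actual benchmark numbers):

```python
def percent_gap(slower_fps, faster_fps):
    """Return how far (in percent) the slower card trails the faster one,
    measured against the slower card's framerate."""
    return (faster_fps - slower_fps) / slower_fps * 100.0

# Hypothetical framerates chosen so the gap works out to 15%.
gtx590 = 40.0   # assumed FPS for the slower card
hd6990 = 46.0   # assumed FPS for the faster card
print(round(percent_gap(gtx590, hd6990), 1))  # 15.0
```

Measured this way, an overclock that lifts the slower card's framerate by 15% closes the gap entirely, which matches how the overclocked GTX 590 behaves here.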

123 Comments

  • RaistlinZ - Thursday, March 24, 2011 - link

    What about the 6950 2GB? It can be had for $245.00 after rebate and it's plenty powerful.
  • the_elvino - Thursday, March 24, 2011 - link

    Is it so hard to admit that the 6990's performance is better across the board? Multi-monitor setups were left out in order to make NVIDIA look good.

    Remember when the GTX 580 SLI review was published? AT didn't include a 5970 CrossFire setup because, sadly, they only had one 5970.

    Yes, the GTX 590 is less noisy, but then again you can underclock the 6990 to GTX 590 performance levels and it will be quieter too, not really an argument.

    The GTX 590 is slower (especially at super high resolutions) and draws more power than the 6990 at the same price, AMD wins! Simple!
  • softdrinkviking - Thursday, March 24, 2011 - link

    If the 590 can only drive 2 displays, is the reason it has 3 DVI ports only for people who buy 2 cards, so you can run all three off of one card?
  • Ryan Smith - Friday, March 25, 2011 - link

    The individual GPUs can only drive 2 monitors each. NVIDIA is using the display capabilities of both GPUs together in order to drive 4 monitors.
  • softdrinkviking - Friday, March 25, 2011 - link

    ah, got it. i should read more carefully.
    thanks for answering. :)
  • The Jedi - Friday, March 25, 2011 - link

    Surely if each GPU can run two displays, two GPUs on one card can run four displays?
  • Soulkeeper - Thursday, March 24, 2011 - link

    Wow that is massive
    I wouldn't put that in my PC even if someone else bought it for me.
  • hab82 - Friday, March 25, 2011 - link

    For me the gaming differences between AMD and NVIDIA at this level would not get me to change allegiance. In my case I bought two EVGA Classifieds, but they may never see a game. I use them as low-cost replacements for the Tesla cards. For parallel processing I held off until we had powers of two, i.e. 256/512/1024 cores; I did not like the 240/432/480 alternatives. It's just easier to work with powers of 2. The lower noise and lower power were big winners. Even if the power supplies can handle them, you still have to get the heat out of the room. My 4 nodes of 1024 CUDA cores each are a pain to air condition in my den.
    I really love you gamers, you buy enough GPUs to keep the competition going and lowering the price for impressive computation power.
    Thanks all!

    Hubert
  • Calin - Friday, March 25, 2011 - link

    USA manages to stay just south of Canada
  • rs2 - Friday, March 25, 2011 - link

    I thought you guys learned your lesson the last time you pitted a factory-overclocked reference NVIDIA card against a stock AMD card. If you're going to use an overclocked card in an article that's meant to establish the performance baseline for a series of graphics cards (as this one does for the GTX 590 series), then you really should do at least one of:

    1. Downclock the overclocked "reference" card to stock levels.
    2. Overclock the competing AMD card by a comparable amount (or use a factory overclocked AMD card as the reference point)
    3. Publish a review using a card that runs at reference clocks, and then afterwards publish a separate review for the overclocked card.
    4. Overclock both cards to their maximum stable levels, and use that as the comparison point.

    Come on now, this is supposed to be Anandtech. It shouldn't be up to the readers to lecture you about journalistic integrity or the importance of providing proper "apples to apples" comparisons. Using a factory overclocked card to establish a performance baseline for a new series of GPUs is biased, no matter how small and insignificant the overclock may seem. Cut it out.
