STALKER: Call of Pripyat

The third game in the STALKER series continues to build on GSC Game World’s X-Ray Engine by adding DX11 support, tessellation, and more. This also makes it another one of the highly demanding games in our benchmark suite.

For every game that makes the GTX 590 glow, like Civilization V, there is a game like STALKER that more than wipes out any kind of trend. The GeForce GTX lineup simply gets manhandled here, making the 6990 the easy victor. We've seen STALKER be both shader and memory bound in the past, and it's likely that's what's happening here. This is the most conclusive evidence yet that 1.5GB of RAM per GPU may come up a bit short for the GTX 590; if NVIDIA ever does a 3GB GTX 590, performance here should improve. In the meantime, even a hefty overclock can't bring the GTX 590 within 10fps of the stock 6990.
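
Readers who want to see this kind of VRAM pressure for themselves can watch per-GPU memory headroom directly. Below is a minimal sketch using the CUDA runtime; this is our own illustration, not how the benchmark numbers were taken, and since STALKER is a D3D11 title it is only a rough proxy for what the driver reports:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Prints free vs. total memory for each GPU; on a 1.5GB board the
    // free figure collapsing under load is the telltale sign.
    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int dev = 0; dev < count; ++dev) {
            cudaSetDevice(dev);
            size_t free_b = 0, total_b = 0;
            cudaMemGetInfo(&free_b, &total_b);
            printf("GPU %d: %zu MB free of %zu MB\n",
                   dev, free_b >> 20, total_b >> 20);
        }
        return 0;
    }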

Comments

  • RaistlinZ - Thursday, March 24, 2011 - link

    What about the 6950 2GB? It can be had for $245.00 after rebate and it's plenty powerful.
  • the_elvino - Thursday, March 24, 2011 - link

    Is it so hard to admit that the 6990's performance is better across the board? Multi-monitor setups were left out in order to make NVIDIA look good.

    Remember when the GTX 580 SLI review was published? AT didn't include a 5970 CrossFire setup because they sadly only had one 5970.

    Yes, the GTX 590 is less noisy, but then again you can underclock the 6990 to GTX 590 performance levels and it will be quieter too, so that's not really an argument.

    The GTX 590 is slower (especially at super high resolutions) and draws more power than the 6990 at the same price, AMD wins! Simple!
  • softdrinkviking - Thursday, March 24, 2011 - link

    If the 590 can only drive 2 displays, is the reason it has 3 DVI ports only for people who buy 2 cards, so that you can run all three off of one card?
  • Ryan Smith - Friday, March 25, 2011 - link

    The individual GPUs can only drive 2 monitors each. NVIDIA is using the display capabilities of both GPUs together in order to drive 4 monitors.
  • softdrinkviking - Friday, March 25, 2011 - link

    Ah, got it. I should read more carefully.
    Thanks for answering. :)
  • The Jedi - Friday, March 25, 2011 - link

    Surely if each GPU can run two displays, two GPUs on one card can run four displays?
  • Soulkeeper - Thursday, March 24, 2011 - link

    Wow, that is massive.
    I wouldn't put that in my PC even if someone else bought it for me.
  • hab82 - Friday, March 25, 2011 - link

    For me, the gaming differences between AMD and NVIDIA at this level would not get me to change allegiance. In my case I bought two EVGA Classifieds, but they may never see a game. I use them as low-cost replacements for the Tesla cards. For parallel processing I held off until we had powers of two, i.e. 256, 512, or 1024 cores; I did not like the 240, 432, and 480 alternatives. It's just easier to work with powers of 2. The lower noise and lower power were big winners. Even if the power supplies can handle them, you still have to get the heat out of the room. My 4 nodes of 1024 CUDA cores each are a pain to air condition in my den.
    I really love you gamers; you buy enough GPUs to keep the competition going and lower the price of impressive computational power.
    Thanks all!

    Hubert
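
Hubert's preference for power-of-two core counts is easiest to see in a classic tree reduction, where the active thread count halves cleanly each step. A minimal CUDA sketch of that pattern (illustrative only; the kernel and names here are not from the article or the comment):

    // Classic tree reduction: each step halves the active threads, which
    // works out exactly when blockDim.x is a power of two.
    __global__ void reduce_sum(const float *in, float *out, int n)
    {
        extern __shared__ float sdata[];
        unsigned tid = threadIdx.x;
        unsigned i   = blockIdx.x * blockDim.x + threadIdx.x;
        sdata[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        // With a power-of-two block size the stride sequence (..., 4, 2, 1)
        // covers every element with no leftover edge cases.
        for (unsigned s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s)
                sdata[tid] += sdata[tid + s];
            __syncthreads();
        }
        if (tid == 0)
            out[blockIdx.x] = sdata[0];
    }
    // Launch example: reduce_sum<<<blocks, 256, 256 * sizeof(float)>>>(d_in, d_out, n);

The hardware runs this fine with 240 or 480 cores too; the power-of-two preference is simply about keeping block and grid arithmetic like this trivial.
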
  • Calin - Friday, March 25, 2011 - link

    USA manages to stay just south of Canada
  • rs2 - Friday, March 25, 2011 - link

    I thought you guys learned your lesson the last time you pitted a factory-overclocked reference NVIDIA card against a stock AMD card. If you're going to use an overclocked card in an article that's meant to establish the performance baseline for a series of graphics cards (as this one does for the GTX 590 series), then you really should do at least one of the following:

    1. Downclock the overclocked "reference" card to stock levels.
    2. Overclock the competing AMD card by a comparable amount (or use a factory-overclocked AMD card as the reference point).
    3. Publish a review using a card that runs at reference clocks, and then afterwards publish a separate review for the overclocked card.
    4. Overclock both cards to their maximum stable levels, and use that as the comparison point.

    Come on now, this is supposed to be AnandTech. It shouldn't be up to the readers to lecture you about journalistic integrity or the importance of providing proper "apples to apples" comparisons. Using a factory-overclocked card to establish a performance baseline for a new series of GPUs is biased, no matter how small and insignificant the overclock may seem. Cut it out.
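
rs2's option 1 can at least be approximated on paper: scaling a result by the clock ratio gives a first-order stock estimate, though real games scale sub-linearly with core clock, so this only bounds what the stock card could do. A hypothetical helper to that effect (our sketch, not any reviewer's method):

    // First-order estimate only: assumes performance scales linearly with
    // core clock, which in practice is an upper bound for real games.
    double stock_estimate(double oc_fps, double stock_mhz, double oc_mhz)
    {
        return oc_fps * (stock_mhz / oc_mhz);
    }
    // e.g. stock_estimate(60.0, 607.0, 630.0) is roughly 57.8 fps
    // (607MHz is the GTX 590's reference core clock; 630MHz is illustrative.)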
