STALKER: Call of Pripyat – A Peek at More DX11

For the 5970 launch, AMD sent over a special benchmark version of STALKER: Call of Pripyat, which has since been made public on the GSC Game World STALKER website. STALKER is another of AMD’s big DX11 titles: it’s technically the second DX11 title to launch, and the first to implement most of the major DX11 features. The Russian version actually shipped back in October, and the German version shipped two weeks ago. However, we’re told that these versions only had an early-stage implementation of the DX11 feature set, and that the demo is more representative of where the game stands after patching and of what it will be like when it finally launches in the rest of the world early next year.

Since it’s a non-playable demo, we’re going to forgo any competitive benchmarking (it’s not as if anyone else has a DX11 card anyhow), but we will take a quick look at the performance impact of these features, since this is the closest thing we have to a playable game using them at this point in time.

STALKER makes use of three major DX11 features:

  1. High Definition Ambient Occlusion using compute shaders
  2. Tessellation
  3. Contact hardening shadows

We’ve already seen HDAO with Battleforge, and it’s no different here in STALKER. And we’ve covered tessellation in-depth in our look at DirectX 11.

So today, let’s talk about contact hardening shadows. Shadowing in particular has seen a noticeable evolution. The first real shadows, seen in titles such as Doom 3, had very hard edges: stencil buffering was used to determine where a shadow would fall, and that was it; diffusion was never taken into account. Newer games have since taken diffusion into account to generate soft shadows, but these shadows aren’t necessarily accurate. Currently, soft shadows are implemented with a fixed degree of softness around the entire shadow, which isn’t how shadows really work.

With diffusion, the softness of a shadow increases with the distance between the casting object and the surface the shadow falls on. AMD loves to use a light pole as an example: the top of the pole’s shadow should be softer than the bottom. These are referred to as contact hardening shadows, and their use in games has been limited by how expensive they are to calculate with the DX10 feature set. STALKER enables contact hardening shadows in its DX10.1 and DX11 modes.
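
To make the difference concrete, here is a rough CPU-side sketch of the two approaches, written against a made-up depth-map structure rather than STALKER’s actual shader code: a fixed-radius percentage-closer filter (the “fixed degree of softness” case), and a PCSS-style contact hardening filter that first estimates the average blocker depth and then widens the filter in proportion to how far the caster sits from the receiver. All names, numbers, and the shadow-map layout are assumptions for illustration.

```cpp
// Illustrative CPU-side sketch of fixed-softness PCF vs. a PCSS-style
// contact hardening filter. This is not STALKER's shader code; the depth
// map layout, light size, and all numbers are assumptions for the example.
#include <algorithm>
#include <cstdio>
#include <vector>

struct ShadowMap {
    int size;                   // square map, size x size texels
    std::vector<float> depth;   // depth as seen from the light, 0..1

    float at(int x, int y) const {
        x = std::clamp(x, 0, size - 1);
        y = std::clamp(y, 0, size - 1);
        return depth[y * size + x];
    }
};

// Classic PCF: average a fixed-size neighborhood of depth comparisons.
// Every shadow edge ends up with the same degree of softness.
float pcf_fixed(const ShadowMap& sm, int x, int y, float receiver_depth,
                int radius = 2) {
    float lit = 0.0f;
    int taps = 0;
    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx, ++taps)
            lit += (sm.at(x + dx, y + dy) >= receiver_depth) ? 1.0f : 0.0f;
    return lit / taps;          // 0 = fully shadowed, 1 = fully lit
}

// PCSS-style contact hardening: estimate the average blocker depth first,
// then scale the filter radius with the blocker-to-receiver separation, so
// the shadow is sharp near the caster and soft far away from it.
float pcf_contact_hardening(const ShadowMap& sm, int x, int y,
                            float receiver_depth, float light_size = 8.0f) {
    // 1. Blocker search over a small fixed window.
    float blocker_sum = 0.0f;
    int blockers = 0;
    for (int dy = -3; dy <= 3; ++dy)
        for (int dx = -3; dx <= 3; ++dx) {
            float d = sm.at(x + dx, y + dy);
            if (d < receiver_depth) { blocker_sum += d; ++blockers; }
        }
    if (blockers == 0) return 1.0f;   // nothing occludes this texel

    // 2. Penumbra width grows with the caster-to-receiver separation.
    float avg_blocker = blocker_sum / blockers;
    float penumbra = (receiver_depth - avg_blocker) / avg_blocker * light_size;
    int radius = std::max(1, static_cast<int>(penumbra));

    // 3. Ordinary PCF, but with the distance-dependent radius.
    return pcf_fixed(sm, x, y, receiver_depth, radius);
}

int main() {
    // Toy map: an occluder at depth 0.2 covers the left half, the rest is
    // at the far plane. We look at the same shadow edge from a receiver
    // just behind the occluder and from one much further away.
    ShadowMap sm{64, std::vector<float>(64 * 64, 1.0f)};
    for (int y = 0; y < 64; ++y)
        for (int x = 0; x < 32; ++x)
            sm.depth[y * 64 + x] = 0.2f;

    const float receivers[] = {0.25f, 0.90f};
    for (float d : receivers)
        std::printf("receiver %.2f  fixed %.2f  contact hardening %.2f\n",
                    d, pcf_fixed(sm, 33, 32, d),
                    pcf_contact_hardening(sm, 33, 32, d));
}
```

AMD’s light pole example falls out of the penumbra term: texels near the base of the pole see a blocker depth close to their own and get a small radius, while texels at the far end of the shadow see a large separation and a wide, soft filter.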

Unfortunately a moving benchmark makes for a poor source of screenshots, so we’re going to stick with AMD’s reference shots here. Only the contact hardening shadows are particularly noticeable in the benchmark; tessellation and HDAO are there, but are virtually impossible to catch given the zoomed-out nature of the benchmark and the fact that it’s constantly in motion.

The benchmark comes with 4 different flybys, each under different environmental conditions: day, night, day with rain, and day with sun shafts. We’ve gone ahead and benchmarked the game during the “day” flyby, once with the DX11 feature set enabled and once with it disabled. For DX11 mode, this means tessellation, contact hardening shadows, and Ultra HDAO were enabled; for DX10 mode, tessellation and contact hardening shadows were disabled and High HDAO was used.

STALKER: Call of Pripyat, Day Benchmark
                DX10    DX11
Average FPS     31.4    35.1
Minimum FPS     17.7    21.2

Enabling all of these features actually causes performance to rise, thanks to the more efficient implementation of HDAO as a compute shader as opposed to a pixel shader. Ultimately what this means is that unless HDAO is disabled entirely, STALKER is going to be faster on a DX11 card running the entire DX11 feature set than it will be when running the DX10 feature set.
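
The reason the compute shader version comes out ahead is data reuse: an ambient occlusion pass samples the same depth values over and over for neighboring pixels, and a compute shader can stage a tile of the depth buffer in on-chip shared memory once per thread group, where a pixel shader has to refetch those samples for every pixel. The sketch below mimics that on the CPU purely to count memory fetches; it’s an illustration of the general compute-shader pattern, not AMD’s HDAO kernel, and the buffer size, gather radius, and tile size are all assumptions.

```cpp
// Illustrative sketch of why moving an ambient occlusion gather from a
// pixel shader to a compute shader can be faster: the compute version can
// stage a tile of the depth buffer in on-chip shared memory and reuse it
// across the whole thread group. CPU mock-up, not AMD's HDAO kernel; the
// buffer size, gather radius and tile size are assumptions.
#include <algorithm>
#include <cstdio>
#include <vector>

constexpr int W = 256, H = 256;   // depth buffer dimensions (illustrative)
constexpr int R = 4;              // AO gather radius in texels
constexpr int TILE = 16;          // thread-group tile size (illustrative)

// "Pixel shader" style: every pixel refetches its whole neighborhood.
long long gather_per_pixel(const std::vector<float>& depth) {
    long long fetches = 0;
    for (int y = R; y < H - R; ++y)
        for (int x = R; x < W - R; ++x)
            for (int dy = -R; dy <= R; ++dy)
                for (int dx = -R; dx <= R; ++dx) {
                    volatile float d = depth[(y + dy) * W + (x + dx)];
                    (void)d;
                    ++fetches;    // one depth-buffer read per AO tap
                }
    return fetches;
}

// "Compute shader" style: each tile loads its footprint (tile plus apron)
// into a local array once, then every pixel in the tile reads that copy.
long long gather_tiled(const std::vector<float>& depth) {
    long long fetches = 0;
    const int span = TILE + 2 * R;
    std::vector<float> lds(span * span);   // stands in for shared memory
    for (int ty = 0; ty + TILE <= H; ty += TILE)
        for (int tx = 0; tx + TILE <= W; tx += TILE) {
            // Cooperative load: each texel is fetched from memory once.
            for (int y = 0; y < span; ++y)
                for (int x = 0; x < span; ++x) {
                    int sy = std::min(std::max(ty + y - R, 0), H - 1);
                    int sx = std::min(std::max(tx + x - R, 0), W - 1);
                    lds[y * span + x] = depth[sy * W + sx];
                    ++fetches;
                }
            // The AO taps themselves now hit the local copy, not memory.
            // (The per-tap math is omitted; only fetch counts matter here.)
        }
    return fetches;
}

int main() {
    std::vector<float> depth(W * H, 0.5f);   // dummy depth values
    std::printf("per-pixel fetches: %lld\n", gather_per_pixel(depth));
    std::printf("tiled fetches:     %lld\n", gather_tiled(depth));
}
```

With these illustrative numbers the per-pixel version issues roughly 5 million depth reads against about 150 thousand for the tiled version; the real savings on a GPU depend on cache behavior, but the direction is the same.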

The biggest performance hit, and the reason we’re not breaking 40fps here even with a 5970, is how anti-aliasing is implemented in STALKER. Because it uses deferred rendering, the game does its own anti-aliasing. We used 4x MSAA here along with per-pixel alpha transparency testing (basically Adaptive/Transparency AA). Disabling anti-aliasing improves performance dramatically.
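
For reference, the “per-pixel alpha transparency testing” mentioned above is the supersampled cousin of a plain alpha test: instead of one pass/fail comparison per pixel, alpha is evaluated at each MSAA sample position and turned into a coverage mask, which is what makes alpha-tested foliage and fences so much more expensive to anti-alias. A minimal sketch of that idea, with a dummy alpha texture and made-up sample positions (not the game’s implementation):

```cpp
// Minimal sketch of turning an alpha test into a per-sample coverage mask,
// the idea behind adaptive/transparency AA. The "texture", sample offsets
// and threshold are illustrative assumptions, not engine code.
#include <cmath>
#include <cstdint>
#include <cstdio>

// Dummy alpha "texture": a hard-edged disc, standing in for a leaf or
// fence texture with an alpha cut-out.
float sample_alpha(float u, float v) {
    float dx = u - 0.5f, dy = v - 0.5f;
    return std::sqrt(dx * dx + dy * dy) < 0.4f ? 1.0f : 0.0f;
}

// Ordinary alpha test: one comparison per pixel, fully in or fully out,
// which is what produces hard, aliased cut-out edges.
bool alpha_test(float u, float v, float threshold = 0.5f) {
    return sample_alpha(u, v) >= threshold;
}

// Transparency-AA style: evaluate alpha at each MSAA sample position and
// build a coverage mask, so pixels on the cut-out edge get partial coverage.
uint32_t alpha_to_coverage(float u, float v, float du, float dv,
                           float threshold = 0.5f) {
    // Rough 4x MSAA-like sample positions inside the pixel (illustrative).
    static const float offsets[4][2] = {
        {-0.125f, -0.375f}, {0.375f, -0.125f},
        {-0.375f, 0.125f},  {0.125f, 0.375f},
    };
    uint32_t mask = 0;
    for (int s = 0; s < 4; ++s) {
        float su = u + offsets[s][0] * du;   // offsets scaled to texel size
        float sv = v + offsets[s][1] * dv;
        if (sample_alpha(su, sv) >= threshold)
            mask |= 1u << s;                 // this sample is covered
    }
    return mask;   // popcount(mask) / 4 = fraction of the pixel covered
}

int main() {
    // A pixel sitting right on the edge of the disc (radius 0.4 around the
    // centre): the plain alpha test is all-or-nothing, while the coverage
    // mask captures the partial overlap that the MSAA resolve then blends.
    float u = 0.9f, v = 0.5f, texel = 1.0f / 256.0f;
    std::printf("alpha_test:    %d\n", alpha_test(u, v));
    std::printf("coverage mask: 0x%X\n", alpha_to_coverage(u, v, texel, texel));
}
```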

Comments

  • Ryan Smith - Wednesday, November 18, 2009 - link

    It's possible, but the 850TX is a very well regarded unit. If it can't run a 5970 overclocked, then I surmise that a lot of buyers are going to run into the same problem. I don't have another comparable power supply on hand, so this isn't something I can test with my card.

    Anand has a 1K unit, and of course you know how his turned out.

    To be frank, we likely would have never noticed the throttling issue if it wasn't for the Distributed.net client. It's only after realizing that it was underperforming by about 10-20% that I decided to watch the Overdrive pane and saw it bouncing around. These guys could be throttling too, and just not realize it.
  • Silverforce11 - Wednesday, November 18, 2009 - link

    Seems iffy then, since most reviews put it at 900 core and 5GHz+ on the RAM, with only a modest overvolt to 1.16. I would think ATI wouldn't bother putting in 3 high-quality VRMs and Japanese capacitors if they didn't test it thoroughly at the specs they wanted it to OC at.

    My old PSU is the bigger bro of this guy, being the 750 version.
    http://anandtech.com/casecoolingpsus/showdoc.aspx?...
    And it had issues with the 4870X2. Got a better "single rail" PSU and it ran fine and OC'd well.
  • Silverforce11 - Wednesday, November 18, 2009 - link

    ATI went all out building these 5970s; the components are top notch. The chips are the best of the bunch. I'm surprised they did this, as they are essentially selling you 2x 5870 performance (IF your PSU is good) at $599 when 2x 5870 CF would cost $800. They have no competitor at the top, so why not price this card higher, and why even bother putting in quality parts that almost guarantee 5870 clocks?

    I believe it's ATI's last nail in the nV coffin, and they hammered it really hard.
  • ET - Wednesday, November 18, 2009 - link

    Too much discussion about adapters for the Mini DisplayPort. The 27" iMac has such an input port and a resolution of 2560 x 1440, and it seems a sin not to test them together. (Not that I'm blaming Anandtech or anything, since I'm sure it's not that easy to get the iMac for testing.)
  • Taft12 - Wednesday, November 18, 2009 - link

    Why would they bother using a computer with an attached monitor when they could instead use the larger, higher-res, and CHEAPER Dell 3008WFP?
  • Raqia - Wednesday, November 18, 2009 - link

    Look at all the fingerprint smudges on the nice card! I've started to notice the hand models that corporations use to hold their products. The hands holding the iPods on the Apple site? Flawless, perfect nails and cuticles. Same w/ the fingers grasping the Magny-Cours chip.
  • NullSubroutine - Wednesday, November 18, 2009 - link

    Hilbert @ Guru3D got the overclocking working at a 900MHz core speed (though it reached 90C).

    http://www.guru3d.com/article/radeon-hd-5970-revie...

    I was impressed with some of the CrossFire benchmarks actually showing improvement. If Eyefinity works with the 5970, does it work with the card in CrossFire?
  • Ryan Smith - Wednesday, November 18, 2009 - link

    Bear in mind that it also took him 1.3V to get there; the AMD tool doesn't go that high. With my card, I strongly suspect the issue is the VRMs, so more voltage wouldn't help.

    And I'm still trying to get an answer to the Eyefinity + 5970CF question. The boys and girls at AMD went home for the night before we realized we didn't have an answer to that.
  • Lennie - Wednesday, November 18, 2009 - link

    I thought everyone knew about Furmark and ATi by now. It used to be like this on the 4870 series too.

    It went like this: at first there were a few reports of 4870(X2) cards dying when running Furmark. Further investigation showed that it was indeed Furmark causing the VRMs to heat up to insane levels and eventually killing them. Word reached ATi, and from that point on ATi intentionally throttled their cards when detecting Furmark to prevent the damage.

    Yeah, in fact the heat load Furmark puts on the VRMs is unrealistic; no game is able to heat up the VRMs to the level Furmark does. OCCT uses the same method (or maybe even integrates Furmark) to test for stability (in their own opinion ofc).

    So beware of Furmark and OCCT if you have an HD4K or 5K card.

    The term "Hardware Virus" is rightfully applicable to Furmark when it comes to the HD4K (and perhaps 5K) series.
  • strikeback03 - Wednesday, November 18, 2009 - link

    The article stated that they encountered throttling in real games, not Furmark.
