STALKER: Call of Pripyat – A Peek at More DX11

For the 5970 launch, AMD sent over a special benchmark version of STALKER: Call of Pripyat, which has since been made public at the GSC Game World STALKER website. STALKER is another of AMD’s big DX11 titles: it’s technically the second DX11 title to launch, and the first to implement most of the major DX11 features. The Russian version actually shipped back in October, and the German version shipped two weeks ago. However, we’re told that those versions only had an early-stage implementation of the DX11 feature set, and that the demo is more representative of where the game is after patching and what it will be like when it finally launches in the rest of the world early next year.

Since it’s an unplayable demo, we’re going to forgo any competitive benchmarking (it’s not as if anyone else has a DX11 card anyhow), but we will take a quick look at the performance impact of these features, since this is the closest thing we have to a playable game using them at this point in time.

STALKER makes use of 3 major DX11 features:

  1. High Definition Ambient Occlusion using compute shaders
  2. Tessellation
  3. Contact hardening shadows

We’ve already seen HDAO with Battleforge, and it’s no different here in STALKER. And we’ve covered tessellation in-depth in our look at DirectX 11.

So today, let’s talk about contact hardening shadows. Shadowing has seen particularly noticeable evolution over the years. The first real shadows, seen in titles such as Doom 3, had very hard edges: stencil buffering was used to determine where a shadow would fall, and that was it; diffusion was never taken into account. Newer games have since taken diffusion into account to generate soft shadows, but these shadows aren’t necessarily accurate. Current soft shadow implementations apply a fixed degree of softness along the entire shadow, which isn’t how shadows really work.

With diffusion, the softness of a shadow increases with the distance between the casting object and the surface the shadow falls on. AMD loves to use a light pole as an example: the top of the pole’s shadow should be softer than the bottom, where the pole meets the ground. These are referred to as contact hardening shadows, and their use in games has been limited by how expensive they are to compute with the DX10 feature set. STALKER enables contact hardening shadows in its DX10.1 and DX11 modes.
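To make the light-pole example concrete, here is a minimal sketch of the penumbra-width estimate at the heart of PCSS-style (percentage-closer soft shadows) contact hardening. This is an illustration of the general technique, not STALKER’s actual shader code; the function name and the test values are ours.

```c
#include <stdio.h>

/* PCSS-style penumbra estimate. With an area light of width w_light and
 * a blocker at depth d_blocker, the penumbra cast onto a receiver at
 * depth d_receiver has width
 *
 *     w_penumbra = (d_receiver - d_blocker) / d_blocker * w_light
 *
 * Near the contact point the two depths converge and the width goes to
 * zero (a hard edge); far from it the width grows (a soft edge). */
static float penumbra_width(float d_receiver, float d_blocker, float w_light)
{
    return (d_receiver - d_blocker) / d_blocker * w_light;
}

int main(void)
{
    /* The light-pole example: a 1.0-unit-wide area light, with the
     * ground (the receiver) at depth 10. Near the pole's base the
     * blocker is almost at the receiver's depth; near the top it is
     * much closer to the light. */
    printf("bottom of pole: %.3f\n", penumbra_width(10.0f, 9.5f, 1.0f));
    printf("top of pole:    %.3f\n", penumbra_width(10.0f, 2.0f, 1.0f));
    return 0;
}
```

In a real shader this width then scales the percentage-closer filter kernel: a near-zero width collapses to an ordinary hard shadow at the pole’s base, while a large width blurs the shadow of the pole’s top, which is exactly the behavior described above.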

Unfortunately a moving benchmark makes for a poor source of screenshots, so we’re going to stick with AMD’s reference shots here. Only the contact hardening shadows are particularly noticeable in the benchmark; tessellation and HDAO are there, but are virtually impossible to catch given the zoomed-out nature of the benchmark and the fact that it’s constantly in motion.

The benchmark comes with 4 different flybys, each under different environmental conditions: day, night, day with rain, and day with sun shafts. We’ve gone ahead and benchmarked the game during the “day” flyby, once with the DX11 feature set enabled and once with it disabled. This means that for DX11 mode, tessellation, contact hardening shadows, and Ultra HDAO were enabled; for DX10 mode, tessellation and contact hardening shadows were disabled, and High HDAO was used.

STALKER: Call of Pripyat, Day Benchmark

                DX10   DX11
  Average FPS   31.4   35.1
  Minimum FPS   17.7   21.2

Enabling all of these features actually causes performance to rise, thanks to the more efficient implementation of HDAO as a compute shader as opposed to a pixel shader. Ultimately what this means is that unless HDAO is disabled entirely, STALKER is going to be faster on a DX11 card running the entire DX11 feature set than it will be when running the DX10 feature set.
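Why would moving HDAO to a compute shader make it faster? The usual mechanism, and our assumption about what is happening here since the shader source isn’t public, is that a compute shader thread group stages a tile of the depth buffer into fast on-chip shared memory once, after which every pixel reads its neighbors from that cache instead of issuing its own redundant texture fetches the way a pixel shader must. A rough C analogue of that staging step, with illustrative tile sizes:

```c
#define TILE   8   /* pixels per tile side, mirroring a GPU thread group */
#define APRON  2   /* extra border the AO kernel samples around the tile */
#define CACHED (TILE + 2 * APRON)

/* Clamp a coordinate to the image so the tile's apron never reads
 * out of bounds. */
int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Stage one tile of the depth buffer into a small cache. On the GPU,
 * "cache" would live in thread group shared memory and these loops
 * would be spread across the group's threads; each depth value is then
 * fetched from video memory once, rather than once per neighboring
 * pixel that samples it. (This is the commonly cited reason compute
 * shader AO beats pixel shader AO; it is an assumption on our part,
 * not a description of STALKER's actual shader.) */
void load_tile(const float *depth, int img_w, int img_h,
               int tile_x, int tile_y,
               float cache[CACHED][CACHED])
{
    for (int y = 0; y < CACHED; y++)
        for (int x = 0; x < CACHED; x++) {
            int sx = clampi(tile_x + x - APRON, 0, img_w - 1);
            int sy = clampi(tile_y + y - APRON, 0, img_h - 1);
            cache[y][x] = depth[sy * img_w + sx];
        }
}
```

The arithmetic makes the appeal obvious: with an 8x8 tile and a 2-pixel apron, the group performs 144 loads for 64 pixels, while the same 64 pixels each independently sampling a 5x5 neighborhood would perform 1,600.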

The biggest performance hit, and the reason we’re not breaking 40fps here even with a 5970, comes from how anti-aliasing is implemented in STALKER. Because the game uses deferred rendering, lighting is applied after the scene has been rasterized into a G-buffer, so the standard hardware MSAA path doesn’t apply and the game has to do its own anti-aliasing. We used 4x MSAA here along with per-pixel alpha transparency testing (basically Adaptive/Transparency AA). Disabling anti-aliasing improves performance dramatically.
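For reference, one common way a deferred renderer approximates MSAA (we don’t know the exact approach STALKER’s engine takes) is to detect geometry edges in the depth buffer and spend the extra shading samples only there. A sketch, with an illustrative threshold and a simple 4-neighbor test:

```c
#include <math.h>

/* Flag a pixel as a geometry edge if its depth differs sharply from
 * any of its four axis-aligned neighbors; such silhouettes are the
 * pixels that need anti-aliasing in a deferred renderer. The threshold
 * and neighborhood are illustrative choices, not the game's values. */
int is_edge(const float *depth, int w, int h, int x, int y, float threshold)
{
    float c = depth[y * w + x];
    int left  = x > 0     && fabsf(depth[y * w + (x - 1)] - c) > threshold;
    int right = x < w - 1 && fabsf(depth[y * w + (x + 1)] - c) > threshold;
    int up    = y > 0     && fabsf(depth[(y - 1) * w + x] - c) > threshold;
    int down  = y < h - 1 && fabsf(depth[(y + 1) * w + x] - c) > threshold;
    return left || right || up || down;
}
```

Every pixel flagged as an edge then pays for multiple shading samples, and scenes heavy with foliage and thin geometry, which STALKER has in abundance, rack up edge pixels quickly; that lines up with the dramatic gain we see from disabling anti-aliasing.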

Comments

  • SJD - Wednesday, November 18, 2009 - link

    Thanks Anand,

    That kind of explains it, but I'm still confused about the whole thing. If your third monitor supported mini-DP then you wouldn't need an active adapter, right? Why is this, when mini-DP and regular DP are the 'same' apart from the actual plug size? I thought the whole timing issue was only relevant when wanting a third 'DVI' (/HDMI) output from the card.

    Simon
  • CrystalBay - Wednesday, November 18, 2009 - link

    WTH is really up at TWSC?
  • Jacerie - Wednesday, November 18, 2009 - link

    All the single game tests are great and all, but for once I would love to see AT run a series of video card tests where multiple instances of games like EVE Online are running. While single instance tests are great for the FPS crowd, all of us crazy high-end MMO players need some love too.
  • Makaveli - Wednesday, November 18, 2009 - link

    Jacerie, the problem with benching MMOs, and why you don't see more of them, is all the other factors that come into play. You have to deal with server latency, and you also have no control over how many players are on the server at any given time when running benchmarks. There are just too many variables for the benchmarks to be repeatable and valid for comparison purposes!
  • mesiah - Thursday, November 19, 2009 - link

    I think what he's more interested in is how well the card can render multiple instances of the game running at once. This could easily be done with a private server, or even a demo written with the game engine. It would not be real-world data, but it would give an idea of performance scaling when multiple instances of a game are running. Being an occasional "dual boxer" myself, I wouldn't mind seeing that data.
  • Jacerie - Thursday, November 19, 2009 - link

    That's exactly what I was trying to get at. It's not uncommon for me to be running at least two instances of EVE with an entire assortment of other apps in the background. My current 3870X2 does the job just fine, but with Windows 7 out and DX11 around the corner, I'd like to know how much money I'm going to need to stash away to keep the same level of usability I have now with the newer cards.
  • Zool - Wednesday, November 18, 2009 - link

    The "so fast" is only because 95% of the games are DX9 Xbox ports. Crysis has still been the most demanding game out there for quite some time (it should be added that it has a very lazy engine). In Age of Conan the difference between DX9 and DX10 is more than half the FPS (with plenty of those effects on screen, even a drop to 1/3). Those advanced shader effects they show in demos are actually much more demanding on the GPU than the DX9 shaders; they just don't mention it. It will be the same with DX11: a full DX11 game with all those fancy shaders will be on the level of Crysis.
  • crazzyeddie - Wednesday, November 18, 2009 - link

    ... after their first 40nm test chips came back as being less impressive than **there** 55nm and 65nm test chips were.
  • silverblue - Wednesday, November 18, 2009 - link

    Hehe, I saw that one too.
  • frozentundra123456 - Wednesday, November 18, 2009 - link

    Unfortunately, since playing MW2, my question is: are there enough games that are sufficiently superior on the PC to justify the initial expense and power usage of this card? Maybe that's where Eyefinity for AMD and PhysX for nVidia come in: they at least differentiate the PC experience from the console.
    I hate to say it, but to me there just do not seem to be enough games optimized for the PC to justify the price and power usage of this card, that is unless one has money to burn.
